Is there any documentation on how to build a transformer encoder from scratch in matlab?
I am building a transformer encoder, and I came across the following File Exchange submission: https://www.mathworks.com/matlabcentral/fileexchange/107375-transformer-models
However, the examples in that exchange show how to use a pretrained transformer model. I just need an example of how to build a model - something to give a general idea so I can build on it. I have studied the basics of transformers but I am having some difficulty building the model from scratch.
Thank you in advance.
1 Comment
Shubham
on 8 Sep 2023
Hi,
You can refer to this documentation:
The article uses TensorFlow, but you can replicate the approach in MATLAB.
Accepted Answer
Ben
on 18 Sep 2023
The general structure of the intermediate encoder blocks looks like this:
selfAttentionLayer(numHeads,numKeyChannels) % self attention
additionLayer(2,Name="attention_add") % residual connection around attention
layerNormalizationLayer(Name="attention_norm") % layer norm
fullyConnectedLayer(feedforwardHiddenSize) % feedforward part 1
reluLayer % nonlinear activation
fullyConnectedLayer(attentionHiddenSize) % feedforward part 2
additionLayer(2,Name="feedforward_add") % residual connection around feedforward
layerNormalizationLayer() % layer norm
You would need to hook up the connections to the addition layers appropriately.
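For example, assuming these layers are assembled into a dlnetwork called net and the block is fed by a layer named "block_in" (a placeholder name - in the full example below the names differ), the two residual connections could be wired up roughly like this:
% Minimal sketch: wire the residual connections of the block above.
% "block_in" stands for whatever layer feeds this encoder block.
net = connectLayers(net,"block_in","attention_add/in2"); % residual connection around attention
net = connectLayers(net,"attention_norm","feedforward_add/in2"); % residual connection around feedforward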
Typically you would have multiple copies of this encoder block in a transformer encoder.
You also typically need an embedding at the start of the model. For text data it's common to use wordEmbeddingLayer, whereas for image data you would use patchEmbeddingLayer.
Also the above encoder block makes no use of positional information, so if your training task requires positional information to be used, you would typically inject the position information via a positionEmbeddingLayer or sinusoidalPositionEncodingLayer.
Finally, the last encoder block will typically feed into a model "head" that maps the encoder output back to the dimensions of the training targets. Often this can just be some simple fullyConnectedLayer-s.
Note that for both image and sequence input data the output of the encoder is still an image or sequence, so for image classification and sequence-to-one tasks you need some way to map that sequence of encoder outputs to a fixed-size representation. For this you could use indexing1dLayer or pooling layers like globalMaxPooling1dLayer.
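For instance, a sequence-to-one classification head could look like the following sketch (numClasses is a placeholder for the number of target classes; this head is not part of the example below, which uses indexing1dLayer instead):
% Minimal sketch of a sequence-to-one head after the final encoder block.
headLayers = [
globalMaxPooling1dLayer % pool over the time dimension to a fixed-size vector
fullyConnectedLayer(numClasses) % map to the target classes
softmaxLayer];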
Here's a demonstration of the general architecture for a toy task. Given a sequence x of length 10, where each element x(i) is an integer between 1 and 10, we can specify the task y = x(x(1)) + x(x(2)). For example, a sequence with x(1) = 2 and x(2) = 5 would have y = x(2) + x(5), the sum of its second and fifth elements. This is a toy problem that requires positional information to solve and can be easily implemented in code. You can train a transformer encoder to predict y from x as follows:
% Create model
% We will use 2 encoder layers.
numHeads = 1;
numKeyChannels = 20;
feedforwardHiddenSize = 100;
modelHiddenSize = 20;
% Since the values in the sequence can be 1,2, ..., 10 the "vocabulary" size is 10.
vocabSize = 10;
inputSize = 1;
encoderLayers = [
sequenceInputLayer(1,Name="in") % input
wordEmbeddingLayer(modelHiddenSize,vocabSize,Name="embedding") % embedding
positionEmbeddingLayer(modelHiddenSize,vocabSize) % position embedding (max position = 10, the sequence length, which here happens to equal vocabSize)
additionLayer(2,Name="embed_add") % add the data and position embeddings
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 1
additionLayer(2,Name="attention_add") %
layerNormalizationLayer(Name="attention_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward_add") %
layerNormalizationLayer(Name="encoder1_out") %
selfAttentionLayer(numHeads,numKeyChannels) % encoder block 2
additionLayer(2,Name="attention2_add") %
layerNormalizationLayer(Name="attention2_norm") %
fullyConnectedLayer(feedforwardHiddenSize) %
reluLayer %
fullyConnectedLayer(modelHiddenSize) %
additionLayer(2,Name="feedforward2_add") %
layerNormalizationLayer() %
indexing1dLayer %
fullyConnectedLayer(inputSize)]; % output head
net = dlnetwork(encoderLayers,Initialize=false);
net = connectLayers(net,"embed_add","attention_add/in2");
net = connectLayers(net,"embedding","embed_add/in2");
net = connectLayers(net,"attention_norm","feedforward_add/in2");
net = connectLayers(net,"encoder1_out","attention2_add/in2");
net = connectLayers(net,"attention2_norm","feedforward2_add/in2");
net = initialize(net);
% analyze the network to see how data flows through it
analyzeNetwork(net)
% create toy training data
% We will generate 10,000 sequences of length 10
% with values that are random integers 1-10
numObs = 10000;
seqLen = 10;
x = randi([1,10],[seqLen,numObs]);
% Loop over to create y(i) = x(x(1),i) + x(x(2),i)
y = zeros(numObs,1);
for i = 1:numObs
idx = x(1:2,i);
y(i) = sum(x(idx,i));
end
x = num2cell(x,1);
% specify training options and train
opts = trainingOptions("adam", ...
MaxEpochs = 200, ...
MiniBatchSize = numObs/10, ...
Plots="training-progress", ...
Shuffle="every-epoch", ...
InitialLearnRate=1e-2, ...
LearnRateDropFactor=0.9, ...
LearnRateDropPeriod=10, ...
LearnRateSchedule="piecewise");
net = trainnet(x,y,net,"mse",opts);
% test the network on a new input
x = randi([1,10],[seqLen,1]);
ypred = predict(net,x)
yact = x(x(1)) + x(x(2))
Obviously this is a toy task, but I think it demonstrates the parts of the standard transformer architecture. Two additional things you would likely need to deal with in real tasks are:
- For sequence data the observations often have different sequence lengths. For this you need to pad the data and pass padding masks to the selfAttentionLayer so that no attention is paid to padding elements (see the sketch after this list).
- Often the encoder will be initially pre-trained on a self-supervised task, e.g. masked-language-modeling for natural language encoders.
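As a minimal sketch of the padding step (assuming XCell is a cell array of C-by-T sequences of different lengths; how the mask is then passed to selfAttentionLayer depends on your release, so check its documentation):
% Pad the sequences to a common length along the time dimension (dim 2).
% padsequences also returns a logical mask marking the real (non-padding) elements.
[XPad,mask] = padsequences(XCell,2);
% The mask can then be supplied as the attention layer's padding-mask input so that
% padding elements receive no attention - treat this wiring as an assumption and
% check the selfAttentionLayer documentation for your release.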
Hope that helps.
10 Comments
Mibang
on 9 Dec 2023
Thank you Ben,
I am wondering if you could apply the encoder to the wave extraction problem below, where I would basically like to replace the LSTM with a transformer encoder, but I got an error message that the layer output format "TCB" is not consistent with the input data.
Thanks,
JL
Ben
on 11 Dec 2023
Could you provide code to reproduce the issue with the output format?
I am able to get this example to run with a transformer encoder in place of the LSTM. I would say there are a handful of considerations to make when doing this:
- The input data appear to be integers - do these integers have meaningful values or are they simply class labels? In the latter case you typically use an embedding, like the wordEmbeddingLayer above, to create initial vector embeddings of those class labels. However, the original example passes them directly to the LSTM, so perhaps this is unnecessary.
- The input data appear to be all sequences of length 5000. If the sequences to be used at test time will always have length at most 5000 then you can use positionEmbeddingLayer to provide positional information, but if the sequences might have arbitrary length at test time you might want to use sinusoidalPositionEncodingLayer.
- The sequence length of 5000 is quite large for selfAttentionLayer, since the computation scales quadratically with sequence length. This caused my 12GB GPU to run out of memory. A potential workaround would be to use convolution, pooling, and transposed convolution to downsample initially before the selfAttentionLayer, then upsample after the transformer encoder.
I don't know the ECG data that well, so the following choices may not be appropriate. However, for 1. I chose not to use a class embedding because the LSTM case doesn't. For 2. I chose to use sinusoidal position embeddings. For 3. I chose to downsample from 5000 -> 1000 -> 200 using 2 conv-activation-pool blocks with a stride of 5 in the pooling layers, and to upsample using transposed convolution. Additionally I concatenated the inputs to the conv-activation-pool blocks with their counterparts after the transformer encoder. That led me to try this architecture:
modelHiddenSize = 50;
filterSize = 10;
layers = [ ...
sequenceInputLayer(1,MinLength=5000)
fullyConnectedLayer(modelHiddenSize,Name="emb")
sinusoidalPositionEncodingLayer(modelHiddenSize)
additionLayer(2,Name="add")
convolution1dLayer(filterSize,modelHiddenSize,Padding="same")
reluLayer
maxPooling1dLayer(filterSize,Stride=5,Padding="same",Name="pool_1")
convolution1dLayer(filterSize,modelHiddenSize,Padding="same")
reluLayer
maxPooling1dLayer(filterSize,Stride=5,Name="downsample_out",Padding="same")
selfAttentionLayer(5,modelHiddenSize)
additionLayer(2,Name="attn_add")
layerNormalizationLayer
fullyConnectedLayer(modelHiddenSize*2)
geluLayer
fullyConnectedLayer(modelHiddenSize)
concatenationLayer(1,2,Name="cat_1")
transposedConv1dLayer(filterSize,modelHiddenSize,Cropping="same",Stride=5)
geluLayer
concatenationLayer(1,2,Name="cat_2")
transposedConv1dLayer(filterSize,modelHiddenSize,Cropping="same",Stride=5)
geluLayer
concatenationLayer(1,2,Name="cat_3")
fullyConnectedLayer(4)
softmaxLayer
classificationLayer];
lg = layerGraph(layers);
lg = lg.connectLayers("emb","add/in2");
lg = lg.connectLayers("downsample_out","attn_add/in2");
lg = lg.connectLayers("add","cat_3/in2");
lg = lg.connectLayers("pool_1","cat_2/in2");
lg = lg.connectLayers("downsample_out","cat_1/in2");
This beats the LSTM on the raw data and the filtered signals. Note however that the above model is quite a bit more complex, so you need to consider what metrics to use when comparing it to the LSTM.
For the time-frequency representation signals that have passed through the FSST, the LSTM seems to perform better - I tried a number of adaptations of the above model but didn't have any luck. This suggests to me that either the FSST extracted features are quite useful representations on their own, and it takes time for the convolution layers to learn how to use these, or that the downsampling in time used to make the transformer feasible to train destroys too much information.
In the above, the hyperparameters are more-or-less arbitrary choices. For real tasks you might want to experiment with various values for each hyperparameter to see which affect the model performance, in which case Experiment Manager could be useful.
Mibang
on 13 Dec 2023
Great, Ben,
I will try your code soon, although the transformer can't beat the FSST-based approach. I actually posted a question about the issue in the link below; please take a look at the error and give your feedback. Thanks!
So, based on your work, a transformer encoder may not be the best approach for this kind of seq2seq classification task.
https://www.mathworks.com/matlabcentral/answers/2059854-how-to-fix-the-error-error-using-trainnetwork-input-data-indices-must-be-nonnegative-integers
xingxingcui
on 6 Jan 2024
Ben
on 9 Jan 2024
@cui,xingxing regarding vision transformers, we now have the visionTransformer function to load a pretrained vision transformer. Here's an example of fine-tuning a vision transformer.
DGM
on 5 Mar 2024
Useful answer.
haohaoxuexi1
on 27 Jul 2024
@Ben Hi Ben, is it possible for you to provide an example of applying a transformer network to a classification task?
Idir
on 10 Dec 2024
I am sorry to ask you this here, but I have a question regarding one of your GitHub projects on curve shortening flow (https://github.com/bwdGitHub/CurveShorteningFlow). Is there any way I can send you a message or an email?
Thank you in advance.
Ben
on 19 Dec 2024
@haohaoxuexi1 - as an example, you could take the sequence-to-one classification task in this documentation example and swap out layers for the following:
layers = [
sequenceInputLayer(numChannels)
fullyConnectedLayer(numHiddenUnits,Name="embed")
positionEmbeddingLayer(numHiddenUnits,200)
additionLayer(2,Name="add")
selfAttentionLayer(1,8)
indexing1dLayer
layerNormalizationLayer
fullyConnectedLayer(numHiddenUnits)
geluLayer
fullyConnectedLayer(numClasses)
softmaxLayer];
layers = dlnetwork(layers,Initialize=false);
layers = connectLayers(layers,"embed","add/in2");
This seems to be able to perform the classification task in the example. This is a simplified transformer - there are no residual connections around the multi-head attention or multi-layer perceptron (MLP) parts of the layer. You can add those with additionLayer and connectLayers.
This network demonstrates using selfAttentionLayer, positionEmbeddingLayer, and indexing1dLayer. The positionEmbeddingLayer creates a representation of positional information via a learnt embedding that is then added to the linear embedding of the data from the first fullyConnectedLayer. The selfAttentionLayer performs the multi-head self attention. The indexing1dLayer takes a sequence as input and returns just the first sequence element - in a sense this "pools" the sequence by disregarding everything except the first sequence element, which is common in sequence-to-one transformer encoders, since the first sequence element (and any other sequence element) can pay attention to all other sequence elements via the selfAttentionLayer. Other types of pooling are common too, such as global maximum and average pooling.
Typically a transformer will additionally have residual connections around the self attention and MLP parts of the network, with additional layerNormalizationLayer instances, multiple heads in the selfAttentionLayer, and multiple instances of the transformer layer(s) in sequence. For sequence-to-sequence classification, you would remove the indexing1dLayer as you want the model to output a sequence of classes.
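As a sketch of how those residual connections could be added (not from the original reply; it also moves the indexing1dLayer to the end of the encoder block, and layer names such as "attn_add" are placeholders):
layers = [
sequenceInputLayer(numChannels)
fullyConnectedLayer(numHiddenUnits,Name="embed")
positionEmbeddingLayer(numHiddenUnits,200)
additionLayer(2,Name="add")
selfAttentionLayer(1,8)
additionLayer(2,Name="attn_add") % residual connection around attention
layerNormalizationLayer(Name="attn_norm")
fullyConnectedLayer(numHiddenUnits)
geluLayer
fullyConnectedLayer(numHiddenUnits)
additionLayer(2,Name="mlp_add") % residual connection around the MLP
layerNormalizationLayer
indexing1dLayer % keep only the first sequence element
fullyConnectedLayer(numClasses)
softmaxLayer];
net = dlnetwork(layers,Initialize=false);
net = connectLayers(net,"embed","add/in2");
net = connectLayers(net,"add","attn_add/in2"); % skip connection into the attention addition
net = connectLayers(net,"attn_norm","mlp_add/in2"); % skip connection into the MLP addition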
Ben
on 19 Dec 2024
@Idir I believe you could leave an issue on the GitHub repo if you have a GitHub account, and I could reply there. That is quite an old project that I haven't looked at for some time, and I note in the notebook that I was following the code from https://github.com/acarapetis/curve-shortening-demo. I was using this example to get familiar with programming, in particular how numeric methods can be used to approximate solutions to PDEs, since curve shortening flow is the 1D case of some of the things I was studying at the time.
More Answers (1)
Mehernaz Savai
on 6 Dec 2024
In addition to Ben's suggestions, we have new articles that can be a good source for getting started with Transformers in MATLAB: