A transformer is a deep learning architecture developed by researchers at Google and based on the multi-head attention mechanism, proposed in the 2017 paper "Attention Is All You Need".[1] Text is converted to numerical representations called tokens, and each token is converted into a vector via a lookup in a word embedding table. At each layer, each token is then contextualized within the scope of the context window with other (unmasked) tokens via a parallel multi-head attention mechanism, allowing the signal for key tokens to be amplified and less important tokens to be diminished.
Transformers have the advantage of having no recurrent units, and therefore require less training time than earlier recurrent neural architectures (RNNs) such as long short-term memory (LSTM).[2] Later variations have been widely adopted for training large language models (LLM) on large (language) datasets, such as the Wikipedia corpus and Common Crawl.[3]
Transformers were first developed as an improvement over previous architectures for machine translation,[4] [5] but have found many applications since then. They are used in large-scale natural language processing, computer vision (vision transformers), reinforcement learning,[6] audio,[7] multi-modal processing, robotics,[8] and even playing chess.[9] The architecture has also led to the development of pre-trained systems, such as generative pre-trained transformers (GPTs)[10] and BERT[11] (Bidirectional Encoder Representations from Transformers).
See also: Timeline of machine learning.
For many years, sequence modelling and generation was done by using plain recurrent neural networks (RNNs). A well-cited early example was the Elman network (1990). In theory, the information from one token can propagate arbitrarily far down the sequence, but in practice the vanishing-gradient problem leaves the model's state at the end of a long sentence without precise, extractable information about preceding tokens.
A key breakthrough was LSTM (1995), an RNN which used various innovations to overcome the vanishing gradient problem, allowing efficient learning of long-sequence modelling. One key innovation was the use of an attention mechanism which used neurons that multiply the outputs of other neurons, so-called multiplicative units.[12] Neural networks using multiplicative units were called sigma-pi networks[13] or higher-order networks,[14] but they faced high computational complexity. LSTM became the standard architecture for long sequence modelling until the 2017 publication of Transformers.
However, LSTM still used sequential processing, like most other RNNs. Specifically, RNNs operate one token at a time from first to last; they cannot operate in parallel over all tokens in a sequence. An early attempt to overcome this was the fast weight controller (1992) which computed the weight matrix for further processing depending on the input.[15] It used the fast weights architecture (1987),[16] where one neural network outputs the weights of another neural network. It was later shown to be equivalent to the linear Transformer without normalization.[17] [18]
The idea of encoder-decoder sequence transduction had been developed in the early 2010s (see [19] [20] for previous papers). The papers most commonly cited as the originators that produced seq2seq are two concurrently published papers from 2014.
(Sutskever et al., 2014) was a 380M-parameter model for machine translation using two long short-term memory (LSTM) networks. The architecture consists of two parts. The encoder is an LSTM that takes in a sequence of tokens and turns it into a vector. The decoder is another LSTM that converts the vector into a sequence of tokens. Similarly, (Cho et al., 2014) was a 130M-parameter model that used gated recurrent units (GRU) instead of LSTM. Later research showed that GRUs are neither better nor worse than LSTMs for seq2seq.[21]
These early seq2seq models had no attention mechanism, and the state vector is accessible only after the last word of the source text has been processed. Although in theory such a vector retains the information about the whole original sentence, in practice the information is poorly preserved, since the input is processed sequentially by one recurrent network into a fixed-size output vector, which is then processed by another recurrent network into an output. If the input is long, the output vector cannot contain all the relevant information, and the output quality degrades. As evidence, reversing the input sentence improved seq2seq translation.[22]
(Bahdanau et al., 2014) introduced an attention mechanism to seq2seq for machine translation to solve the bottleneck problem, allowing the model to process long-distance dependencies more easily. They called their model RNNsearch, as it "emulates searching through a source sentence during decoding a translation".
(Luong et al., 2015)[23] compared the relative performance of global (that of (Bahdanau et al., 2014)) and local (sliding window) attention model architectures for machine translation, and found that a mixed attention architecture had higher quality than global attention, while the use of a local attention architecture reduced translation time.
In 2016, Google Translate was revamped to Google Neural Machine Translation, which replaced the previous model based on statistical machine translation. The new model was a seq2seq model where the encoder and the decoder were both 8 layers of bidirectional LSTM.[24] It took nine months to develop, and it achieved a higher level of performance than the statistical approach, which took ten years to develop.[25]
Seq2seq models with attention still suffered from the same issue with recurrent networks, which is that they are hard to parallelize, which prevented them from being accelerated on GPUs. In 2016, decomposable attention applied the attention mechanism to a feedforward network, which is easy to parallelize.[26] One of its authors, Jakob Uszkoreit, suspected that attention without recurrence is sufficient for language translation, hence the title "Attention is all you need".[27]
In 2017, the original (100M-sized) encoder-decoder transformer model was proposed in the "Attention is all you need" paper. At the time, the focus of the research was on improving seq2seq for machine translation, by removing its recurrence to process all tokens in parallel, but preserving its dot-product attention mechanism to keep its text processing performance. Its parallelizability was an important factor to its widespread use in large neural networks.
Transformers are used in many models that contribute to the ongoing AI boom.
In language modelling, ELMo (2018) was a bi-directional LSTM that produces contextualized word embeddings, improving upon the line of research from bag of words and word2vec. It was followed by BERT (2018), an encoder-only Transformer model.[28] In October 2019, Google started using BERT to process search queries.[29] In 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model.[30]
Starting in 2018, the OpenAI GPT series of decoder-only Transformers became state of the art in natural language generation. In 2022, a chatbot based on GPT-3, ChatGPT, became unexpectedly popular,[31] triggering a boom around large language models.[32]
Since 2020, Transformers have been applied in modalities beyond text, including the vision transformer,[33] speech recognition, robotics, and multimodal processing. The vision transformer, in turn, stimulated new developments in convolutional neural networks.[34] Image and video generators like DALL-E (2021), Stable Diffusion 3 (2024), and Sora (2024) are based on the Transformer architecture.
The plain transformer architecture had difficulty converging. In the original paper the authors recommended using learning rate warmup: the learning rate should linearly scale up from 0 to its maximal value for the first part of the training (usually recommended to be 2% of the total number of training steps), before decaying again.
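As an illustration, such a warmup-then-decay schedule can be sketched in a few lines of Python; the 2% warmup fraction follows the recommendation above, while the maximal learning rate and the inverse-square-root decay are illustrative assumptions rather than prescriptions from the paper.

def learning_rate(step, total_steps, max_lr=1e-3, warmup_fraction=0.02):
    # Linear warmup from 0 to max_lr over the first ~2% of training steps,
    # then decay (here: inverse square root, one common choice).
    warmup_steps = max(1, int(warmup_fraction * total_steps))
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    return max_lr * (warmup_steps / step) ** 0.5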
A 2020 paper found that using layer normalization before (instead of after) multiheaded attention and feedforward layers stabilizes training, not requiring learning rate warmup.[35]
Transformers are typically first pretrained by self-supervised learning on a large generic dataset, followed by supervised fine-tuning on a small task-specific dataset. The pretraining dataset is typically an unlabeled large corpus, such as The Pile. Tasks for pretraining and fine-tuning commonly include:
The T5 transformer report[36] documents a large number of natural language pretraining tasks. Some examples are:
Note that while each of these tasks is trivial or obvious for human native speakers of the language (or languages), they have typically proved challenging for previous generations of machine learning architectures.
In general, there are 3 classes of language modelling tasks: "masked",[38] "autoregressive",[39] and "prefixLM". These classes are independent of a specific modeling architecture such as Transformer, but they are often discussed in the context of Transformer.
In a masked task, one or more of the tokens is masked out, and the model produces a probability distribution predicting what the masked-out tokens are based on the context. The loss function for the task is typically the sum of log-perplexities for the masked-out tokens, $L = -\sum_{i\in\text{masked}} \ln p_\theta(x_i \mid \text{unmasked context})$, and the model is trained to minimize this loss function. The BERT series of models are trained for masked token prediction and another task (next-sentence prediction).
In an autoregressive task, the entire sequence is masked at first, and the model produces a probability distribution for the first token. Then the first token is revealed and the model predicts the second token, and so on. The loss function for the task is still typically the same. The GPT series of models are trained by autoregressive tasks.
In a prefixLM task, the sequence is divided into two parts. The first part is presented as context, and the model predicts the first token of the second part. Then that would be revealed, and the model predicts the second token, and so on. The loss function for the task is still typically the same. The T5 series of models are trained by prefixLM tasks.
Note that "masked" as in "masked language modelling" is not "masked" as in "masked attention", and "prefixLM" (prefix language modeling) is not "prefixLM" (prefix language model).
All transformers have the same primary components: a tokenizer, which converts text into tokens; an embedding layer, which converts tokens and their positions into vector representations; transformer layers, which carry out repeated transformations on the vector representations, alternating attention and feedforward modules (organized as encoder and/or decoder layers); and an un-embedding layer, which converts the final vector representations back into a probability distribution over tokens.
The following description follows exactly the Transformer as described in the original paper. There are variants, described in the following section.
By convention, we write all vectors as row vectors. This, for example, means that pushing a vector through a linear layer means multiplying it by a weight matrix on the right, as $xW$.
See main article: Tokenization and Lexical analysis.
As the Transformer architecture natively processes numerical data, not text, there must be a translation between text and tokens. A token is an integer that represents a character, or a short segment of characters. On the input side, the input text is parsed into a token sequence. Similarly, on the output side, the output tokens are parsed back to text. The module doing the conversion between token sequences and texts is a tokenizer.
The set of all tokens is the vocabulary of the tokenizer, and its size is the vocabulary size $n_{\text{vocabulary}}$.
Some commonly used tokenizers are byte pair encoding, WordPiece, and SentencePiece.
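A toy word-level tokenizer illustrates the text-to-token round trip; real tokenizers such as BPE, WordPiece, and SentencePiece operate on subword units and have far larger vocabularies, so the vocabulary below is purely a made-up example.

vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
inverse_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    # Map each whitespace-separated word to its integer token id.
    return [vocab.get(word, vocab["<unk>"]) for word in text.split()]

def decode(token_ids):
    # Map token ids back to text.
    return " ".join(inverse_vocab[i] for i in token_ids)

assert decode(encode("the cat sat on the mat")) == "the cat sat on the mat"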
See also: Word embedding. Each token is converted into an embedding vector via a lookup table. Equivalently stated, it multiplies a one-hot representation of the token by an embedding matrix $M$. For example, if the input token is $3$, then its one-hot representation is $[0,0,0,1,0,0,\dots]$, and its embedding vector is $\mathrm{Embed}(3) = [0,0,0,1,0,0,\dots]\,M$.
The number of dimensions in an embedding vector is called hidden size or embedding size and written as $d_{\text{emb}}$.
An un-embedding layer is almost the reverse of an embedding layer. Whereas an embedding layer converts a token into a vector, an un-embedding layer converts a vector into a probability distribution over tokens.
The un-embedding layer is a linear-softmax layer: $\mathrm{UnEmbed}(x) = \mathrm{softmax}(xW + b)$. The matrix $W$ has shape $(d_{\text{emb}}, n_{\text{vocabulary}})$.
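A minimal sketch of the embedding and un-embedding steps, assuming toy sizes and randomly initialized matrices (in a trained model these parameters are learned, and the embedding and un-embedding matrices are sometimes tied):

import numpy as np

n_vocabulary, d_emb = 6, 8                    # toy sizes
M = np.random.randn(n_vocabulary, d_emb)      # embedding matrix
W = np.random.randn(d_emb, n_vocabulary)      # un-embedding matrix
b = np.zeros(n_vocabulary)

def embed(token_id):
    # Table lookup, equivalent to one_hot(token_id) @ M.
    return M[token_id]

def unembed(x):
    # Linear layer followed by a softmax over the vocabulary.
    logits = x @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()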
A positional encoding is a fixed-size vector representation of the relative positions of tokens within a sequence: it provides the transformer model with information about where the words are in the input sequence. Without positional encoding, the model would be unable to process input sequence as more than a bag of words, as for example, both "man bites dog" and "dog bites man" would be processed exactly the same way.
The positional encoding is defined as a function of type $f:\R\to\R^d$, with $d\in\Z$, $d>0$. The full positional encoding defined in the original paper is
$$\big(f(t)\big)_{2k} = \sin(\theta), \qquad \big(f(t)\big)_{2k+1} = \cos(\theta), \qquad \theta = \frac{t}{r^k}, \quad r = N^{2/d}.$$
Here, $N$ is a free parameter that should be significantly larger than the biggest $k$ that would be input into the positional encoding function. The original paper uses $N = 10000$.
The function is in a simpler form when written as a complex function of type $f:\R\to\mathbb{C}^{d/2}$: $f(t) = \left(e^{it/r^k}\right)_{k=0,1,\dots,\frac{d}{2}-1}$, where $r = N^{2/d}$.
The main reason for using this positional encoding function is that, using it, shifts are linear transformations: $f(t+\Delta t) = \mathrm{diag}\!\left(f(\Delta t)\right)f(t)$, where $\Delta t\in\R$ is the distance one wishes to shift.
By taking a linear sum, any convolution can also be implemented as a linear transformation: $\sum_j c_j\, f(t+\Delta t_j) = \left(\sum_j c_j\,\mathrm{diag}\!\left(f(\Delta t_j)\right)\right) f(t)$ for any constants $c_j$.
In typical implementations, all operations are done over the real numbers, not the complex numbers, but since complex multiplication can be implemented as real 2-by-2 matrix multiplication, this is a mere notational difference.
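A short sketch of the sinusoidal encoding above, written with real-valued sine/cosine pairs (the interleaving of the pairs is a convention; implementations differ on the exact layout):

import numpy as np

def positional_encoding(t, d, N=10000):
    # theta_k = t / r^k with r = N^(2/d); even indices get sin, odd get cos.
    k = np.arange(d // 2)
    theta = t / N ** (2 * k / d)
    f = np.empty(d)
    f[0::2] = np.sin(theta)
    f[1::2] = np.cos(theta)
    return f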
Like earlier seq2seq models, the original transformer model used an encoder-decoder architecture. The encoder consists of encoding layers that process all the input tokens together one layer after another, while the decoder consists of decoding layers that iteratively process the encoder's output and the decoder's output tokens so far.
The purpose of each encoder layer is to create contextualized representations of the tokens, where each representation corresponds to a token that "mixes" information from other input tokens via self-attention mechanism. Each decoder layer contains two attention sublayers: (1) cross-attention for incorporating the output of encoder (contextualized input token representations), and (2) self-attention for "mixing" information among the input tokens to the decoder (i.e. the tokens generated so far during inference time).[41] [42]
Both the encoder and decoder layers have a feed-forward neural network for additional processing of their outputs and contain residual connections and layer normalization steps. These feed-forward layers contain most of the parameters in a Transformer model.
The feedforward network (FFN) modules in a Transformer are 2-layered multilayer perceptrons: $\mathrm{FFN}(x) = \phi\!\left(x W^{(1)} + b^{(1)}\right) W^{(2)} + b^{(2)}$, where $\phi$ is the activation function.
The number of neurons in the middle layer is called intermediate size (GPT),[43] filter size (BERT), or feedforward size (BERT). It is typically larger than the embedding size. For example, in both the GPT-2 series and the BERT series, the intermediate size of a model is 4 times its embedding size: $d_{\text{ffn}} = 4\,d_{\text{emb}}$.
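A sketch of such an FFN module in Python with NumPy, using ReLU as the activation $\phi$ and random weights as placeholders for learned parameters:

import numpy as np

d_emb = 768
d_ffn = 4 * d_emb                              # e.g. the GPT-2/BERT convention
W1, b1 = np.random.randn(d_emb, d_ffn) * 0.02, np.zeros(d_ffn)
W2, b2 = np.random.randn(d_ffn, d_emb) * 0.02, np.zeros(d_emb)

def ffn(x):
    # Two-layer MLP applied to each position independently.
    return np.maximum(0, x @ W1 + b1) @ W2 + b2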
The attention mechanism used in the Transformer architecture consists of scaled dot-product attention units. For each unit, the transformer model learns three weight matrices: the query weights $W^Q$, the key weights $W^K$, and the value weights $W^V$.
The module takes three sequences, a query sequence, a key sequence, and a value sequence. The query sequence is a sequence of length $\ell_{\text{seq,query}}$, where each entry is a vector of dimension $d_{\text{emb,query}}$; similarly for the key and value sequences.
For each vector $x_{i,\text{query}}$ in the query sequence, it is multiplied by the query weight matrix $W^Q$ to produce a query vector $q_i = x_{i,\text{query}} W^Q$. The matrix of all query vectors is the query matrix $Q = X_{\text{query}} W^Q$; similarly, the key matrix is $K = X_{\text{key}} W^K$ and the value matrix is $V = X_{\text{value}} W^V$.
It is usually the case that all of $W^Q, W^K, W^V$ are square matrices, meaning $d_{\text{emb,query}} = d_{\text{query}}$, and so on.
Attention weights are calculated using the query and key vectors: the attention weight $a_{ij}$ from token $i$ to token $j$ is the dot product between $q_i$ and $k_j$. The attention weights are divided by $\sqrt{d_k}$, the square root of the dimension of the key vectors, which stabilizes gradients during training, and passed through a softmax which normalizes the weights. The fact that $W^Q$ and $W^K$ are different matrices allows attention to be non-symmetric: if token $i$ attends to token $j$ (i.e. $q_i \cdot k_j$ is large), this does not necessarily mean that token $j$ will attend to token $i$ (i.e. $q_j \cdot k_i$ could be small). The output of the attention unit for token $i$ is the weighted sum of the value vectors of all tokens, weighted by $a_{ij}$, the attention from token $i$ to each token.
The attention calculation for all tokens can be expressed as one large matrix calculation using the softmax function, which is useful for training because matrix operations are highly optimized. The matrices $Q$, $K$ and $V$ are defined as the matrices whose $i$th rows are the vectors $q_i$, $k_i$, and $v_i$ respectively. The attention is then $$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}}\right)V,$$ where the softmax is applied over each of the rows of the matrix.
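The formula translates directly into code; the sketch below assumes NumPy arrays whose rows are token vectors, and an optional additive mask as described in the masked-attention subsection below.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(X_query, X_key, X_value, W_Q, W_K, W_V, mask=None):
    Q = X_query @ W_Q                      # query vectors, one row per token
    K = X_key @ W_K                        # key vectors
    V = X_value @ W_V                      # value vectors
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if mask is not None:
        scores = scores + mask             # -inf where attention is forbidden
    return softmax(scores) @ V             # rows are weighted sums of values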
The number of dimensions in a query vector is the query size $d_{\text{query}}$, and similarly for the key size $d_{\text{key}}$ and value size $d_{\text{value}}$. The output dimension of an attention head is its head dimension $d_{\text{head}}$.
If the attention head is used in a self-attention fashion, then $X_{\text{query}} = X_{\text{key}} = X_{\text{value}}$. If it is used in a cross-attention fashion, then usually $X_{\text{query}} \ne X_{\text{key}} = X_{\text{value}}$.
One set of matrices $\left(W^Q, W^K, W^V\right)$ is called an attention head, and each layer in a transformer model has multiple attention heads. The outputs of the heads are concatenated and passed through a final projection. Concretely, let the multiple attention heads be indexed by $i$; then $$\mathrm{MultiheadedAttention}(Q,K,V) = \mathrm{Concat}_i\!\left(\mathrm{Attention}\!\left(XW^Q_i, XW^K_i, XW^V_i\right)\right) W^O,$$ where the matrix $X$ is the concatenation of word embeddings, the matrices $W^Q_i, W^K_i, W^V_i$ are the projection matrices owned by the individual attention head $i$, and $W^O$ is a final projection matrix owned by the whole multi-headed attention module.
It is theoretically possible for each attention head to have a different head dimension $d_{\text{head}}$, but that is rarely the case in practice. As an example, in the smallest GPT-2 model, there are only self-attention mechanisms. It has the following dimensions: $d_{\text{emb}} = 768$, $n_{\text{head}} = 12$, $d_{\text{head}} = 64$. Since $12 \times 64 = 768$, its output projection matrix $W^O \in \R^{(64 \times 12)\times 768}$ is a square matrix.
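Reusing the single-head attention sketch above, multi-head attention concatenates the head outputs and applies the final projection; the head list and shapes here are assumptions for illustration.

def multiheaded_attention(X_query, X_key, X_value, heads, W_O):
    # heads is a list of (W_Q, W_K, W_V) triples, one per attention head;
    # each head output has width d_head, and concatenation restores d_emb.
    outputs = [attention(X_query, X_key, X_value, W_Q, W_K, W_V)
               for (W_Q, W_K, W_V) in heads]
    return np.concatenate(outputs, axis=-1) @ W_O

# With 12 heads of head dimension 64 (GPT-2 small), the concatenated
# output has width 12 * 64 = 768, and W_O is a 768 x 768 matrix.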
It may be necessary to cut out attention links between some word-pairs. For example, the decoder, when decoding for the token position $t$, should not have access to the token at position $t+1$. This may be accomplished before the softmax stage by adding a mask matrix $M$ that is $-\infty$ at entries where the attention link must be cut, and $0$ elsewhere: $$\mathrm{MaskedAttention}(Q,K,V) = \mathrm{softmax}\!\left(M + \frac{QK^{\mathsf T}}{\sqrt{d_k}}\right)V.$$ For example, the following matrix is commonly used in decoder self-attention modules, called "causal masking": $$M_{\text{causal}} = \begin{pmatrix} 0 & -\infty & -\infty & \cdots \\ 0 & 0 & -\infty & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$$ In words, it means that each token can pay attention to itself, and every token before it, but not any after it. As an example of an uncommon use of mask matrix, the XLNet considers all masks of the form $P M_{\text{causal}} P^{-1}$, where $P$ is a random permutation matrix.
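A causal mask in this additive form can be built directly; the sketch below (NumPy) produces the matrix shown above and can be passed as the mask argument of the attention sketch from the previous subsection.

import numpy as np

def causal_mask(n):
    # 0 on and below the diagonal (each token may attend to itself and to
    # earlier tokens), -inf above it (no attention to future tokens).
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask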
An encoder consists of an embedding layer, followed by multiple encoder layers.
Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes as input a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, we have:
$$\begin{aligned} \text{given input vectors } & h_0, h_1, \dots \\ \text{combine them into a matrix } H &= \begin{bmatrix} h_0 \\ h_1 \\ \vdots \end{bmatrix} \\ \mathrm{EncoderLayer}(H) &= \begin{bmatrix} \mathrm{FFN}(\mathrm{MultiheadedAttention}(H,H,H)_0) \\ \mathrm{FFN}(\mathrm{MultiheadedAttention}(H,H,H)_1) \\ \vdots \end{bmatrix} \end{aligned}$$
where $\mathrm{FFN}$ means "feed-forward network". We can write this more succinctly as $\mathrm{EncoderLayer}(H) = \mathrm{FFN}(\mathrm{MultiheadedAttention}(H,H,H))$, with the implicit convention that the $\mathrm{FFN}$ is applied to each row of the matrix individually.
The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder.
As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer.
Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.
Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow. This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked.
In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which is computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism.
Schematically, we have: $$\mathrm{DecoderLayer}(H) = \mathrm{FFN}\!\left(\mathrm{MultiheadedAttention}\!\left(\mathrm{MaskedMultiheadedAttention}(H,H,H),\, H^E,\, H^E\right)\right),$$ where $H^E$ is the matrix of output vectors from the encoder.
The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to this probability distribution, and the decoder can be run again to produce the next token, etc., autoregressively generating output text.
Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are necessary for numerical stability and convergence. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.
There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is $$\mathrm{LayerNorm}\!\left(x + \mathrm{Sublayer}(x)\right),$$ where $\mathrm{Sublayer}(x)$ is the function implemented by the sublayer itself.
In the pre-LN convention, the output of each sublayer is $$x + \mathrm{Sublayer}\!\left(\mathrm{LayerNorm}(x)\right).$$ The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a "warm-up" in learning rate, where it starts small and gradually increases. The pre-LN convention, developed in 2020, was found to be easier to train, requiring no warm-up and leading to faster convergence.
The following is the pseudocode for a standard pre-LN encoder-decoder Transformer:

input: Encoder input t_e, decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)
for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)
for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]
    /* first sublayer: self-attention */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
    /* second sublayer: feedforward */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]
for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)
for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)
for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]
    /* first sublayer: causally masked self-attention */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* second sublayer: cross-attention */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
    /* third sublayer: feedforward */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]
z_d ← decoder.final_layer_norm(z_d)

output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))
return output_distributions
The Transformer architecture, being modular, allows variations. Several common variations are described here.[46]
An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. They are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer, then taking just the encoder.
A "decoder-only" Transformer is not literally decoder-only, since without an encoder, the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer is composed of just two sublayers: the causally masked self-attention, and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and Chinchilla series are decoder-only.
An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer, etc. They might have minor architectural improvements, such as alternative activation functions, changing the location of normalization, etc. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.
A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has mask of the formwhere the first columns correspond to the "prefix", and the subsequent columns correspond to the autoregressively generated text based on the prefix. They resemble encoder-decoder models, but has less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and benchmarked comparisons.
There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model by a Transformer-encoder–RNN-decoder model, on the argument that an RNN-decoder runs much faster than Transformer-decoder when run autoregressively.[47]
The original transformer uses the ReLU activation function. Other activation functions have since been developed: the Llama series used SwiGLU;[48] both GPT-1 and BERT used GELU.[49]
The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm[50] which is used in the Llama series. Other examples include ScaleNorm,[51] or FixNorm.
Transformers may use other positional encoding methods than sinusoidal.[52]
The original Transformer paper reported using a learned positional encoding,[53] but found it not superior to the sinusoidal one. Later work found that causal masking itself provides enough signal to a Transformer decoder that it can learn to implicitly perform absolute positional encoding without the positional encoding module.
RoPE (rotary positional embedding)[54] is best explained by considering a list of 2-dimensional vectors $\left[(x^{(1)}_1, x^{(2)}_1), (x^{(1)}_2, x^{(2)}_2), (x^{(1)}_3, x^{(2)}_3), \dots\right]$. Now pick some angle $\theta$. Then RoPE encoding rotates the $m$th pair by the angle $m\theta$: $$\mathrm{RoPE}\!\left(x^{(1)}_m, x^{(2)}_m, m\right) = \begin{pmatrix} \cos m\theta & -\sin m\theta \\ \sin m\theta & \cos m\theta \end{pmatrix} \begin{pmatrix} x^{(1)}_m \\ x^{(2)}_m \end{pmatrix}.$$ Equivalently, writing the 2-dimensional vectors as complex numbers $z_m := x^{(1)}_m + i\, x^{(2)}_m$, RoPE encoding is just multiplication by a complex phase: $\mathrm{RoPE}(z_m, m) = e^{im\theta} z_m$. For a list of $2n$-dimensional vectors, a RoPE encoder is defined by a sequence of angles $\theta^{(1)}, \dots, \theta^{(n)}$, and the RoPE encoding is applied to each pair of coordinates.
The benefit of RoPE is that the dot-product between two vectors depends on their relative location only: $$\mathrm{RoPE}(x, m)^{\mathsf T}\, \mathrm{RoPE}(y, n) = \mathrm{RoPE}(x, m+k)^{\mathsf T}\, \mathrm{RoPE}(y, n+k)$$ for any integer $k$.
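A sketch of RoPE applied to one vector, pairing adjacent coordinates (conventions for pairing and for choosing the angles vary; the geometric angle schedule below is one common choice, shown here only to verify the relative-position property numerically):

import numpy as np

def rope(x, m, thetas):
    # Rotate each coordinate pair (x[2k], x[2k+1]) by the angle m * thetas[k].
    pairs = x.reshape(-1, 2)
    cos, sin = np.cos(m * thetas), np.sin(m * thetas)
    rotated = np.stack([pairs[:, 0] * cos - pairs[:, 1] * sin,
                        pairs[:, 0] * sin + pairs[:, 1] * cos], axis=-1)
    return rotated.reshape(-1)

n = 4
thetas = 10000.0 ** (-np.arange(n) / n)        # one angle per coordinate pair
x, y = np.random.randn(2 * n), np.random.randn(2 * n)
# Dot products are unchanged when both positions are shifted by the same k.
a = rope(x, 5, thetas) @ rope(y, 9, thetas)
b = rope(x, 5 + 3, thetas) @ rope(y, 9 + 3, thetas)
assert np.allclose(a, b)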
ALiBi (Attention with Linear Biases)[55] is not a replacement for the positional encoder on the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is $$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}} + sB\right)V.$$ Here, $s$ is a scalar specific to each attention head, and $B$ is the linear bias matrix defined by $B_{i,j} = j - i$. The linear bias matrix can be viewed as a softened mask: just as $0$ represents full attention paid and $-\infty$ represents no attention paid, the linear bias smoothly decreases the attention paid to tokens that are farther away.
ALiBi allows pretraining on short context windows, then finetuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder on the original transformer, as well as RoPE and many others, are located).
Relative Position Encodings[56] is similar to ALiBi, but more generic: $$\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\mathsf T}}{\sqrt{d_k}} + B\right)V,$$ where $B$ is a Toeplitz matrix, that is, $B_{i,j} = B_{i',j'}$ whenever $i - j = i' - j'$.
The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.
FlashAttention[57] is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow).
An improved version, FlashAttention-2,[58] [59] [60] was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention.
Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and multi-query attention (MQA) and grouped-query attention (GQA).
Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
Multi-Query Attention changes the multiheaded attention mechanism.[61] Whereas normally each attention head has its own key and value projections, $$\mathrm{MultiheadedAttention}(Q,K,V) = \mathrm{Concat}_i\!\left(\mathrm{Attention}\!\left(XW^Q_i, XW^K_i, XW^V_i\right)\right) W^O,$$ with Multi-Query Attention there is just one shared pair $W^K, W^V$: $$\mathrm{MultiQueryAttention}(Q,K,V) = \mathrm{Concat}_i\!\left(\mathrm{Attention}\!\left(XW^Q_i, XW^K, XW^V\right)\right) W^O.$$
This has a neutral effect on model quality and training speed, but increases inference speed.
When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.[62] [63]
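A minimal sketch of KV caching for a single attention head during autoregressive decoding; the class and function names are made up for illustration, and real implementations batch this across heads and layers.

import numpy as np

class KVCache:
    # Stores the key and value vectors of all previously processed tokens.
    def __init__(self):
        self.keys, self.values = [], []

def decode_step(x_new, W_Q, W_K, W_V, cache):
    # Project only the newest token; reuse cached keys/values for the rest.
    q = x_new @ W_Q
    cache.keys.append(x_new @ W_K)
    cache.values.append(x_new @ W_V)
    K, V = np.stack(cache.keys), np.stack(cache.values)
    scores = q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V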
If a transformer is used with a baked-in prompt, such as ["You are a customer support agent..."], then the key and value vectors can be computed for the prompt, and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
Transformers are used in large language models for autoregressive sequence generation: generating a stream of text, one token at a time. However, in most settings, decoding from language models is memory-bound, meaning that we have spare compute power available. Speculative decoding[64] uses this spare compute power by computing several tokens in parallel. Similarly to speculative execution in CPUs, future tokens are computed concurrently, by speculating on the value of previous tokens, and are later discarded if it turns out the speculation was incorrect.
Specifically, consider a transformer model like GPT-3 with a context window size of 512. To generate an entire context window autoregressively with greedy decoding, it must be run 512 times, each time generating one of the tokens $x_1, x_2, \dots, x_{512}$. However, if we had some guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each $x_t$ is indeed the token with the largest log-likelihood in the $t$-th output.
In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose a small model generated four speculative tokens $\tilde{x}_1, \tilde{x}_2, \tilde{x}_3, \tilde{x}_4$. These tokens are run through the larger model, and only $\tilde{x}_1$ and $\tilde{x}_2$ are accepted. The same run of the large model already generated a new token $x_3$ to replace $\tilde{x}_3$, and $\tilde{x}_4$ is discarded entirely. The process then repeats from the new token onward until the output is complete.
For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding was not used.
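For the greedy case, the verification step can be sketched as follows; the argument names are assumptions, with large_model_logits[t] standing for the large model's output at the position of the t-th speculative token (all positions obtained in a single parallel run).

import numpy as np

def verify_speculative(large_model_logits, speculative_tokens):
    accepted = []
    for t, guess in enumerate(speculative_tokens):
        best = int(np.argmax(large_model_logits[t]))
        if best == guess:
            accepted.append(guess)       # speculation matched the large model
        else:
            accepted.append(best)        # substitute the large model's token
            break                        # discard the remaining speculative tokens
    return accepted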
Training transformer-based architectures can be expensive, especially for long inputs.[65] Many methods have been developed to attempt to address the issue. Long Range Arena (2020)[66] is a standard benchmark for comparing the behavior of transformer architectures over long inputs.
The standard attention graph is either all-to-all or causal, both of which scale as $O(N^2)$, where $N$ is the number of tokens in a sequence.
Reformer (2020)[67] reduces the computational load from $O(N^2)$ to $O(N \ln N)$ by using locality-sensitive hashing and reversible layers.
Sparse attention uses attention graphs that grow more slowly than $O(N^2)$; for example, attention graphs that grow only as $O(N)$ in the sequence length.
Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers[70] reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
Random Feature Attention (2021)[71] uses Fourier random features: $$\varphi(x) = \frac{1}{\sqrt D}\left[\cos\langle w_1, x\rangle, \sin\langle w_1, x\rangle, \dots, \cos\langle w_D, x\rangle, \sin\langle w_D, x\rangle\right],$$ where $w_1, \dots, w_D$ are independent samples from the normal distribution $N(0, \sigma^2 I)$. This choice of parameters satisfies $\mathbb{E}[\langle\varphi(x), \varphi(y)\rangle] = e^{-\frac{\|x-y\|^2}{2\sigma^2}}$, so the exponential kernel inside the attention softmax can be approximated by an inner product of random features, with $\sigma = d_K^{1/4}$ matching the usual $1/\sqrt{d_K}$ scaling. This approximation can be computed in linear time, as we can compute the matrix $\varphi(k_i)\, v_i^{\mathsf T}$ first and then multiply it with the query.
Performer (2022) uses the same Random Feature Attention, but $w_1, \dots, w_D$ are first independently sampled from the normal distribution $N(0, \sigma^2 I)$ and then Gram–Schmidt processed.
Transformers can also be used/adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality.
Multimodal models can either be trained from scratch, or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.[73] LLaVA is a vision-language model composed of a language model (Vicuna-13B)[74] and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.[75]
Vision transformers adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer.
Conformer[76] and later Whisper[77] follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors and treated like tokens in a standard transformer.
Perceivers[78] [79] are a variant of Transformers designed for multimodality.
For image generation, two notable architectures are DALL-E 1 (2021) and Parti (2022).[80] Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image. Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.
The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Claude, BERT, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world or practical applications, including:
Beyond traditional NLP, the transformer architecture has had success in other applications, such as: