Attention (machine learning)


In machine learning, attention is a method that determines the importance of each component in a sequence relative to the other components in that sequence. In natural language processing, importance is represented by "soft" weights assigned to each word in a sentence. More generally, attention encodes vectors called token embeddings across a fixed-width sequence that can range from tens to millions of tokens in size.
Unlike "hard" weights, which are computed during the backwards training pass, "soft" weights exist only in the forward pass and therefore change with every step of the input. Earlier designs implemented the attention mechanism in a serial recurrent neural network language translation system, but a more recent design, namely the transformer, removed the slower sequential RNN and relied more heavily on the faster parallel attention scheme.
Inspired by ideas about attention in humans, the attention mechanism was developed to address the weaknesses of using information from the hidden layers of recurrent neural networks. Recurrent neural networks favor more recent information contained in words at the end of a sentence, while information earlier in the sentence tends to be attenuated. Attention allows a token equal access to any part of a sentence directly, rather than only through the previous state.

History

1950s–1960s: Psychology and biology of attention. The cocktail party effect (focusing on content by filtering out background noise), the filter model of attention, the partial report paradigm, and saccade control.
1980s: Sigma-pi units and higher-order neural networks.
1990s: Fast weight controllers and dynamic links between neurons, anticipating key-value mechanisms in attention.
1998: The bilateral filter was introduced in image processing. It uses pairwise affinity matrices to propagate relevance across elements.
2005: Non-local means extended affinity-based filtering to image denoising, using Gaussian similarity kernels as fixed attention-like weights.
2014: seq2seq with RNN + attention. Attention was introduced to enhance RNN encoder-decoder translation, particularly for long sentences (see the Overview section). Attentional Neural Networks introduced a learned feature-selection mechanism using top-down cognitive modulation, showing how attention weights can highlight relevant inputs.
2015: Attention was extended to vision for image captioning tasks.
2016: Self-attention was integrated into RNN-based models to capture intra-sequence dependencies. Self-attention was also explored in decomposable attention models for natural language inference and in structured self-attentive sentence embeddings.
2017: The Transformer architecture, introduced in the research paper Attention Is All You Need, formalized scaled dot-product self-attention: Attention(Q, K, V) = softmax(QKᵀ / √d_k) V. Relation networks and Set Transformers applied attention to unordered sets and relational reasoning, generalizing pairwise interaction models.
2018: Non-local neural networks extended attention to computer vision by capturing long-range dependencies in space and time. Graph attention networks applied attention mechanisms to graph-structured data.
2019–2020: Efficient Transformers, including the Reformer, Linformer, and Performer, introduced scalable approximations of attention for long sequences.
2019+: Hopfield networks were reinterpreted as associative-memory-based attention systems, and vision transformers achieved competitive results in image classification. Transformers were adopted across scientific domains, including AlphaFold for protein folding, CLIP for vision-language pretraining, and attention-based dense segmentation models such as CCNet and DANet.

Additional surveys of the attention mechanism in deep learning are provided by Niu et al. and Soydaner.
The major breakthrough came with self-attention, where each element in the input sequence attends to all others, enabling the model to capture global dependencies. This idea was central to the Transformer architecture, which replaced recurrence with attention mechanisms. As a result, Transformers became the foundation for models like BERT, T5 and generative pre-trained transformers.
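As a concrete illustration of the scaled dot-product self-attention that the Transformer formalized, the following NumPy sketch computes softmax(QKᵀ / √d_k) V for a toy sequence, with every token attending to every other token. The dimensions, random projection matrices, and helper names are illustrative assumptions, not code from any published model.

import numpy as np

def softmax(x, axis=-1):
    # subtract the row-wise max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X holds one token embedding per row
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # query, key, and value projections
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # pairwise similarities, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)         # each row is a "soft" distribution over all tokens
    return weights @ V, weights                # weighted sum of values, plus the attention weights

rng = np.random.default_rng(0)
n_tokens, d_model, d_k = 5, 16, 8              # toy sizes (illustrative)
X = rng.normal(size=(n_tokens, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
output, weights = self_attention(X, Wq, Wk, Wv)
print(output.shape, weights.sum(axis=-1))      # (5, 8), and each row of weights sums to 1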

Overview

The modern era of machine attention was revitalized by grafting an attention mechanism to an Encoder-Decoder.

Figure 2 shows the internal step-by-step operation of the attention block in Fig 1.

Interpreting attention weights

In translating between languages, alignment is the process of matching words from the source sentence to words of the translated sentence. A network that translated each word verbatim, without reordering, would show its highest scores along the diagonal of the alignment matrix. The off-diagonal dominance in the example below shows that the attention mechanism is more nuanced.
Consider an example of translating I love you to French. On the first pass through the decoder, 94% of the attention weight is on the first English word I, so the network offers the word je. On the second pass, 88% of the attention weight is on the third English word you, so it offers t'. On the last pass, 95% of the attention weight is on the second English word love, so it offers aime.
In this example, the second source word love is aligned with the third output word aime. Stacking the soft attention-weight row vectors for je, t', and aime yields an alignment matrix:
        I      love   you
je      0.94   0.02   0.04
t'      0.11   0.01   0.88
aime    0.03   0.95   0.02

Sometimes, alignment can be multiple-to-multiple. For example, the English phrase look it up corresponds to cherchez-le. Thus, "soft" attention weights work better than "hard" attention weights, as we would like the model to make a context vector consisting of a weighted sum of the hidden vectors, rather than "the best one", as there may not be a best hidden vector.
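The alignment matrix also makes the weighted-sum point concrete: each decoder step's context vector mixes all encoder hidden vectors according to its row of weights, rather than selecting a single "best" one. A minimal NumPy sketch, using the weights from the table above and hypothetical 3-dimensional encoder hidden vectors chosen purely for illustration:

import numpy as np

# attention weights from the "I love you" -> "je t'aime" example
# rows: decoder steps (je, t', aime); columns: source words (I, love, you)
alignment = np.array([
    [0.94, 0.02, 0.04],   # je   : mostly attends to "I"
    [0.11, 0.01, 0.88],   # t'   : mostly attends to "you"
    [0.03, 0.95, 0.02],   # aime : mostly attends to "love"
])

# hypothetical encoder hidden vectors for "I", "love", "you" (one per row)
H = np.array([
    [0.1, 0.9, 0.2],
    [0.8, 0.1, 0.5],
    [0.3, 0.4, 0.7],
])

# each decoder step's context vector is a weighted sum of all hidden vectors,
# not a hard selection of the single highest-scoring one
contexts = alignment @ H
print(contexts.round(2))   # row i is the context vector used to emit the i-th French word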

Variants

Many variants of attention implement soft weights, such as
  • fast weight programmers, or fast weight controllers. A "slow" neural network outputs the "fast" weights of another neural network through outer products. The slow network learns by gradient descent. It was later renamed as "linearized self-attention".
  • Bahdanau-style attention, also referred to as additive attention,
  • Luong-style attention, which is known as multiplicative attention (contrasted with the additive form in the code sketch after this list),
  • Early attention mechanisms similar to modern self-attention were proposed using recurrent neural networks. However, the highly parallelizable self-attention was introduced in 2017 and successfully used in the Transformer model,
  • positional attention and factorized positional attention.
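As a rough comparison of the first two scoring styles in the list above, the sketch below scores a single decoder hidden state s against encoder hidden states H, once with an additive (Bahdanau-style) feed-forward score and once with a multiplicative (Luong-style, here in its general bilinear form) score. All matrices and sizes are illustrative assumptions.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 8                                   # hidden size (illustrative)
H = rng.normal(size=(6, d))             # encoder hidden states, one per source token
s = rng.normal(size=d)                  # current decoder hidden state

# Additive (Bahdanau-style): a small feed-forward network scores each pair (s, h_i)
Wa, Ua, va = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
additive_scores = np.tanh(H @ Ua + s @ Wa) @ va

# Multiplicative (Luong-style): a bilinear / dot-product score s^T W h_i
Wm = rng.normal(size=(d, d))
multiplicative_scores = H @ (Wm @ s)

for name, scores in [("additive", additive_scores), ("multiplicative", multiplicative_scores)]:
    weights = softmax(scores)           # soft weights over source tokens
    context = weights @ H               # weighted sum of encoder states
    print(name, weights.round(2), context.shape)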
For convolutional neural networks, attention mechanisms can be distinguished by the dimension on which they operate, namely: spatial attention, channel attention, or combinations.
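A minimal sketch of channel attention in the squeeze-and-excitation style, one common instantiation of the idea (the bottleneck sizes and NumPy formulation here are illustrative assumptions): each channel is global-average-pooled, passed through a small bottleneck, and then rescaled by a per-channel sigmoid weight.

import numpy as np

def channel_attention(feature_map, W1, W2):
    # feature_map: (channels, height, width) activations from a convolutional layer
    squeezed = feature_map.mean(axis=(1, 2))         # "squeeze": one scalar per channel
    hidden = np.maximum(0.0, W1 @ squeezed)          # bottleneck with ReLU
    gates = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))     # sigmoid gate per channel
    return feature_map * gates[:, None, None]        # rescale each channel by its gate

rng = np.random.default_rng(4)
channels, height, width = 16, 8, 8                   # toy sizes (illustrative)
fmap = rng.normal(size=(channels, height, width))
W1 = rng.normal(size=(channels // 4, channels))      # reduction ratio 4 (assumption)
W2 = rng.normal(size=(channels, channels // 4))
print(channel_attention(fmap, W1, W2).shape)         # (16, 8, 8)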
The variants listed above recombine the encoder-side inputs to redistribute those effects to each target output. Often, a correlation-style matrix of dot products provides the re-weighting coefficients. In the figures below, W is the matrix of context attention weights, similar to the formula in the Overview section above.
1. encoder-decoder dot product
2. encoder-decoder QKV
3. encoder-only dot product
4. encoder-only QKV
5. PyTorch tutorial

Label: Description
Variables X, H, S, T: Upper-case variables represent the entire sentence, not just the current word. For example, H is a matrix of encoder hidden states, one word per column.
S, T: S, decoder hidden state; T, target word embedding. In the PyTorch tutorial variant's training phase, T alternates between two sources depending on the level of teacher forcing used. T could be the embedding of the network's own output word, or, with teacher forcing, the embedding of the known correct word, which can occur with a constant forcing probability, say 1/2.
X, H: H, encoder hidden state; X, input word embedding.
W: Attention coefficients.
Qw, Kw, Vw, FC: Weight matrices for the query, key, and value, respectively; FC is a fully-connected weight matrix.
⊕, ⊗: ⊕, vector concatenation; ⊗, matrix multiplication.
corr: Column-wise softmax. The dot products are xi * xj in variant 3, hi * sj in variant 1, and column i * column j in variants 2 and 4. Variant 5 uses a fully-connected layer to determine the coefficients. In the QKV variants, the dot products are normalized by √d, where d is the height of the QKV matrices (variant 2 is sketched in code after this table).
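To make the table concrete, the following sketch implements something like variant 2 (encoder-decoder QKV): queries come from the decoder states S, keys and values from the encoder states H, and the dot products are scaled by √d as described for corr. The shapes, weight matrices, and row-per-word orientation are illustrative assumptions rather than a transcription of the original figures.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
d = 8                                       # height of the Q/K/V projections (illustrative)
H = rng.normal(size=(7, d))                 # encoder hidden states (one source word per row here)
S = rng.normal(size=(4, d))                 # decoder hidden states (one target position per row)
Qw, Kw, Vw = (rng.normal(size=(d, d)) for _ in range(3))

Q = S @ Qw                                  # queries from the decoder side
K, V = H @ Kw, H @ Vw                       # keys and values from the encoder side
W = softmax(Q @ K.T / np.sqrt(d), axis=-1)  # attention coefficients, normalized by sqrt(d)
context = W @ V                             # one context vector per decoder position
print(W.shape, context.shape)               # (4, 7), (4, 8)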

Optimizations

Flash attention

The size of the attention matrix is proportional to the square of the number of input tokens. Therefore, when the input is long, calculating the attention matrix requires a lot of GPU memory. Flash attention is an implementation that reduces the memory needs and increases efficiency without sacrificing accuracy. It achieves this by partitioning the attention computation into smaller blocks that fit into the GPU's faster on-chip memory, reducing the need to store large intermediate matrices and thus lowering memory usage while increasing computational efficiency.
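The central trick, computing the attention output tile by tile with a running ("online") softmax so the full n×n attention matrix is never materialized, can be illustrated in plain NumPy. This is a simplified sketch of the tiling idea only, not the fused GPU kernel; the block size and array shapes are assumptions.

import numpy as np

def blocked_attention(Q, K, V, block=64):
    # Computes softmax(Q K^T / sqrt(d)) V one key/value block at a time,
    # keeping only running per-row statistics instead of the full attention matrix.
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros_like(Q)
    row_max = np.full(n, -np.inf)          # running max of scores per query row
    row_sum = np.zeros(n)                  # running softmax denominator per query row
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start+block], V[start:start+block]
        scores = (Q @ Kb.T) * scale                        # (n, block) tile of scores
        new_max = np.maximum(row_max, scores.max(axis=1))
        correction = np.exp(row_max - new_max)             # rescale previous partial results
        p = np.exp(scores - new_max[:, None])              # this tile's unnormalized weights
        out = out * correction[:, None] + p @ Vb
        row_sum = row_sum * correction + p.sum(axis=1)
        row_max = new_max
    return out / row_sum[:, None]

rng = np.random.default_rng(3)
n, d = 256, 32                              # toy sizes (illustrative)
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))

# reference: standard attention that materializes the full n x n matrix
S = Q @ K.T / np.sqrt(d)
P = np.exp(S - S.max(axis=1, keepdims=True))
reference = (P / P.sum(axis=1, keepdims=True)) @ V
print(np.allclose(blocked_attention(Q, K, V), reference))  # True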