Vision transformer


A vision transformer is a transformer designed for computer vision. A ViT decomposes an input image into a series of patches, serializes each patch into a vector, and maps it to a smaller dimension with a single matrix multiplication. These vector embeddings are then processed by a transformer encoder as if they were token embeddings.
ViTs were designed as alternatives to convolutional neural networks in computer vision applications. They have different inductive biases, training stability, and data efficiency. Compared to CNNs, ViTs are less data efficient, but have higher capacity. Some of the largest modern computer vision models are ViTs, such as one with 22B parameters.
Subsequent to its publication, many variants were proposed, including hybrid architectures that combine features of both ViTs and CNNs. ViTs have found application in image recognition, image segmentation, weather prediction, and autonomous driving.

History

Transformers were introduced in Attention Is All You Need, and have found widespread use in natural language processing. A 2019 paper applied ideas from the Transformer to computer vision. Specifically, they started with a ResNet, a standard convolutional neural network used for computer vision, and replaced all convolutional kernels by the self-attention mechanism found in a Transformer. It resulted in superior performance. However, it is not a Vision Transformer.
In 2020, an encoder-only Transformer was adapted for computer vision, yielding the ViT, which reached state of the art in image classification, overcoming the previous dominance of CNNs. The masked autoencoder extended ViT to work with unsupervised training. The vision transformer and the masked autoencoder, in turn, stimulated new developments in convolutional neural networks.
Subsequently, there was cross-fertilization between the previous CNN approach and the ViT approach.
In 2021, some important variants of the Vision Transformer were proposed. These variants are mainly intended to be more efficient, more accurate, or better suited to a specific domain. Two studies improved the efficiency and robustness of ViT by adding a CNN as a preprocessor. The Swin Transformer achieved state-of-the-art results on some object detection datasets such as COCO, by using a convolution-like sliding-window attention mechanism and the pyramidal feature hierarchy of classical computer vision.

Overview

The basic architecture, used by the original 2020 paper, is as follows. In summary, it is a BERT-like encoder-only Transformer.
The input image is of type $\mathbb{R}^{H \times W \times C}$, where $H, W, C$ are height, width, and channel. It is then split into square-shaped patches of type $\mathbb{R}^{P \times P \times C}$.
Each patch is pushed through a linear operator to obtain a vector (the patch embedding). The position of the patch is also transformed into a vector by "position encoding". The two vectors are added, then pushed through several Transformer encoder layers.
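A minimal sketch of this patching-and-projection step, assuming a 224×224 RGB input, 16×16 patches, and a 768-dimensional embedding (illustrative values, not prescribed by the text):

```python
# Sketch of ViT patch embedding: flatten each patch and map it with one matrix
# multiplication, then add a learned position encoding. Hypothetical sizes.
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_channels=3, dim=768):
        super().__init__()
        self.patch_size = patch_size
        num_patches = (img_size // patch_size) ** 2
        # Single linear map from a flattened patch to a dim-dimensional vector.
        self.proj = nn.Linear(patch_size * patch_size * in_channels, dim)
        # Learned position encodings, one per patch.
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))

    def forward(self, x):                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.patch_size
        # Split into non-overlapping p x p patches and flatten each one.
        x = x.unfold(2, p, p).unfold(3, p, p)  # (B, C, H/p, W/p, p, p)
        x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
        return self.proj(x) + self.pos         # (B, num_patches, dim)

tokens = PatchEmbedding()(torch.randn(1, 3, 224, 224))  # -> (1, 196, 768)
```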
The attention mechanism in a ViT repeatedly transforms representation vectors of image patches, incorporating more and more semantic relations between image patches in an image. This is analogous to how in natural language processing, as representation vectors flow through a transformer, they incorporate more and more semantic relations between words, from syntax to semantics.
The above architecture turns an image into a sequence of vector representations. To use these for downstream applications, an additional head needs to be trained to interpret them.
For example, to use it for classification, one can add a shallow MLP on top of it that outputs a probability distribution over classes. The original paper uses a linear-GeLU-linear-softmax network.
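A sketch of such a classification head, assuming a 768-dimensional encoder output, a 3072-unit hidden layer, and 1000 classes (all illustrative choices):

```python
# Linear-GELU-linear-softmax head mapping one encoder output vector to a
# probability distribution over classes; widths and class count are assumptions.
import torch
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(768, 3072),   # first linear layer
    nn.GELU(),
    nn.Linear(3072, 1000),  # second linear layer -> class logits
    nn.Softmax(dim=-1),     # probability distribution over classes
)

cls_vector = torch.randn(1, 768)   # e.g. the encoder output used for classification
probs = head(cls_vector)           # (1, 1000), rows sum to 1
```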

Variants

Original ViT

The original ViT was an encoder-only Transformer trained in a supervised fashion to predict the image label from the patches of the image. As in the case of BERT, it uses a special [CLS] token on the input side, and the corresponding output vector is used as the only input of the final output MLP head. The special token is an architectural hack that allows the model to compress all information relevant for predicting the image label into one vector.
Transformers found their initial applications in natural language processing tasks, as demonstrated by language models such as BERT and GPT-3. By contrast, the typical image processing system uses a convolutional neural network. Well-known projects include Xception, ResNet, EfficientNet, DenseNet, and Inception.
Transformers measure the relationships between pairs of input tokens, termed attention. The cost is quadratic in the number of tokens. For images, the basic unit of analysis is the pixel. However, computing relationships for every pixel pair in a typical image is prohibitive in terms of memory and computation. Instead, ViT computes relationships among pixels in various small sections of the image, at a drastically reduced cost. The sections are placed in a sequence; each section is flattened and multiplied by a learnable embedding matrix, and the result, together with the position embedding, is fed to the transformer.
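A back-of-the-envelope comparison illustrates the savings, assuming the commonly used 224×224 input and 16×16 patches:

```python
# Attention cost at pixel level vs. patch level (illustrative sizes).
H = W = 224
P = 16

pixel_tokens = H * W                  # 50,176 tokens if every pixel were a token
patch_tokens = (H // P) * (W // P)    # 196 tokens after 16x16 patching

# Self-attention cost grows with the square of the token count.
print(pixel_tokens ** 2)              # ~2.5e9 pairwise interactions
print(patch_tokens ** 2)              # 38,416 pairwise interactions
```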

Architectural improvements

Pooling

After the ViT processes an image, it produces some embedding vectors. These must be converted to a single class probability prediction by some kind of network. In the original ViT and Masked Autoencoder, they used a dummy [CLS] token, in emulation of the BERT language model. The output at [CLS] is the classification token, which is then processed by a LayerNorm-feedforward-softmax module into a probability distribution.
Global average pooling does not use the dummy token, but simply takes the average of all output tokens as the classification token. It was mentioned in the original ViT as being equally good.
Multihead attention pooling (MAP) applies a multiheaded attention block to pooling. Specifically, it takes as input a list of vectors $x_1, x_2, \dots, x_n$, which might be thought of as the output vectors of a layer of a ViT. The output from MAP is $\mathrm{MultiheadedAttention}(Q, V, V)$, where $Q$ is a trainable query vector and $V$ is the matrix with rows $x_1, x_2, \dots, x_n$. This was first proposed in the Set Transformer architecture.
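A minimal sketch of MAP, assuming a 768-dimensional embedding and 12 heads (illustrative values); the trainable query plays the role of $Q$ above:

```python
# Multihead attention pooling: a single trainable query attends over all
# output tokens of the ViT and returns one pooled vector. Sizes are assumptions.
import torch
import torch.nn as nn

class MAPHead(nn.Module):
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)  # trainable query
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):                       # tokens: (B, N, dim)
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)     # attend over all tokens
        return pooled.squeeze(1)                     # (B, dim)

pooled = MAPHead()(torch.randn(2, 196, 768))         # -> (2, 768)
```

Global average pooling, by contrast, is simply the mean over the token dimension, e.g. `tokens.mean(dim=1)`.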
Later papers demonstrated that GAP and MAP both perform better than BERT-like pooling. A variant of MAP was proposed as class attention, which applies MAP, then feedforward, then MAP again.
Re-attention was proposed to allow training deep ViT. It changes the multiheaded attention module.

Masked Autoencoder

The Masked Autoencoder took inspiration from denoising autoencoders and context encoders. It has two ViTs put end-to-end. The first one takes in image patches with positional encoding, and outputs vectors representing each patch. The second one takes in vectors with positional encoding and outputs image patches again.
Training
During training, input images are split along a designated number of lines on each axis, producing image patches. A certain percentage of patches are selected to be masked out by mask tokens, while all others are retained. The network is tasked with reconstructing the image from the remaining unmasked patches. Mask tokens in the original implementation are learnable vectors. A linear projection with positional embeddings is applied to the unmasked patches. Experiments varying the mask ratio on networks trained on the ImageNet-1K dataset found that a 75% mask ratio achieved high performance on both finetuning and linear probing of the encoder's learned representations. The MAE processes only unmasked patches during training, increasing the efficiency of data processing in the encoder and lowering the memory usage of the Transformer.
A less computationally intensive ViT is used for the decoder in the original implementation of the MAE. Masked patches are added back to the output of the encoder block as mask tokens, and both are fed into the decoder. A reconstruction loss is computed for the masked patches to assess network performance.
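A simplified sketch of the random-masking step with the 75% ratio mentioned above; the tensor shapes and the surrounding encoder/decoder are stand-in assumptions rather than the reference implementation:

```python
# MAE-style random masking: keep a random 25% of patch tokens for the encoder,
# and record which positions were masked so the loss can target them.
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens; return kept tokens and the mask."""
    B, N, D = tokens.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                 # random score per patch
    ids_shuffle = noise.argsort(dim=1)       # random permutation of patch indices
    ids_keep = ids_shuffle[:, :num_keep]
    kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N)
    mask.scatter_(1, ids_keep, 0.0)          # 0 = kept, 1 = masked
    return kept, mask

tokens = torch.randn(2, 196, 768)            # patch embeddings with position info
kept, mask = random_masking(tokens)          # kept: (2, 49, 768)
# Only `kept` goes through the encoder; mask tokens are appended before the
# decoder, and the reconstruction loss is computed on the masked positions only.
```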
Prediction
In prediction, the decoder architecture is discarded entirely. The input image is split into patches by the same algorithm as in training, but no patches are masked out. A linear projection with a positional embedding is applied to each patch, and the resulting embedding vector representations of each patch are fed to the encoder.
Uses and Derivatives
Several derivatives of the original MAE have been explored. The MAE has been applied for self-supervised pretraining in medical contexts, including chest X-ray interpretation, and derivatives have been developed to better serve as pretraining in such settings:
  • Medically Supervised MAE
  • Gray Level Co-occurrence Matrix MAE: GLCM-MAE uses the gray level co-occurrence matrix (GLCM) to extract and preserve texture information from images. It addresses an issue in which a classic MAE over-smooths images, causing a loss of granular detail that may be important in medical contexts. GLCM-MAE achieves state-of-the-art performance on the identification of gallbladder cancer, breast cancer imaged from ultrasound, pneumonia imaged from X-rays, and COVID-19 imaged from computed tomography as of July 2025.
  • Region-aware MAE (R-MAE): R-MAE replaces the patch-generating step of the original MAE with an algorithm that assigns individual pixels to regions of interest in an image, which are then masked out together. The region-encoding architecture is standalone, but can be combined with the MAE for region reconstruction.
  • Siamese MAEs
A similar architecture was BEiT, published concurrently.

DINO

Like the Masked Autoencoder, the DINO method is a way to train a ViT by self-supervision. DINO is a form of teacher-student self-distillation. In DINO, the student is the model itself, and the teacher is an exponential average of the student's past states. The method is similar to previous works like momentum contrast and bootstrap your own latent.
The loss function used in DINO is the cross-entropy loss between the output of the teacher network and the output of the student network. The teacher network is an exponentially decaying average of the student network's past parameters: $\theta'_t = \lambda \theta'_{t-1} + (1-\lambda)\theta_t$. The inputs to the networks are two different crops of the same image, represented as $T(x)$ and $T'(x)$, where $x$ is the original image. The loss function is written as $L(f_{\theta'}(T(x)), f_{\theta}(T'(x)))$.
One issue is that the network can "collapse" by always outputting the same value, regardless of the input. To prevent this collapse, DINO employs two strategies (a code sketch of both, together with the teacher update, follows the list below):
  • Sharpening: The teacher network's output is sharpened using a softmax function with a lower temperature. This makes the teacher more "confident" in its predictions, forcing the student to learn more meaningful representations to match the teacher's sharpened output.
  • Centering: The teacher network's output is centered by averaging it with its previous outputs. This prevents the teacher from becoming biased towards any particular output value, encouraging the student to learn a more diverse set of features.
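A schematic sketch of these ingredients, assuming the student and teacher are identically shaped networks producing logits, and with illustrative values for the momentum and the two temperatures:

```python
# Teacher EMA update plus the centered, sharpened cross-entropy used in
# DINO-style self-distillation; all hyperparameter values are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(student, teacher, momentum=0.996):
    # Teacher parameters are an exponential moving average of the student's.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1 - momentum)

def dino_loss(student_out, teacher_out, center, tau_s=0.1, tau_t=0.04):
    # Sharpening: the teacher uses a lower temperature than the student.
    # Centering: a running `center` is subtracted from the teacher logits.
    t = F.softmax((teacher_out - center) / tau_t, dim=-1).detach()
    log_s = F.log_softmax(student_out / tau_s, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()   # cross-entropy teacher -> student

# Typical use: loss = dino_loss(student(T1(x)), teacher(T2(x)), center);
# call update_teacher(student, teacher) after each optimizer step, and keep
# `center` as a running mean of the teacher's outputs.
```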
In January 2024, Meta AI Research released an updated version called DINOv2 with improvements in architecture, loss function, and optimization technique. It was trained on a larger and more diverse dataset. The features learned by DINOv2 were more transferable, meaning it had better performance in downstream tasks.
In August 2025, Meta AI Research released DINOv3, an update to DINOv2. It introduced image-text alignment like CLIP. It scaled up the model to 7B parameters and the training dataset to 1.7B images. Architecturally, it introduced two improvements: Gram anchoring and axial RoPE with jittering. Gram anchoring applies teacher-student self-distillation for the Gram matrix between the feature vectors of the patches of an image. It avoids the previously observed problem of degradation of dense feature maps: While performance on global tasks continued to improve, performance on dense tasks would peak early and then decline, with feature maps becoming noisy. Axial RoPE makes the model more robust to varying image resolutions, scales, and aspect ratios.