Vision Transformer | A Paradigm Shift in Computer Vision

Wed Dec 20 2023

Convolutional neural networks (CNNs) have been the dominant approach for computer vision tasks. However, CNNs have some inherent limitations—they impose strong inductive biases and struggle to model long-range dependencies in image data efficiently. Recently, Vision Transformers (ViT) have emerged as an alternative to CNNs, showing promising results on image tasks. This article provides a comprehensive overview of Vision Transformers, delving into their architecture, training procedures, strengths, limitations, improvements, applications, and future research directions.


What is a Vision Transformer?

Vision Transformers are revolutionizing image recognition tasks such as object detection and deep learning image segmentation, offering remarkable versatility.

At the core of a ViT is a stack of self-attention layers that relate representations across all spatial regions of an input image, modeling global context more effectively than hierarchical CNNs. A ViT splits the input image into small patches, which are embedded into tokens much like words in a sentence. These token sequences are fed into transformer encoder blocks, which model relationships among them through multi-headed self-attention. A classification head then uses the learned global image representation to predict the output category. Without any convolution operations, ViTs achieve state-of-the-art accuracy across vision benchmarks, presenting an exciting alternative to established CNNs.
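
To make the patch-tokenization step concrete, here is a minimal PyTorch sketch that cuts an image into non-overlapping patches and projects each one into a token embedding; the patch size, embedding dimension, and tensor shapes are illustrative choices, not those of any particular ViT release.

```python
import torch

# A dummy batch of two 224x224 RGB images
imgs = torch.randn(2, 3, 224, 224)
patch = 16

# Cut each image into 14x14 = 196 non-overlapping 16x16 patches ("visual words")
patches = imgs.unfold(2, patch, patch).unfold(3, patch, patch)                  # (2, 3, 14, 14, 16, 16)
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(2, -1, 3 * patch * patch)  # (2, 196, 768)

# A shared linear projection turns each flattened patch into a token embedding
proj = torch.nn.Linear(3 * patch * patch, 768)
tokens = proj(patches)
print(tokens.shape)  # torch.Size([2, 196, 768]) -- a sequence of patch tokens per image
```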

With active research ongoing into optimizations and hybrids, vision transformers display immense future potential at the intersection of computer vision and NLP.
Vision Transformer Architecture

The overall architecture of a Vision Transformer closely follows the original Transformer encoder from NLP. The input image is split into small, non-overlapping patches, which act as "words" or "tokens". A patch embedding layer maps these patches into a sequence of vector representations, and positional encodings are added to preserve information about the spatial ordering of the patches. The encoded patch sequence then passes through a series of transformer encoder blocks, each consisting of a multi-headed self-attention layer and an MLP layer. Self-attention allows every input element to interact with every other, enabling the modeling of global context. The transformer encoder outputs are aggregated by averaging and fed into a classifier head (usually just a linear layer) that predicts the image label. The architecture contains no convolutional layers or operations at all, so Vision Transformers completely eschew the inductive biases inherent to CNNs.
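
The following is a minimal, self-contained sketch of the pipeline just described, built on PyTorch's generic nn.TransformerEncoder rather than any official ViT implementation; the depth, width, and other hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Patch embedding -> positional encoding -> transformer encoder -> mean pooling -> linear head."""
    def __init__(self, img_size=224, patch_size=16, embed_dim=192,
                 depth=6, num_heads=3, num_classes=1000):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        # A strided convolution with kernel == stride == patch_size is equivalent to
        # cutting the image into patches and applying a shared linear projection.
        self.patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)          # pre-norm blocks, as in ViT
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D) patch tokens
        x = x + self.pos_embed                               # add learned positional encodings
        x = self.encoder(x)                                  # global self-attention across all patches
        x = x.mean(dim=1)                                    # aggregate by averaging token representations
        return self.head(x)                                  # class logits

logits = MiniViT()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 1000])
```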


Training

Like most deep neural networks, Vision Transformers require extensive data and compute resources for training. They are commonly pre-trained on large image datasets like ImageNet-22k or even larger proprietary datasets from industry labs, and the model is then fine-tuned on downstream computer vision tasks. Regularization methods like stochastic depth help prevent overfitting given the high model capacity, and extensive data augmentation (random crops, color distortions, and so on) is also used. Because they encode fewer hand-designed assumptions, Vision Transformers learn representations more freely from the data itself; the flip side is that, without large-scale pre-training, they depend heavily on these regularization and augmentation recipes to train well.
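
As a rough illustration of such a fine-tuning setup, the sketch below assumes the timm library and a placeholder downstream dataset; the dataset path, hyperparameters, and the use of drop_path_rate for stochastic depth are assumptions for the sake of example, not a prescribed recipe.

```python
import torch, timm
from torch import nn, optim
from torchvision import datasets, transforms

# Heavy augmentation, typical of ViT training recipes
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),       # random crops
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),   # color distortions
    transforms.ToTensor(),                   # normalization omitted for brevity
])

# Load an ImageNet-pre-trained ViT and replace the head for a 10-class downstream task;
# drop_path_rate enables stochastic depth regularization.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=10, drop_path_rate=0.1)

dataset = datasets.ImageFolder("path/to/downstream_data", transform=train_tf)  # placeholder path
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=True)
optimizer = optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                # one fine-tuning pass over the data
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```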

Advantages Over CNNs

The standout quality of Vision Transformers is their use of self-attention layers in place of convolutional operations. This provides several advantages:

  1. By attending to all input elements, the model can learn relationships across distant spatial regions. This implicitly models long-range dependencies without the information-flow bottlenecks present in hierarchical CNN models.

  2. The stack of transformer layers brings increased depth, which enhances representational ability. Without any hand-designed components, the model has little built-in inductive bias, allowing it to learn directly from data.

  3. Because patches are treated simply as tokens, Vision Transformers are flexible about input resolution: a model can be fine-tuned at higher resolutions than it was pre-trained at by interpolating its positional embeddings, with no architectural changes required.

  4. With the right training recipe, transformers can be trained competitively without enormous proprietary datasets: strong augmentation and regularization, as in the Data-efficient Image Transformers discussed below, compensate for the weaker built-in biases of the simple, uniform architecture.

  5. Better robustness to occlusion, clutter, and pixelation, again owing to the global context learned by self-attention over the full image area.

  6. Increased interpretability: attention heatmaps can identify the image regions the model attends to most when recognizing a concept (see the sketch after this list).
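
To illustrate point 6, the toy sketch below shows how attention weights over patch tokens can be reshaped into a spatial heatmap; the projections here are random stand-ins, so in practice the attention matrix would come from the layers of a trained model.

```python
import torch
import torch.nn.functional as F

# Shapes assume a 224x224 image with 16x16 patches -> a 14x14 grid of 196 tokens.
num_patches, grid, dim = 196, 14, 768
tokens = torch.randn(1, num_patches, dim)        # patch token embeddings from some ViT layer

q = tokens @ torch.randn(dim, dim)               # stand-in query/key projections
k = tokens @ torch.randn(dim, dim)
attn = F.softmax(q @ k.transpose(1, 2) / dim ** 0.5, dim=-1)   # (1, 196, 196) attention matrix

# Average how much attention each patch receives across all query tokens,
# then reshape to the 14x14 patch grid to obtain an interpretable heatmap.
heatmap = attn.mean(dim=1).reshape(1, grid, grid)
print(heatmap.shape)   # torch.Size([1, 14, 14]); upsample to 224x224 to overlay on the image
```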


Limitations

While Vision Transformers open up exciting opportunities, they also come with some limitations:

  1. The computational overhead of self-attention grows quadratically with the number of input tokens, and hence rapidly with image size, becoming prohibitively expensive at high resolutions unless efficient approximations are used; convolutional complexity grows only linearly (see the rough cost comparison after this list).

  2. They still underperform CNNs when training data is very limited. Without sufficient data, the attention model tends to overfit. Pre-training on large external datasets becomes imperative.  

  3. Stochastic depth and extensive augmentation are needed to prevent overfitting, because self-attention learns representations over images in an unconstrained, free-form way.

  4. Image-specific structural inductive biases still help CNN performance, especially benefiting tasks relying on local texture cues. Global modeling with transformers can miss nuanced textural knowledge.
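
To put the quadratic scaling from point 1 in concrete terms, here is a back-of-the-envelope calculation, assuming 16x16 patches and a per-layer self-attention cost proportional to N^2 * d.

```python
def attention_cost(img_size, patch=16, dim=768):
    """Return the token count and a rough per-layer self-attention cost (~N^2 * d multiply-adds)."""
    n = (img_size // patch) ** 2          # number of patch tokens
    return n, n * n * dim

for size in (224, 384, 1024):
    n, cost = attention_cost(size)
    print(f"{size}x{size}: {n:5d} tokens, ~{cost:.2e} ops per self-attention layer")

# 224x224:     196 tokens, ~2.95e+07 ops
# 384x384:     576 tokens, ~2.55e+08 ops
# 1024x1024:  4096 tokens, ~1.29e+10 ops
# Doubling the resolution quadruples the token count and multiplies attention cost by ~16x.
```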

Variants and Improvements

Many Vision Transformer variants now exist that improve efficiency, reduce complexity, and enhance performance:

  1. Hybrid models place a convolutional stem or backbone in front of the transformer to get the best of both worlds: some translation-equivariance from convolutions combined with the global modeling capacity of transformers.

  2. Efficient attention approximations like linear attention, axial attention, and sparse attention circumvent the quadratic scaling issue. 

  3. Architectural innovations like shifted-window approaches for local self-attention reduce memory and compute requirements at high resolutions (a minimal sketch of window partitioning follows this list).

  4. Advanced pre-training methods such as masked image modeling and contrastive self-supervision provide useful regularization and combat overfitting.

  5. Architectural modifications like injecting convolutional token mixers increase connectivity across patches, leading to meaningful local interactions.

  6. Smaller, compute-efficient transformer variants retain most of the accuracy gains, as shown by models like Data-efficient Image Transformers (DeiT).
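
Below is a minimal sketch of the window-partitioning idea behind shifted-window approaches such as Swin Transformer; the grid size, window size, and channel width are illustrative, and the cyclic shifting and relative position biases of the full method are omitted.

```python
import torch
import torch.nn as nn

def window_partition(x, w):
    """Split a (B, H, W, C) token grid into non-overlapping w x w windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // w, w, W // w, w, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)   # (B * num_windows, w*w, C)

# Attention is computed only inside each window, so the cost scales with the number
# of windows rather than quadratically with the full token count.
tokens = torch.randn(2, 56, 56, 96)          # e.g. a high-resolution 56x56 patch grid
windows = window_partition(tokens, w=7)      # (2 * 64, 49, 96)
attn = nn.MultiheadAttention(embed_dim=96, num_heads=3, batch_first=True)
out, _ = attn(windows, windows, windows)     # local self-attention within each window
print(out.shape)                             # torch.Size([128, 49, 96])
```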

Applications of Vision Transformers

Owing to their strong performance and increasingly practical training recipes, Vision Transformers have quickly become pervasive across computer vision:

Image Classification

ViT matches or exceeds state-of-the-art CNN accuracy on ImageNet when pre-trained at scale, requires substantially less compute to train than comparable CNNs, and generalizes well to other datasets.

Object Detection

Transformer-based detectors like DETR cast detection as set prediction, removing hand-designed components such as anchor generation and NMS post-processing and allowing end-to-end learning. This modernizes object detection pipelines.

Semantic Segmentation

SETR treats semantic segmentation as a sequence-to-sequence prediction problem, using a pure transformer encoder to capture global context and achieving excellent performance on segmentation benchmarks.

Image Generation

VQGAN showed that transformers operating on learned discrete image codes can produce high-resolution, diverse, and realistic images, showcasing their modeling capacity.

Video Recognition

Space-time transformer variants apply self-attention along both spatial and temporal dimensions, leading to strong video understanding performance.


Future Research

While Vision Transformers have caused a paradigm shift for computer vision in recent years, there remain several important open questions and opportunities for future research:

  • One major research direction is understanding the ideal blend of convolutional and self-attention principles to maximize vision model performance. Current architectures tend to either be pure transformers or CNN-transformer hybrids with simple concatenation. However, more complex mixes integrating the two methodologies at multiple levels of the architecture could yield benefits. Additional theoretical analysis can shed light on the inherent efficiencies and limitations of different connectivity patterns used by convolutions vs self-attention. Such studies can guide designs for next-generation hybrids optimized for accuracy, efficiency, and scalability needs.

  • Another promising area is pre-training ever-larger Vision Transformer models on massive, continually updated image datasets over time. This could replicate the success of gigantic language models in NLP which show remarkable generalization, transfer learning abilities, and emergent intelligence. Competing in model scale and dataset size could unlock new levels of performance across challenging computer vision tasks beyond the capabilities of current models. However, efficiency and carbon impact would need special attention.

  • Specialized transformer architectures tailored for videos, 3D point cloud data, and other non-image modalities offer much scope for research too. Early works have validated transformers for spatiotemporal modeling by extending self-attention across space and time, but further innovations tailored to these data types, integrating domain constraints and equivariance properties, could realize greater benefits.

Conclusion

In just a few years, Vision Transformers have rapidly advanced the state of the art across multiple computer vision domains. Their reliance on simple, general-purpose attention mechanisms rather than hand-designed convolutional biases points towards Transformer-based architectures becoming a foundational pillar of visual recognition alongside CNNs in the years ahead. However, further innovations in transformer architectures, attention mechanisms, model training, and applications will be needed to fully cement their position as a ubiquitous tool for computer vision.
