
Vision Transformers (ViT) in Image Recognition


Convolutional neural networks (CNNs) have driven much of the revolutionary progress in image recognition over the last decade. The field has since been transformed by Vision Transformers (ViTs), which apply the principles of the transformer architecture to image data. ViTs have achieved outstanding results across a range of image recognition tasks, offering a new perspective on how visual information is processed. This article covers the architecture, inner workings, advantages, training methods, applications, challenges, and future directions of Vision Transformers in image recognition.

Understanding the Architecture of Vision Transformers

The core idea behind Vision Transformers (ViTs) is to treat images as sequences, similar to how words are treated in natural language processing (NLP). This approach allows transformer architectures to be applied to image recognition tasks, fundamentally changing how visual data is processed. The architecture comprises several essential components:

1. Image Patching

Image Patching is the initial step in the Vision Transformer process. This involves dividing images into smaller patches of a predetermined size. For example, a 224x224 pixel image can be segmented into 16x16 pixel patches, resulting in 196 patches. Each patch is then flattened into a vector, enabling the model to work with these smaller, manageable pieces of the image.
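
To make this concrete, here is a minimal sketch of the patching step using plain PyTorch tensor operations. Note that real ViT implementations (including the timm model used later in this article) typically perform patching and projection with a strided convolution; the reshape below is purely illustrative.

Python
import torch

# A dummy batch of 8 RGB images of size 224x224 (random values, for illustration only)
images = torch.randn(8, 3, 224, 224)

patch_size = 16
# Split the height and width dimensions into a 14x14 grid of 16x16 patches
patches = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
# patches shape: (8, 3, 14, 14, 16, 16) -> 14 * 14 = 196 patches per image

# Flatten each 3x16x16 patch into a 768-dimensional vector
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(8, 14 * 14, 3 * patch_size * patch_size)
print(patches.shape)  # torch.Size([8, 196, 768])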

2. Positional Encoding

To maintain the positional information of the patches, positional encodings are added to the patch embeddings. This crucial step ensures that the model understands where each patch is located in the original image, allowing it to capture spatial relationships effectively.
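
A minimal sketch of this step is shown below, assuming a ViT-Base-style setup with 196 patches and 768-dimensional embeddings; the variable names are illustrative rather than taken from any particular library.

Python
import torch
import torch.nn as nn

num_patches = 196   # 14 x 14 patches for a 224x224 image split into 16x16 patches
embed_dim = 768     # embedding dimension used by ViT-Base

# Linear projection of each flattened 16x16x3 patch into the embedding space
patch_embed = nn.Linear(16 * 16 * 3, embed_dim)

# One learnable positional embedding per patch position
pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))

flattened_patches = torch.randn(8, num_patches, 16 * 16 * 3)  # dummy batch of flattened patches
patch_embeddings = patch_embed(flattened_patches)             # (8, 196, 768)

# Adding positional information so the encoder knows where each patch came from
encoder_input = patch_embeddings + pos_embed
print(encoder_input.shape)  # torch.Size([8, 196, 768])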

3. Multi-Layer Transformer Encoder

The heart of the Vision Transformer is its multi-layer transformer encoder. This structure consists of:

  • Self-Attention Layers: These layers allow the model to evaluate the relationships between different patches, helping it to understand how they interact with one another.
  • Feed-Forward Layers: These layers apply non-linear transformations to the output of the self-attention mechanism, enhancing the model's ability to capture complex patterns in the data.
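
As a rough sketch, these two building blocks can be stacked with PyTorch's built-in nn.TransformerEncoderLayer. The configuration below assumes the ViT-Base settings (12 layers, 12 attention heads, 768-dimensional embeddings) and is meant as an illustration, not a faithful reproduction of any specific ViT implementation.

Python
import torch
import torch.nn as nn

embed_dim, num_heads, depth = 768, 12, 12  # ViT-Base configuration (assumed)

# Each encoder layer = multi-head self-attention followed by a feed-forward network
encoder_layer = nn.TransformerEncoderLayer(
    d_model=embed_dim,
    nhead=num_heads,
    dim_feedforward=4 * embed_dim,  # ViT uses a 4x expansion in the MLP block
    activation='gelu',
    batch_first=True,
    norm_first=True,                # pre-norm, as in the original ViT
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)

tokens = torch.randn(8, 197, embed_dim)  # 196 patch tokens + 1 CLS token (dummy values)
encoded = encoder(tokens)
print(encoded.shape)  # torch.Size([8, 197, 768])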

4. Classification Head

The classification head is a critical component of ViTs, utilized to generate predictions for image recognition tasks. A special token, often referred to as the classification token (CLS), consolidates information from all patches, producing the final predictions. This aggregation of data ensures that the model leverages insights from the entire image rather than isolated patches.
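
Below is a minimal sketch of this step, assuming the CLS token occupies position 0 of the encoder output and that the head is a simple LayerNorm followed by a Linear layer (a common choice, though implementations vary).

Python
import torch
import torch.nn as nn

embed_dim, num_classes = 768, 10

# Classification head: normalize the CLS representation, then map it to class scores
head = nn.Sequential(nn.LayerNorm(embed_dim), nn.Linear(embed_dim, num_classes))

encoded = torch.randn(8, 197, embed_dim)  # dummy output of the transformer encoder
cls_output = encoded[:, 0]                # the CLS token sits at position 0
logits = head(cls_output)
print(logits.shape)  # torch.Size([8, 10]) -> one score per class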

How Do Vision Transformers Work?

Vision Transformers (ViTs) employ a unique architecture to process images by treating them as sequences of patches. This approach enables the model to leverage the power of transformer designs, particularly through the use of self-attention mechanisms.

Vision Transformers begin by dividing an image into smaller, fixed-size patches. Each patch is then processed individually as part of a sequence, allowing the model to analyze the entire image through its components.

  • The self-attention mechanism is fundamental to how ViTs operate. This mechanism allows each patch to influence the representation of other patches. Specifically, it computes attention scores that determine how much focus each patch should have on every other patch.
  • This ability to weigh the importance of different patches enables Vision Transformers to understand complex connections and interdependencies throughout the entire image. As a result, ViTs can create more comprehensive and nuanced feature representations, capturing intricate patterns that might be missed by traditional convolutional networks.
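
The following sketch spells out single-head scaled dot-product self-attention over patch embeddings. Real ViTs use multi-head attention inside each encoder layer, so this is a simplified illustration of how the attention scores weigh every patch against every other patch.

Python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 768
x = torch.randn(8, 196, embed_dim)   # a batch of patch embeddings (dummy values)

# Learned projections for queries, keys, and values (single head for clarity)
W_q = nn.Linear(embed_dim, embed_dim)
W_k = nn.Linear(embed_dim, embed_dim)
W_v = nn.Linear(embed_dim, embed_dim)

q, k, v = W_q(x), W_k(x), W_v(x)

# Attention scores: how much each patch should attend to every other patch
scores = q @ k.transpose(-2, -1) / (embed_dim ** 0.5)   # (8, 196, 196)
weights = F.softmax(scores, dim=-1)

# Each patch representation becomes a weighted mix of all patch values
attended = weights @ v                                    # (8, 196, 768)
print(weights.shape, attended.shape)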

The training process for Vision Transformers involves adjusting the model's parameters to minimize the prediction error on labeled datasets. This is similar to the training process of other neural network architectures, where:

  1. Loss Function: A loss function is defined to quantify the difference between the predicted outputs and the actual labels.
  2. Backpropagation: The model uses backpropagation to update its weights based on the calculated loss, refining its ability to make accurate predictions.
  3. Optimization: Various optimization algorithms (e.g., Adam, SGD) are employed to enhance the learning process, ensuring that the model converges effectively.
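
To make these three steps concrete before the full CIFAR-10 example below, here is a single optimization step on dummy data; the linear model and random tensors are placeholders, not part of a real ViT pipeline.

Python
import torch
import torch.nn as nn
import torch.optim as optim

# A stand-in model and dummy batch, just to illustrate one optimization step
model = nn.Linear(768, 10)                    # placeholder for a ViT
images = torch.randn(64, 768)                 # dummy "image features"
labels = torch.randint(0, 10, (64,))          # dummy class labels

criterion = nn.CrossEntropyLoss()             # 1. loss function
optimizer = optim.Adam(model.parameters(), lr=3e-4)

optimizer.zero_grad()
loss = criterion(model(images), labels)       # measure the prediction error
loss.backward()                               # 2. backpropagation
optimizer.step()                              # 3. optimization step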

Training Vision Transformers for Image Recognition

Training Vision Transformers demands substantial computational resources and large datasets. We will show how to train a Vision Transformer on the CIFAR-10 dataset, a commonly used benchmark for image classification. CIFAR-10 contains 60,000 32x32 color images divided into 10 classes, with 6,000 images per class.

1. Importing Necessary Libraries

The code imports the required modules: torch for the optimizer and loss function, torchvision for loading the CIFAR-10 dataset and its transforms, and timm for creating the ViT model.

Python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from timm import create_model

2. Data Preparation

Because Vision Transformers expect larger inputs, the CIFAR-10 images (32x32) are resized to 224x224. We also normalize them with ImageNet statistics, since we use a ViT model pre-trained on ImageNet.

Python
# Define Transformations for the dataset
transform = transforms.Compose([
    transforms.Resize((224, 224)),  # Resizing images to 224x224 as ViT expects larger images
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

# Load CIFAR-10 dataset
train_dataset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
test_dataset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)

train_loader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=64, shuffle=False)

3. Defining Model

The timm.create_model function builds a Vision Transformer model (vit_base_patch16_224) with weights pre-trained on ImageNet. num_classes is set to 10 to match the number of classes in CIFAR-10. The device is also selected here so that the model, and later the data, can be moved to a GPU when one is available.

Python
# Select device (GPU if available, otherwise CPU)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Define Vision Transformer Model
model = create_model('vit_base_patch16_224', pretrained=True, num_classes=10)  # Using a pretrained ViT model
model = model.to(device)

# Loss Function and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=3e-4)

4. Training Loop

The training loop processes input images in batches, performs a forward pass, computes the loss, and updates the model weights through backpropagation.

Python
def train_model(model, train_loader, criterion, optimizer, device, epochs=10):
    model.train()
    for epoch in range(epochs):
        running_loss = 0.0
        correct, total = 0, 0
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)

            # Zero the gradients
            optimizer.zero_grad()

            # Forward pass
            outputs = model(images)
            loss = criterion(outputs, labels)

            # Backward pass and optimization
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

        print(f'Epoch [{epoch + 1}/{epochs}], Loss: {running_loss / len(train_loader)}, Accuracy: {100 * correct / total:.2f}%')

Output:

Epoch [1/1], Loss: 1.2247312366962433, Accuracy: 55.62%

5. Evaluation

The trained model is evaluated on the test dataset to measure its classification accuracy.

Python
def evaluate_model(model, test_loader, device):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)
            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    print(f'Accuracy on test data: {100 * correct / total:.2f}%')

# Train and Evaluate the Model
train_model(model, train_loader, criterion, optimizer, device, epochs=1)
evaluate_model(model, test_loader, device)

Output:

Accuracy on test data: 72.06%

Applications of Vision Transformers in Real-World Scenarios

Vision Transformers have been applied across a variety of fields:

  1. Image Classification: ViTs have demonstrated superior performance on standard datasets like ImageNet, establishing their effectiveness for various image classification assignments.
  2. Object Detection: ViTs excel in object detection thanks to their ability to capture global context, which is essential for tasks such as autonomous driving and surveillance.
  3. Semantic Segmentation: ViTs show potential in medical applications when used for pixel-level classification tasks.
  4. Image Generation: Recent studies are investigating the use of ViTs in generative models, opening up new applications in art and design.

Advantages and Disadvantages of Vision Transformers Over CNNs

Advantages of Vision Transformers Over CNNs

Vision Transformers bring multiple benefits compared to conventional CNNs:

  1. Global Context Awareness: In contrast to CNNs that prioritize local features, ViTs are able to understand the global relationships within an image, resulting in better performance for challenging image recognition assignments.
  2. Scalability: Vision Transformers scale well with data, frequently surpassing CNNs when trained on large datasets, as they can make efficient use of additional data and computational resources.
  3. Versatility: ViTs can adapt to different applications due to their flexibility in handling varying input sizes and lack of rigid architectures.
  4. Transfer Learning: ViTs demonstrate proficiency in transfer learning by utilizing knowledge from pre-existing models to achieve high performance on similar tasks with minimal labeled data available.

Challenges in Implementing Vision Transformers

Despite their benefits, Vision Transformers present several challenges in practice:

  1. Data Needs: Vision Transformers typically need a significant amount of labeled data to achieve their best performance, which can be a restricting factor in certain situations.
  2. Computational Cost: The self-attention mechanism is computationally expensive, especially for high-resolution images, which can lead to longer training times.
  3. Risks of Overfitting: ViTs have a higher likelihood of overfitting when they have more parameters, especially if they are trained on smaller datasets. Utilizing regularization techniques is essential in order to reduce this risk.

Future Trends in Vision Transformers

As research advances, several trends are emerging in the field of Vision Transformers:

  1. Hybrid Models: Blending CNNs and ViTs in hybrid models can improve performance across a wide range of tasks by capitalizing on the strengths of both architectures.
  2. Efficient Transformers: Research is focused on creating more efficient transformer models that lower computational costs while maintaining performance.
  3. Domain Adaptation: Ongoing work aims to improve ViTs' ability to adapt to new domains using limited data, expanding their usefulness in a range of real-world situations.
  4. Multimodal Learning: Combining Vision Transformers with other data modalities such as text or audio could lead to more powerful and comprehensive models for challenging tasks.

Conclusion

Vision Transformers are changing the way image recognition is done, challenging the long-standing dominance of convolutional neural networks. Their distinctive architecture, built on self-attention mechanisms, enables them to capture intricate patterns and relationships in images. Despite the remaining obstacles, their advantages and adaptability make Vision Transformers a key technology for the future of computer vision. With ongoing research and development, we can expect further creative applications and improvements in their image recognition capabilities.

