The Vision Transformer (ViT) approaches image recognition as a sequence modeling problem: it splits an image into patches and feeds them to the Transformer, the dominant architecture in language modeling, showing that strong spatial inductive biases are not mandatory. A follow-up study found that providing more spatial cues, i.e., passing the image through a few consecutive convolutions before handing it to the Transformer, helps ViT learn better. Because ViT is built from Transformer blocks, we can readily extract attention maps that explain what the network attends to.

In this project, ViT is evaluated on the CIFAR-100 dataset, with the validation set fixed to the CIFAR-100 test set. Online data augmentations, e.g., RandAugment, CutMix, and MixUp, are applied during training, and the learning rate follows the triangular cyclical policy.
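Below is a minimal sketch of the training setup described above, using TorchVision's `transforms.v2` for RandAugment, CutMix, and MixUp and PyTorch's `CyclicLR` for the triangular policy. The model configuration, batch size, and learning-rate range are illustrative placeholders rather than the project's actual values, and the convolutional stem discussed above is omitted for brevity.

```python
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import CyclicLR
from torchvision import datasets
from torchvision.models.vision_transformer import VisionTransformer
from torchvision.transforms import v2

NUM_CLASSES = 100

# Per-sample augmentation: RandAugment runs on each image before batching.
train_tf = v2.Compose([
    v2.RandAugment(),
    v2.ToImage(),
    v2.ToDtype(torch.float32, scale=True),
])
train_set = datasets.CIFAR100("data", train=True, download=True, transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# Batch-level augmentation: randomly apply either CutMix or MixUp to each batch.
cutmix_or_mixup = v2.RandomChoice([v2.CutMix(num_classes=NUM_CLASSES),
                                   v2.MixUp(num_classes=NUM_CLASSES)])

# Illustrative ViT sized for 32x32 CIFAR-100 inputs (no convolutional stem here).
model = VisionTransformer(image_size=32, patch_size=4, num_layers=6, num_heads=8,
                          hidden_dim=256, mlp_dim=512, num_classes=NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Triangular cyclical learning-rate policy; the LR bounds and step size are made up.
scheduler = CyclicLR(optimizer, base_lr=1e-5, max_lr=1e-3,
                     step_size_up=2000, mode="triangular", cycle_momentum=False)

for images, labels in train_loader:
    images, labels = cutmix_or_mixup(images, labels)  # labels become soft targets
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()  # step per batch so the LR cycles within an epoch
```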
To walk through training, testing, and inference for image classification with ViT, jump to this notebook.
Here are the quantitative results of ViT's performance:
| Test Metric | Score  |
|-------------|--------|
| Loss        | 1.353  |
| Top-1 Acc.  | 64.92% |
| Top-5 Acc.  | 87.29% |
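For reference, here is a minimal sketch of how the loss and top-1/top-5 accuracies in the table can be computed over a test loader; it assumes the model and a standard CIFAR-100 test `DataLoader` are already built, and mirrors common practice rather than the notebook's exact evaluation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate(model, loader, topk=(1, 5)):
    """Compute mean cross-entropy loss and top-k accuracies over a loader."""
    model.eval()
    total, loss_sum = 0, 0.0
    correct = {k: 0 for k in topk}
    for images, labels in loader:
        logits = model(images)
        loss_sum += F.cross_entropy(logits, labels, reduction="sum").item()
        # A prediction is top-k correct if the true class is among the
        # k highest-scoring logits.
        _, pred = logits.topk(max(topk), dim=1)
        hits = pred.eq(labels.unsqueeze(1))
        for k in topk:
            correct[k] += hits[:, :k].any(dim=1).sum().item()
        total += labels.size(0)
    return {"loss": loss_sum / total,
            **{f"top{k}_acc": correct[k] / total for k in topk}}
```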
Accuracy curves of ViT on the CIFAR-100 train and validation sets.
Loss curves of ViT on the CIFAR-100 train and validation sets.
The predictions and their corresponding attention maps are shown in the collated image below.
Several prediction results of ViT and their attention maps.
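As a rough illustration of how such attention maps can be obtained, the sketch below hooks the last encoder block of TorchVision's `VisionTransformer` (as configured in the earlier snippet) and re-runs its self-attention to read out the class token's attention over the patches. The choice of block, the head averaging, and the bilinear upsampling are illustrative choices, not necessarily what the notebook does.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def cls_attention_map(model, images, image_size=32, patch_size=4):
    """Return logits and the class token's attention over patches in the last block."""
    block = model.encoder.layers[-1]  # last EncoderBlock of TorchVision's ViT
    captured = {}

    # Capture the LayerNorm'ed tokens that feed the block's self-attention.
    handle = block.ln_1.register_forward_hook(
        lambda module, args, output: captured.update(tokens=output)
    )
    model.eval()  # disable dropout for a deterministic map
    logits = model(images)
    handle.remove()

    tokens = captured["tokens"]  # (B, 1 + num_patches, hidden_dim)
    # Re-run the attention module, this time asking for the head-averaged weights.
    _, attn = block.self_attention(tokens, tokens, tokens,
                                   need_weights=True, average_attn_weights=True)

    side = image_size // patch_size
    cls_to_patches = attn[:, 0, 1:].reshape(-1, 1, side, side)  # drop the class token
    # Upsample to the input resolution so the map can be overlaid on the image.
    maps = F.interpolate(cls_to_patches, size=(image_size, image_size),
                         mode="bilinear", align_corners=False)
    return logits, maps.squeeze(1)
```

Overlaying each returned map on its input image (e.g., as a semi-transparent heatmap) yields visualizations like the collated image above.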
- An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale
- TorchVision's ViT
- Image classification with Vision Transformer
- Train a Vision Transformer on small datasets
- Learning Multiple Layers of Features from Tiny Images
- The CIFAR-100 dataset
- RandAugment: Practical automated data augmentation with a reduced search space
- RandAugment for Image Classification for Improved Robustness
- CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features
- CutMix data augmentation for image classification
- mixup: Beyond Empirical Risk Minimization
- MixUp augmentation for image classification
- Early Convolutions Help Transformers See Better
- Cyclical Learning Rates for Training Neural Networks
- How to use CutMix and MixUp
- PyTorch Lightning