APViT is a simple and efficient Transformer-based method for facial expression recognition (FER). It builds on TransFER but introduces two attentive pooling (AP) modules that require no learnable parameters. These modules help the model focus on the most discriminative features and ignore less relevant ones. You can read more about our method in our paper.
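As a rough illustration of what a parameter-free attentive pooling step looks like, the sketch below ranks patch tokens by an importance score and keeps only the top-k. The scoring rule (token L2 norm) and the keep ratio are illustrative assumptions, not the exact modules from the paper.

```python
import torch

def attentive_top_k_pooling(tokens: torch.Tensor, scores: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k highest-scoring tokens; no learnable parameters involved.

    tokens: (B, N, C) patch embeddings
    scores: (B, N) per-token importance (e.g. attention mass or feature norm)
    """
    idx = scores.topk(k, dim=1).indices                      # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))  # (B, k, C)
    return tokens.gather(dim=1, index=idx)                   # (B, k, C)

# Toy usage: score 196 ViT-Small tokens by L2 norm, keep roughly the top 60%.
x = torch.randn(4, 196, 384)
pooled = attentive_top_k_pooling(x, x.norm(dim=-1), k=118)
print(pooled.shape)  # torch.Size([4, 118, 384])
```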
- 2023-03-31: Added a notebook demo for inference (see the sketch below for a plain-Python equivalent).
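For reference, a minimal inference sketch, assuming the standard MMClassification (mmcls 0.x) high-level API works with this repo's config and checkpoint; the image path is a placeholder, and the notebook demo remains the authoritative example.

```python
from mmcls.apis import init_model, inference_model

# Assumes the repo's custom modules are importable (PYTHONPATH set to the
# repo root) so MMClassification can build the APViT config.
model = init_model('configs/apvit/RAF.py',
                   'weights/APViT_RAF-3eeecf7d.pth',
                   device='cuda:0')
result = inference_model(model, 'demo/face.jpg')  # placeholder image path
print(result)  # dict with the predicted label and score
```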
This project is based on MMClassification and PaddleClas; please refer to their repos for installation and dataset preparation.
Notably, our method does not rely on the custom CUDA operations in mmcv-full.
The pre-trained weights of IR-50 were downloaded from face.evoLVe, and those of ViT-Small from pytorch-image-models.
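If you need to re-create those initializations yourself, a sketch along these lines may help; the local file names below are assumptions, not paths shipped with this repo.

```python
import timm
import torch

# ViT-Small initialization via timm (pytorch-image-models).
vit = timm.create_model('vit_small_patch16_224', pretrained=True)
torch.save(vit.state_dict(), 'weights/vit_small_p16.pth')  # assumed path

# IR-50 weights come from the face.evoLVe repo as a plain state dict;
# 'weights/ir50_ms1m.pth' is a hypothetical local file name.
ir50_state = torch.load('weights/ir50_ms1m.pth', map_location='cpu')
```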
The PaddlePaddle version of TransFER is included in the Paddle folder.
To train an APViT model with two GPUs, use:
```shell
python -m torch.distributed.launch --nproc_per_node=2 \
    train.py configs/apvit/RAF.py \
    --launcher pytorch
```
To evaluate the model with a given checkpoint, use:
```shell
PYTHONPATH=$(pwd):$PYTHONPATH \
python -m torch.distributed.launch --nproc_per_node=2 \
    tools/test.py configs/apvit/RAF.py \
    weights/APViT_RAF-3eeecf7d.pth \
    --launcher pytorch
```

Replace `weights/APViT_RAF-3eeecf7d.pth` with the path to your own checkpoint. (An inline `# your checkpoint` comment after a trailing backslash would break the line continuation, so the path is noted here instead.)
| Model | RAF-DB Accuracy | Config | Download |
| --- | --- | --- | --- |
| APViT | 91.98% | config | model |
This project is released under the Apache 2.0 license.
If you use APViT or TransFER in your work, please cite our papers:
```bibtex
@article{xue2022vision,
  title={Vision Transformer with Attentive Pooling for Robust Facial Expression Recognition},
  author={Xue, Fanglei and Wang, Qiangchang and Tan, Zichang and Ma, Zhongsong and Guo, Guodong},
  journal={IEEE Transactions on Affective Computing},
  year={2022},
  publisher={IEEE}
}
```
```bibtex
@inproceedings{xue2021transfer,
  title={{TransFER}: Learning Relation-aware Facial Expression Representations with Transformers},
  author={Xue, Fanglei and Wang, Qiangchang and Guo, Guodong},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={3601--3610},
  year={2021}
}
```