Textualized and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild

by Nicolas Richet (1), Soufiane Belharbi (1), Haseeb Aslam (1), Meike Emilie Schadt (3), Manuela González-González (2,3), Gustave Cortal (4,6), Alessandro Lameiras Koerich (1), Marco Pedersoli (1), Alain Finkel (4,5), Simon Bacon (2,3), Eric Granger (1)

(1) LIVIA, Dept. of Systems Engineering, ÉTS, Montreal, Canada
(2) Dept. of Health, Kinesiology & Applied Physiology, Concordia University, Montreal, Canada
(3) Montreal Behavioural Medicine Centre, Montreal, Canada
(4) Université Paris-Saclay, CNRS, ENS Paris-Saclay, LMF, 91190, Gif-sur-Yvette, France
(5) Institut Universitaire de France, France
(6) Université Paris-Saclay, CNRS, LISN, 91400, Orsay, France

arXiv: arxiv.org/abs/2407.12927

Abstract

Systems for multimodal emotion recognition (ER) are commonly trained to extract features from different modalities (e.g., visual, audio, and textual) that are combined to predict individual basic emotions. However, compound emotions often occur in real-world scenarios, and the uncertainty of recognizing such complex emotions over diverse modalities is challenging for feature-based models. As an alternative, emerging multimodal large language models (LLMs) like BERT and LLaMA rely on explicit non-verbal cues that may be translated from different non-textual modalities (e.g., audio and visual) into text. Textualization of modalities augments data with emotional cues to help the LLM encode the interconnections between all modalities in a shared text space. In such text-based models, prior knowledge of ER tasks is leveraged to textualize relevant non-verbal cues such as audio tone from vocal expressions, and action unit intensity from facial expressions. Since pre-trained weights are publicly available for many LLMs, training on large-scale datasets is unnecessary, allowing fine-tuning for downstream tasks such as compound ER (CER). This paper compares the potential of text- and feature-based approaches for compound multimodal ER in videos. Experiments were conducted on the challenging in-the-wild C-EXPR-DB dataset for CER, and contrasted with results on the MELD dataset for basic ER. Our results indicate that multimodal textualization yields lower accuracy than feature-based models on C-EXPR-DB, where text transcripts are captured in the wild; however, higher accuracy can be achieved when the video data comes with rich transcripts. The code for the feature-based approach can be found at github.com/sbelharbi/feature-vs-text-compound-emotion; this repository provides the text-based approach.

This repository contains the code for the text-based approach presented in the paper.

Code: PyTorch 2.2.2, developed for the 7th ABAW challenge.

Citation:

@article{Richet-abaw-24,
  title={Textualized and Feature-based Models for Compound Multimodal Emotion Recognition in the Wild},
  author={Richet, N. and Belharbi, S. and Aslam, H. and Zeeshan, O. and
  Koerich, A. L. and Pedersoli, M. and Bacon, S. and Granger, E.},
  journal={CoRR},
  volume={abs/2407.12927},
  year={2024}
}

Installation of the environment

./setup_env.sh
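
If you prefer to set the environment up manually, the steps are roughly the following (a sketch only: the requirements.txt name is an assumption, and the authoritative dependency list is whatever setup_env.sh installs):

python3 -m venv venv
source venv/bin/activate
pip install torch==2.2.2          # version stated below; choose the build matching your CUDA setup
pip install -r requirements.txt   # assumed file name; see setup_env.sh for the actual dependencies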

Supported modalities:

  • Vision
  • Audio
  • Text

Datasets:

For MELD, download the videos into the corresponding videos folder (for example, ./MELD/train/videos/ for the train split). For C-EXPR-DB, download the videos into ./C-EXPR-DB. The videos then need to be trimmed according to the annotations, which can be done using the pre-processing described in the Feature-based repository. Put the trimmed videos into ./C-EXPR-DB/trimmed_videos.
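
The expected folders can be created up front as sketched below (the MELD split names other than train, e.g. dev and test, are assumptions; check constants.py for the exact folder names):

mkdir -p ./MELD/train/videos ./MELD/dev/videos ./MELD/test/videos   # dev/test names are assumptions
mkdir -p ./C-EXPR-DB/trimmed_videos                                 # raw videos go directly in ./C-EXPR-DB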

Pre-processing and Feature Extraction:

  1. First, the paths in ./constants.py need to be set correctly. FFmpeg must be installed. An access token for the Llama-3 Hugging Face repository and a Hume API key are also needed for the preprocessing (see the authentication note after the commands below).

  2. Feature extraction. At the end of ./preprocessing.py, set the dataset, device, and splits to preprocess. Splits are used only for MELD; since MELD is large, it is recommended to preprocess each split separately.

source venv/bin/activate

python3 preprocessing.py
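
Because the Llama-3 weights on Hugging Face are gated, authenticate once before running the script. Where the Hume API key is read from is defined in the code (check ./constants.py); the export below is only an illustration:

huggingface-cli login          # one-time login for the gated Llama-3 repository
export HF_TOKEN=<your_token>   # non-interactive alternative; recent huggingface_hub versions read HF_TOKEN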

Training:

Training arguments are defined in parser.py, and the training can then be launched with:

source venv/bin/activate

bash run_train.sh
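
run_train.sh presumably wraps the Python entry point that consumes the arguments from parser.py; to change hyperparameters, inspect the wrapper first (the entry-point and flag names below are purely illustrative, not the actual ones defined in parser.py):

cat run_train.sh   # shows which entry point and default arguments are used
# then edit the wrapper, or call the entry point directly with flags from parser.py,
# e.g. (illustrative names only): python <entry_point>.py --dataset MELD --modality text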