The fer2013 dataset is used for training the model.
It is recommended to work in a virtual environment, which can be created and activated with Anaconda using the following commands:

```shell
conda create -n emotion_detection python=3.4
source activate emotion_detection
conda install scikit-learn
conda install -c menpo opencv3=3.1.0
pip install --upgrade keras
conda install pandas
conda install h5py
pip install SpeechRecognition
sudo apt-get install portaudio19-dev
pip install pyaudio
```
Create the file `~/.keras/keras.json` with the following content:

```json
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}
```
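As a convenience, the same configuration file can be written and checked programmatically. The sketch below is not part of the repository; the helper name is hypothetical, and it simply reproduces the JSON content above using the stdlib:

```python
import json
import os

def write_keras_config(config_dir):
    """Write a keras.json selecting the Theano backend with 'th' dim ordering."""
    keras_config = {
        "image_dim_ordering": "th",
        "epsilon": 1e-07,
        "floatx": "float32",
        "backend": "theano",
    }
    os.makedirs(config_dir, exist_ok=True)
    path = os.path.join(config_dir, "keras.json")
    with open(path, "w") as f:
        json.dump(keras_config, f, indent=4)
    return path

if __name__ == "__main__":
    write_keras_config(os.path.expanduser("~/.keras"))
```

Keras reads this file on import, so it must exist before the first `import keras` in a session.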
Clone the repository:

```shell
git clone https://github.com/SergioML9/emotion_recogniser.git
```

Run the `face_analyser/run.py` script and the emotion detection process will start, showing the output in the terminal. Press `q` to stop the detection.
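The `q` stop condition typically follows OpenCV's standard pattern of polling `cv2.waitKey` on every frame. A minimal sketch of that exit check (the helper name is hypothetical, not part of the repository):

```python
def should_stop(key_code):
    """Return True when the 'q' key was pressed.

    cv2.waitKey returns -1 when no key is pressed within its timeout;
    the low 8 bits hold the actual key code, since on some platforms
    the higher bits carry modifier-key state.
    """
    return key_code != -1 and (key_code & 0xFF) == ord("q")
```

In the capture loop this would be called as `if should_stop(cv2.waitKey(1)): break`.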
Run the `face_analyser/gui.py` script and a simple GUI will be shown. The GUI has five buttons:

- Start emotion detection: starts the emotion recognition, printing the output in the terminal.
- Train model: trains the model with the data specified in `configuration/data_settings.py`.
- Evaluate model: evaluates the model with the data specified in `configuration/data_settings.py`.
- Get data from dataset: converts the data from the csv dataset to npy files.
- Exit: closes the application.
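The csv-to-npy conversion can be sketched as below. This is an illustrative reimplementation, not the repository's actual code, and it assumes the standard fer2013 layout: an `emotion` label column and a `pixels` column holding 2304 space-separated grey values per 48x48 image.

```python
import csv

import numpy as np

def fer2013_to_npy(csv_path, out_prefix):
    """Convert fer2013 csv rows into image/label arrays saved as .npy files."""
    images, labels = [], []
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            # "0 12 255 ..." -> 48x48 uint8 greyscale image
            pixels = np.array(row["pixels"].split(), dtype=np.uint8)
            images.append(pixels.reshape(48, 48))
            labels.append(int(row["emotion"]))
    np.save(out_prefix + "_images.npy", np.array(images))
    np.save(out_prefix + "_labels.npy", np.array(labels))
```

Saving to npy means the training and evaluation steps can memory-map the arrays with `np.load` instead of re-parsing the csv each run.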
Run the `speech_analyser/run.py` script and the emotion detection process will start, showing the output in the terminal.