Code for the paper: Detection of Morphed Face, Body, Audio signals using Deep Neural Networks
- Three separate neural networks detect deformities/irregularities in media based on a person's face, audio and body language.
- The face deepfake model uses a Maximum-Margin Object Detector (MMOD) to extract the face, followed by a Temporal Neural Network for classification.
- Input audio from the media is converted into a spectrogram using the librosa library and then fed to a model consisting of ResNet50V2 followed by a Temporal Convolutional Network, which predicts whether the audio is a deepfake.
- For body language, the person's entire body is detected and cropped using YOLOv3, followed by a Temporal Neural Network for classification.
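The audio branch first turns the waveform into a spectrogram. The project uses librosa for this step; the NumPy-only sketch below illustrates the same idea (windowed STFT, magnitude, decibel scaling) without that dependency, so the function name and parameters here are illustrative assumptions, not the repository's actual code.

```python
# Illustrative sketch of the spectrogram preprocessing step (the repo
# itself uses librosa for this). Pure NumPy: windowed STFT -> magnitude
# -> decibels.
import numpy as np

def spectrogram(y, n_fft=512, hop=128):
    """Return a log-magnitude STFT spectrogram of waveform `y`."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(y) - n_fft + 1, hop):
        frame = y[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    mag = np.array(frames).T                 # shape: (freq_bins, time_frames)
    return 20.0 * np.log10(mag + 1e-10)      # convert magnitude to decibels

# Example: one second of a 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
y = np.sin(2 * np.pi * 440 * t)
S = spectrogram(y)                           # shape (n_fft // 2 + 1, frames)
```

The resulting 2-D array is what gets treated as an image by the downstream CNN (ResNet50V2 in this project).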
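Both the face and audio branches end in a temporal network. The core building block of a Temporal Convolutional Network is a causal, dilated 1-D convolution: the output at time t only sees inputs at t, t-d, t-2d, and so on. This NumPy sketch shows that single operation in isolation (the real models stack many such layers); the function is a hypothetical illustration, not code from the repository.

```python
# Sketch of a causal dilated 1-D convolution, the building block of a
# TCN. Left-padding by dilation * (kernel_size - 1) guarantees that no
# future samples influence the output at time t.
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated convolution of sequence x with kernel w."""
    k = len(w)
    pad = dilation * (k - 1)                  # left-pad so no future leaks in
    xp = np.concatenate([np.zeros(pad), x])
    y = np.zeros_like(x, dtype=float)
    for t in range(len(x)):
        # taps at times t, t - dilation, ..., t - dilation*(k-1)
        taps = xp[t + pad - dilation * np.arange(k)]
        y[t] = np.dot(w, taps)
    return y

x = np.arange(8, dtype=float)        # toy sequence 0..7
w = np.array([1.0, 1.0])             # kernel: current sample + one past sample
y1 = causal_dilated_conv1d(x, w, dilation=1)   # x[t] + x[t-1]
y2 = causal_dilated_conv1d(x, w, dilation=2)   # x[t] + x[t-2]
```

Stacking layers with dilations 1, 2, 4, ... gives the network an exponentially growing receptive field over past frames, which is what lets it classify temporal inconsistencies across a clip.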
Install all the dependencies
pip3 install -r requirements.txt
- Sample images, audio and video files are provided in the predict folder for testing the models
- Running the Python programs on a GPU is recommended, as the deep learning models are computationally intensive.
Download the project from the GitHub repository
git clone https://github.com/manojpissay/Deepfake-Detection.git
A. Deepfake Detection for Images:
python test_image.py -f "path to your file"
B. Deepfake Detection for Audio:
python test_audio.py -f "path to your file"
C. Deepfake Detection for Videos:
python test_video.py -f "path to your file"