Our paper aims to reconstruct obfuscated facial images. We use a convolutional autoencoder that takes noisy, pixelated, or blurred images as input and attempts to reconstruct recognizable faces. For training, we use pixel loss, perceptual loss, and a weighted combination of the two as loss functions. The results demonstrate that the trained model can successfully reconstruct facial images from highly obfuscated inputs, albeit with some loss of clarity relative to the ground truth. Obfuscation techniques are widely used to protect individuals' privacy; however, the approach we employ shows that the original image can often be recovered. We therefore need to think more carefully about the techniques used to hide an individual's identity.
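As a minimal sketch of the loss functions described above: pixel loss is a mean squared error over raw pixels, perceptual loss is the same error computed in a feature space, and the combined loss is a weighted sum of the two. The `feature_fn` argument and the `alpha` weight are illustrative assumptions, standing in for a pretrained feature extractor and a tuned hyperparameter.

```python
import numpy as np

def pixel_loss(pred, target):
    # Mean squared error over all pixels.
    return float(np.mean((pred - target) ** 2))

def perceptual_loss(pred, target, feature_fn):
    # MSE in a feature space; feature_fn stands in for a pretrained
    # network's activations (hypothetical here, not the project's exact extractor).
    return float(np.mean((feature_fn(pred) - feature_fn(target)) ** 2))

def combined_loss(pred, target, feature_fn, alpha=0.5):
    # Weighted combination of the two losses; alpha is an assumed weight.
    return alpha * pixel_loss(pred, target) + \
        (1 - alpha) * perceptual_loss(pred, target, feature_fn)
```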
The first part of this project is data preparation. We standardize each image to a fixed size and create the training dataset by injecting different types of noise into the images. 'data_preparation.ipynb' handles the data cleaning; we train the autoencoder and evaluate its performance in 'autoencoder_training.ipynb'. The detailed project report is in the attached PDF file.
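The resizing step can be sketched as a simple nearest-neighbour resize; the 128x128 target size is an assumption for illustration, not necessarily the size used in the notebooks.

```python
import numpy as np

def resize_nearest(img, size=(128, 128)):
    # Nearest-neighbour resize to a fixed target size by index mapping.
    # The (128, 128) default is an assumed standardization size.
    h, w = img.shape[:2]
    th, tw = size
    rows = np.arange(th) * h // th  # source row for each target row
    cols = np.arange(tw) * w // tw  # source column for each target column
    return img[rows][:, cols]
```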
In this project, we used three obfuscation techniques: speckle noise, Gaussian blur, and pixelation. The details of each are given in the image below, which also shows how these techniques alter an image.
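The three obfuscations can be sketched with NumPy alone (the noise level, blur width, and block size below are illustrative defaults, not the exact parameters used in the project):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle(img, sigma=0.1):
    # Speckle is multiplicative noise: img + img * n, with n ~ N(0, sigma^2).
    noise = rng.normal(0.0, sigma, img.shape)
    return np.clip(img + img * noise, 0.0, 1.0)

def gaussian_blur(img, sigma=2.0, radius=4):
    # Separable Gaussian blur: convolve rows, then columns, with a 1-D kernel.
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='valid'), 0, rows)

def pixelate(img, block=8):
    # Downsample by keeping every block-th pixel, then upsample back.
    h, w = img.shape[:2]
    small = img[::block, ::block]
    return np.repeat(np.repeat(small, block, axis=0), block, axis=1)[:h, :w]
```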
Obfuscation techniques
Our model's input and output have the shape shown in the model summary below.
Model Summary
While detailed results can be read in the project report, the result of training with the perceptual loss function is attached. As the table shows, the model is able to reconstruct highly obfuscated images, a claim supported by the high PSNR and SSIM scores of the reconstructions. However, this result should be taken with a pinch of salt: the model may be overfitting, which would explain why the results look so spectacular.
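For reference, PSNR and SSIM can be sketched as follows. Note that `ssim_global` computes a single-window SSIM over the whole image for illustration; the standard metric averages over local sliding windows.

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB; higher means closer to the target.
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val**2 / mse)

def ssim_global(pred, target, max_val=1.0):
    # Simplified whole-image SSIM (illustrative; the standard metric
    # averages SSIM over local windows rather than one global window).
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mu_x, mu_y = pred.mean(), target.mean()
    var_x, var_y = pred.var(), target.var()
    cov = ((pred - mu_x) * (target - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
        ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```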
Model predictions on highly obfuscated inputs when trained with the perceptual loss function