
Neural-PRNU-Extractor

A modified version of the PyTorch implementation of FFDNet image denoising, created for the Signal, Image and Video course of the master's degree programs in Artificial Intelligence Systems and Computer Science at the University of Trento.

About

Original author

The original FFDNet implementation was provided by Matias Tassano (https://doi.org/10.5201/ipol.2019.231).

Later authors

The adaptation for PRNU extraction was developed by the contributors of this repository.

OVERVIEW

Introduction

This source code provides a modified version of the PyTorch implementation of FFDNet image denoising, as described in Zhang, Kai, Wangmeng Zuo, and Lei Zhang, "FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising" (https://arxiv.org/abs/1710.04026).

Unlike the original, this version focuses on extracting the cameras' PRNU.

It includes the option of training the network using the Wiener filter as a strategy to detect and extract noise from images, in addition to the original method provided in the paper.
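
For intuition, here is a minimal sketch of the idea, assuming scipy is available (an illustration, not the project's actual training code): the noise residual used as a training target is the difference between an image and its Wiener-filtered version.

import numpy as np
from scipy.signal import wiener

def wiener_residual(image, window=5):
    # Adaptive Wiener filter from scipy; the residual is image - denoised
    image = image.astype(np.float64)
    denoised = wiener(image, mysize=window)
    return image - denoised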

Objective

Noise reduction is the process of removing noise from a signal. Images taken with both digital and conventional film cameras pick up noise from a variety of sources, and this noise can be (partially) removed for practical purposes such as computer vision. Neural-PRNU-Extractor aims at predicting the noise of an image, given a noise level σ ∈ [0, 75].
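
As a sketch of what this means in practice (the function and model names here are hypothetical, not the project's API), an FFDNet-style network takes the noisy image together with a noise-level map built from σ and outputs the predicted noise:

import torch

def predict_noise(net, noisy, sigma):
    # noisy: (N, C, H, W) tensor in [0, 1]; sigma in [0, 75]
    # FFDNet operates on 2x2-downsampled sub-images, hence the half-resolution map
    n, _, h, w = noisy.shape
    noise_map = torch.full((n, 1, h // 2, w // 2), sigma / 255.0, device=noisy.device)
    with torch.no_grad():
        return net(noisy, noise_map)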

In addition to noise extraction, Neural-PRNU-Extractor can compute and evaluate the PRNU, given a dataset of flat images, and test it against natural images. PRNU, short for photo response non-uniformity, is a form of fixed-pattern noise of digital image sensors, as used in cameras and optical instruments, and it can serve as a fingerprint to identify which device generated an image.
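
For reference, a common way to estimate the fingerprint K from flat images I_i and their noise residuals W_i is the maximum-likelihood estimator from the PRNU literature, K = Σ_i W_i I_i / Σ_i I_i²; a minimal sketch (not necessarily the project's exact implementation):

import numpy as np

def estimate_prnu(images, residuals):
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, res in zip(images, residuals):
        img = img.astype(np.float64)
        num += res * img
        den += img ** 2
    return num / (den + 1e-12)  # epsilon guards against division by zero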

Schema

https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prnu_extraction_pipeline.pdf

USER GUIDE

The code as-is runs in Python 3.9 with the following dependencies:

Dependencies

Usage

To facilitate the use of the application, a Makefile is provided; to see its targets, simply call the help command with GNU Make

make help

0. Set up

For the development phase, the Makefile provides an automatic method to create a virtual environment.

If you want a virtual environment for the project, you can run the following commands:

pip install --upgrade pip

Virtual environment creation in the venv folder

make env

Virtual environment activation

source ./venv/ffdnet/bin/activate

Install the requirements listed in requirements.txt

make install

Note: if you have a Tesla K40c GPU, you can use the dependency file for the MMLab GPU [requirements.mmlabgpu.txt]

make install-mmlab

1. Documentation

The documentation is built using Sphinx v4.3.0.

If you want to build the documentation, you need to enter the project folder first:

cd neural-prnu-extractor

Install the development dependencies [requirements.dev.txt]

make install-dev

Build the Sphinx layout

make doc-layout

Build the documentation

make doc

Open the documentation

make open-doc

2. Data preparation

In order to train the provided model, it is necessary to prepare the data first.

For this purpose, a set of commands has been created. Note, however, that these commands assume the naming conventions of the VISION dataset.

This code does not include image datasets; however, you can retrieve one from: VISION Dataset

Split into train and validation

First of all, you will need to split the original dataset into training and validation.

You can learn more about how to perform this operation by executing

python -m ffdnet prepare_vision --help

Generally, any dataset with a similar structure (no subfolders, and images named <camera_model_number>_<I|V>_<resource_type>_<resource_number>.jpg) can be split by executing the following command:

python -m ffdnet prepare_vision \
  SOURCE_DIR \
  DESTINATION_DIR \
  --train_frac 0.7

NOTES

  • Use the -m option to move files instead of copying them
  • --train_frac specifies the fraction of images assigned to the training set (the remainder goes to validation)
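
For intuition, a minimal sketch of what such a split does (hypothetical code, not the project's implementation):

import random
import shutil
from pathlib import Path

def split_dataset(src, dst, train_frac=0.7, seed=0):
    files = sorted(Path(src).glob("*.jpg"))
    random.Random(seed).shuffle(files)
    n_train = int(len(files) * train_frac)
    for subset, subset_files in (("train", files[:n_train]),
                                 ("validation", files[n_train:])):
        out = Path(dst) / subset
        out.mkdir(parents=True, exist_ok=True)
        for f in subset_files:
            shutil.copy(f, out / f.name)  # shutil.move would mimic the -m option
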
Prepare the patches

At this point, you will need to build the dataset of patches by executing the prepare_patches sub-command, indicating the directories containing the training and validation datasets with the --trainset_dir and --valset_dir arguments, respectively.

You can learn more about how to perform this operation by executing

python -m ffdnet prepare_patches --help

EXAMPLE

To prepare a dataset of 44x44 patches with stride 20, you can execute

python -m ffdnet prepare_patches \
  SOURCE_DIR \
  DESTINATION_DIR \
  --patch_size 44 \
  --stride 20

NOTES

  • To prepare a grayscale dataset, pass the --gray flag: python -m ffdnet prepare_patches --gray
  • --max_number_patches can be used to set the maximum number of patches contained in the database
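
For intuition, the --patch_size and --stride arguments above correspond to a sliding window over each image; a minimal sketch (not the project's code):

def extract_patches(img, patch_size, stride):
    # img: NumPy-style array of shape (H, W[, C])
    # With patch_size=44 and stride=20, a 100x100 image yields 3x3 = 9 patches
    patches = []
    h, w = img.shape[:2]
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return patches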

3. Training

Train a model

A model can be trained after having built the training and validation databases (i.e. train_rgb.h5 and val_rgb.h5 for color denoising, and train_gray.h5 and val_gray.h5 for grayscale denoising). Only training on GPU is supported.
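
The databases are HDF5 files; as a quick sanity check you can inspect one with h5py (a sketch; the assumption that the file stores one dataset per patch is ours, not guaranteed by the project):

import h5py

with h5py.File("train_gray.h5", "r") as f:
    print(len(f.keys()), "entries")  # assumed layout: one dataset per patch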

python -m ffdnet train --help

EXAMPLE

python -m ffdnet train \
  --batch_size 128 \
  --val_batch_size 128 \
  --epochs 80 \
  --filter wiener \
  --experiment_name en \
  --gray

NOTES

  • The training process can be monitored with TensorBoard as logs get saved in the experiments/experiment_name folder
  • By default, noise added at validation is set to 25 (--val_noiseL flag)
  • A previous training can be resumed passing the --resume_training flag
  • It is possible to specify a different dataset location for training (validation) with --traindbf (--valdbf)
  • Resource can be limited by users (when using torch 1.10.0) with the option --gpu_fraction
  • Training was performed by considering a file containing 50160 patches 100x100 with 50px of stride, while for the validation we considered a file containing 16080 patches.
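
For example, assuming the default log location, TensorBoard can be launched with

tensorboard --logdir experiments/<experiment_name>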

4. Testing

You can learn more about the test function by calling the help of the test sub-parser

python -m ffdnet test --help

If you want to denoise an image using one of the pre-trained models found under the models folder, you can execute

python -m ffdnet test \
  INPUT_IMG1 INPUT_IMG2 ... INPUT_IMGK \
  models/WEIGHTS \
  DST_FOLDER

To run the algorithm on CPU instead of GPU:

python -m ffdnet test \
  INPUT_IMG1 INPUT_IMG2 ... INPUT_IMGK \
  models/WEIGHTS \
  DST_FOLDER \
  --device cpu

Or just change the flags' values within the Makefile and run

make test

Output example

Original image

https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/original.pdf

Histogram equalized predicted noise

https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/histogram_equalized_prediction_noise.jpg

Denoised image

https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prediction_denoised.jpg

NOTES

  • Models have been trained for values of noise in [0, 5]
  • Models have been trained with the Wiener filter as a denoising method

5. PRNU data preparation

In order to evaluate the model according to the PRNU, it is necessary to prepare the data first.

For this purpose, a set of commands has been created. Note, however, that these commands assume the naming conventions of the VISION dataset.

This code does not include image datasets; however, you can retrieve one from: VISION Dataset

Split into flat and nat

For this purpose, you will need to split the original dataset into flat and nat images. In particular, the following dataset structure is required:

.
├── flat
│   ├── D04_I_0001.jpg
│   ├── ...
│   └── D06_I_0149.jpg
└── nat
    ├── D04_I_0001.jpg
    ├── ...
    └── D06_I_0132.jpg

You can learn more about how to perform this operation by executing

python -m ffdnet prepare_prnu --help

Generally, any dataset with a similar structure (no subfolders, and images named <camera_model_number>_<I|V>_<flat|nat>_<resource_number>.jpg) can be split by executing the following

python -m ffdnet prepare_prnu \
  SOURCE_DIR

NOTES

  • Use the -m option to move files instead of copying them
  • Use the --dst option to specify a different destination folder

6. PRNU evaluation

To evaluate a model according to the PRNU, a set of commands with various options was created. You can learn more about how to perform this operation by executing

python -m ffdnet prnu --help

The evaluation runs a specific model on a dataset generated as described in the previous section.

python -m ffdnet prnu \
  PREPARED_DATASET_DIR \
  models/WEIGHTS

Output example

Estimated PRNU

https://github.com/samuelebortolotti/neural-prnu-extractor/blob/master/presentation/imgs/prnu.jpg

Statistics

{
   'cc': {
      'auc': 0.9163367807608622,
      'eer': 0.19040247678018576,
      'fpr': array([
         ...
      ]),
      'th': array([
         ...
      ])
   },
   'pce': {
      'auc': 0.8582477067737637,
      'eer': 0.22678018575851394,
      'fpr': array([
         ...
      ]),
      'th': array([
         ...
      ]),
      'tpr': array([
         ...
      ])
   }
}

Where:

  • cc refers to statistics computed from the normalized cross-correlation between the estimated fingerprint and the image residuals, and pce to those computed from the peak-to-correlation energy
  • auc is the area under the ROC curve, and eer is the equal error rate
  • fpr, tpr and th are the arrays of false positive rates, true positive rates and the corresponding decision thresholds
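
As a reference for the cc statistic, the normalized cross-correlation between a fingerprint K and a residual W can be computed as in the following sketch (for illustration, not necessarily the project's exact code):

import numpy as np

def ncc(k, w):
    # Zero-mean both arrays, then normalize by their Frobenius norms
    k = k - k.mean()
    w = w - w.mean()
    return float((k * w).sum() / (np.linalg.norm(k) * np.linalg.norm(w) + 1e-12))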

NOTES

  • Use the --sigma option to specify a fixed noise value for the whole dataset (if not specified, it is estimated for every image)
  • Use the --gray option if using a grayscale dataset
  • Use the --cut_dim option to specify the size of the cut of the images used for the estimation of the PRNU (see the sketch after this list)
  • For the fingerprint extraction, we considered a set of 3 camera models with 130 (flat) images per model
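
For intuition, a cut of size (h, w) is simply a crop of the image; a minimal sketch of a center crop (a hypothetical helper; whether the project crops the center or a corner is not specified here):

def center_crop(img, h, w):
    # img: NumPy-style array of shape (H, W[, C])
    y = (img.shape[0] - h) // 2
    x = (img.shape[1] - w) // 2
    return img[y:y + h, x:x + w]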

ABOUT THIS FILE

Copyright 2018 IPOL Image Processing On Line http://www.ipol.im/

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

ACKNOWLEDGEMENTS

Some of the code is based on code by Yiqi Yan yanyiqinwpu@gmail.com
