LWDN

Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training (ICMR 2023, ACM International Conference on Multimedia Retrieval)

Haram Choi*, Cheolwoong Na, Jinseop Kim, and Jihoon Yang+

*: This work was done during my 3rd semester of the Master's course at Sogang University.

+: Corresponding author.

ArXiv | Visual Results

  • Proposes seven lightweight image denoising (LWDN) Transformer baselines.
    • Three come from lightweight super-resolution and four are downsized versions of the current best large denoising models.
    • Lightweight Super-Resolution Models: SwinIR-light (ICCVW21), ELAN-light (ECCV22), NGswin (CVPR23)
    • Large Denoising Models: Restormer (CVPR22), Uformer (CVPR22), CAT (NeurIPS22), ART (ICLR23)
  • Trains and compares the baselines in a truly fair manner.
    • Identical randomly cropped patches are selected at every iteration of each epoch.
    • Records and provides the cropped areas (the top and left starting coordinates of each crop) and the random data augmentations (horizontal flip and rotation angle); see the sketch after this list.
    • Robust to random seed in terms of reproducibility.
  • Empirically analyzes various aspects of our baselines.
    • Hierarchical Structure.
    • Spatial vs. Channel Self-Attention.
    • Excessive Weight Sharing.
    • Still Useful CNN.
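
A minimal sketch of the patch-recording idea follows (an illustration only; the function name, record layout, and file name are hypothetical and may differ from the repository's actual code). The crop coordinates and augmentation choices are drawn once, saved, and then replayed identically for every baseline, so no model benefits from luckier training patches than another.

import numpy as np

def apply_recorded_patch(image, record, patch_size=128):
    # image: HxWxC array; record: (top, left, hflip, n_rot90) drawn once and shared by all baselines
    top, left, hflip, n_rot90 = record
    patch = image[top:top + patch_size, left:left + patch_size, :]
    if hflip:
        patch = patch[:, ::-1, :]        # horizontal flip
    patch = np.rot90(patch, k=n_rot90)   # rotation by multiples of 90 degrees
    return np.ascontiguousarray(patch)

# Hypothetical usage: replay the same pre-recorded entries for every baseline.
# records = np.load('crop_records_epoch001.npy')   # shape: (iterations_per_epoch, 4)
# patch = apply_recorded_patch(clean_image, records[iteration])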

News

May 03, 2023: Visual results shared

Apr 04, 2023: Code released publicly (will be re-available soon due to several issues).

Apr 02, 2023: Our paper was accepted at ICMR 2023.

Visual Results

Comparison of Our Lightweight Baselines (Please Click)

[figure: 001denoising_results]

Comparison with Large Models (Please Click)

[figure: 002comp_with_large]

Main Results

Comparison of Our Lightweight Baselines for Color Gaussian Blind Denoising (Please Click)

[figure: 003color_denoising]

Comparison of Our Lightweight Baselines for Grayscale Gaussian Blind Denoising (Please Click)

[figure: 004gray_denoising]

Requirements

Libraries

  • Python 3.6.9
  • PyTorch >= 1.10.1+cu102
  • timm >= 0.6.1
  • torchvision >= 0.11.2+cu102
  • einops 0.3.0
  • numpy 1.19.5
  • OpenCV 4.6.0
  • tqdm 4.61.2
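
The repository does not yet list an installation command, so the following pip line is only an illustration of one way to obtain libraries close to the versions above (pick the torch/torchvision wheels that match your CUDA version):

pip install "torch>=1.10.1" "torchvision>=0.11.2" "timm>=0.6.1" einops==0.3.0 numpy==1.19.5 opencv-python tqdm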

Datasets (names and paths)

TBD

Testing with pre-trained models

You can reproduce the results reported in the tables of our paper.

If you have multiple GPUs that can be used with Distributed Data Parallel (DDP), follow the commands below.

Please edit the first five arguments so that they match your devices.

TBD
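
As a general illustration only (the repository's exact scripts and arguments are still marked TBD, so main_test.py and the flags below are placeholders rather than this project's interface), a typical multi-GPU DDP launch on PyTorch 1.10 looks like:

CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main_test.py --pretrained /path/to/checkpoint.pth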

Training from scratch: x2 task

with DDP

  • NOTE: the batch_size argument is the mini-batch size assigned to each GPU, so the effective batch size is batch_size × (number of GPUs).
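    For example, with batch_size set to 16 on 4 GPUs, each optimization step processes an effective batch of 4 × 16 = 64 patches.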
TBD

Training by warm-start: x3, x4 tasks

with DDP

TBD

Citation

(preferred)
@inproceedings{choi2023exploration,
  title={Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training},
  author={Choi, Haram and Na, Cheolwoong and Kim, Jinseop and Yang, Jihoon},
  booktitle={Proceedings of the 2023 International Conference on Multimedia Retrieval},
  year={2023}
}

@article{choi2023exploration,
  title={Exploration of Lightweight Single Image Denoising with Transformers and Truly Fair Training},
  author={Choi, Haram and Na, Cheolwoong and Kim, Jinseop and Yang, Jihoon},
  journal={arXiv preprint arXiv:2304.01805},
  year={2023}
}

Credits
