The official implementation of the paper Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement
Start LED with a few lines:

```python
from led.pipelines.led_pipeline import LEDPipeline

led = LEDPipeline()
led.cuda()
led_enhancement = led('./doc/example.jpeg')[0]
```
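The return type of `LEDPipeline` is not spelled out here, so the following save step is only a sketch; it assumes the pipeline yields either a PIL image or a NumPy array:

```python
# Minimal sketch for saving the enhanced output. Assumption: the pipeline
# returns a PIL image or an HWC NumPy array (uint8, or float in [0, 1]).
import numpy as np
from PIL import Image

out = led_enhancement
if isinstance(out, np.ndarray):
    if out.dtype != np.uint8:
        out = (out.clip(0.0, 1.0) * 255).astype(np.uint8)
    out = Image.fromarray(out)
out.save('enhanced.png')
```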
Furthermore, you can combine LED with any existing SOTA method as an external backend. Currently supported backends include I-SECRET, PCENet, ArcNet, and SCRNet.
Try the I-SECRET backend with only one line:

```python
led = LEDPipeline(backend='I-SECRET', num_cond_steps=200)
```
For more details, please read example.ipynb. Feel free to open a pull request to add your proposed fundus enhancement method as a backend.
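To enhance a whole folder rather than a single file, a simple loop works. This is a hypothetical usage sketch: it assumes the pipeline accepts one image path per call, as in the quick-start example above, and the `./fundus_images` folder is a placeholder:

```python
# Hypothetical batch loop; assumes LEDPipeline takes a single image path per
# call and that indexing [0] returns the enhanced image, as shown above.
from pathlib import Path

from led.pipelines.led_pipeline import LEDPipeline

led = LEDPipeline(backend='I-SECRET', num_cond_steps=200)
led.cuda()

for img_path in sorted(Path('./fundus_images').glob('*.jpeg')):  # placeholder folder
    enhanced = led(str(img_path))[0]
    # ...save or evaluate `enhanced` here
```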
To-do list:

- Training guidance
- Support for ArcNet and SCRNet
- Add related code for data-driven degradation
- Inference pipeline
To train your own LED, you need to update a few lines in configs/train_led.yaml:
```yaml
train_good_image_dir: # update to training HQ images directory
train_bad_image_dir: # update to training LQ images directory
train_degraded_image_dir: # update to training degraded images directory
val_good_image_dir: # update to validation HQ images directory
val_bad_image_dir: # update to validation LQ images directory
```
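For illustration, a filled-in configuration might look like this (all paths below are hypothetical placeholders):

```yaml
# Hypothetical example paths; substitute your own dataset locations.
train_good_image_dir: /data/fundus/train/hq
train_bad_image_dir: /data/fundus/train/lq
train_degraded_image_dir: /data/fundus/train/degraded_hq
val_good_image_dir: /data/fundus/val/hq
val_bad_image_dir: /data/fundus/val/lq
```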
Please note that `train_degraded_image_dir` should contain high-quality images that have been degraded by a data-driven method. We will include the related code in a future release; in the meantime, you can use an existing repository such as CUT or CycleGAN (a rough sketch of this step follows below).
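As an illustration of producing `train_degraded_image_dir` with a CycleGAN-style HQ-to-LQ generator, the sketch below assumes you already have a trained generator saved as a full PyTorch module; the checkpoint name, image size, normalization, and paths are all hypothetical:

```python
# Hypothetical sketch: run a trained HQ->LQ generator (e.g. from CUT or
# CycleGAN) over every high-quality training image to build the degraded set.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Placeholder checkpoint: assumed to contain the full generator module.
netG = torch.load('hq2lq_generator.pth', map_location=device)
netG.eval()

to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),                 # hypothetical working resolution
    transforms.ToTensor(),
    transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # map to [-1, 1], common for CycleGAN
])

src = Path('/data/fundus/train/hq')           # hypothetical HQ images
dst = Path('/data/fundus/train/degraded_hq')  # hypothetical output folder
dst.mkdir(parents=True, exist_ok=True)

with torch.no_grad():
    for p in sorted(src.glob('*.jpeg')):
        x = to_tensor(Image.open(p).convert('RGB')).unsqueeze(0).to(device)
        y = netG(x).squeeze(0).cpu()
        # Undo the [-1, 1] normalization and convert back to a uint8 image.
        y = ((y * 0.5 + 0.5).clamp(0, 1) * 255).byte().permute(1, 2, 0).numpy()
        Image.fromarray(y).save(dst / p.name)
```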
To train LED, simply run:

```bash
accelerate launch --mixed_precision fp16 --gpu_ids 0 --num_processes 1 script/train.py
```
Using more GPUs can significantly speed up training.
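For example, a multi-GPU launch with accelerate might look like this (the GPU ids and process count below are illustrative):

```bash
accelerate launch --mixed_precision fp16 --multi_gpu --gpu_ids 0,1,2,3 --num_processes 4 script/train.py
```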
Thanks to PCENet, ArcNet, and SCRNet for sharing their powerful pre-trained weights! Thanks to diffusers for sharing their code.
If this work is helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@article{cheng2023learning,
  title={Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement},
  author={Cheng, Pujin and Lin, Li and Huang, Yijin and He, Huaqing and Luo, Wenhan and Tang, Xiaoying},
  journal={arXiv preprint arXiv:2303.04603},
  year={2023}
}
```
This repository is released under the Apache 2.0 license as found in the LICENSE file.