
[CVPR 2024] Film Removal

[Figure] Problem of Film Removal

This is the official repository for Learning to Remove Wrinkled Transparent Film with Polarized Prior (CVPR 2024)

Jiaqi Tang, Ruizheng Wu, Xiaogang Xu, Sixing Hu and Ying-Cong Chen*

*: Corresponding Author


Here is our Project Page!

🔍 New Problem in Low-level Vision: Film Removal

  • 🚩 Goal: Film Removal (FR) aims to remove the interference of wrinkled transparent films and reconstruct the original information under the films.
  • 🚩 This technique is used in industrial recognition systems.

📢 News and Updates

  • ✅ May 28, 2024. We release the pre-trained models of Film Removal for K-fold cross-validation. Check this Google Cloud link for DOWNLOAD.
  • ✅ May 06, 2024. We release the code of Film Removal.
  • ✅ May 06, 2024. We release the dataset of Film Removal. Check this Google Cloud link for DOWNLOAD.

▶️ Getting Started

🪒 Installation

  • Python >= 3.8.2
  • PyTorch >= 1.8.1
  • Install Polanalyser for processing polarization images (a quick usage check is shown after this list)
    pip install git+https://github.com/elerac/polanalyser
    
  • Install other dependencies by
    pip install -r requirements.txt
    
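  • To quickly verify the Polanalyser installation, you can compute a per-pixel Stokes vector from four synthetic polarized captures (a minimal sketch on random data, not part of this repository):

    # Quick installation check on synthetic data (not part of this repository).
    import numpy as np
    import polanalyser as pa

    angles = np.deg2rad([0, 45, 90, 135])              # polarizer angles in radians
    captures = [np.random.rand(8, 8) for _ in angles]   # stand-in polarized images
    stokes = pa.calcStokes(captures, angles)             # per-pixel Stokes vector (S0, S1, S2)
    dolp = pa.cvtStokesToDoLP(stokes)                    # degree of linear polarization
    print(stokes.shape, dolp.shape)
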

💾 Dataset Preparation

  • Google Drive link to DOWNLOAD the dataset.

  • Data Structure: each K* directory contains the data for one fold of the dataset. The GT directory contains the ground-truth images, and the input directory contains the input images captured at different polarization angles.

  • The dataset is organized as follows:

    ├── K1
    │   ├── GT
    │   │   └── 2DCode
    │   │       └── 1_gt_I.bmp
    │   └── input
    │       └── 2DCode
    │           ├── 1_input_0.bmp
    │           ├── 1_input_45.bmp
    │           ├── 1_input_90.bmp
    │           └── 1_input_135.bmp
    ├── K2
    │   ├── ...
    ├── ...
    └── K10
        ├── ...
    
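  • A minimal sketch for loading one sample from fold K1 under this layout (it assumes grayscale .bmp files; the repository's own data loader may preprocess differently):

    # Load the four polarized captures and the ground truth for sample "1" in fold K1.
    # Paths follow the dataset layout above; reading as grayscale is an assumption.
    import cv2
    import numpy as np

    sample_dir = "Dataset/K1"
    angles = [0, 45, 90, 135]

    inputs = np.stack([
        cv2.imread(f"{sample_dir}/input/2DCode/1_input_{a}.bmp", cv2.IMREAD_GRAYSCALE)
        for a in angles
    ])                                                   # shape: (4, H, W)
    gt = cv2.imread(f"{sample_dir}/GT/2DCode/1_gt_I.bmp", cv2.IMREAD_GRAYSCALE)

    assert inputs.shape[1:] == gt.shape, "inputs and GT should share spatial size"
    print(inputs.shape, gt.shape)
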

🏰 Pretrained Model

The pre-trained models for each fold can be downloaded from the Google Cloud link in the News section above; set pretrain_model_G in the configuration to the downloaded checkpoint.

🔨 Configuration

  • Configuration files are provided for testing (./codes/options/test/test.yml) and training (./codes/options/train/train.yml).

  • The Test_K_ford option specifies which fold is held out for K-fold cross-validation: that fold is removed from training and used for testing (see the sketch after the configuration snippet). The dataroot option specifies the root directory of the dataset. Other configuration settings include the learning-rate scheme, loss functions, and logger options.

    datasets:
      train:
        name: Reconstruction
        mode: LQGT_condition
        Test_K_ford: K10 # remove from training
        dataroot: /remote-home/share/jiaqi2/Dataset
        dataroot_ratio: ./
        use_shuffle: true
        n_workers: 0
        batch_size: 1
        GT_size: 0
        use_flip: true
        use_rot: true
        condition: image
      val:
        name: Reconstruction
        mode: LQGT_condition_Val
        Test_K_ford: K10 # for testing
        dataroot: /remote-home/share/jiaqi2/Dataset
        dataroot_ratio: ./
        condition: image
    
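  • In other words, with Test_K_ford: K10 the folds K1–K9 are used for training while K10 is held out for testing. A hypothetical helper illustrating this split (not part of the repository's code):

    # Hypothetical helper: shows how Test_K_ford partitions the K1..K10 folds
    # into training folds and one held-out testing fold.
    from pathlib import Path

    def split_folds(dataroot, test_k_ford="K10"):
        """Return (train_folds, test_folds) directory lists under dataroot."""
        folds = sorted(p for p in Path(dataroot).iterdir()
                       if p.is_dir() and p.name.startswith("K"))
        train = [p for p in folds if p.name != test_k_ford]   # e.g. K1..K9
        test = [p for p in folds if p.name == test_k_ford]    # e.g. K10, removed from training
        return train, test

    train_folds, test_folds = split_folds("./Dataset", test_k_ford="K10")
    print([p.name for p in train_folds], [p.name for p in test_folds])
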

Testing

  • Modify dataroot, Test_K_ford and pretrain_model_G in the testing configuration, then run
    python test.py -opt ./codes/options/test/test.yml
    
  • The test results will be saved to ./results/testset_name, including the restored image and the estimated prior.

🖥️ Training

  • Modify dataroot and Test_K_ford in the training configuration, then run

    python train.py -opt ./codes/options/train/train.yml
    
  • The logs, models, and training states will be saved to ./experiments/name. You can also monitor training with TensorBoard via ./tb_logger/name.
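  • For example, assuming the tensorboard package is installed, you can point it at the log directory (replace name with your experiment name):

    tensorboard --logdir ./tb_logger/name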

  • Resume training (add the checkpoint paths to the training configuration):

    path:
      root: ./
      pretrain_model_G: .../experiments/K1/models/XX.pth
      strict_load: false
      resume_state: .../experiments/K1/training_state/XX.state
    

Performance

Compared with other baselines, our model achieves state-of-the-art performance:

[Table 1] Quantitative evaluation in image reconstruction with 10-fold cross-validation.

Methods     PSNR (dB)   SSIM
SHIQ        21.58       0.7499
Polar-HR    22.19       0.7176
Uformer     31.68       0.9426
Restormer   34.32       0.9731
Ours        36.48       0.9824
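
PSNR (in dB) and SSIM are standard full-reference metrics. As an illustration only (not the paper's evaluation script; the restored-image file name below is hypothetical), they can be computed with scikit-image:

    # Illustrative metric computation with scikit-image; assumes 8-bit grayscale
    # restored and ground-truth images. The restored file name is hypothetical.
    import cv2
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    restored = cv2.imread("results/Reconstruction/1_restored.png", cv2.IMREAD_GRAYSCALE)
    gt = cv2.imread("Dataset/K10/GT/2DCode/1_gt_I.bmp", cv2.IMREAD_GRAYSCALE)

    psnr = peak_signal_noise_ratio(gt, restored, data_range=255)
    ssim = structural_similarity(gt, restored, data_range=255)
    print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")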

[Figure 1] Qualitative Evaluation in image reconstruction.

[Figure 2-3] Qualitative Evaluation in Industrial Environment. (QR Reading & Text OCR)

🌐 Citations

The following is a BibTeX reference:

@inproceedings{tang2024learning,
  title = {Learning to Remove Wrinkled Transparent Film with Polarized Prior},
  author = {Tang, Jiaqi and Wu, Ruizheng and Xu, Xiaogang and Hu, Sixing and Chen, Ying-Cong},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2024}
}

📧 Contact Us

If you have any questions, please feel free to send an email to jtang092@connect.hkust-gz.edu.cn.

📜 Acknowledgment

This work is supported by the National Natural Science Foundation of China (No. 62206068) and the Natural Science Foundation of Zhejiang Province, China under No. LD24F020002.
