Hao Wei, Chenyang Ge, Zhiyuan Li, Xin Qiao, Pengchao Deng.
Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University.
If VQIR is helpful to you, please consider starring this repo. Thanks!
- 2024-09-08: The code is released.
- 2023-12-25: The repo is released.
- 2023-12-24: The paper is accepted by IEEE Transactions on Circuits and Systems for Video Technology.
pip install -r requirements.txt
Download the pretrained weights of VQIR (16x and 32x) and put them into `vqir/pretrained`.
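Before running the test script, you can optionally confirm that a downloaded checkpoint loads cleanly. A minimal sketch (the filename below is a placeholder; substitute whichever weights file the release provides):

```python
import torch

# Sanity check: confirm a downloaded VQIR checkpoint is readable.
# "vqir_16x.pth" is a placeholder name -- use the actual file from the release.
ckpt = torch.load("vqir/pretrained/vqir_16x.pth", map_location="cpu")
keys = list(ckpt.keys()) if isinstance(ckpt, dict) else []
print(f"Loaded checkpoint with {len(keys)} top-level keys: {keys[:5]}")
```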
python vqir/test.py -opt vqir/options/test/test_vqir_stage2.yml
Note: If you test VQIR at scale 32, you need to modify lines 120-124 of `vqir/archs/layers.py` (replace `N//2` with `N`).
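For orientation, here is a minimal sketch of the kind of edit involved. The function below is hypothetical (the actual code at lines 120-124 of `vqir/archs/layers.py` differs); it only illustrates the `N//2` vs. `N` channel-width switch between the 16x and 32x models:

```python
import torch.nn as nn

def make_layer(N: int, scale: int) -> nn.Conv2d:
    # Hypothetical illustration: the released code hard-codes N // 2 for the
    # 16x model; for the 32x model, those occurrences become N instead.
    channels = N // 2 if scale == 16 else N
    return nn.Conv2d(channels, channels, kernel_size=3, padding=1)
```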
Download the training dataset DIV2K and the pretrained weights of VQGAN, and put the VQGAN weights into `vqir/pretrained/vqgan`.
- stage 1: train the IFRM
python vqir/train.py -opt vqir/options/train/train_vqir_stage1.yml
- stage 2: train the MSRM
python vqir/train.py -opt vqir/options/train/train_vqir_stage2.yml
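Stage 2 typically loads the stage-1 result through the option file. As a quick sanity check before launching it, you can confirm the stage-2 config points at an existing checkpoint. This sketch assumes the `path`/`pretrain_network_g` keys used by common BasicSR-style option files, which may be named differently in this repo:

```python
import os
import yaml  # pip install pyyaml

# Hypothetical sanity check: confirm the stage-2 option file references an
# existing stage-1 checkpoint. Key names follow BasicSR conventions and may
# differ here -- inspect the YAML file to verify.
with open("vqir/options/train/train_vqir_stage2.yml") as f:
    opt = yaml.safe_load(f)

pretrain = (opt.get("path") or {}).get("pretrain_network_g")
status = "(exists)" if pretrain and os.path.exists(pretrain) else "(check path)"
print("stage-1 checkpoint:", pretrain, status)
```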
This work is based on VQGAN, IRN, and BasicSR. Thanks for their awesome work.
If you have any questions, please feel free to reach out to me at haowei@stu.xjtu.edu.cn.