This project is a reproduction of the well-known paper *Learning a Deep Convolutional Network for Image Super-Resolution* (SRCNN), published in 2014, which uses a CNN for single-image super-resolution.
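The network itself is very small. Below is a minimal sketch of the three-layer SRCNN, using the paper's 9-1-5 kernel sizes with 64/32 filters on a single (luminance) channel; the exact layer settings in this repo's code may differ slightly.

```python
import torch
from torch import nn

class SRCNN(nn.Module):
    """Three-layer SRCNN (the paper's 9-1-5 setting), operating on 1-channel (Y) input."""
    def __init__(self, num_channels: int = 1):
        super().__init__()
        self.conv1 = nn.Conv2d(num_channels, 64, kernel_size=9, padding=4)  # patch extraction
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)                       # non-linear mapping
        self.conv3 = nn.Conv2d(32, num_channels, kernel_size=5, padding=2)  # reconstruction
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.conv1(x))
        x = self.relu(self.conv2(x))
        return self.conv3(x)
```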
*(Example images: low-resolution input vs. high-resolution output.)*
- A CSDN blog that explains what super-resolution is.
- An excellent GitHub repository that this project is based on; it was a great help and is strongly recommended.
Requirements: PyTorch.

```bash
pip install -r requirements.txt
```
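The requirements file is not reproduced here; an environment along the following lines should be enough (this package list is an assumption, not the repo's exact requirements.txt):

```text
torch
numpy
h5py
Pillow
tqdm
```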
Download the standard datasets: the 91-image dataset (train set) and Set5 (test set), converted to HDF5, can be downloaded from the links below.
| Dataset | Scale | Type | Link |
|---|---|---|---|
| 91-image | 2 | Train | Download |
| 91-image | 3 | Train | Download |
| 91-image | 4 | Train | Download |
| Set5 | 2 | Eval | Download |
| Set5 | 3 | Eval | Download |
| Set5 | 4 | Eval | Download |
Download one 91-image file and one Set5 file at the same scale, then move them under `./datasets`, e.g. `./datasets/91-image_x2.h5` and `./datasets/Set5_x2.h5`.
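To verify that the files are in place, you can inspect them with h5py. The key names mentioned in the comments below (`lr`, `hr`) are an assumption about how these HDF5 files are organized; check the actual output of `f.keys()`.

```python
import h5py

# Open the training set and list its top-level datasets/groups.
with h5py.File("./datasets/91-image_x2.h5", "r") as f:
    print(list(f.keys()))        # expected something like ['hr', 'lr']
    for name in f.keys():
        print(name, f[name])     # shows the shape and dtype of each entry
```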
Easy run:

```bash
python train.py
```
About the arguments:

- `--train-file`: 2/3/4, which scale's training dataset to use
- `--eval-file`: 2/3/4, which scale's evaluation dataset to use
- `--batch-size`: training batch size
- `--num-workers`: number of data-loading workers
- `--lr`: learning rate
- `--epoch`: number of training epochs
- `--f`: frequency at which the model is tested
- `--model-dir`: where the model is saved
For example:

```bash
python train.py --train-file 4 --eval-file 4 --batch-size 64 --lr 1e-5 --num-workers 8 --epoch 500 --f 10
```
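For orientation, the heart of SRCNN training is plain MSE regression from low-resolution patches to high-resolution patches. The sketch below is a simplified stand-in for train.py, with random tensors in place of the HDF5 data and assumed hyperparameters; the actual script differs in its details.

```python
import os
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Random tensors stand in for the (N, 1, H, W) LR/HR patches read from the HDF5 file.
lr_patches = torch.rand(256, 1, 33, 33)
hr_patches = torch.rand(256, 1, 33, 33)
loader = DataLoader(TensorDataset(lr_patches, hr_patches), batch_size=64, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SRCNN().to(device)                 # SRCNN as sketched near the top of this README
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for lr_batch, hr_batch in loader:
        lr_batch, hr_batch = lr_batch.to(device), hr_batch.to(device)
        loss = criterion(model(lr_batch), hr_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.6f}")

os.makedirs("./model", exist_ok=True)
torch.save(model.state_dict(), "./model/best.pth")
```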
All models are saved under `./model`, and the best model is saved as `./model/best.pth`.
To run inference, make sure the model weights are under `./model`, e.g. `./model/best.pth` or `./model/b.pth`.
```bash
python use.py --weights-file ./model/best.pth --image x/xx/xxx.jpg
```

By default the weights file is `./model/best.pth`; if you want to use `b.pth`, replace it accordingly. An output picture named `xxx_srcnn.jpg` will be created.
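What use.py does is roughly the following: load the trained weights, bicubic-upscale the input image, run the luminance channel through the network, and save the result. The snippet below is a sketch of that flow; the Y-channel handling and fixed scale factor are assumptions, and the real script may handle this differently.

```python
import numpy as np
import torch
from PIL import Image

scale = 2
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = SRCNN().to(device)                      # SRCNN as sketched near the top of this README
model.load_state_dict(torch.load("./model/best.pth", map_location=device))
model.eval()

# Bicubic-upscale the input, then refine only the luminance (Y) channel.
img = Image.open("x/xx/xxx.jpg").convert("YCbCr")
img = img.resize((img.width * scale, img.height * scale), Image.BICUBIC)
y, cb, cr = img.split()

x = torch.from_numpy(np.asarray(y, dtype=np.float32) / 255.0)[None, None].to(device)
with torch.no_grad():
    out = model(x).clamp(0.0, 1.0)

y_sr = Image.fromarray((out.squeeze().cpu().numpy() * 255.0).astype(np.uint8))
Image.merge("YCbCr", (y_sr, cb, cr)).convert("RGB").save("xxx_srcnn.jpg")
```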
This was a first CNN exercise for me, done to get familiar with the basic steps of machine learning; almost all of the code comes from the repository referenced above, and I simply followed its steps and reproduced the work. The final result did not fully satisfy me; it should probably be called "slight super-resolution". Still, there is no doubt that the SRCNN idea was novel for its time. If you want to improve the network, build a deeper model and try residual (ResNet-style) connections; a rough sketch follows below. If you are interested, I strongly recommend looking at the original repository.
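As a concrete starting point for that suggestion, a deeper variant that keeps the SRCNN head and tail but stacks residual blocks in between could look like this; it is purely illustrative and not part of this repo.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, ResNet/VDSR style."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class DeeperSRCNN(nn.Module):
    """SRCNN-like head and tail with a stack of residual blocks in between."""
    def __init__(self, num_channels: int = 1, num_blocks: int = 4):
        super().__init__()
        self.head = nn.Conv2d(num_channels, 64, kernel_size=9, padding=4)
        self.blocks = nn.Sequential(*[ResidualBlock(64) for _ in range(num_blocks)])
        self.tail = nn.Conv2d(64, num_channels, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.relu(self.head(x))
        x = self.blocks(x)
        return self.tail(x)
```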
Troubleshooting:

`OSError: Unable to open file (file locking disabled on this file system (use HDF5_USE_FILE_LOCKING ...)`

Solution (Linux): edit `~/.bashrc` (e.g. with `nano ~/.bashrc`) and add the line

```bash
export HDF5_USE_FILE_LOCKING='FALSE'
```

then save and exit (Ctrl-X in nano) and run `source ~/.bashrc`.
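If editing `~/.bashrc` is inconvenient, the same environment variable can also be set from Python before h5py (and therefore the HDF5 library) is loaded; this is a general workaround, not something the repo's scripts do themselves.

```python
import os
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"  # must be set before the HDF5 library is loaded

import h5py  # imported only after the variable is set
```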
`OSError: [WinError 1455] The paging file is too small for this operation to complete. Error loading "C:\ProgramData\Anaconda3\envs\study_and_test\lib\site-packages\torch\lib\caffe2_detectron_ops_gpu.dll" or one of its dependencies.`

Solution (Windows): run training without worker processes:

```bash
python train.py --num-workers 0
```