Implementation of DeepLandscape: Adversarial Modeling of Landscape Videos in PyTorch
Official repository for the paper: E. Logacheva, R. Suvorov, O. Khomenko, A. Mashikhin, and V. Lempitsky. "DeepLandscape: Adversarial Modeling of Landscape Videos". In European Conference on Computer Vision (ECCV), 2020.
Install the dependencies:

pip3 install -r requirements.txt
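A minimal setup sketch (using a virtual environment is an assumption, not a repo requirement):

```bash
# Create and activate an isolated environment (optional)
python3 -m venv venv
source venv/bin/activate

# Install the pinned dependencies
pip3 install -r requirements.txt
```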
Download everything from here and put it in the `results` directory.
Choose a homography directory:
- `homographies/manual_homographies` to reproduce the paper;
- `homographies/manual_homographies_x2.5` if you want the speed to match the speed of the real videos in the test data;
- `homographies/selected_homographies` to get the best visual results.
Move the images you want to animate to `results/test_images`. Then run:
PYTHONPATH=`pwd`:$PYTHONPATH runfiles/encode_and_animate_test_all256_with_style.sh <homography_dir>
Results will be saved in `results/encode_and_animate_results`.
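For example, with the homographies used in the paper (the image filename below is hypothetical):

```bash
# Copy an image to animate into the test directory
cp my_landscape.jpg results/test_images/

# Encode and animate it with the paper's manual homographies
PYTHONPATH=`pwd`:$PYTHONPATH \
    runfiles/encode_and_animate_test_all256_with_style.sh homographies/manual_homographies
```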
To use the 256x256 generator, run:

./generate.py configs/train/256.yaml --homography_dir <homography_dir>

Results will be saved in `results/generated`.
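For example, using the homographies selected for the best visual results:

```bash
./generate.py configs/train/256.yaml --homography_dir homographies/selected_homographies
```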
TBD
First, prepare an LMDB dataset:
./prepare_data.py <data type (images or videos)> <input data path> --out <output lmdb directory> --n_worker <number of workers>
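For instance, to build both an image and a video dataset (the input paths and worker count are hypothetical):

```bash
# LMDB of still landscape images
./prepare_data.py images /data/landscape_images --out data/lmdb_images --n_worker 8

# LMDB of landscape videos
./prepare_data.py videos /data/landscape_videos --out data/lmdb_videos --n_worker 8
```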
To train the 256x256 generator:
./train.py configs/train/256.yaml --restart -i <path to image data> -v <path to video data>
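A concrete invocation, assuming the LMDB directories from the sketch above:

```bash
./train.py configs/train/256.yaml --restart -i data/lmdb_images -v data/lmdb_videos
```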
First, generate training data for the encoder:
PYTHONPATH=`pwd`:$PYTHONPATH runfiles/gen_encoder_train_data_256.sh
To train the 256x256 encoder:
PYTHONPATH=`pwd`:$PYTHONPATH runfiles/train_encoder_256.sh
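Both encoder steps in order, run from the repository root (a sketch; the scripts take no arguments):

```bash
export PYTHONPATH=`pwd`:$PYTHONPATH

# 1. Generate training data for the encoder using the trained generator
runfiles/gen_encoder_train_data_256.sh

# 2. Train the 256x256 encoder on the generated data
runfiles/train_encoder_256.sh
```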
To download the source videos:

vid_dl/main.py <output directory>
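For example (the output directory name is hypothetical, and invoking the script via python3 is an assumption):

```bash
python3 vid_dl/main.py data/raw_videos
```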
This repository is based on Kim Seonghyeon's PyTorch implementation of A Style-Based Generator Architecture for Generative Adversarial Networks.
- The super-resolution part is based on https://github.com/xinntao/BasicSR
- Mean optical flow calculation is taken from https://github.com/avinashpaliwal/Super-SloMo
- Segmentation is taken from https://github.com/CSAILVision/semantic-segmentation-pytorch
- LPIPS metric: https://github.com/richzhang/PerceptualSimilarity
- SSIM: https://github.com/Po-Hsun-Su/pytorch-ssim
- FID: https://github.com/mseitzer/pytorch-fid
If you find our work useful, please cite:
@inproceedings{Logacheva_2020_ECCV,
    author    = {Logacheva, Elizaveta and Suvorov, Roman and Khomenko, Oleg and Mashikhin, Anton and Lempitsky, Victor},
    title     = {DeepLandscape: Adversarial Modeling of Landscape Videos},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    month     = {August},
    year      = {2020},
}