This repository is the official PyTorch implementation of our paper, ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization.
Yixin Yang,
Jiangxin Dong,
Jinhui Tang,
Jinshan Pan
Nanjing University of Science and Technology
- [2024-11-14] Add metrics evaluation code, see evaluation.py. Demo command: `pip install lpips && python evaluation_matrics/evaluation.py` (a usage sketch follows this list).
- [2024-09-09] Add training code, see train.py.
- [2024-09-09] Colab demo for ColorMNet is available at .
- [2024-09-07] Add inference code and pretrained weights, see test.py.
- [2024-04-13] Project page released at ColorMNet Project. Stay tuned for updates.
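For reference, here is a minimal sketch of how LPIPS can be computed between a colorized frame and its ground truth with the `lpips` package. This is not the repo's evaluation.py; the file names below are placeholders:

```python
# Minimal LPIPS sketch (not the repo's evaluation.py); assumes two same-sized RGB frames.
import cv2
import lpips
import torch

loss_fn = lpips.LPIPS(net='alex')  # official LPIPS metric network

def to_tensor(path):
    # Read BGR uint8, convert to an RGB float tensor in [-1, 1], shape (1, 3, H, W)
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    t = torch.from_numpy(img).permute(2, 0, 1).float() / 127.5 - 1.0
    return t.unsqueeze(0)

# 'pred.png' and 'gt.png' are placeholder file names
d = loss_fn(to_tensor('pred.png'), to_tensor('gt.png'))
print(f'LPIPS: {d.item():.4f}')
```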
- Python 3.8+
- PyTorch 1.11+ (see PyTorch for installation instructions)
- `torchvision` corresponding to the PyTorch version
- OpenCV (try `pip install opencv-python`)
- Others: `pip install -r requirements.txt`
# git clone this repository
# create and activate a conda environment
conda create -n colormnet python=3.8
conda activate colormnet
# install PyTorch with CUDA 11.8
pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 --index-url https://download.pytorch.org/whl/cu118
# install py-thin-plate-spline
git clone https://github.com/cheind/py-thin-plate-spline.git
cd py-thin-plate-spline && pip install -e . && cd ..
# install Pytorch-Correlation-extension
git clone https://github.com/ClementPinard/Pytorch-Correlation-extension.git
cd Pytorch-Correlation-extension && python setup.py install && cd ..
pip install -r requirements.txt
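To verify that the two compiled dependencies installed correctly, a quick sanity check like the following may help (the module names below are the ones these packages register on install, to the best of our knowledge):

```python
# Quick environment sanity check (a sketch; not part of the repo)
import torch
print('CUDA available:', torch.cuda.is_available())

# Pytorch-Correlation-extension registers the 'spatial_correlation_sampler' module
from spatial_correlation_sampler import SpatialCorrelationSampler
print('spatial_correlation_sampler: OK')

# py-thin-plate-spline installs as 'thinplate'
import thinplate
print('thinplate: OK')
```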
Download the pretrained models manually and put them in ./saves
(create the folder if it doesn't exist).
| Name | URL |
|---|---|
| ColorMNet | model |
Test on Images:
For Windows users, please follow RuntimeError to avoid the multiprocessing RuntimeError in the data loader. Thanks to @UPstud.
CUDA_VISIBLE_DEVICES=0 python test.py
# Add --FirstFrameIsNotExemplar if the reference frame is not exactly the first input image. Make sure the reference frame and the input frames are the same size.
# Specify --davis_root and --validation_root
data_root/
├── 001/
│ ├── 00000.png
│ ├── 00001.png
│ ├── 00002.png
│ └── ...
├── 002/
│ ├── 00000.png
│ ├── 00001.png
│ ├── 00002.png
│ └── ...
└── ...
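Input frames are expected as per-clip folders of zero-padded PNGs as shown above. If you start from a video file, a small OpenCV sketch like this one (the paths below are placeholders, not part of the repo) can produce that layout:

```python
# Sketch: split a video into data_root/001/00000.png, 00001.png, ... (paths are placeholders)
import os
import cv2

video_path = 'input.mp4'  # placeholder input video
out_dir = os.path.join('data_root', '001')
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(out_dir, f'{idx:05d}.png'), frame)
    idx += 1
cap.release()
print(f'Wrote {idx} frames to {out_dir}')
```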
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.run \
--master_port 25205 \
--nproc_per_node=1 \
train.py \
--exp_id DINOv2FeatureV6_LocalAtten_DAVISVidevo \
--davis_root /path/to/your/training/data/ \
--validation_root /path/to/your/validation/data \
--savepath ./wandb_save_dir
- Release training code
- Release testing code
- Release pre-trained models
- Release demo
If our work is useful for your research, please consider citing:
@inproceedings{yang2024colormnet,
author = {Yang, Yixin and Dong, Jiangxin and Tang, Jinhui and Pan, Jinshan},
title = {ColorMNet: A Memory-based Deep Spatial-Temporal Feature Propagation Network for Video Colorization},
booktitle = {ECCV},
year = {2024}
}
This project is licensed under CC BY-NC-SA 4.0, while some methods adopted in this project are under other licenses. Please refer to LICENSES.md for details. Redistribution and use should follow this license.
This project is based on XMem. Some code is borrowed from DINOv2. Thanks for their awesome work.
This repo is currently maintained by Yixin Yang (@yyang181) and is for academic research use only.