This is the official code for the paper "Exemplar-Based 3D Portrait Stylization". You can find the paper on our project website.
The entire framework consists of four parts, landmark translation, face reconstruction, face deformation, and texture stylization.
The code for landmark translation can be found here.
Environment
These two parts require Windows with a GPU. They also require a simple Python environment with opencv, imageio, and numpy for automatic batch file generation and execution. The Python code in these two parts was tested in PyCharm rather than from the command line.
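As a hypothetical sketch of the kind of automatic batch-file generation and execution mentioned above (the helper names and commands here are illustrative, not part of the repository):

```python
import subprocess

def write_batch(bat_path, commands):
    """Write command lines into a Windows batch file.

    Hypothetical helper illustrating automatic batch-file generation;
    the actual scripts in this repository build their own command lists.
    """
    with open(bat_path, "w") as f:
        f.write("@echo off\n")
        for cmd in commands:
            f.write(cmd + "\n")

def run_batch(bat_path):
    """Execute the generated batch file (Windows only)."""
    subprocess.run([bat_path], shell=True, check=True)
```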
Please download regressor_large.bin and tensorMale.bin and put them in ./face_recon_deform/PhotoAvatarLib_exe/Data/.
Inputs
These two parts require inputs in the format given below.
| Path | Description |
| --- | --- |
| dirname_data | Directory of all inputs |
| └ XXX | Directory of one input pair |
| ├ XXX.jpg | Content image |
| ├ XXX.txt | Landmarks of the content image |
| ├ XXX_style.jpg | Style image |
| ├ XXX_style.txt | Landmarks of the style image |
| ├ XXX_translated.txt | Translated landmarks |
| └ YYY | Directory of one input pair |
| ├ ... | ... |
Some examples are given in ./data_demo/.
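The layout above can be checked automatically before running the pipeline. This is a minimal sketch; the function name `missing_files` is a hypothetical helper, not part of the repository:

```python
import os

# Required file suffixes for an input pair directory named XXX,
# following the layout described above.
REQUIRED_SUFFIXES = [
    ".jpg",            # content image
    ".txt",            # landmarks of the content image
    "_style.jpg",      # style image
    "_style.txt",      # landmarks of the style image
    "_translated.txt", # translated landmarks
]

def missing_files(pair_dir):
    """Return the required files missing from one input pair directory."""
    name = os.path.basename(os.path.normpath(pair_dir))
    return [name + s for s in REQUIRED_SUFFIXES
            if not os.path.isfile(os.path.join(pair_dir, name + s))]
```

Running it over every subdirectory of dirname_data catches incomplete pairs before the batch scripts are generated.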
Usage
You can directly run main_recon_deform.py; the usage can also be checked from the code.
In ./face_recon_deform/PhotoAvatarLib_exe/ is a compiled reconstruction program which takes a single image as input, automatically detects the landmarks, and fits a 3DMM model to the detected landmarks. The source code can be downloaded here.
In ./face_recon_deform/LaplacianDeformerConsole/ is a compiled deformation program which deforms a 3D mesh towards a set of 2D/3D landmark targets. You can find the explanation of the parameters by running LaplacianDeformerConsole.exe without options. Please note that it only supports one mesh topology and cannot be used to deform arbitrary meshes. We are unable to provide the source code; other Laplacian and Laplacian-like deformations can be found in SoftRas and libigl.
Outputs
Please refer to ./face_recon_deform/readme_output.md.
Environment
The environment for this part is built with CUDA 10.0, Python 3.7, and PyTorch 1.2.0, using Conda. Create the environment by:
conda create -n YOUR_ENV_NAME python=3.7
conda activate YOUR_ENV_NAME
conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
conda install scikit-image tqdm opencv
The code uses neural-renderer, which is already compiled. However, if anything goes wrong (perhaps due to environment differences), you can re-compile it by
python setup.py install
mv build/lib.linux-x86_64-3.7-or-something-similar/neural_renderer/cuda/*.so neural_renderer/cuda/
Please download vgg19_conv.pth and put it in ./texture_style_transfer/transfer/models/.
Inputs
You can directly use the outputs (and inputs) from the previous parts.
Usage
cd texture_style_transfer
python transfer/main_texture_transfer.py -dd ../data_demo_or_your_data_dir
This code is built heavily on Neural 3D Mesh Renderer and STROTSS.
@ARTICLE{han2021exemplarbased,
author={Han, Fangzhou and Ye, Shuquan and He, Mingming and Chai, Menglei and Liao, Jing},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Exemplar-Based 3D Portrait Stylization},
year={2021},
doi={10.1109/TVCG.2021.3114308}}