This repository contains the code for the paper "Neural Implicit 3D Shapes from Single Images with Spatial Patterns".
- Clone this repo:
git clone https://github.com/yixin26/SVR-SP.git
cd SVR-SP/code
- Python 3.6
- CPU or NVIDIA GPU + CUDA CuDNN
- PyTorch >= 1.4
Install dependencies via conda:
conda env create -f environment.yml
(This creates an environment called spatial_pattern.)
For a quick demo, please use the pre-trained model. Please download the model from Google Drive,
and extract the model to code/all/model.
For generating all the testing samples from a category of ShapeNet Core Dataset, e.g., Chair, please use
python sdf2obj.py --category chair --ckpt 30 --batch_size 4 -g 0,1
The generated mesh files will be stored at code/all/results/30/test_objs/...
To train the model from scratch, please use
python train.py --category all --exp all -g 0,1 --batch_size 20 --nr_epochs 30
To prepare SDF files, images and cameras, we use the code from preprocessing. To generate meshes from predicted SDFs, we use the executable file from isosurface.
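Conceptually, the isosurface step extracts the zero level set of the predicted SDF volume and writes it out as a mesh. Below is a minimal, illustrative sketch of that idea using scikit-image's marching cubes rather than the isosurface executable shipped with the repo; the grid resolution, the sdf_grid array, and the output path are assumptions, not the repository's actual interface.

```python
# Illustrative only: the repo uses the external "isosurface" executable, but the
# underlying operation is a marching-cubes extraction over the predicted SDF grid.
import numpy as np
from skimage import measure

def sdf_grid_to_obj(sdf_grid, obj_path, level=0.0):
    # Extract the zero level set of the signed distance field.
    verts, faces, normals, _ = measure.marching_cubes(sdf_grid, level=level)
    # Write a minimal OBJ file (OBJ face indices are 1-based).
    with open(obj_path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for face in faces:
            f.write(f"f {face[0] + 1} {face[1] + 1} {face[2] + 1}\n")

# Example: a sphere of radius 0.3 sampled on a 64^3 grid.
xs = np.linspace(-0.5, 0.5, 64)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
sdf = np.linalg.norm(grid, axis=-1) - 0.3
sdf_grid_to_obj(sdf, "sphere.obj")
```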
During training, we use the Furthest Point Sampling algorithm to downsample the input point cloud. Please download and compile the code from sampling_cuda.
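For reference, farthest point sampling greedily picks the point that is furthest from the set selected so far. The sketch below is a minimal CPU version of that idea in NumPy, not the compiled CUDA kernel used for training; the input array and sample count are placeholders.

```python
# Minimal CPU sketch of Furthest (Farthest) Point Sampling; training uses the
# compiled CUDA kernel from sampling_cuda. `points` is an (N, 3) numpy array.
import numpy as np

def farthest_point_sampling(points, n_samples):
    n = points.shape[0]
    selected = np.zeros(n_samples, dtype=np.int64)
    # Distance from every point to its nearest already-selected point.
    dist = np.full(n, np.inf)
    # Start from an arbitrary (here: the first) point.
    selected[0] = 0
    for i in range(1, n_samples):
        dist = np.minimum(dist, np.linalg.norm(points - points[selected[i - 1]], axis=1))
        selected[i] = int(np.argmax(dist))
    return points[selected]

# Example: downsample 10,000 random points to 1,024.
pts = np.random.rand(10000, 3)
sub = farthest_point_sampling(pts, 1024)
```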
Please use the trained model to generate spatial patterns. The visualization code and materials can be found in the folder code/visualization/.
Please cite our work if you find it useful:
@article{zhuang2021neural,
  title={Neural Implicit 3D Shapes from Single Images with Spatial Patterns},
  author={Zhuang, Yixin and Liu, Yunzhe and Wang, Yujie and Chen, Baoquan},
  journal={arXiv preprint arXiv:2106.03087},
  year={2021}
}