This project implements novel view synthesis: given a source view and its camera pose, it generates (synthesizes) a target view under an arbitrary camera pose, as shown in the figure below [1].
Calibrate the camera
$ python calibrate.py
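Calibration recovers the camera intrinsics K (focal lengths and principal point) that the later steps rely on. Below is a minimal NumPy sketch of the pinhole model those intrinsics encode; the numeric values are made-up examples, and calibrate.py itself presumably uses a chessboard-based routine rather than this toy code.

```python
import numpy as np

# Hypothetical intrinsics of the kind calibration estimates:
# focal lengths fx, fy and principal point (cx, cy), all in pixels.
fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# A 3D point in camera coordinates projects to pixel (u, v)
# via u = fx * X/Z + cx, v = fy * Y/Z + cy.
X = np.array([0.1, -0.2, 2.0])   # metres, Z > 0 in front of the camera
u, v, w = K @ X
u, v = u / w, v / w
print(u, v)                      # pixel coordinates of the projection
```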
Then generate the point cloud:
$ python disparity1.py
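The disparity-to-point-cloud step rests on the stereo relation Z = f·B/d. The sketch below back-projects a disparity map into 3D with NumPy; the focal length f, baseline B, and the constant toy disparity map are illustrative stand-ins for whatever disparity1.py actually reads from the calibration.

```python
import numpy as np

f = 700.0            # focal length in pixels (assumed)
B = 0.06             # stereo baseline in metres (assumed)
cx, cy = 320.0, 240.0

disparity = np.full((480, 640), 35.0)   # toy disparity map, in pixels
v, u = np.mgrid[0:480, 0:640]           # pixel coordinates

# Depth from disparity (Z = f * B / d), guarding against d == 0.
valid = disparity > 0
Z = np.where(valid, f * B / np.where(valid, disparity, 1.0), 0.0)
X = (u - cx) * Z / f
Y = (v - cy) * Z / f

# Stack valid pixels into an (N, 3) point cloud.
points = np.stack([X[valid], Y[valid], Z[valid]], axis=1)
```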
After this, the point cloud is generated; it then needs to be transformed (rotated) to the target view. To do this, open MATLAB and run pcd_transformation.m.
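For reference, the rigid transform that pcd_transformation.m applies can be reproduced in NumPy as p' = R·p + t. The 30° rotation about the vertical axis and the small translation below are hypothetical example poses, not the script's actual values.

```python
import numpy as np

def transform_point_cloud(points, R, t):
    """Apply the rigid transform p' = R @ p + t to an (N, 3) point array."""
    return points @ R.T + t

# Example: rotate 30 degrees about the y (vertical) axis and shift slightly.
theta = np.deg2rad(30.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.05, 0.0, 0.0])

cloud = np.array([[0.0, 0.0, 1.0]])     # one point straight ahead
moved = transform_point_cloud(cloud, R, t)
```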
You now have the transformed point cloud of the target view.
This point cloud further needs to be projected to 2D. To do this,
$ python 3D_to_2D_open3d_part2.py
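Conceptually, this projection step splats each 3D point through the intrinsics onto the image plane, keeping the nearest point per pixel; pixels that no point lands on stay empty, which is what makes the later inpainting necessary. A self-contained NumPy sketch (the matrix K, the point, and the colour are illustrative, not values from the scripts):

```python
import numpy as np

def render_points(points, colors, K, h, w):
    """Project camera-frame 3D points through intrinsics K and splat them
    onto an (h, w, 3) image with a z-buffer; unhit pixels stay 0 (holes)."""
    img = np.zeros((h, w, 3))
    zbuf = np.full((h, w), np.inf)
    uvw = points @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    for (u, v), z, c in zip(uv, points[:, 2], colors):
        if 0 <= u < w and 0 <= v < h and z < zbuf[v, u]:
            zbuf[v, u] = z          # keep only the nearest point per pixel
            img[v, u] = c
    return img

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
points = np.array([[0.1, -0.2, 2.0]])   # one red point, 2 m away
colors = np.array([[1.0, 0.0, 0.0]])
img = render_points(points, colors, K, 480, 640)
```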
After the point cloud is rendered to 2D, a corresponding mask needs to be generated to perform inpainting:
$ python mask_generator.py
$ python Inpainting.py
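The last two scripts can be sketched together: the mask marks pixels that the projection left empty, and inpainting fills them from their neighbourhood. The diffusion fill below is only a toy stand-in for whatever method Inpainting.py actually uses.

```python
import numpy as np

def hole_mask(img):
    """Pixels not covered by any projected 3D point stay exactly zero."""
    return np.all(img == 0, axis=-1)

def diffusion_inpaint(img, mask, iters=100):
    """Toy inpainting: repeatedly replace each hole pixel with the mean of
    its 4-neighbours (with wrap-around at the borders) until it is filled."""
    out = img.astype(float).copy()
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]
    return out

# Toy rendered view: an all-white image with one unfilled pixel.
rendered = np.ones((8, 8, 3))
rendered[4, 4] = 0.0
mask = hole_mask(rendered)
filled = diffusion_inpaint(rendered, mask)
```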
After this stage we have the inpainted image, which is the target view of the given input image.
Check out some other work in novel view synthesis:
- Multi-view 3D Models from Single Images with a Convolutional Network in CVPR 2016
- View Synthesis by Appearance Flow in ECCV 2016
- Transformation-Grounded Image Generation Network for Novel 3D View Synthesis in CVPR 2017
- Neural scene representation and rendering in Science 2018
- Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis in NIPS 2015
- DeepStereo: Learning to Predict New Views From the World's Imagery in CVPR 2016
- Learning-Based View Synthesis for Light Field Cameras in SIGGRAPH Asia 2016
- Novel View Synthesis in TensorFlow
References
[1] Sun, Shao-Hua et al., "Multi-view to Novel View: Synthesizing Novel Views with Self-Learned Confidence", 2018.