
Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds


CVPR, 2023
Shaowei Liu · Saurabh Gupta* · Shenlong Wang*

Paper PDF · Project Page · Google Colab · YouTube Video


This repository contains a PyTorch implementation of the paper: Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds. In this paper, we build animatable 3D models from point cloud sequences of arbitrary articulated objects.

Overview

(Overview figure)

Installation

  • Clone this repository:
    git clone https://github.com/stevenlsw/reart
    cd reart
  • Install requirements in a virtual environment:
    sh setup_env.sh

The code is tested with Python 3.6.13 and PyTorch 1.10.2+cu113.
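
After installation, a quick sanity check like the sketch below confirms that the tested PyTorch build and CUDA device are visible (other versions may work, but are untested):

    # Minimal environment sanity check.
    import torch

    print(torch.__version__)          # expected: 1.10.2+cu113 (tested version)
    print(torch.cuda.is_available())  # True if the CUDA 11.3 build sees a GPU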

Colab notebook

Run our Colab notebook for a quick start!

Demo

The demo_data folder contains data and pretrained models for the Nao robot. We provide two pretrained models: base-2 is the relaxation model and kinematic-2 is the projection model. The postfix 2 is the canonical frame index, which is selected as the frame with the lowest energy.

Evaluate and visualization

The canonical frame index cano_idx should be consistent with the postfix in the pretrained model name, as in the commands below.
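
To keep the two in sync automatically, a small hypothetical helper (not part of the repo) could parse the postfix out of the checkpoint path and supply it as --cano_idx:

    import os
    import re

    def cano_idx_from_ckpt(resume_path):
        """Parse the canonical frame index from a base-{idx}/kinematic-{idx}
        checkpoint directory, e.g. '.../nao/kinematic-2/model.pth.tar' -> 2."""
        model_dir = os.path.basename(os.path.dirname(resume_path))
        match = re.search(r"-(\d+)$", model_dir)
        if match is None:
            raise ValueError("no canonical-frame postfix in %r" % model_dir)
        return int(match.group(1))

    print(cano_idx_from_ckpt("demo_data/pretrained/nao/kinematic-2/model.pth.tar"))  # 2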

  • projection model

    python run_robot.py --seq_path=demo_data/data/nao --save_root=exp --cano_idx=2 --evaluate --resume=demo_data/pretrained/nao/kinematic-2/model.pth.tar --model=kinematic
  • relaxation model

    python run_robot.py --seq_path=demo_data/data/nao --save_root=exp --cano_idx=2 --evaluate --resume=demo_data/pretrained/nao/base-2/model.pth.tar --model=base

After running the command, results are stored in ${save_root}/${robot name}. input.gif visualizes the input sequence, recon.gif visualizes the reconstruction, and gt.gif visualizes the ground truth. seg.html visualizes the predicted segmentation, structure.html visualizes the inferred topology, and result.txt contains the evaluation results.
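
The sketch below, assuming the default save_root=exp and the nao sequence from the demo commands above, simply verifies these outputs and prints the metrics:

    from pathlib import Path

    out_dir = Path("exp") / "nao"   # ${save_root}/${robot name}
    for name in ["input.gif", "recon.gif", "gt.gif",
                 "seg.html", "structure.html", "result.txt"]:
        path = out_dir / name
        print(path, "exists" if path.exists() else "missing")

    print((out_dir / "result.txt").read_text())  # evaluation results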

(Qualitative results on nao: input, reconstruction, and ground truth)

Data and pretrained model

Download data

Download the data from here and save it as the data folder.

data
├──  robot
│     └── nao   - robot name
│     └── ...       
├──  category_normalize_scale.pkl  - center and scale of each category
├──  real
│     └── toy   - real scan object
│     └── switch  
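
The tree above says category_normalize_scale.pkl stores the center and scale of each category; its exact layout is an assumption here, so a first pass might just inspect it:

    import pickle

    with open("data/category_normalize_scale.pkl", "rb") as f:
        normalize = pickle.load(f)

    print(type(normalize))
    if isinstance(normalize, dict):          # assumed: one entry per category
        for category, value in list(normalize.items())[:3]:
            print(category, value)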

Download pretrained model

Download the pretrained models from here and save them as the pretrained folder.

pretrained
├──  robot
│     └── nao   - robot name
│       └── base-{cano_idx}       - pretrained relaxation model			    
│       └── kinematic-{cano_idx}  - pretrained projection model  
├──  real
├──  corr_model.pth.tar  - pretrained correspondence model
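
.pth.tar checkpoints are typically dictionaries written with torch.save; the exact keys in this repo are an assumption, so the sketch below only inspects the top level of the correspondence model:

    import torch

    ckpt = torch.load("pretrained/corr_model.pth.tar", map_location="cpu")
    print(type(ckpt))
    if isinstance(ckpt, dict):   # assumed layout; adjust to whatever prints
        print(list(ckpt.keys()))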

Robot Experiment

Take nao as an example.

Train relaxation model

corr_model.pth.tar is needed for training. We recommend setting cano_idx to the same value as in our released pretrained models to reproduce the reported performance for each category.

python run_robot.py --seq_path=data/robot/nao --save_root=exp --cano_idx=2 --use_flow_loss --use_nproc --use_assign_loss --downsample 4 --n_iter=15000

The relaxation results are stored at ${save_root}/${robot name}/result.pkl and are needed for training the projection model.

Train projection model

Set base_result_path to the relaxation result from above.

python run_robot.py --seq_path=data/robot/nao --save_root=exp --cano_idx=2  --use_flow_loss --use_nproc --use_assign_loss --model=kinematic --base_result_path=exp/nao/result.pkl --assign_iter=0 --downsample=2 --assign_gap=1 --snapshot_gap=10
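
The two stages chain naturally; a hypothetical driver script (using only the flags documented above) could run relaxation and then projection in one go:

    import subprocess

    relax = ("python run_robot.py --seq_path=data/robot/nao --save_root=exp "
             "--cano_idx=2 --use_flow_loss --use_nproc --use_assign_loss "
             "--downsample 4 --n_iter=15000")
    project = ("python run_robot.py --seq_path=data/robot/nao --save_root=exp "
               "--cano_idx=2 --use_flow_loss --use_nproc --use_assign_loss "
               "--model=kinematic --base_result_path=exp/nao/result.pkl "
               "--assign_iter=0 --downsample=2 --assign_gap=1 --snapshot_gap=10")

    for cmd in (relax, project):                  # projection consumes result.pkl
        subprocess.run(cmd.split(), check=True)   # stop early if a stage fails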

Evaluate pretrained model

python run_robot.py  --seq_path=data/robot/nao --save_root=exp --cano_idx=2 --evaluate --resume=pretrained/robot/nao/kinematic-2/model.pth.tar --model=kinematic

See all robots and pretrained models in pretrained/robot. Taking spot as another example, you could get:

(Qualitative results on spot: input, reconstruction, and ground truth)

Real-world Experiment

Follow instructions similar to the robot experiment. Take toy as an example.

Inference

python run_real.py --seq_path=data/real/toy --evaluate --model=kinematic --save_root=exp --cano_idx=0  --resume=pretrained/real/toy/kinematic-0/model.pth.tar

Train relaxation model

python run_real.py --seq_path=data/real/toy --save_root=exp --cano_idx=0 --use_flow_loss --use_nproc --use_assign_loss --assign_iter=1000 

Train projection model

python run_real.py --seq_path=data/real/toy --cano_idx=0 --save_root=exp --n_iter=200 --use_flow_loss --use_nproc --use_assign_loss --model=kinematic --assign_iter=0 --assign_gap=1 --snapshot_gap=10 --base_result_path=exp/toy/result.pkl  

We provide real scans of toy and switch captured with the Polycam app on an iPhone. Taking toy as an example, you could get:

(Qualitative results on toy: input and reconstruction)

Sapien Experiment

Setup

Train relaxation model

Specify sapien_idx to select different Sapien objects; all experiments use canonical frame 0 (cano_idx=0).

python run_sapien.py --sapien_idx=212 --save_root=exp --n_iter=2000 --cano_idx=0 --use_flow_loss --use_nproc --use_assign_loss

The relaxation results are stored at ${save_root}/sapien_{sapien_idx}/result.pkl and are needed for training the projection model.

Train projection model

Set base_result_path to the relaxation result from above.

python run_sapien.py --sapien_idx=212 --save_root=exp --n_iter=200 --cano_idx=0 --model=kinematic --use_flow_loss --use_nproc --use_assign_loss  --assign_iter=0 --assign_gap=1 --snapshot_gap=10 --base_result_path=exp/sapien_212/result.pkl

After training, results are stored in ${save_root}/sapien_{sapien_idx}/, and result.txt contains the evaluation results.
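
To evaluate several Sapien objects in a row, a hypothetical sweep could run both stages and collect the metrics; sapien_idx=212 comes from the commands above, and any other indices are placeholders:

    import subprocess
    from pathlib import Path

    for idx in [212]:   # extend with the sapien_idx values you care about
        common = (f"python run_sapien.py --sapien_idx={idx} --save_root=exp "
                  "--cano_idx=0 --use_flow_loss --use_nproc --use_assign_loss")
        subprocess.run(f"{common} --n_iter=2000".split(), check=True)
        subprocess.run((f"{common} --n_iter=200 --model=kinematic --assign_iter=0 "
                        f"--assign_gap=1 --snapshot_gap=10 "
                        f"--base_result_path=exp/sapien_{idx}/result.pkl").split(),
                       check=True)
        print(Path(f"exp/sapien_{idx}/result.txt").read_text())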

Taking sapien_idx=212 as an example, you could get:

(Qualitative results on sapien_idx=212: input, reconstruction, and ground truth)

Citation

If you find our work useful in your research, please cite:

@inproceedings{liu2023building,
  title={Building Rearticulable Models for Arbitrary 3D Objects from 4D Point Clouds},
  author={Liu, Shaowei and Gupta, Saurabh and Wang, Shenlong},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={21138--21147},
  year={2023}
}

Acknowledgements

We thank: