Yulong Li,
Shubham Agrawal,
Jen-shuo Liu,
Steven Feiner,
Shuran Song
Columbia University
Project Page | Video | arXiv
conda env create -f environment.yml
conda activate telerobot
pip install -e .
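A quick sanity check that the environment is active and can see your GPU (this assumes the env ships PyTorch, which the training steps below rely on):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"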
Install other dependencies for visualization and system tests:
We create our dataset by procedurally processing the ABC Dataset. By default, the following scripts generate the same dataset we used for all model trainings, but you can generate different data by:
- downloading different ABC data chunks (modify get_abc_data.sh);
- modifying train_sc.txt and val_sc.txt, the training and validation data paths for scene completion, and/or train.txt and val.txt, the training and validation data paths for action snapping;
- modifying the random seeds, data_split and/or data_gen, in config.yaml (see the sketch after this list).
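The scripts use Hydra-style overrides (e.g., data_gen=seg_sc below), so the seeds can likely also be changed per run without editing the file. The key path below is hypothetical; check config.yaml for the actual names:
# Hypothetical seed override; replace data_gen.seed with the real key from config.yaml
python data_generation.py data_gen=seg_sc data_gen.dataset_split=train_sc data_gen.scene_type=kit data_gen.seed=123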
We support Ray for parallel processing if your machine has multiple GPUs/CPUs, but Ray is disabled by default. To enable it, run commands with additional flags. For example:
# Without ray
python data_generation.py data_gen=seg_sc data_gen.dataset_size=2000 data_gen.dataset_split=train_sc data_gen.scene_type=kit
# With ray, using 4 GPUs, with CUDA IDs 0,1,2,3, and 48 CPUs.
CUDA_VISIBLE_DEVICES=0,1,2,3 python data_generation.py data_gen=seg_sc data_gen.dataset_size=2000 data_gen.dataset_split=train_sc data_gen.scene_type=kit ray.num_cpus=48 ray.num_gpus=4
Download the ABC data chunks:
./get_abc_data.sh
Process the ABC Dataset to generate scaled objects and their respective kits:
python prepare_dataset.py
Optionally, evaluate the processed dataset:
python evaluate.py evaluate=prepared_data evaluate.path=dataset/ABC_CHUNK
# To visualize:
cd dataset/ABC_CHUNK
python -m http.server 8000 # go to localhost:8000
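The visualizations are served by Python's stock http.server, so if you generated the data on a remote machine you can simply forward the port (standard ssh usage, not repo-specific; user@remote-host is a placeholder):
# Run on your local machine, then browse to http://localhost:8000
ssh -L 8000:localhost:8000 user@remote-host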
# Training set:
python data_generation.py data_gen=seg_sc data_gen.dataset_size=2000 data_gen.dataset_split=train_sc data_gen.scene_type=kit
python data_generation.py data_gen=seg_sc data_gen.dataset_size=2000 data_gen.dataset_split=train_sc data_gen.scene_type=object
Optionally, visualize the generated data using:
# kit
python evaluate.py evaluate=data evaluate.dataset_split=train_sc evaluate.scene_type=kit evaluate.num_samples=1
# To visualize:
cd dataset/sc_abc/train_sc; python -m http.server 8000 # go to localhost:8000
# object
python evaluate.py evaluate=data evaluate.dataset_split=train_sc evaluate.scene_type=object evaluate.num_samples=1
# To visualize:
cd dataset/sc_abc/train_sc; python -m http.server 8000 # go to localhost:8000
# Training set:
python data_generation.py data_gen=vol_match_6DoF vol_match_6DoF.dataset_size=1000 vol_match_6DoF.dataset_split=train
# Validation set:
python data_generation.py data_gen=vol_match_6DoF vol_match_6DoF.dataset_size=100 vol_match_6DoF.dataset_split=val
Optionally, visualize the generated data using:
python evaluate.py evaluate=vol_match_6DoF vol_match_6DoF.dataset_split=val vol_match_6DoF.dataset_size=3
# To visualize:
cd dataset/vol_match_abc/val; python -m http.server 8000 # go to localhost:8000
Train the segmentation model:
python train.py train=seg
To evaluate the segmentation model, define the relevant parameters, such as model_path, in conf/evaluate/seg.yaml (you can also pass them in as flags), and then run the following commands:
python evaluate.py evaluate=seg evaluate.save_path=logs/evaluate_seg
cd logs/evaluate_seg; python -m http.server 8000 # go to localhost:8000
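As noted above, parameters can also be passed as flags instead of edited into the YAML; a hypothetical invocation with an overridden model_path (the checkpoint path is a placeholder):
python evaluate.py evaluate=seg evaluate.model_path=logs/seg/model.pth evaluate.save_path=logs/evaluate_seg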
Train the shape completion models for kits and objects:
python train.py train=shape_completion train.scene_type=kit train.log_path=logs/sc_kit train.batch_size=2
python train.py train=shape_completion train.scene_type=object train.log_path=logs/sc_object train.batch_size=60
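Training logs go to the directory given by train.log_path. If those logs include TensorBoard summaries (an assumption; check the directory contents), you can monitor runs with:
tensorboard --logdir logs --port 6006  # then open localhost:6006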
To evaluate the shape completion models, define the relevant parameters, such as model_path, in sc_model.yaml (you can also pass them in as flags), and then run the following commands:
python evaluate.py evaluate=sc_model evaluate.save_path=logs/evaluate_sc
cd logs/evaluate_sc; python -m http.server 8000 # go to localhost:8000
Before training the models with shape-completed volumes, make sure the shape completion models are trained and define the relevant model paths in sc_volumes.yaml. Then generate the shape-completed volumes:
python data_generation.py data_gen=sc_volumes data_gen.datadir="dataset/vol_match_abc/train" data_gen.num=1000
python data_generation.py data_gen=sc_volumes data_gen.datadir="dataset/vol_match_abc/val" data_gen.num=100
Train models:
# With GT volumes
python train.py train=vol_match_transport vol_match_6DoF.vol_type=oracle
python train.py train=vol_match_rotate vol_match_6DoF.vol_type=oracle
# With partial volumes
python train.py train=vol_match_transport vol_match_6DoF.vol_type=raw
python train.py train=vol_match_rotate vol_match_6DoF.vol_type=raw
# With shape-completed volumes
python train.py train=vol_match_transport vol_match_6DoF.vol_type=sc
python train.py train=vol_match_rotate vol_match_6DoF.vol_type=sc
To evaluate the models, define the relevant parameters, such as model_path, in config.yaml (you can also pass them in as flags), and then run the following commands:
python evaluate.py evaluate=vol_match_6DoF_model vol_match_6DoF.dataset_split=val vol_match_6DoF.evaluate_size=100 vol_match_6DoF.evaluate_save_path=logs/evaluate_snap
cd logs/evaluate_snap; python -m http.server 8000 # go to localhost:8000
You can download our real-world dataset for local system tests. Put the dataset at real_world/dataset, then start the server and the client:
bash visualizer/start_app.sh
The script runs the server in the background at port 52000 and the client at port 8001. Visit the client at http://0.0.0.0:8001/sysa.html.
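To confirm both processes came up, you can check that the two ports are listening (standard tooling, not part of this repo):
lsof -i :52000 -i :8001  # or: ss -ltn | grep -E '52000|8001'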
Finally, run the main script and follow the instructions:
python real_world/main.py
We store the unit object-kit pairs used for our user study at assets/test_real_3dprint.
To procedurally generate similar files for your own experiments, create a list of files in the same format as val_real.txt. Note that these files should already be preprocessed as shown above. To prepare models for 3D printing, run:
python scripts/create_test_real_3dprint_objs.py
By default, this will generate 3D print models inside the directory assets/test_real_3dprint. You may need to use software like Blender to invert the generated meshes if the model surfaces appear inverted in your 3D printing software.
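As an alternative to flipping normals by hand in Blender, here is a minimal sketch using the trimesh library (an external dependency, not part of this repo; the file names are placeholders):
python - <<'EOF'
import trimesh
# Load a generated print model; force='mesh' collapses multi-part files into one mesh.
mesh = trimesh.load('assets/test_real_3dprint/example.obj', force='mesh')
mesh.invert()  # flip face winding, which flips the surface normals
mesh.export('assets/test_real_3dprint/example_fixed.obj')
EOF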
@inproceedings{yulong2022scene,
title={Scene Editing as Teleoperation: A Case Study in 6DoF Kit Assembly},
author={Li, Yulong and Agrawal, Shubham and Liu, Jen-Shuo and Feiner, Steven and Song, Shuran},
booktitle={2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year={2022},
organization={IEEE}
}
- Andy Zeng et al.: tsdf-fusion-python
- UR5-Controller: Python-URX and @jkur's fork of it
- Kevin Zakka: Walle
- Zhenjia Xu: Html-Visualization