[ACMMM 2022] Official PyTorch implementation of "Action-conditioned On-demand Motion Generation", ACM Multimedia 2022.
This repo contains the official implementation of our paper:
Action-conditioned On-demand Motion Generation
Qiujing Lu*, Yipeng Zhang*, Mingjian Lu, Vwani Roychowdhury
ACM Multimedia (ACMMM) 2022
If you find our project useful in your research, please cite:
@inproceedings{lu2022action,
title={Action-conditioned On-demand Motion Generation},
author={Lu, Qiujing and Zhang, Yipeng and Lu, Mingjian and Roychowdhury, Vwani},
booktitle={Proceedings of the 30th ACM International Conference on Multimedia},
year={2022}
}
We recommend using Anaconda to create the virtual environment. Either create the environment directly from the provided environment.yml:
conda env create -f environment.yml
conda activate ODMO
or create it manually and install the dependencies with pip:
conda create -n ODMO python=3.8.8
conda activate ODMO
pip install -r requirements.txt
For CUDA 11.1:
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
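To sanity-check the installation, the following snippet (not part of the repo) verifies that PyTorch was installed with CUDA support:
# Quick environment check (not part of the repo): verify the PyTorch / CUDA setup.
import torch
print(torch.__version__)            # expect 1.8.0+cu111 for the CUDA 11.1 wheel
print(torch.cuda.is_available())    # should print True if the GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))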
sh ./scripts/cleanup.sh
sh ./scripts/download/download_dataset.sh
sh ./scripts/download/download_pretrain.sh
If the download (via gdown) fails, please download the files manually from the links in the scripts and unzip them into the repository's home directory.
- Sample real motions (this may take a while, about 5 minutes):
sh ./scripts/model/sample_realdata.sh
Check the status in the ./logs/ folder and wait for the message "Sample real data from {dataset_name} is accomplished".
- Sample from the pretrained model (please check the CUDA device set in the script):
sh ./scripts/model/pretrain_inference.sh
Check the status in the ./logs/ folder and wait for the message "Sample data from {model_name} is accomplished".
- Generate metrics based on the classifier:
sh ./scripts/model/pretrain_metric.sh
See logs/log_{task}_{dataset_name}_{model_name} for the numbers.
- Modes discovery:
sh ./scripts/model/pretrain_modes_discovery.sh
- Trajectory customization:
sh ./scripts/model/pretrain_trajectory_cus.sh
The dist_e values for 10 different seeds can be found in the CSV files under ./results/endloc_customization/.
We recommend using wandb to track training; alternatively, you can run wandb offline by adding the following line at the beginning of the main function:
os.environ['WANDB_MODE'] = 'offline'
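For reference, a minimal sketch of where such a line goes is shown below; the main() structure and the project name are hypothetical, the only requirement is that the environment variable is set before wandb.init() runs:
# Minimal sketch (hypothetical entry point): WANDB_MODE must be set before wandb.init().
import os
os.environ['WANDB_MODE'] = 'offline'   # log runs locally instead of syncing to the wandb server

import wandb

def main():
    wandb.init(project='ODMO')         # project name here is only illustrative
    # ... training loop ...

if __name__ == '__main__':
    main()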
sh ./scripts/model/train_odmo_mocap.sh
sh ./scripts/model/train_odmo_humanact12.sh
sh ./scripts/model/train_odmo_uestc.sh
Each training run generates a folder with a unique name (we call it the model name) under ckpt/{dataset_name}. Please keep track of the most recent one, or use wandb to track it.
python ./src/inference_odmo_best.py {model_name} {sampling_strategy} {device}
For example, to run inference with the model mocap_abc using mode-preserving sampling (MM) on cuda:0:
python ./src/inference_odmo_best.py mocap_abc MM cuda:0
or, using the conventional (standard) sampling:
python ./src/inference_odmo_best.py mocap_abc standard cuda:0
This is a CPU task. It requires the folder name of the sampled data under the ./sampled_data folder:
python ./src/metrics/calculate_metric.py mocap_end_release_MM
To calculate metrics on the UESTC dataset (which has both train and test splits), use:
python ./src/metrics/calculate_metric.py uestc_end_release_MM train
python ./src/metrics/calculate_metric.py uestc_end_release_MM test
You can also use ./src/metrics/calculate_metric_ext.py to calculate metrics from any specific sampled data.
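As background, classifier-based evaluation compares real and generated motions in the feature space of a pretrained action classifier; a common ingredient of such evaluations is the FID. The sketch below is a generic FID computation under the assumption that the features have already been extracted; it is not the repo's exact implementation:
# Generic FID sketch (not the repo's implementation): compares two sets of
# classifier features, e.g. real vs. generated motions.
import numpy as np
from scipy import linalg

def fid(feats_real, feats_gen):
    # feats_*: arrays of shape (n_samples, feat_dim) from a pretrained action classifier
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return np.sum((mu_r - mu_g) ** 2) + np.trace(cov_r + cov_g - 2.0 * covmean)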
This is also a CPU task
python src/metrics/calculate_apd.py mocap_end_release_MM
Similarly, for the UESTC dataset we can use:
python ./src/metrics/calculate_apd.py uestc_end_release_MM train
python ./src/metrics/calculate_apd.py uestc_end_release_MM test
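As background, APD (average pairwise distance) measures the diversity of generated samples. The sketch below shows the usual definition under an assumed (n_samples, n_frames, n_joints, 3) layout; the repo's exact computation may differ:
# Generic APD sketch (the repo's exact computation may differ):
# average pairwise L2 distance between generated motion samples.
import numpy as np

def apd(samples):
    # samples: array of shape (n_samples, n_frames, n_joints, 3)
    flat = samples.reshape(len(samples), -1)
    dists = [np.linalg.norm(flat[i] - flat[j])
             for i in range(len(flat)) for j in range(i + 1, len(flat))]
    return float(np.mean(dists))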
The script modes_discovery_interpolation.py handles both the pretrained models and your own trained models.
For a pretrained model:
python ./src/customization/modes_discovery_interpolation.py {dataset_name} pretrained {use_end} {device}
For your own trained model, the model name is the randomly generated folder name under ckpt/{dataset_name}:
python ./src/customization/modes_discovery_interpolation.py {dataset_name} {model_name} {use_end} {device}
The script trajectory_customization.py handles both the pretrained models and your own trained models.
For a pretrained model:
python ./src/customization/trajectory_customization.py {dataset_name} pretrained {device}
For your own trained model, the model name is the randomly generated folder name under ckpt/{dataset_name}:
python ./src/customization/trajectory_customization.py {dataset_name} {model_name} {device}
The dist_e values for 10 different seeds can be found in the CSV files under ./results/endloc_customization/.
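If you want to aggregate these numbers yourself, a small sketch like the one below could be used; the glob pattern and the 'dist_e' column name are assumptions based on the description above, so adjust them to the actual files produced:
# Hypothetical aggregation sketch: average dist_e over seeds for each CSV
# under ./results/endloc_customization/ (file layout assumed).
import glob
import pandas as pd

for path in sorted(glob.glob('./results/endloc_customization/*.csv')):
    df = pd.read_csv(path)
    print(path, df['dist_e'].mean())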
To plot the sampled motions produced by the inference*.py scripts, use
./src/draw_func/draw_gif_from_np_multi.py
In that file you can specify the dataset name, the npz file name, the output folder, the downsample_rate, and n_jobs.
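For a rough idea of what such a plotting step involves, here is a generic sketch (not the repo's script) that renders one motion to a GIF, assuming the npz stores an array of shape (n_frames, n_joints, 3):
# Generic sketch (not the repo's script): render one motion from an npz file to a GIF.
# Assumes the file stores an array of shape (n_frames, n_joints, 3).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

data = np.load('sampled_motion.npz')        # hypothetical file name
motion = data[data.files[0]]                # (n_frames, n_joints, 3)

fig = plt.figure()
ax = fig.add_subplot(projection='3d')

def draw(t):
    ax.cla()
    ax.scatter(motion[t, :, 0], motion[t, :, 1], motion[t, :, 2])
    ax.set_title(f'frame {t}')

anim = FuncAnimation(fig, draw, frames=len(motion))
anim.save('sampled_motion.gif', writer=PillowWriter(fps=20))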