# DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization
## Installation

- Clone the repository onto your local system.
- Navigate to the `python` directory.
- Create a virtual environment (tested on Python 3.9) with:
  ```
  python -m venv env
  ```
- Activate the created virtual environment.
- Install the necessary packages from the requirements file with:
  ```
  pip install -r requirements.txt
  ```
- Download and install PyTorch.
## Evaluation

To evaluate one BVH file:

```
python .\src\eval_drag.py .\models\model_dancedb .\data\example\eval\example.bvh --config .\config\6_trackers_config.json
```

To evaluate a directory with BVH files:

```
python .\src\eval_drag.py .\models\model_dancedb .\data\example\eval\ --config .\config\6_trackers_config.json
```

Results will be saved in `.\data\`.
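As the commands above show, `eval_drag.py` accepts either a single `.bvh` file or a directory of them. A small wrapper can build and launch that command from Python; `run_eval` and its `dry_run` flag are hypothetical conveniences for illustration, not part of the repository:

```python
import subprocess
from pathlib import Path

def run_eval(model_dir, target, config, dry_run=False):
    """Invoke eval_drag.py on a .bvh file or a directory of .bvh files.

    Hypothetical convenience wrapper around the README command; with
    dry_run=True it only returns the command list instead of running it.
    """
    cmd = ["python", str(Path("src") / "eval_drag.py"),
           str(model_dir), str(target), "--config", str(config)]
    if not dry_run:
        subprocess.run(cmd, check=True)  # raises if the script exits non-zero
    return cmd
```

Because the script itself handles both files and directories, the same wrapper covers both cases shown above.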
## Training

Training and evaluation data should be in a directory structured like `.\data\example\`. Inside that directory there must be two folders (`eval` and `train`) containing the `.bvh` files for evaluation and training. Note that the included `.\data\example\` does not have enough data for training; it only includes an example file from the preprocessed AMASS dataset.
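The expected layout can be sanity-checked with a short script before launching training; `check_dataset_layout` is a hypothetical helper mirroring the `eval`/`train` structure described above:

```python
from pathlib import Path

def check_dataset_layout(root):
    """Return a list of problems with a dataset directory.

    Hypothetical helper: expects `root` to contain eval/ and train/
    folders, each holding at least one .bvh file.
    """
    problems = []
    for split in ("eval", "train"):
        d = Path(root) / split
        if not d.is_dir():
            problems.append(f"missing folder: {d}")
        elif not any(d.glob("*.bvh")):
            problems.append(f"no .bvh files in: {d}")
    return problems
```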
1. Train the VAE:
   ```
   python .\src\train.py .\data\example\ name --fk
   ```
2. Train the temporal predictor:
   ```
   python .\src\train_temporal.py .\data\example\ name
   ```
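The two stages above can be chained programmatically; `train_pipeline` is a hypothetical sketch that runs both scripts in order with the same dataset directory and run name:

```python
import subprocess

def train_pipeline(data_dir, name, dry_run=False):
    """Run VAE training, then temporal-predictor training.

    Hypothetical orchestration of the two README commands; with
    dry_run=True the command lists are returned instead of executed.
    """
    stages = [
        ["python", "src/train.py", data_dir, name, "--fk"],
        ["python", "src/train_temporal.py", data_dir, name],
    ]
    if not dry_run:
        for cmd in stages:
            subprocess.run(cmd, check=True)  # stop if a stage fails
    return stages
```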
## Licenses

- The code in this repository is released under the MIT license. Please see the LICENSE file for further details.
- The model weights in this repository and the data are licensed under the CC BY-SA 4.0 license, as found in the LICENSE_data file.
- The webpage under the `.\docs\` directory was built using the Academic Project Page Template. This website is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.