DragPoser: Motion Reconstruction from Variable Sparse Tracking Signals via Latent Space Optimization

⚠️ Under construction... 🚧 🚧

Getting Started

  1. Clone the repository onto your local system.
  2. Navigate to the python directory.
  3. Create a virtual environment with: python -m venv env (tested on Python 3.9).
  4. Activate the created virtual environment.
  5. Install the necessary packages from the requirements file with: pip install -r requirements.txt.
  6. Download and install PyTorch (a command sketch covering these steps is shown below).
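
As a rough guide, steps 2–6 might look like the following on Windows (the last line is a placeholder; pick the exact PyTorch install command for your CUDA version from pytorch.org):

cd python
python -m venv env
.\env\Scripts\activate
pip install -r requirements.txt
pip install torch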

Evaluate

One BVH file

python .\src\eval_drag.py .\models\model_dancedb .\data\example\eval\example.bvh --config .\config\6_trackers_config.json

A directory with BVH files

python .\src\eval_drag.py .\models\model_dancedb .\data\example\eval\ --config .\config\6_trackers_config.json

Results will be saved in .\data\
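
In general, eval_drag.py takes a model directory, then either a single .bvh file or a directory containing .bvh files, plus a tracker configuration passed with --config. The placeholders below stand for your own paths:

python .\src\eval_drag.py <model_dir> <bvh_file_or_directory> --config <tracker_config.json>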

Train

Training and evaluation data should be organized in a directory similar to .\data\example\. Inside that directory there must be two subdirectories, train and eval, containing the .bvh files used for training and evaluation, respectively (see the layout sketch below). Note that the included .\data\example\ does not contain enough data for training; it only includes an example file from the preprocessed AMASS dataset.
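
A custom dataset directory mirroring .\data\example\ would therefore look roughly like this (all names below are placeholders):

my_dataset\
  train\
    motion_001.bvh
    motion_002.bvh
    ...
  eval\
    motion_100.bvh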

1. Train the VAE

python .\src\train.py .\data\example\ name --fk

2. Train the temporal predictor

python .\src\train_temporal.py .\data\example\ name

License