The official code of MotionLab; the core of the method is implemented in ./rfmotion/models/modeltype/rfmotion.py.
- [2025/01/23] release demo code
- [2025/01/23] release training code
- [2025/01/23] release evaluating code
- [2025/02/01] release code of specialist models
- [2025/02/03] release checkpoints
- [2025/02/04] 🔥🔥 Our unified model outperforms the specialist models on all task metrics after tuning the CFG parameters
Method | text gen. (FID) | traj. gen. (avg. err.) | text edit (R@1) | traj. edit (R@1) | in-between (avg. err.) | style transfer (SRA) | style transfer (CRA) |
---|---|---|---|---|---|---|---|
Ours-specialist models | 0.209 | 0.0398 | 41.44 | 59.86 | 0.0371 | 67.55 | 43.53 |
Ours-in paper | 0.223 | 0.0334 | 56.34 | 72.65 | 0.0273 | 64.97 | 47.86 |
🔥🔥 Ours-new | 0.167 | 0.0334 | 56.34 | 72.65 | 0.0273 | 69.21 | 44.62 |
├── checkpoints
│ ├── motionflow
│ │ ├── motionflow.ckpt
│ ├── clip-vit-large-patch14
│ ├── glove
│ ├── mdm-ldm
│ │ ├── motion_encoder.ckpt
│ │ ├── motionclip.pth.tar
│ ├── smpl
│ │ ├── J_regressor_extra.npy
│ │ ├── smplfaces.npy
│ │ ├── kintree_table.pkl
│ │ ├── SMPL_NEUTRAL.pkl
│ ├── smplh
│ │ ├── smplh.faces
│ │ ├── SMPLH_NEUTRAL.npz
│ ├── t2m
│ │ ├── Comp_v6_KLD01
├── datasets
│ ├── all
│ │ ├── new_joint_vecs
│ │ │ ├── 000000.npy
│ │ │ ├── 040000.npy
│ │ ├── new_joints
│ │ │ ├── 000000.npy
│ │ │ ├── 040000.npy
│ │ ├── texts
│ │ │ ├── 000000.txt
│ │ │ ├── 040000.txt
│ │ ├── train_humanml.txt
│ │ ├── train_motionfix.txt
│ │ ├── val_humanml.txt
│ │ ├── val_motionfix.txt
│ │ ├── test_humanml.txt
│ │ ├── test_motionfix.txt
│ ├── mcm-ldm
│ │ ├── content_test_feats
│ │ ├── style_test_feats
├── experiments
│ ├── rfmotion
│ │ ├── SPECIFIED NAME OF EXPERIMENTS
│ │ │ ├── checkpoints
python: 3.9.20; torch: 2.1.1; pytorch-lightning: 1.9.4; cuda: 11.8.0;
conda create python=3.9 --name rfmotion
conda activate rfmotion
conda install pytorch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
python -m spacy download en_core_web_sm
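To confirm the environment matches the versions listed above, a minimal check like the following can be run (a sketch, assuming the install steps above completed successfully):

```python
# Minimal environment check; expected versions follow the list above.
import torch
import pytorch_lightning as pl

print("torch:", torch.__version__)            # expected 2.1.1
print("pytorch-lightning:", pl.__version__)   # expected 1.9.4
print("CUDA available:", torch.cuda.is_available())
print("CUDA (build):", torch.version.cuda)    # expected 11.8
```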
The downloaded dependencies should be placed as shown in the Folder Structure, including glove, t2m, smpl, and clip.
bash prepare/download_smpl_model.sh
bash prepare/download_smpl_file.sh
bash prepare/download_glove.sh
bash prepare/download_t2m_evaluators.sh
bash prepare/download_clip.sh
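A quick way to confirm the downloads landed in the right place (a sketch; the paths follow the Folder Structure above):

```python
# Check that the downloaded dependencies sit where the Folder Structure expects them.
import os

paths = [
    "./checkpoints/glove",
    "./checkpoints/t2m/Comp_v6_KLD01",
    "./checkpoints/smpl/SMPL_NEUTRAL.pkl",
    "./checkpoints/smplh/SMPLH_NEUTRAL.npz",
    "./checkpoints/clip-vit-large-patch14",
]
for p in paths:
    print(("OK       " if os.path.exists(p) else "MISSING  ") + p)
```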
Download the AMASS dataset and MotionFix dataset.
Follow the instructions in HumanML3D to process the AMASS data into HumanML3D format, then copy the results into the "all" folder as shown in the Folder Structure.
Follow the instructions in MotionFix-Retarget to process the MotionFix data into HumanML3D format, then copy the results into the "all" folder as shown in the Folder Structure.
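Once processed, the data can be sanity-checked with a short script like the one below (a sketch; the shapes in the comments assume the standard HumanML3D representation):

```python
# Sanity-check one processed sample; file names follow the Folder Structure above.
import numpy as np

vec = np.load("./datasets/all/new_joint_vecs/000000.npy")
joints = np.load("./datasets/all/new_joints/000000.npy")
print("new_joint_vecs:", vec.shape)   # (num_frames, 263) in the HumanML3D representation
print("new_joints:", joints.shape)    # (num_frames, 22, 3) in the HumanML3D representation

with open("./datasets/all/train_humanml.txt") as f:
    print("HumanML3D training samples:", sum(1 for line in f if line.strip()))
```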
The downloaded checkpoints should be placed as shown in the Folder Structure, including motion_encoder.ckpt, motionclip.pth.tar, and motionflow.ckpt.
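If needed, the pretrained checkpoint can be inspected before use (a sketch; the exact key layout inside the checkpoint is an assumption and may differ):

```python
# Peek at the unified model checkpoint without loading it onto a GPU.
import torch

ckpt = torch.load("./checkpoints/motionflow/motionflow.ckpt", map_location="cpu")
if isinstance(ckpt, dict):
    # Lightning checkpoints typically expose keys such as "state_dict" and "hyper_parameters".
    print("top-level keys:", list(ckpt.keys()))
```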
FFmpeg is required for exporting videos; otherwise, only SMPL meshes can be exported.
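To check whether FFmpeg is available on the current machine (a minimal sketch):

```python
# Verify FFmpeg is on PATH before exporting videos.
import shutil

ffmpeg = shutil.which("ffmpeg")
print("ffmpeg:", ffmpeg if ffmpeg else "not found (only SMPL meshes can be exported)")
```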
You should first check the configuration in ./configs/config_rfmotion.yaml to assign the checkpoint and task:
DEMO:
TYPE: "text" # for text-based motion generation; alongside "hint", "text_hint", "inbetween", "text_inbetween", "style", "source_text", "source_hint", "source_text_hint"
CHECKPOINTS: "./checkpoints/motionflow/motionflow.ckpt" # Pretrained model path
cd ./script
bash demo.sh
Note that rendering the video directly here may produce poor results, with reduced clarity and flat lighting. It is recommended to export the mesh and then render the video in professional 3D software such as Blender.
You should first check the configuration in ./configs/config_rfmotion.yaml.
cd ./script
bash train_rfmotion.sh
You should first check the configuration in ./configs/config_rfmotion.yaml to assign the checkpoint and task:
TEST:
CHECKPOINTS: "./checkpoints/motionflow/motionflow.ckpt" # Pretrained model path
METRIC:
TYPE: ["MaskedMetrics", "TM2TMetrics", "SourceTextMetrics", "SourceHintMetrics", "SourceTextHintMetrics", "InbetweenMetrics", "TextInbetweenMetrics","TextHintMetrics", "HintMetrics", "StyleMetrics", ]
cd ./script
bash test_rfmotion.sh
If you are interested in the specialist models focusing on a specific task, you can replace ./config/config_rfmotion.yaml with ./config/config_rfmotion_TASK.yaml. The corresponding core code is ./rfmotion/models/modeltype/rfmotion_seperate.py.
Some code is borrowed from MLD, MotionFix, MCM-LDM, and diffusers.
If you find MotionLab useful for your work, please cite:
@article{guo2025motionlab,
  title={MotionLab: Unified Human Motion Generation and Editing via the Motion-Condition-Motion Paradigm},
  author={Guo, Ziyan and Hu, Zeyu and Zhao, Na and Soh, De Wen},
  journal={arXiv preprint arXiv:2502.02358},
  year={2025}
}