Skyreels Logo

SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers

Di Qiu, Zhengcong Fei, Rui Wang, Jialin Bai, Changqian Yu
Skywork AI



showcase
🔥 For more results, visit our homepage 🔥

👋 Join our Discord

This repo, named SkyReels-A1, contains the official PyTorch implementation of our paper SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers.

🔥🔥🔥 News!!

  • Feb 18, 2025: 👋 We release the inference code and model weights of SkyReels-A1. Download
  • Feb 18, 2025: 🎉 We have made our technical report available as open source. Read
  • Feb 18, 2025: 🔥 Our online demo of LipSync is available on SkyReels now! Try out LipSync.
  • Feb 18, 2025: 🔥 We have open-sourced the I2V video generation model SkyReels-V1. This is the first and most advanced open-source human-centric video foundation model.

Getting Started 🏁

1. Clone the code and prepare the environment 🛠️

First, clone the repository and set up a conda environment:

git clone https://github.com/SkyworkAI/SkyReels-A1.git
cd SkyReels-A1

# create env using conda
conda create -n skyreels-a1 python=3.10
conda activate skyreels-a1

Then, install the remaining dependencies:

pip install -r requirements.txt

2. Download pretrained weights 📥

You can download the pretrained weights from HuggingFace:

# !pip install -U "huggingface_hub[cli]"
huggingface-cli download SkyReels-A1 --local-dir local_path --exclude "*.git*" "README.md" "docs"
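If you prefer the Python API over the CLI, here is a minimal sketch using huggingface_hub.snapshot_download; the repo id and local path simply mirror the command above, so adjust both to your setup:

from huggingface_hub import snapshot_download

# Download the pretrained weights, skipping git metadata and docs
# (repo id and local_dir mirror the CLI command above).
snapshot_download(
    repo_id="SkyReels-A1",
    local_dir="local_path",
    ignore_patterns=["*.git*", "README.md", "docs"],
)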

The FLAME, mediapipe, and smirk models are located in the SkyReels-A1/extra_models folder.

The pretrained weights should be organized in the following directory structure:

pretrained_weights
├── FLAME
├── SkyReels-A1-5B
│   ├── pose_guider
│   ├── scheduler
│   ├── tokenizer
│   ├── siglip-so400m-patch14-384
│   ├── transformer
│   ├── vae
│   └── text_encoder
├── mediapipe
└── smirk
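As a quick sanity check before running inference, the following sketch (a hypothetical helper, not part of the repo) verifies that the subfolders listed above exist under pretrained_weights:

import os

# Hypothetical helper: confirm the weight folders listed above are in place.
expected = [
    "FLAME",
    "SkyReels-A1-5B/pose_guider",
    "SkyReels-A1-5B/transformer",
    "SkyReels-A1-5B/vae",
    "SkyReels-A1-5B/text_encoder",
    "mediapipe",
    "smirk",
]
missing = [d for d in expected if not os.path.isdir(os.path.join("pretrained_weights", d))]
if missing:
    print("Missing weight folders:", missing)
else:
    print("All expected weight folders found.")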

3. Inference 🚀

You can simply run the inference script:

python inference.py

If the script runs successfully, you will get an output mp4 file that contains the driving video, the input image or video, and the generated result.
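For a quick check of the generated video, a small sketch that reads the mp4 with OpenCV and prints its frame count and resolution (the filename here is a placeholder; use whatever path inference.py writes):

import cv2

# Placeholder path; substitute the mp4 path produced by inference.py.
cap = cv2.VideoCapture("output.mp4")
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()
print(f"{frames} frames at {width}x{height}, {fps:.1f} fps")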

Gradio Interface 🤗

We provide a Gradio interface for a better experience; launch it with:

python app.py

The graphical interactive interface is shown below:

gradio

Metric Evaluation 👓

We also provide scripts for automatically computing the metrics reported in the paper, including SimFace, FID, and the L1 distance between expression and motion.

All code can be found in the eval folder. After setting the video result path, run the following commands in sequence:

python arc_score.py
python expression_score.py
python pose_score.py
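If you want to run all three evaluations in one go, a minimal wrapper sketch (assuming the video result paths are already configured inside each script) could look like:

import subprocess

# Run each evaluation script in sequence and stop on the first failure.
for script in ["arc_score.py", "expression_score.py", "pose_score.py"]:
    print(f"Running {script} ...")
    subprocess.run(["python", script], check=True)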

Acknowledgements 💐

We would like to thank the contributors of the CogVideoX and finetrainers repositories for their open research and contributions.

Citation 💖

If you find SkyReels-A1 useful for your research, please 🌟 this repo and cite our work using the following BibTeX entry:

@article{qiu2025skyreels,
  title={SkyReels-A1: Expressive Portrait Animation in Video Diffusion Transformers},
  author={Qiu, Di and Fei, Zhengcong and Wang, Rui and Bai, Jialin and Yu, Changqian and Fan, Mingyuan and Chen, Guibin and Wen, Xiang},
  journal={arXiv preprint arXiv:2502.10841},
  year={2025}
}
