[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation
DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with Diffusion Models (IJCAI 2023) | The DiffuseStyleGesture+ entry to the GENEA Challenge 2023 (ICMI 2023, Reproducibility Award)
[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven Holistic 3D Expression and Gesture Generation
The official implementation for ICMI 2020 Best Paper Award "Gesticulator: A framework for semantically-aware speech-driven gesture generation"
This is the official implementation for IVA '19 paper "Analyzing Input and Output Representations for Speech-Driven Gesture Generation".
PATS Dataset. Aligned Pose-Audio-Transcripts and Style for co-speech gesture research
This is the official implementation of the paper "Speech2AffectiveGestures: Synthesizing Co-Speech Gestures with Generative Adversarial Affective Expression Learning".
Deep Non-Adversarial Gesture Generation
Official Repository for the paper Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach published in ECCV 2020 (https://arxiv.org/abs/2007.12553)
This is the official implementation of the paper "Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents".
This is an official PyTorch implementation of "Gesture2Vec: Clustering Gestures using Representation Learning Methods for Co-speech Gesture Generation" (IROS 2022).
This repository contains the gesture generation model from the paper "Moving Fast and Slow" (https://www.tandfonline.com/doi/full/10.1080/10447318.2021.1883883) trained on the English dataset
Scripts for numerical evaluations for the GENEA Gesture Generation Challenge
This repository contains data pre-processing and visualization scripts used in the GENEA Challenge 2022 and 2023. See the repository's README.md for instructions on how to use the scripts.
Code for CVPR 2024 paper: ConvoFusion: Multi-Modal Conversational Diffusion for Co-Speech Gesture Synthesis
This fork adapts Gesticulator, the semantically-aware speech-driven gesture generation model, for integration with conversational agents in Unity.