[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"
A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositories, insightful videos, and more. Ideal for AI enthusiasts, researchers, and graphics professionals.
AI Talking Head: create videos from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. The avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
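The description above amounts to a two-stage flow: a call to the OpenAI API produces the reply text, and a call to Azure Cognitive Services speech synthesis turns that text into audio for the avatar. Below is a minimal TypeScript sketch of such a flow, not the repository's actual code; the model name, voice name, and environment variable names are assumptions.

```typescript
// Hypothetical sketch of a text -> speech pipeline like the one described above.
// Assumes OPENAI_API_KEY, AZURE_SPEECH_KEY, and AZURE_SPEECH_REGION are set.

async function generateReply(userInput: string): Promise<string> {
  // Ask the OpenAI chat completions endpoint for the avatar's reply text.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo", // model choice is an assumption
      messages: [{ role: "user", content: userInput }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function synthesizeSpeech(text: string): Promise<ArrayBuffer> {
  // Convert the reply text to audio with the Azure Speech REST API (SSML in, MP3 out).
  const region = process.env.AZURE_SPEECH_REGION;
  const res = await fetch(
    `https://${region}.tts.speech.microsoft.com/cognitiveservices/v1`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": process.env.AZURE_SPEECH_KEY!,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "audio-16khz-32kbitrate-mono-mp3",
      },
      // Voice name is an assumption; any Azure neural voice would work here.
      body: `<speak version='1.0' xml:lang='en-US'>
               <voice name='en-US-JennyNeural'>${text}</voice>
             </speak>`,
    }
  );
  return res.arrayBuffer();
}

// Usage: the reply text drives both the on-screen caption and the avatar's audio track.
generateReply("Hello, who are you?")
  .then((reply) => synthesizeSpeech(reply))
  .then((audio) => console.log(`Received ${audio.byteLength} bytes of audio`));
```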
Talking Avatar: create videos from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Animated Characters: create videos from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
AI Avatar/Anchor: create videos from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Build output of the talking-heads main UI repo.
Implementation of a lip-reading method using landmarks from a 3D talking head.