Implementation of a lipreading method using landmarks from a 3D talking head
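The entry above only names the approach, so here is a minimal Python sketch of the landmark-extraction step such a pipeline needs: pulling lip-region landmarks from a single frame with MediaPipe Face Mesh. The library choice, the lip_landmarks helper, and the feature layout are illustrative assumptions, not the repository's actual code.

```python
# Illustrative sketch (not the repo's pipeline): extract lip-region landmarks
# from one frame with MediaPipe Face Mesh, as input features for lipreading.
import cv2
import numpy as np
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
# Vertex indices belonging to the lip region of the 468-point face mesh.
LIP_IDX = sorted({i for edge in mp_face_mesh.FACEMESH_LIPS for i in edge})

def lip_landmarks(image_bgr):
    """Return an (N, 3) array of normalized lip landmark coordinates, or None."""
    with mp_face_mesh.FaceMesh(static_image_mode=True, refine_landmarks=True) as fm:
        result = fm.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        return None
    pts = result.multi_face_landmarks[0].landmark
    return np.array([[pts[i].x, pts[i].y, pts[i].z] for i in LIP_IDX], dtype=np.float32)

# Usage: features = lip_landmarks(cv2.imread("frame.png"))
```

Per-frame landmark arrays like this would typically be stacked over time and fed to a sequence model that predicts the spoken text.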
Build output of the talking-heads main UI repo
AI Avatar/Anchor: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Animated Characters: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Talking Avatar: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. The avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
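The project itself is a Next.js/TypeScript app, so the following is only a rough sketch of the text-to-response-to-speech flow it describes, written against the Python SDKs for the OpenAI API and Azure Cognitive Services Speech. The model name, voice name, environment variables, and respond_and_speak helper are illustrative assumptions, not the repository's code.

```python
# Illustrative sketch of the flow: user text -> OpenAI reply -> Azure TTS audio.
import os
from openai import OpenAI
import azure.cognitiveservices.speech as speechsdk

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def respond_and_speak(user_input, out_path="reply.wav"):
    # 1) Generate a text reply with an OpenAI chat model (example model name).
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_input}],
    ).choices[0].message.content

    # 2) Synthesize the reply to speech with Azure Cognitive Services Speech.
    speech_config = speechsdk.SpeechConfig(
        subscription=os.environ["AZURE_SPEECH_KEY"],
        region=os.environ["AZURE_SPEECH_REGION"],
    )
    speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # example voice
    audio_config = speechsdk.audio.AudioOutputConfig(filename=out_path)
    synthesizer = speechsdk.SpeechSynthesizer(
        speech_config=speech_config, audio_config=audio_config
    )
    synthesizer.speak_text_async(reply).get()
    return reply
```

The generated audio file would then drive the avatar's animation on the front end.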
AI Talking Head: create a video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
A curated list of 'Talking Head Generation' resources. Features influential papers, groundbreaking algorithms, crucial GitHub repositories, insightful videos, and more. Ideal for AI enthusiasts, researchers, and graphics professionals.
Code for ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
[CVPR 2023] SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation