WarpFusion - Updated Oct 6, 2024 - Batchfile
ICASSP 2022: "Text2Video: text-driven talking-head video synthesis with phonetic dictionary".
This is a pix2pix demo that learns from pose and translates it into a human figure. A webcam-enabled application is also provided that maps your pose to the trained pose. Everybody dance now!
Small script for AUTOMATIC1111/stable-diffusion-webui to run video through img2img.
ControlAnimate Library
Audio-driven video synthesis
A modified version of vid2vid for the Speech2Video and Text2Video papers
This is a pix2pix demo that learns from edges and translates them into views. An interactive application is also provided that translates edges to views.
Python OSS library that provides a vid2vid pipeline using Hugging Face's diffusers.
Dataset generation pipeline with BeamNG.tech + Visual Experiments with vid2vid models.
PyTorch implementation of our method for high-resolution (e.g., 2048x1024) photorealistic video-to-video translation.
Demo for NVIDIA's Fewshot Vid2vid
vid2vid AI optimization script
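The repositories above differ in models and APIs, but most share a common pattern: stylize a video frame by frame, then blend each result with the previous output to reduce flicker. A minimal sketch of that loop is below; `stylize_frame` is a hypothetical placeholder for whatever per-frame model (e.g., an img2img call) a given project uses, and the blending weight is an assumption for illustration.

```python
def stylize_frame(frame):
    # Hypothetical per-frame transform; a real pipeline would run an
    # img2img model here instead of a simple scaling.
    return [p * 0.5 for p in frame]

def vid2vid(frames, blend=0.5):
    """Process frames one by one, blending each stylized frame with the
    previous output to improve temporal consistency (a common vid2vid trick).

    frames: list of frames, each a flat list of pixel values.
    blend:  weight given to the previous output frame (0 = no blending).
    """
    outputs = []
    prev = None
    for frame in frames:
        out = stylize_frame(frame)
        if prev is not None:
            # Exponential blending with the previous output reduces flicker.
            out = [(1 - blend) * o + blend * q for o, q in zip(out, prev)]
        outputs.append(out)
        prev = out
    return outputs
```

In real pipelines the blend step is often replaced by optical-flow warping of the previous frame before mixing, which is the idea behind names like WarpFusion.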