AI Learned How To Make Deepfake Videos in 40 Seconds

Researchers from Stanford University and the Interdisciplinary Center Herzliya presented an AI that creates deepfake talking-head videos using neural retargeting.

Xinwei Yao, Ohad Fried, Kayvon Fatahalian, and Maneesh Agrawala from Stanford University and IDC Herzliya presented a text-based tool for editing talking-head video that enables an iterative editing workflow. On each iteration, users can edit the wording of the speech, further refine mouth motions if necessary to reduce artifacts, and manipulate non-verbal aspects of the performance by inserting mouth gestures (e.g. a smile) or changing the overall performance style (e.g. energetic, mumble). The entire workflow is shown in the example video.

The tool requires only 2-3 minutes of video of the target actor, and it synthesizes each edited take in about 40 seconds, allowing users to quickly explore many editing possibilities as they iterate.
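To make that iterate-review-refine loop concrete, here is a minimal Python sketch of how such a workflow might look in code. The TalkingHeadEditor class, the Edit dataclass, and all of their parameters are hypothetical illustrations invented for this example, not the authors' actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Edit:
    text: str                 # new wording for the speech
    style: str = "neutral"    # overall performance style, e.g. "energetic", "mumble"
    # mouth gestures to insert, e.g. [("smile", 2.4)] = a smile at 2.4 seconds
    gestures: list[tuple[str, float]] = field(default_factory=list)


class TalkingHeadEditor:
    """Hypothetical stand-in for the fitted talking-head model."""

    def __init__(self, actor_video: str):
        # The tool needs only 2-3 minutes of footage of the target actor.
        self.actor_video = actor_video

    def synthesize(self, edit: Edit) -> str:
        # In the paper each iteration renders in about 40 seconds; this stub
        # just returns a placeholder path for the newly synthesized take.
        return f"take_{abs(hash((edit.text, edit.style)))}.mp4"


# One pass of the iterative loop: synthesize, review, then tweak and re-run.
editor = TalkingHeadEditor("actor_2min.mp4")
take = editor.synthesize(Edit(text="Welcome back, everyone!",
                              style="energetic",
                              gestures=[("smile", 1.0)]))
print(take)  # review the result, adjust the wording or style, and iterate
```

The fast per-iteration turnaround is what makes this kind of loop practical: at roughly 40 seconds per take, a user can try many wordings and styles in a single session.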

You can learn more about the project on its page.
