AI News: Animating Heads With One-Shot Learning

24 May, 2019

Researchers from the Samsung AI Center have revealed a new AI system that can animate a head from only a few static shots. The team's paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models, describes a system that performs lengthy meta-learning on a large dataset of videos and can then frame few- and one-shot learning of neural talking head models of previously unseen people, using high-capacity generators and discriminators.

The meta-learning stage initializes the parameters of both the generator and the discriminator in a person-specific way, so that training for a new person needs only a few images and completes quickly. This means the approach can learn highly realistic, personalized talking head models of new people, and even of portrait paintings.
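To make the idea concrete, here is a toy sketch of the adaptation step: start from meta-learned parameters and fine-tune on a handful of example pairs for a new person. This is not the authors' method (they train deep adversarial networks); the linear "generator", the dimensions, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D_LANDMARKS, D_PIXELS = 10, 32  # toy dimensions, chosen arbitrarily

# Stand-in for the meta-learned, person-agnostic initialization.
W_meta = rng.normal(size=(D_PIXELS, D_LANDMARKS)) * 0.1


def generator(W, landmarks):
    """Toy linear 'generator': landmark vector -> image vector."""
    return W @ landmarks


def mse(W, shots):
    """Mean squared reconstruction error over the few-shot examples."""
    return float(np.mean([np.mean((generator(W, x) - y) ** 2) for x, y in shots]))


def fine_tune(W_init, shots, lr=0.02, steps=300):
    """Adapt the generator on a few (landmark, image) pairs by gradient descent."""
    W = W_init.copy()
    for _ in range(steps):
        grad = np.zeros_like(W)
        for x, y in shots:
            err = generator(W, x) - y      # reconstruction error for this shot
            grad += np.outer(err, x)       # gradient of the MSE loss w.r.t. W
        W -= lr * grad / len(shots)
    return W


# K = 3 "shots" of a new, previously unseen person (simulated with W_true).
W_true = rng.normal(size=(D_PIXELS, D_LANDMARKS))
shots = []
for _ in range(3):
    x = rng.normal(size=D_LANDMARKS)
    shots.append((x, generator(W_true, x)))

loss_before = mse(W_meta, shots)
W_person = fine_tune(W_meta, shots)
loss_after = mse(W_person, shots)
```

In the real system the adversarial discriminator is fine-tuned alongside the generator; this sketch keeps only the shape of the idea: a shared initialization plus a few person-specific gradient steps.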

The team's aim was to synthesize video sequences of a particular individual's speech and facial expressions.

They studied the problem of synthesizing photorealistic, personalized head images, with a set of face landmarks driving the animation of the model. This could be useful for video conferencing, multi-player games, visual effects, and more.
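The driving mechanism can be sketched as a simple loop: each frame of landmarks from the driving sequence is pushed through the adapted generator to produce one output frame. Again, the names, dimensions, and the linear generator below are illustrative assumptions, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(1)

D_LANDMARKS, D_PIXELS = 10, 32  # toy dimensions, chosen arbitrarily

# Stand-in for generator weights already adapted to one person.
W_person = rng.normal(size=(D_PIXELS, D_LANDMARKS))


def synthesize_video(W, landmark_seq):
    """Map each driving landmark frame to an image vector and stack into a 'video'."""
    return np.stack([W @ x for x in landmark_seq])


# A driving sequence of 24 landmark frames (e.g. extracted from someone else's video).
landmark_seq = rng.normal(size=(24, D_LANDMARKS))
video = synthesize_video(W_person, landmark_seq)
print(video.shape)  # (24, 32): 24 frames of 32-pixel toy images
```

Because the landmarks can come from a different person's video, the same loop lets one face "puppet" another.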

You can learn more about the system in the team's paper.
