AI News: Animating Heads With One-Shot Learning

Researchers from Samsung AI Center have revealed a new AI system that can animate a head using only a few static shots.

The team’s paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models, describes a system that performs lengthy meta-learning on a large dataset of videos and then frames few-shot and one-shot learning of neural talking head models of previously unseen people as adversarial training with high-capacity generators and discriminators.

The system is said to initialize the parameters of both the generator and the discriminator in a person-specific way, so training can rely on just a few images and finish quickly. As a result, the approach can learn highly realistic, personalized talking head models of new people, and even of portrait paintings.
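The core idea, meta-learning an initialization so a model adapts to a new person from only a few samples, can be illustrated with a toy sketch. Note this is only an analogy: the paper meta-learns person-specific initializations through an embedder network, whereas the snippet below uses a Reptile-style averaged initialization on a 1-D linear "generator", with all names and numbers hypothetical.

```python
import random

# Toy stand-in for a "talking head generator": a 1-D linear model y = w * x.
# Each "person" p has a ground-truth parameter w_p; "frames" are (x, w_p * x) pairs.

def sgd_fit(w, samples, lr=0.1, steps=20):
    """Run a few gradient-descent steps on squared error, starting from w."""
    for _ in range(steps):
        x, y = random.choice(samples)
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

random.seed(0)

# Meta-learning phase (Reptile-style toy, NOT the paper's embedder-based
# method): nudge a shared initialization toward each person's adapted weights.
people = [1.8, 2.1, 2.0, 1.9, 2.2]  # hypothetical per-person parameters
meta_w = 0.0
for w_p in people:
    samples = [(x / 10, w_p * x / 10) for x in range(1, 11)]
    adapted = sgd_fit(meta_w, samples, steps=50)
    meta_w += 0.5 * (adapted - meta_w)

# Few-shot phase: adapt to an unseen person from only three "frames".
new_person_w = 2.05
few_shots = [(x, new_person_w * x) for x in (0.3, 0.6, 0.9)]
adapted_w = sgd_fit(meta_w, few_shots, steps=10)
```

Because the meta-learned `meta_w` already sits near the weights typical of the population, ten gradient steps on three samples land close to the new person's parameter, which is the same reason the paper's person-specific initialization makes few-shot fine-tuning fast.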

Their aim was to synthesize video sequences of a particular individual's speech expressions and facial mimicry.

They studied the problem of synthesizing photorealistic, personalized head images given a set of face landmarks that drive the animation of the model. The technique could be useful for video conferencing, multiplayer games, VFX, and more.

You can learn more about the system here.
