Researchers from the Samsung AI Center have revealed a new AI system that animates heads using only a few static shots. The team's paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models, describes a system that performs lengthy meta-learning on a large dataset of videos, and then frames few- and one-shot learning of neural talking head models of previously unseen people as adversarial training with high-capacity generators and discriminators.
Another great paper from Samsung AI lab! @egorzakharovdl et al. animate heads using only few shots of target person (or even 1 shot). Keypoints, adaptive instance norms and GANs, no 3D face modelling at all.
📝 https://t.co/SxnVfY72TT pic.twitter.com/GjVrJbejT0
— Dmitry Ulyanov (@DmitryUlyanovML) May 22, 2019
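The tweet mentions adaptive instance norms as one of the key ingredients. As a rough illustration of the idea (not the authors' implementation), adaptive instance normalization re-normalizes each channel of a feature map and then applies person-specific scale and shift parameters, which here would be predicted from the embedding of the target person's photos:

```python
import numpy as np

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive instance normalization (illustrative sketch).

    Normalizes each channel of `content` (shape (C, H, W)) to zero mean
    and unit variance, then re-scales and shifts it with the
    person-specific parameters `gamma` and `beta` (each shape (C,)).
    In the talking-heads setting these parameters would come from an
    embedder network; here they are just plain arrays.
    """
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]
```

After the transform, each channel's statistics match the injected style parameters, which is how a single generator can be steered toward a particular person's appearance.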
The system initializes the parameters of both the generator and the discriminator in a person-specific way, so training can proceed from just a few images and converge quickly. As a result, the approach can learn highly realistic, personalized talking head models of new people, and even of portrait paintings.
Their aim was to synthesize video sequences of a particular individual's speech expressions and facial mimicry.
They studied the problem of synthesizing photorealistic personalized head images given a set of face landmarks that drive the animation of the model. This could be useful for video conferencing, multi-player games, VFX, and more.
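To make the landmark-driven setup concrete: the generator is conditioned on an image-like rendering of the face landmarks rather than on raw coordinates. A minimal sketch of such a rasterization step might look like this (the paper actually draws colored connected contours; drawing single pixels here is a simplification, and the function name is hypothetical):

```python
import numpy as np

def rasterize_landmarks(landmarks, size=64):
    """Render (x, y) face landmarks into a single-channel image that
    could condition a generator in landmark-driven synthesis.

    `landmarks` is an (N, 2) array of coordinates normalized to [0, 1).
    Returns a (size, size) float32 image with 1.0 at each landmark.
    """
    img = np.zeros((size, size), dtype=np.float32)
    pts = (np.asarray(landmarks) * size).astype(int).clip(0, size - 1)
    img[pts[:, 1], pts[:, 0]] = 1.0  # row = y, column = x
    return img
```

Because the conditioning signal is an image, landmarks extracted from one person's video can drive the head model of another, which is what enables the puppeteering results.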
You can learn more about the system here.