
Nvidia Shares a New Look at AI for Digital Avatars

The model allows creating a high-quality avatar from a single photo.

NVIDIA researchers won the Best in Show award at SIGGRAPH 2021 for their real-time digital avatar technology. The team proposes using this AI for video conferencing, storytelling, virtual assistants, and more.

The model was originally described in a 2020 paper by Ting-Chun Wang, Arun Mallya, and Ming-Yu Liu, which presented a neural talking-head video synthesis model designed for video conferencing. The project generates a digital avatar from a single image, which can then be used in a variety of scenarios.

"Our motion is encoded based on a novel keypoint representation, where the identity-specific and motion-related information is decomposed unsupervisedly. Extensive experimental validation shows that our model outperforms competing methods on benchmark datasets," wrote the team in 2020. "Moreover, our compact keypoint representation enables a video conferencing system that achieves the same visual quality as the commercial H.264 standard while only using one-tenth of the bandwidth. Besides, we show our keypoint representation allows the user to rotate the head during synthesis, which is useful for simulating face-to-face video conferencing experiences."
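The bandwidth claim above follows from simple arithmetic: instead of streaming encoded video frames, the sender transmits one reference image once and then only a handful of keypoint coordinates per frame. A minimal back-of-the-envelope sketch (all sizes here are illustrative assumptions, not figures from the paper):

```python
# Hedged sketch: why sending keypoints instead of frames saves bandwidth.
# All numbers below are illustrative assumptions, not values from the paper.

def frame_payload_bytes(width=512, height=512, bits_per_pixel=0.1):
    # A codec like H.264 might spend roughly 0.1 bits per pixel
    # on a talking-head stream (assumed rate for illustration).
    return width * height * bits_per_pixel / 8

def keypoint_payload_bytes(num_keypoints=20, floats_per_point=2, bytes_per_float=4):
    # The sender ships only 2D keypoint coordinates each frame;
    # the receiver re-synthesizes the face from the reference image.
    return num_keypoints * floats_per_point * bytes_per_float

video = frame_payload_bytes()      # 3276.8 bytes per frame
kp = keypoint_payload_bytes()      # 160 bytes per frame
print(f"video: {video:.0f} B/frame, keypoints: {kp:.0f} B/frame, "
      f"ratio: {video / kp:.1f}x")
```

Under these assumed numbers the keypoint payload is about 20x smaller per frame, which is the kind of gap behind the paper's one-tenth-of-H.264 result once the one-time cost of the reference image is amortized.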

You can find the paper here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
