Meta's Codec Avatars 2.0 Approaches Complete Realism

Meta's researchers have made notable progress with their photorealistic VR avatars.

At MIT's Virtual Beings & Being Virtual workshop in April, Meta presented Codec Avatars 2.0, the latest version of its VR avatars built with advanced machine learning techniques. The tech uses cameras mounted inside the headset to observe the wearer's eyes and mouth and virtually recreate even subtle movements, such as raised eyebrows, squinting, a scrunched nose, and more.
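To make the general pipeline concrete, here is a minimal, purely illustrative sketch of the "codec" idea the article describes: headset camera crops are encoded into a compact latent expression code, which a decoder maps back to avatar controls. Everything here, the function names, the latent size, the use of blendshape weights, and the random stand-in weights, is an assumption for illustration, not Meta's actual method or API.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 16    # assumed size of the compact expression code
BLENDSHAPES = 52   # assumed avatar controls (brow raise, squint, nose scrunch, ...)

# Random stand-ins for what would be trained network weights.
W_enc = rng.standard_normal((LATENT_DIM, 64 * 64))
W_dec = rng.standard_normal((BLENDSHAPES, LATENT_DIM))

def encode(camera_crop: np.ndarray) -> np.ndarray:
    """Flatten a 64x64 grayscale eye/mouth crop into a latent expression code."""
    return W_enc @ camera_crop.ravel()

def decode(latent: np.ndarray) -> np.ndarray:
    """Map the latent code to blendshape weights squashed into [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ latent)))  # sigmoid

frame = rng.random((64, 64))        # fake headset camera frame
weights = decode(encode(frame))     # per-frame avatar expression controls
print(weights.shape)
```

The point of the sketch is the data flow, per-frame sensor input compressed to a small code and decoded into expression parameters, which is what lets subtle movements like squinting survive the round trip.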

"I would say a grand challenge of the next decade is to see if we can enable remote interactions that are indistinguishable from in-person interactions," said Yaser Sheikh, who leads the Codec Avatars team.

This is an upgraded version of the avatars shown last year, and Meta has promised to eventually make the tech available to everyone. It's hard to say how long the wait will be: Sheikh noted that it's impossible to predict how far Codec Avatars is from completion. But he said that when the project started it was "ten miracles away", and now he believes it's "five miracles away".

