Generating Full-Body Avatars Using A Single Camera

A preview of what's next for human interactions.

In a recent paper, three Facebook researchers and a researcher from the University of Southern California describe a machine learning system that generates a high-quality, clothed 3D representation of a person from a single 1K image. The new approach requires neither a depth sensor nor a motion capture rig.

According to the paper, PIFuHD downsamples the input image and feeds it to a coarse network that estimates the base geometry, while a separate fine network uses the full-resolution input to recover fine surface details.
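
To make that coarse-to-fine idea concrete, here is a minimal conceptual sketch in PyTorch of a two-level pixel-aligned implicit function: a coarse network sees a downsampled image and predicts base occupancy, and a fine network sees the full-resolution image and refines the result. The network sizes, feature dimensions, and the way the coarse output is passed to the fine level are illustrative assumptions, not the authors' implementation.

```python
# Conceptual sketch of a coarse-to-fine pixel-aligned implicit function.
# All architecture choices below are placeholders, not the PIFuHD code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelAlignedImplicitNet(nn.Module):
    """Predict occupancy for 3D query points from pixel-aligned image features."""

    def __init__(self, feat_dim=32, extra_dim=0):
        super().__init__()
        # Tiny convolutional encoder standing in for the real image backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # MLP mapping (pixel feature, depth, optional coarse context) -> occupancy.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1 + extra_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, image, points, extra=None):
        # image: (B, 3, H, W); points: (B, N, 3) with x, y in [-1, 1], z a depth value.
        feats = self.encoder(image)                                # (B, C, H, W)
        grid = points[:, :, None, :2]                              # (B, N, 1, 2) sample locations
        sampled = F.grid_sample(feats, grid, align_corners=True)   # (B, C, N, 1)
        sampled = sampled[..., 0].permute(0, 2, 1)                 # (B, N, C) pixel-aligned features
        z = points[..., 2:3]                                       # (B, N, 1) query depth
        mlp_in = [sampled, z] + ([extra] if extra is not None else [])
        return torch.sigmoid(self.mlp(torch.cat(mlp_in, dim=-1)))  # (B, N, 1) occupancy


# Coarse level works on a downsampled image; the fine level reuses the coarse
# prediction as extra context while sampling the full-resolution input.
coarse = PixelAlignedImplicitNet(feat_dim=32)
fine = PixelAlignedImplicitNet(feat_dim=32, extra_dim=1)

image_hr = torch.rand(1, 3, 1024, 1024)                            # full-resolution 1K input
image_lr = F.interpolate(image_hr, size=(512, 512),
                         mode='bilinear', align_corners=False)     # downsampled input
points = torch.rand(1, 4096, 3) * 2 - 1                            # query points, normalized coords

coarse_occ = coarse(image_lr, points)                              # base-layer occupancy
fine_occ = fine(image_hr, points, extra=coarse_occ)                # detail refinement
print(coarse_occ.shape, fine_occ.shape)
```

In the sketch, the fine level consumes the coarse occupancy as extra context; the published method instead passes intermediate features between the levels, but the split between a downsampled global pass and a full-resolution detail pass is the same.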

The main goal is avatar body generation: Facebook wants to allow users to exist as their real physical selves in virtual apps. The company noted, though, that the technology is still "years away" from consumer products, so we'll have to wait to get our hands on it.

The possibilities are vast: photorealistic representations of people at true scale, fully tracked from real motion, could change how we interact in virtual spaces.

You can find the paper here.
