
Creating Digital Humans With Selfies

Check out this novel approach from a team of Tencent developers. 

A team of researchers from Tencent has recently presented a fully automatic system for generating high-fidelity, photorealistic 3D digital human characters with a consumer-level RGB-D selfie camera. The system takes a short RGB-D selfie video (you rotate your head while recording) and produces a high-quality reconstruction in less than 30 seconds.

"Our main contribution is a new facial geometry modeling and reflectance synthesis procedure that significantly improves the state-of-the-art. Specifically, given the input video a two-stage frame selection algorithm is first employed to select a few high-quality frames for reconstruction," wrote the team. "A novel, differentiable renderer based 3D Morphable Model (3DMM) fitting method is then applied to recover facial geometries from multiview RGB-D data, which takes advantage of extensive data generation and perturbation."

The team states that their 3DMM offers much larger expressive capacity than conventional 3DMMs, which means users can recover more accurate facial geometry using linear bases. For reflectance synthesis, the team used a hybrid approach that combines parametric fitting and CNNs to generate high-resolution albedo/normal maps with realistic hair, pore, and wrinkle details.
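The hybrid idea — a smooth, parametrically fitted base layer composited with CNN-synthesized high-frequency detail — can be pictured with the minimal sketch below. Everything here is a placeholder: a gradient stands in for the parametric base albedo, and random noise stands in for the network's pore/wrinkle residual; none of it comes from the paper's actual maps or models.

```python
import numpy as np

h, w = 64, 64  # illustrative texture resolution

# Smooth, low-frequency base albedo (stand-in for the parametric fit).
base_albedo = np.tile(np.linspace(0.4, 0.6, w), (h, 1))

# High-frequency residual (stand-in for CNN-predicted pore/wrinkle detail).
rng = np.random.default_rng(1)
detail_residual = 0.05 * rng.standard_normal((h, w))

# Composite and clamp to a valid reflectance range.
final_albedo = np.clip(base_albedo + detail_residual, 0.0, 1.0)
```

The split matters because the parametric layer keeps the result plausible and identity-consistent, while the learned residual adds detail that a low-dimensional linear model cannot represent.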

The code and the constructed 3DMM are publicly available. You can find the full paper here. Don't forget to join our new Telegram channel and our Discord, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
