Generating Animatable Detailed 3D Face Models From Single Images

Here's a new approach to reconstructing human faces from single images.

Current monocular 3D face reconstruction methods have several limitations: some produce faces whose complex wrinkles make them impossible to animate realistically, while others rely on high-quality face scans and perform poorly on single images. A new paper presents an approach "that regresses 3D face shape and animatable details that are specific to an individual but change with expression." The model, called DECA (Detailed Expression Capture and Animation), is said to generate a UV displacement map from a low-dimensional latent representation consisting of person-specific detail parameters and generic expression parameters. A regressor predicts detail, shape, albedo, expression, pose, and illumination parameters from a single image.
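The idea of one regressor producing several parameter groups from a single image can be sketched as follows. This is a toy illustration, not DECA's code: the group names follow the article, but the dimensions and the `split_params` helper are assumptions made up for the example.

```python
import numpy as np

# Hypothetical parameter sizes -- illustrative only, not DECA's actual config.
PARAM_DIMS = {
    "shape": 100,        # person-specific face shape
    "expression": 50,    # generic expression parameters
    "pose": 6,           # head and jaw pose
    "albedo": 50,        # skin color / texture
    "illumination": 27,  # e.g. spherical-harmonics lighting
    "detail": 128,       # person-specific detail latent code
}

def split_params(vector):
    """Split a flat regressor output into named parameter groups."""
    out, offset = {}, 0
    for name, dim in PARAM_DIMS.items():
        out[name] = vector[offset:offset + dim]
        offset += dim
    return out

# Stand-in for the regressor's output on one input image.
rng = np.random.default_rng(0)
flat = rng.standard_normal(sum(PARAM_DIMS.values()))
params = split_params(flat)
```

The detail code would then feed a decoder that outputs the UV displacement map, while the remaining groups drive the coarse face model and rendering.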

"To enable this, we introduce a novel detail-consistency loss that disentangles person-specific details from expression-dependent wrinkles," notes the abstract. "This disentanglement allows us to synthesize realistic person-specific wrinkles by controlling expression parameters while keeping person-specific details unchanged." DECA is said to achieve state-of-the-art shape reconstruction accuracy on two benchmarks.
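The quoted disentanglement idea can be illustrated with a minimal sketch: for two photos of the same person, swapping their detail codes (while holding expression fixed) should not change the decoded displacement. Everything below is a toy stand-in under assumed dimensions; the real DECA decoder is a network and the real loss is computed on rendered images.

```python
import numpy as np

def decode(detail, expr, W):
    # Toy linear stand-in for a detail decoder: maps a person-specific
    # detail code plus expression parameters to a flattened UV displacement.
    return W @ np.concatenate([detail, expr])

def detail_consistency_loss(detail_a, detail_b, expr, W):
    # For two photos of the SAME person, the displacement decoded from
    # image A's detail code should match the one decoded after swapping
    # in image B's detail code, with expression held fixed.
    return np.mean(np.abs(decode(detail_a, expr, W) - decode(detail_b, expr, W)))

# Illustrative dimensions, not DECA's actual sizes.
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 12))   # decodes 8 detail + 4 expression dims
detail_a = rng.standard_normal(8)
expr = rng.standard_normal(4)

loss_same = detail_consistency_loss(detail_a, detail_a.copy(), expr, W)
loss_diff = detail_consistency_loss(detail_a, rng.standard_normal(8), expr, W)
```

Minimizing such a loss pushes the network to store stable, person-specific wrinkle structure in the detail code and leave expression-dependent wrinkles to the expression parameters.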

You can learn more and find the paper and the code here. Don't forget to join our new Telegram channel and our Discord, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
