Generating Hair Strands From Single-View Input
7 October, 2018

Let’s check out a novel method that automatically generates 3D hair strands from a variety of single-view inputs. The approach is described in the paper “3D Hair Synthesis Using Volumetric Variational Autoencoders” by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li.

Instead of using a large collection of 3D hair models directly, the team proposed using a compact latent space of a volumetric variational autoencoder (VAE). This deep neural network, trained with volumetric orientation field representations of 3D hair models, is said to be able to synthesize new hairstyles from a compressed code.
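To make the idea concrete, here is a minimal, hedged sketch of that encode/decode pipeline. This is not the authors' code: the 32³ voxel grid, the 64-dimensional latent code, and the random linear "networks" standing in for trained 3D convolutional layers are all assumptions chosen for illustration — only the overall shape of the computation (orientation field → compact code → reconstructed field) follows the paper.

```python
# Illustrative sketch (NOT the authors' implementation): compress a voxelized
# hair orientation field into a small latent code and reconstruct it.
import numpy as np

rng = np.random.default_rng(0)

GRID = 32          # voxels per axis (assumed for the example)
LATENT = 64        # latent code size (assumed)
D = GRID ** 3 * 3  # flattened field: one 3D orientation vector per voxel

# Random linear maps stand in for the trained 3D conv encoder/decoder.
W_enc = rng.standard_normal((LATENT, D)) / np.sqrt(D)
W_dec = rng.standard_normal((D, LATENT)) / np.sqrt(LATENT)

def encode(field):
    """Map an orientation field (GRID, GRID, GRID, 3) to mean/log-variance."""
    mu = W_enc @ field.ravel()
    return mu, np.zeros_like(mu)  # toy: unit-variance posterior

def reparameterize(mu, logvar):
    """VAE sampling trick: z = mu + sigma * eps."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z):
    """Map a compact latent code back to a volumetric orientation field."""
    return (W_dec @ z).reshape(GRID, GRID, GRID, 3)

field = rng.standard_normal((GRID, GRID, GRID, 3))
mu, logvar = encode(field)
z = reparameterize(mu, logvar)
recon = decode(z)
print(z.shape, recon.shape)  # (64,) (32, 32, 32, 3)
```

The point of the sketch is the compression ratio: a 32³ × 3 field (about 98k values) is represented by a 64-number code, which is why the storage footprint of the learned hairstyle space can stay small.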

Image: input image; volumetric representation with color-coded local orientations predicted by the method; the final synthesized hair strands rendered from two viewpoints.

“To enable end-to-end 3D hair inference, we train an additional regression network to predict the codes in the VAE latent space from any input image. Strand-level hairstyles can then be generated from the predicted volumetric representation. Our fully automatic framework does not require any ad-hoc face fitting, intermediate classification and segmentation, or hairstyle database retrieval,” said the team.
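The last step — going from the predicted volumetric representation to strand-level hair — can be sketched as tracing curves through the orientation field from scalp roots. The sketch below is an assumption about how such a tracer might look, not the paper's implementation: it uses nearest-voxel lookup instead of proper interpolation, and the step count and step length are made up for the example.

```python
# Hedged sketch: grow a hair strand by stepping from a scalp root along the
# local direction stored in a (N, N, N, 3) orientation field.
import numpy as np

def grow_strand(field, root, steps=50, step_len=0.5):
    """Trace one strand through the orientation field; returns an (M, 3) polyline."""
    n = field.shape[0]
    pts = [np.asarray(root, dtype=float)]
    for _ in range(steps):
        p = pts[-1]
        idx = np.clip(p.round().astype(int), 0, n - 1)  # nearest-voxel lookup
        d = field[tuple(idx)]
        norm = np.linalg.norm(d)
        if norm < 1e-6:  # zero orientation: strand has left the hair volume
            break
        pts.append(p + step_len * d / norm)
    return np.array(pts)

# Toy field whose orientations all point along +z, so strands hang straight.
field = np.zeros((16, 16, 16, 3))
field[..., 2] = 1.0
strand = grow_strand(field, root=(8, 8, 0), steps=20)
print(strand.shape)  # (21, 3): the root plus 20 traced points
```

Repeating this from a few thousand root positions on the scalp yields the full strand-level hairstyle.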

The researchers state that their hair synthesis approach is significantly more robust than state-of-the-art data-driven hair modeling techniques. Storage requirements are said to be minimal, and a 3D hair model can be produced from an image in about a second. The technique can also handle highly stylized cartoon images, non-human subjects, and pictures taken from behind a person.

Make sure to check out the full paper for more details.
