Generating Hair Strands From Single-View Input

Let’s check out a novel method that automatically generates 3D hair strands from a variety of single-view inputs.

The approach was originally described in the paper “3D Hair Synthesis Using Volumetric Variational Autoencoders” by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li.

Instead of working with a large collection of 3D hair models directly, the team proposed encoding hairstyles in the compact latent space of a volumetric variational autoencoder (VAE). This deep neural network, trained on volumetric orientation field representations of 3D hair models, is said to synthesize new hairstyles from a compressed latent code.
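To make the idea more concrete, here is a minimal sketch of what such a volumetric VAE could look like in PyTorch. The grid resolution, channel widths, and latent size are illustrative assumptions rather than the paper’s actual configuration; the input is assumed to be a per-voxel occupancy-plus-orientation field (4 channels).

```python
import torch
import torch.nn as nn

class VolumetricVAE(nn.Module):
    """Illustrative volumetric VAE over a 3D hair orientation field.

    Assumed input: a (B, 4, 96, 96, 96) tensor holding per-voxel
    occupancy (1 channel) and local 3D orientation (3 channels).
    Resolution, channel widths, and latent size are placeholders,
    not the values used in the paper.
    """

    def __init__(self, latent_dim=512):
        super().__init__()
        # 3D convolutional encoder: 96^3 grid -> 6^3 feature grid
        self.encoder = nn.Sequential(
            nn.Conv3d(4, 32, 4, stride=2, padding=1),   nn.ReLU(),  # 48^3
            nn.Conv3d(32, 64, 4, stride=2, padding=1),  nn.ReLU(),  # 24^3
            nn.Conv3d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 12^3
            nn.Conv3d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 6^3
        )
        feat = 256 * 6 * 6 * 6
        self.fc_mu = nn.Linear(feat, latent_dim)
        self.fc_logvar = nn.Linear(feat, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, feat)
        # Transposed-conv decoder mirrors the encoder back to 96^3
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),  nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),   nn.ReLU(),
            nn.ConvTranspose3d(32, 4, 4, stride=2, padding=1),
        )

    def encode(self, vol):
        h = self.encoder(vol).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # Standard VAE reparameterization trick
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 256, 6, 6, 6)
        return self.decoder(h)

    def forward(self, vol):
        mu, logvar = self.encode(vol)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar
```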

Pictured: the input image, the volumetric representation with color-coded local orientations predicted by the method, and the final synthesized hair strands rendered from two viewpoints.

“To enable end-to-end 3D hair inference, we train an additional regression network to predict the codes in the VAE latent space from any input image. Strand-level hairstyles can then be generated from the predicted volumetric representation. Our fully automatic framework does not require any ad-hoc face fitting, intermediate classification and segmentation, or hairstyle database retrieval,” said the team.
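The article doesn’t include reference code, so the snippet below is only a hedged sketch of how that two-stage inference could be wired up: an image encoder (here a torchvision ResNet-50, which is my assumption, not the paper’s choice) regresses the VAE latent code, and the pre-trained VAE decoder turns that code back into a volumetric orientation field from which strands can then be grown.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HairCodeRegressor(nn.Module):
    """Predicts a VAE latent code directly from a single RGB image.

    The ResNet-50 backbone and head size are assumptions for
    illustration; any image encoder that produces a fixed-size
    feature vector would fill the same role.
    """

    def __init__(self, latent_dim=512):
        super().__init__()
        backbone = resnet50(weights=None)
        backbone.fc = nn.Identity()   # keep the 2048-d pooled feature
        self.backbone = backbone
        self.head = nn.Linear(2048, latent_dim)

    def forward(self, image):
        return self.head(self.backbone(image))


@torch.no_grad()
def image_to_hair_volume(image, regressor, vae):
    """End-to-end inference: image -> latent code -> orientation volume.

    `vae` is an already-trained VolumetricVAE (see the earlier sketch);
    growing strand geometry from the decoded field is a separate step
    and is only indicated by the comment below.
    """
    z = regressor(image)      # predicted code in the VAE latent space
    volume = vae.decode(z)    # occupancy + orientation field
    return volume             # hand off to a strand-growing step
```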

The researchers state that their hair synthesis approach is significantly more robust than state-of-the-art data-driven hair modeling techniques. The storage requirements are said to be minimal, and a 3D hair model can be produced from an input image in a second. The technique can also handle highly stylized cartoon images, non-human subjects, and pictures taken from behind the subject.

Make sure to check out the full paper for more details here.
