Generating Hair Strands From Single-View Input

7 October, 2018
News

Let’s check out a novel method that automatically generates 3D hair strands from a wide range of single-view inputs. The approach was originally described in the paper “3D Hair Synthesis Using Volumetric Variational Autoencoders” by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li.

Instead of using a large collection of 3D hair models directly, the team proposed using a compact latent space of a volumetric variational autoencoder (VAE). This deep neural network, trained with volumetric orientation field representations of 3D hair models, is said to be able to synthesize new hairstyles from a compressed code.
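To make the idea concrete, here is a minimal sketch of a VAE forward pass over a flattened orientation-field grid. The grid size, latent size, and flat linear layers are illustrative assumptions only; the paper's model uses convolutional encoders and decoders trained on real hair data, while the weights here are random stand-ins.

```python
# Toy VAE forward pass on a hypothetical 32^3 orientation field with
# 3 channels (one local 3D orientation per voxel). Sizes and linear
# layers are assumptions for illustration, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

GRID = 32 * 32 * 32 * 3   # flattened volumetric orientation field
LATENT = 64               # compact latent code

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(0, 0.01, (GRID, 2 * LATENT))  # outputs mean and log-variance
W_dec = rng.normal(0, 0.01, (LATENT, GRID))

def encode(field):
    """Map a flattened orientation field to a latent mean and log-variance."""
    h = field @ W_enc
    return h[:LATENT], h[LATENT:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the standard VAE reparameterization trick)."""
    eps = rng.standard_normal(LATENT)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct a flattened orientation field from a latent code."""
    return z @ W_dec

field = rng.standard_normal(GRID)     # stand-in orientation field
mu, logvar = encode(field)
z = reparameterize(mu, logvar)
recon = decode(z)
print(z.shape, recon.shape)
```

The point to notice is the compression: a 98,304-value volume is represented by a 64-value code, which is what makes synthesizing new hairstyles from the latent space cheap.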

Input image; the volumetric representation with color-coded local orientations predicted by the method; and the final synthesized hair strands rendered from two viewpoints.

“To enable end-to-end 3D hair inference, we train an additional regression network to predict the codes in the VAE latent space from any input image. Strand-level hairstyles can then be generated from the predicted volumetric representation. Our fully automatic framework does not require any ad-hoc face fitting, intermediate classification and segmentation, or hairstyle database retrieval,” said the team.
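The inference flow the team describes can be sketched as a pipeline: an image embedding is regressed to a latent code, the decoder produces a volumetric orientation field, and strands are grown by following the local orientations. All networks below are stand-in linear maps with made-up sizes, and `grow_strand` is a simplified tracer, not the paper's strand-generation step.

```python
# Hedged sketch of the end-to-end flow: image features -> latent code
# -> volumetric orientation field -> strand polyline. Names and sizes
# are hypothetical; the real system uses trained deep networks.
import numpy as np

rng = np.random.default_rng(1)
IMG, LATENT, RES = 256, 64, 8   # assumed sizes; RES^3 voxel grid

W_reg = rng.normal(0, 0.01, (IMG, LATENT))          # image features -> latent code
W_dec = rng.normal(0, 0.01, (LATENT, RES**3 * 3))   # latent code -> orientation field

def predict_code(image_features):
    """Regress a VAE latent code directly from image features."""
    return image_features @ W_reg

def decode_field(z):
    """Decode a latent code into a (RES, RES, RES, 3) orientation field."""
    return (z @ W_dec).reshape(RES, RES, RES, 3)

def grow_strand(field, start, steps=10, h=0.5):
    """Trace a polyline by repeatedly stepping along voxel orientations."""
    p, pts = np.array(start, dtype=float), []
    for _ in range(steps):
        i, j, k = np.clip(p.astype(int), 0, RES - 1)
        d = field[i, j, k]
        n = np.linalg.norm(d)
        p = p + h * (d / n if n > 0 else d)
        pts.append(p.copy())
    return np.array(pts)

features = rng.standard_normal(IMG)       # stand-in image embedding
z = predict_code(features)
field = decode_field(z)
strand = grow_strand(field, start=(4, 4, 4))
print(strand.shape)
```

Because every stage is a direct feed-forward mapping, no face fitting, segmentation, or database lookup is needed anywhere in the chain, which is exactly the property the team highlights.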

The researchers state that their hair synthesis approach is significantly more robust than state-of-the-art data-driven hair modeling techniques. Storage requirements are said to be minimal, and a 3D hair model can be produced from an image in a second. The technique also handles highly stylized cartoon images, non-human subjects, and pictures taken from behind a person.

Make sure to check out the full paper here for more details.
