Generating Hair Strands From Single-View Input
7 October, 2018
News

Let’s check out a novel method that automatically generates 3D hair strands from a variety of single-view inputs. The approach was originally described in the paper “3D Hair Synthesis Using Volumetric Variational Autoencoders” by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo, and Hao Li.

Instead of using a large collection of 3D hair models directly, the team proposed using a compact latent space of a volumetric variational autoencoder (VAE). This deep neural network, trained with volumetric orientation field representations of 3D hair models, is said to be able to synthesize new hairstyles from a compressed code.
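To make the idea concrete, here is a minimal sketch of a volumetric VAE over 3D hair orientation fields, written in PyTorch. The grid resolution, channel counts, layer sizes, and latent dimension below are illustrative assumptions for a toy version, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class VolumetricVAE(nn.Module):
    """Toy volumetric VAE: encodes a 3D orientation field into a compact
    latent code and decodes it back (assumed 4-channel 32^3 input:
    occupancy plus an xyz growth direction per voxel)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(4, 32, 4, stride=2, padding=1),    # 32^3 -> 16^3
            nn.ReLU(),
            nn.Conv3d(32, 64, 4, stride=2, padding=1),   # 16^3 -> 8^3
            nn.ReLU(),
            nn.Conv3d(64, 128, 4, stride=2, padding=1),  # 8^3 -> 4^3
            nn.ReLU(),
            nn.Flatten(),
        )
        feat = 128 * 4 * 4 * 4
        self.fc_mu = nn.Linear(feat, latent_dim)
        self.fc_logvar = nn.Linear(feat, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, feat)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),  # 4^3 -> 8^3
            nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),   # 8^3 -> 16^3
            nn.ReLU(),
            nn.ConvTranspose3d(32, 4, 4, stride=2, padding=1),    # 16^3 -> 32^3
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = self.fc_dec(z).view(-1, 128, 4, 4, 4)
        return self.decoder(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        recon = self.decode(z)
        # Standard VAE objective: reconstruction term plus KL divergence to the prior.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl
```

Once trained on a hair database converted to such orientation volumes, sampling or interpolating codes in the latent space yields new hairstyles without storing the original 3D model collection.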

Input image; the volumetric representation with color-coded local orientations predicted by the method; the final synthesized hair strands rendered from two viewpoints.

“To enable end-to-end 3D hair inference, we train an additional regression network to predict the codes in the VAE latent space from any input image. Strand-level hairstyles can then be generated from the predicted volumetric representation. Our fully automatic framework does not require any ad-hoc face fitting, intermediate classification and segmentation, or hairstyle database retrieval,” said the team.
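The quoted pipeline can be sketched as two small pieces: an image encoder that regresses a code in the VAE latent space, and a simple tracer that grows strands through the decoded orientation volume from scalp roots. The names `HairCodeRegressor` and `grow_strands`, the ResNet-50 backbone, and the Euler stepping scheme below are assumptions for illustration; the paper's actual regression network and strand synthesis are more involved.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class HairCodeRegressor(nn.Module):
    """Regresses a VAE latent code directly from an input image (assumed
    ResNet-50 backbone with its classifier head replaced)."""
    def __init__(self, latent_dim=512):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, latent_dim)
        self.net = backbone

    def forward(self, image):          # image: (B, 3, H, W)
        return self.net(image)         # predicted latent code: (B, latent_dim)

def grow_strands(orientation_volume, roots, steps=100, step_size=0.01):
    """Trace strands through a decoded orientation field with Euler steps.

    orientation_volume: (3, D, H, W) tensor of local growth directions.
    roots: (N, 3) scalp points in normalized [0, 1]^3 coordinates.
    Returns an (N, steps, 3) tensor of strand vertices.
    """
    D, H, W = orientation_volume.shape[1:]
    points = roots.clone()
    strand = [points.clone()]
    for _ in range(steps - 1):
        # Nearest-neighbour lookup of the local orientation at each point.
        idx = (points.clamp(0, 1) * torch.tensor([D - 1, H - 1, W - 1])).long()
        dirs = orientation_volume[:, idx[:, 0], idx[:, 1], idx[:, 2]].T
        dirs = dirs / (dirs.norm(dim=1, keepdim=True) + 1e-8)
        points = points + step_size * dirs
        strand.append(points.clone())
    return torch.stack(strand, dim=1)
```

At inference time one would chain the pieces: predict the code with the regressor, decode it with the trained VAE into an orientation volume, and then call `grow_strands` on scalp root positions to obtain strand geometry.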

The researchers state that their hair synthesis approach is significantly more robust than state-of-the-art data-driven hair modeling techniques. The storage requirements are said to be minimal, and the method can produce a 3D hair model from an image in about a second. The technique also works on highly stylized cartoon images, non-human subjects, and pictures taken from behind a person.

Make sure to get more details and check out the full paper here.
