This new method enables free-view rendering of a dynamic upper head with different hairstyles and hair motions.
A team of researchers from Meta AI and Reality Labs Research has presented a new way of capturing and rendering lifelike hair. The method, called HVH, turns sparse driving signals, such as tracked head vertices and guide hair strands, into volumetric primitives that enable free-view rendering and animation.
To make this possible, the team used a novel volumetric hair representation composed of thousands of primitives. Each primitive can be rendered efficiently yet realistically by building on recent advances in neural rendering.
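To give a feel for how such a primitive-based representation is rendered, here is a minimal toy sketch in Python/NumPy: a handful of box-shaped primitives, each holding a coarse RGBA voxel grid, are composited front-to-back along a camera ray. The class name, grid resolution, and nearest-neighbour sampling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy sketch (not the paper's code): raymarching through a set of small
# volumetric primitives, each a coarse RGBA voxel grid placed in world space.

class Primitive:
    def __init__(self, center, half_size, rgba):
        self.center = np.asarray(center, dtype=float)   # box center in world space
        self.half_size = half_size                      # half the box edge length
        self.rgba = rgba                                # (R, R, R, 4) voxel grid

def sample_rgba(prim, point):
    """Nearest-neighbour lookup inside a primitive's voxel grid."""
    local = (point - prim.center) / (2 * prim.half_size) + 0.5   # map to [0, 1]^3
    if np.any(local < 0) or np.any(local >= 1):
        return np.zeros(4)                              # point lies outside this primitive
    res = prim.rgba.shape[0]
    idx = np.clip((local * res).astype(int), 0, res - 1)
    return prim.rgba[idx[0], idx[1], idx[2]]

def raymarch(primitives, origin, direction, near=0.0, far=2.0, steps=128):
    """Front-to-back alpha compositing along a single ray."""
    ts = np.linspace(near, far, steps)
    dt = (far - near) / steps
    color = np.zeros(3)
    transmittance = 1.0
    for t in ts:
        point = origin + t * direction
        rgba = np.zeros(4)
        for prim in primitives:                         # sum contributions of overlapping primitives
            rgba += sample_rgba(prim, point)
        alpha = 1.0 - np.exp(-max(rgba[3], 0.0) * dt)   # convert density to opacity
        color += transmittance * alpha * rgba[:3]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                        # early termination once the ray is saturated
            break
    return color

# Toy usage: one reddish primitive placed in front of the camera.
prim = Primitive(center=[0., 0., 1.0], half_size=0.2,
                 rgba=np.concatenate([np.full((8, 8, 8, 3), [0.6, 0.3, 0.2]),
                                      np.full((8, 8, 8, 1), 5.0)], axis=-1))
print(raymarch([prim], origin=np.array([0., 0., 0.]), direction=np.array([0., 0., 1.])))
```

In HVH the primitives are attached to the tracked hair and their contents are produced by a neural network; the sketch only illustrates the compositing step that makes thousands of small volumes cheap to render.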
They also presented a new way of tracking hair at the strand level. To keep the computational effort manageable, they use guide hairs and classic techniques to expand them into a dense head of hair.
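The guide-hair idea itself is a classic one: a dense strand is interpolated from its nearest guide strands, weighted by how close its root is to theirs. Below is a small hypothetical sketch of that expansion step; the toy scalp sampling, inverse-distance weights, and function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: expand a few guide strands into a dense head of hair
# by blending the k nearest guides per dense root (inverse-distance weights).

def expand_guides(guide_roots, guide_strands, dense_roots, k=3):
    """guide_roots: (G, 3), guide_strands: (G, P, 3), dense_roots: (N, 3) -> (N, P, 3)."""
    dense = np.zeros((len(dense_roots),) + guide_strands.shape[1:])
    for i, root in enumerate(dense_roots):
        dists = np.linalg.norm(guide_roots - root, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-6)          # closer guides count more
        weights /= weights.sum()
        # Blend guide shapes relative to their roots, then attach at the dense root.
        offsets = guide_strands[nearest] - guide_roots[nearest, None, :]
        dense[i] = root + np.einsum('g,gpc->pc', weights, offsets)
    return dense

# Toy usage: 4 guide strands hanging straight down, 200 dense roots on a flat patch.
P = 16
guide_roots = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
droop = np.stack([np.zeros(P), np.zeros(P), -np.linspace(0, 1, P)], axis=1)
guide_strands = guide_roots[:, None, :] + droop[None]
dense_roots = np.random.default_rng(0).uniform(0, 1, size=(200, 2))
dense_roots = np.concatenate([dense_roots, np.zeros((200, 1))], axis=1)
print(expand_guides(guide_roots, guide_strands, dense_roots).shape)   # (200, 16, 3)
```

Tracking only the guides keeps the optimization small, while the dense hair needed for rendering is recovered by interpolation.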
To better enforce temporal consistency and improve the generalization ability of the model, the team further optimized the 3D scene flow of the representation with multi-view optical flow, using volumetric raymarching.
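The core idea of supervising 3D motion with 2D flow can be summarized in a few lines: the predicted 3D displacement of a point is projected into each camera and compared against the optical flow observed at that pixel. The sketch below uses a plain pinhole camera and hypothetical function names; it is a simplified illustration of that constraint, not the paper's raymarching-based formulation.

```python
import numpy as np

# Hypothetical sketch: penalize disagreement between projected 3D scene flow
# and the 2D optical flow observed in a camera.

def project(K, point_cam):
    """Pinhole projection of a 3D point given in camera coordinates."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

def flow_residual(K, R, t, point_world, scene_flow, observed_flow_2d):
    """2D residual between projected scene flow and observed optical flow."""
    p0 = R @ point_world + t                       # current frame, camera coordinates
    p1 = R @ (point_world + scene_flow) + t        # next frame, camera coordinates
    predicted_flow = project(K, p1) - project(K, p0)
    return predicted_flow - observed_flow_2d

# Toy usage: one camera, a point 2 m in front of it, moving slightly to the right.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 2.])
residual = flow_residual(K, R, t,
                         point_world=np.array([0.1, 0.0, 0.0]),
                         scene_flow=np.array([0.01, 0.0, 0.0]),
                         observed_flow_2d=np.array([2.4, 0.0]))
print(residual)   # small when the 3D flow explains the observed 2D flow
```

Summing such residuals over many cameras and frames is what ties the moving volumetric representation to the observed multi-view footage and keeps the motion temporally consistent.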
"Our method can not only create realistic renders of recorded multi-view sequences but also create renderings for new hair configurations by providing new control signals. We compare our method with existing work on viewpoint synthesis and drivable animation and achieve state-of-the-art results," comments the team.
You can learn more here.