
MonoHair Project: High-Fidelity Hair Modeling From A Monocular Video

A group of Ph.D. students specializing in computer graphics and machine learning has proposed a framework capable of reconstructing high-fidelity 3D hair across a range of hair types from a monocular video.

While existing 3D hair modeling methods have achieved impressive results, high-quality hair reconstruction remains challenging: they either require specific capture conditions that make practical applications difficult, or they rely heavily on learned prior data. To address these limitations, a group of researchers from several universities has developed a framework that achieves high-fidelity hair reconstruction from a monocular video without imposing special requirements on the capture environment.

Image Credits: Keyu Wu

This approach splits the hair modeling process into two main stages: precise exterior reconstruction and interior structure inference. The exterior is reconstructed with Patch-based Multi-View Optimization (PMVO), which collects and integrates hair information from multiple views, independent of prior data, to produce a high-fidelity exterior 3D line map. This map not only captures intricate details but also serves as a guide for inferring the hair's inner structure.
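For intuition, here is a rough Python sketch of the kind of multi-view fusion PMVO describes: project a point on the hair's outer shell into each video frame, read the local 2D hair orientation there, and average the lifted directions. The camera format, the orientation-map format, and the simple lift used below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' PMVO code): fuse per-view 2D hair
# orientation samples into one 3D line direction at a surface point.
import numpy as np

def fuse_orientations(point_3d, cameras, orientation_maps):
    """cameras: list of (K, R, t) with K 3x3 intrinsics, R/t world-to-camera.
    orientation_maps: list of HxWx2 unit 2D hair orientations per view,
    e.g. obtained with oriented (Gabor) filters on each video frame."""
    directions = []
    for (K, R, t), omap in zip(cameras, orientation_maps):
        p_cam = R @ np.asarray(point_3d, dtype=float) + t
        if p_cam[2] <= 0:
            continue                          # behind the camera
        uv = (K @ p_cam)[:2] / p_cam[2]       # pixel coordinates
        h, w = omap.shape[:2]
        u, v = int(round(uv[0])), int(round(uv[1]))
        if not (0 <= u < w and 0 <= v < h):
            continue                          # not visible in this view
        ox, oy = omap[v, u]
        # Crude lift of the in-plane 2D orientation to world space by rotating
        # the image-plane direction back with R^T (ignores perspective skew).
        d = R.T @ np.array([ox, oy, 0.0])
        directions.append(d / (np.linalg.norm(d) + 1e-8))

    if not directions:
        return None
    # Hair orientations are sign-ambiguous, so flip all samples into one
    # hemisphere before averaging.
    ref = directions[0]
    aligned = [d if d @ ref >= 0 else -d for d in directions]
    mean = np.stack(aligned).mean(axis=0)
    return mean / np.linalg.norm(mean)
```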

For the interior, they employ a data-driven, multi-view 3D hair reconstruction method. It utilizes 2D structural renderings derived from the reconstructed exterior, mirroring the synthetic 2D inputs used during training. This alignment effectively bridges the domain gap between the training data and real-world data, enhancing the accuracy and reliability of the interior structure inference.
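Conceptually, bridging that gap means re-rendering the reconstructed exterior into the same kind of 2D structural maps the network saw during training. The Python sketch below rasterizes 3D line segments into a per-pixel orientation-and-depth image; the data formats and function names are illustrative assumptions rather than the paper's actual pipeline.

```python
# Illustrative sketch only: rasterize a reconstructed 3D line map into a 2D
# "structural" rendering (per-pixel orientation + depth) of the kind a network
# trained on synthetic hair data would expect as input.
import numpy as np

def render_structure_map(segments, K, R, t, height, width):
    """segments: (N, 2, 3) array of 3D line segment endpoints in world space."""
    orient = np.zeros((height, width, 2), dtype=np.float32)
    depth = np.full((height, width), np.inf, dtype=np.float32)

    def project(p):
        p_cam = R @ p + t
        uv = (K @ p_cam)[:2] / p_cam[2]
        return uv, p_cam[2]

    for a, b in segments:
        (ua, va), za = project(a)
        (ub, vb), zb = project(b)
        d2 = np.array([ub - ua, vb - va])
        n = np.linalg.norm(d2)
        if n < 1e-6:
            continue
        d2 /= n
        # Walk along the projected segment and splat orientation + depth,
        # keeping only the closest segment per pixel.
        steps = int(np.ceil(n))
        for s in range(steps + 1):
            f = s / max(steps, 1)
            u = int(round(ua + f * (ub - ua)))
            v = int(round(va + f * (vb - va)))
            z = za + f * (zb - za)
            if 0 <= u < width and 0 <= v < height and z < depth[v, u]:
                depth[v, u] = z
                orient[v, u] = d2
    return orient, depth
```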

Lastly, the tool generates a strand model and resolves directional ambiguity with a hair growth algorithm. The developers say experiments demonstrate the method's robustness across diverse hairstyles and its state-of-the-art performance.
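One common way to resolve that grow-direction ambiguity, sketched below in Python, is to trace strands outward from scalp roots through the reconstructed 3D orientation field, flipping each sampled orientation so it agrees with the current growth direction. The step size, the field lookup, and the parameter names are illustrative assumptions, not the authors' exact algorithm.

```python
# Illustrative sketch only: grow a strand from a scalp root through a 3D
# orientation field, resolving the sign of each sampled orientation.
import numpy as np

def grow_strand(root, root_normal, sample_orientation, step=0.003, max_points=200):
    """root: 3D scalp point; root_normal: outward scalp normal used as the
    initial growth direction; sample_orientation(p) -> unit 3D orientation
    (sign-ambiguous) at position p, or None outside the hair volume."""
    strand = [np.asarray(root, dtype=float)]
    prev_dir = np.asarray(root_normal, dtype=float)
    prev_dir /= np.linalg.norm(prev_dir)

    for _ in range(max_points - 1):
        o = sample_orientation(strand[-1])
        if o is None:
            break                          # left the reconstructed hair volume
        o = o / (np.linalg.norm(o) + 1e-8)
        if o @ prev_dir < 0:
            o = -o                         # pick the sign consistent with growth
        strand.append(strand[-1] + step * o)
        prev_dir = o
    return np.stack(strand)
```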

Here's the comparison with the SOTA method Neural Haircut:

Image Credits: Keyu Wu

More results below:

Image Credits: Keyu Wu

"The reconstruction results of our method in some challenging cases. We use two rendering styles to showcase our results: colorful strands for displaying hair geometry and Blender for realistic rendering."

Learn more about the MonoHair framework on its official page. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
