The result could be used in DCC software.
Researchers from Meta Reality Labs, Zhejiang University, and Carnegie Mellon University presented CT2Hair – an automatic framework for creating high-fidelity 3D hair models using computed tomography (CT).
The method takes real-world hair wigs as input and reconstructs hair strands for a wide range of hairstyles. What makes it stand out is that CT scans can see through the hair, producing density volumes that capture the full hair region, including strands hidden beneath the surface. The resulting models are highly detailed and can be exported to graphics apps.
The researchers employ a two-step coarse-to-fine approach: guide strand initialization followed by dense strand optimization. First, a 3D orientation field is computed from the input density volume, and guide strands are generated by following these orientations. The estimated guide strands are then interpolated so that they are evenly distributed across the scalp, and the resulting hair strands are optimized using the source density volume as the target. The optimized strands form the final 3D hair model, ready for use in other software.
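To give a rough feel for the guide-strand stage, here is a minimal, hypothetical sketch in plain Python: a strand is grown from a scalp root by stepping along an orientation field, and two guide strands are blended point-wise to densify scalp coverage. The field here is a toy analytic function standing in for the paper's orientation field, which is actually derived from the CT density volume; none of these names come from the CT2Hair codebase.

```python
import math

def trace_strand(root, orient, step=0.1, n_steps=50):
    """Grow a guide strand from a scalp root by Euler integration of a
    3D orientation field (toy stand-in for the field the paper computes
    from the CT density volume)."""
    pts = [root]
    p = root
    for _ in range(n_steps):
        d = orient(p)
        norm = math.sqrt(sum(c * c for c in d)) or 1.0
        # Take a fixed-length step along the (normalized) orientation.
        p = tuple(pi + step * di / norm for pi, di in zip(p, d))
        pts.append(p)
    return pts

def interpolate_guides(a, b, t):
    """Point-wise blend two guide strands to fill in scalp coverage."""
    return [tuple((1 - t) * pa + t * pb for pa, pb in zip(qa, qb))
            for qa, qb in zip(a, b)]

# Toy orientation field: strands fall in -z and drift outward in x.
field = lambda p: (0.2 * p[0], 0.0, -1.0)
s1 = trace_strand((0.5, 0.0, 1.0), field)
s2 = trace_strand((-0.5, 0.0, 1.0), field)
mid = interpolate_guides(s1, s2, 0.5)  # a new strand between the two guides
```

In the actual method, these interpolated strands are only an initialization; a subsequent optimization pass refines them against the density volume.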
The video provided by the authors shows a very impressive result where you can almost count every hair on the digital head. The research is part of SIGGRAPH 2023, and the repository is available on GitHub.
Find out more here and join our 80 Level Talent platform and our Telegram channel, follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.