
Specialists from the University of Science and Technology of China, Image Derivative Inc., and Zhejiang University have presented SelfRecon, a method for reconstructing the geometry of a clothed human body from a monocular self-rotating video.
According to SelfRecon's developers, the method combines implicit and explicit representations, which allows it to recover space-time coherent geometries from a recorded monocular video of a person. The technology harnesses the advantages of both representations: the differential mask loss of the explicit mesh yields a smooth, coherent overall shape, while differentiable neural rendering refines the fine details.
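To make the idea of a combined objective more concrete, here is a rough conceptual sketch (not SelfRecon's actual code) of how a mask loss on a rasterized mesh silhouette and a rendering loss on an implicitly rendered surface could be summed during optimization. The function and tensor names are hypothetical placeholders, and the silhouette/color predictions are assumed to come from a differentiable renderer upstream.

```python
# Hedged sketch only: illustrates a combined mask + rendering objective,
# not the authors' implementation or API.
import torch

def total_loss(pred_silhouette, gt_mask, pred_rgb, gt_rgb,
               w_mask=1.0, w_render=1.0):
    # Mask loss on the explicit mesh: compares the differentiably rasterized
    # silhouette of the extracted mesh with the ground-truth person mask,
    # constraining the coarse, space-time coherent overall shape.
    mask_loss = torch.nn.functional.binary_cross_entropy(
        pred_silhouette.clamp(1e-6, 1 - 1e-6), gt_mask)

    # Rendering loss on the implicit surface: compares colors produced by
    # differentiable neural rendering with the input frame, refining detail.
    render_loss = torch.nn.functional.l1_loss(pred_rgb, gt_rgb)

    return w_mask * mask_loss + w_render * render_loss

# Dummy tensors standing in for renderer outputs and video-frame targets.
pred_sil = torch.rand(1, 512, 512, requires_grad=True)
gt_mask = (torch.rand(1, 512, 512) > 0.5).float()
pred_rgb = torch.rand(1, 3, 512, 512, requires_grad=True)
gt_rgb = torch.rand(1, 3, 512, 512)

loss = total_loss(pred_sil, gt_mask, pred_rgb, gt_rgb)
loss.backward()  # gradients flow back to whatever produced the predictions
```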
Although dynamic topology itself is not a new idea, the approach seems quite interesting: the mesh is rebuilt several times per second and refined as additional information is obtained from the video. According to the devs, "comprehensive evaluations on self-rotating human videos demonstrate that it outperforms existing methods".
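The repeated rebuilding can be pictured as periodically re-extracting an explicit mesh from an evolving implicit surface so the two representations stay in sync. Below is a minimal illustration (again, not the paper's code) using marching cubes on a sampled signed-distance field; `sample_sdf_on_grid` is a hypothetical stand-in for querying a learned network, here replaced by an analytic sphere.

```python
# Hedged sketch: re-extracting an explicit mesh from an implicit SDF.
import numpy as np
from skimage import measure

def sample_sdf_on_grid(res=64):
    # Placeholder SDF of a sphere; a learned SDF network would be queried here.
    xs = np.linspace(-1.0, 1.0, res)
    x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
    return np.sqrt(x**2 + y**2 + z**2) - 0.5

def extract_mesh(res=64):
    sdf = sample_sdf_on_grid(res)
    # Marching cubes at the zero level set gives the current explicit mesh,
    # which can then drive silhouette (mask) losses during optimization.
    verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
    return verts, faces

verts, faces = extract_mesh()
print(verts.shape, faces.shape)
```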
The creators of SelfRecon suggest that their technology could be useful in retail, helping users create personalized avatars for AR and VR, anthropometry, and virtual try-on.
You can learn more about SelfRecon by visiting its website. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artwork, and more.