A Technology that Reconstructs a Clothed Human Body Presented

A method for geometric reconstruction of clothed human bodies from a monocular self-rotating video has been presented by researchers from China. The method combines implicit and explicit representations to produce a smooth overall shape as well as refined details.

Specialists from the University of Science and Technology of China, Image Derivative Inc., and Zhejiang University presented SelfRecon, a method for reconstructing the geometry of a clothed human body from a monocular self-rotating video.

According to SelfRecon's devs, the method combines implicit and explicit representations, which allows it to recover space-time coherent geometry from a recorded monocular video of a person. The technology harnesses the advantages of both representations: a smooth overall shape is obtained through a differentiable mask loss on the explicit mesh, while refined details come from differentiable neural rendering.
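To make the two ingredients above more concrete, here is a minimal sketch in plain PyTorch. It is not SelfRecon's code: the class and function names are our own placeholders, the SDF network stands in for the implicit representation, and a simple L1 term stands in for the mask loss that would compare a differentiably rendered silhouette of the explicit mesh with the video's segmentation mask.

```python
# Illustrative sketch only -- names and structure are assumptions, not SelfRecon's API.
import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    """Implicit representation: an MLP mapping 3D points to a signed distance."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):            # points: (N, 3)
        return self.net(points)           # signed distance per point: (N, 1)

def mask_loss(rendered_silhouette, gt_mask):
    """Compare a silhouette rasterized from the explicit mesh (via some
    differentiable renderer) with the ground-truth mask. A plain L1 term
    stands in for the actual loss used in the paper."""
    return (rendered_silhouette - gt_mask).abs().mean()

# Example with random stand-ins for a rendered silhouette and a ground-truth mask.
sdf = ImplicitSDF()
pts = torch.rand(1024, 3, requires_grad=True)
print(sdf(pts).shape)                                 # torch.Size([1024, 1])

sil = torch.rand(256, 256, requires_grad=True)        # would come from the mesh renderer
gt = (torch.rand(256, 256) > 0.5).float()
loss = mask_loss(sil, gt)
loss.backward()                                       # gradients flow back toward the geometry
```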

Although dynamic topology itself is not new, the approach is quite interesting: the mesh is rebuilt several times per second and refined as additional information is obtained from the video. According to the devs, "comprehensive evaluations on self-rotating human videos demonstrate that it outperforms existing methods".
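The article does not describe how the mesh is rebuilt; a common way to obtain an explicit mesh from an implicit field is to sample the signed distance function on a grid and run marching cubes at the zero level set. The sketch below, using scikit-image, shows that general idea and is not SelfRecon's extraction step.

```python
# Sketch of extracting an explicit mesh from an implicit SDF field via marching cubes.
# SelfRecon's actual mesh-update step may differ; this only illustrates the general idea.
import numpy as np
from skimage import measure

def extract_mesh(sdf_fn, resolution=128, bound=1.0):
    """Sample the SDF on a regular grid and run marching cubes at the zero level set."""
    axis = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    sdf_values = sdf_fn(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    verts, faces, normals, _ = measure.marching_cubes(sdf_values, level=0.0)
    # Map voxel indices back to world coordinates.
    verts = verts / (resolution - 1) * 2 * bound - bound
    return verts, faces

# Example with a sphere of radius 0.5 as the implicit surface.
sphere_sdf = lambda p: np.linalg.norm(p, axis=-1) - 0.5
verts, faces = extract_mesh(sphere_sdf)
print(verts.shape, faces.shape)
```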

The creators of SelfRecon suggest that their technology will be suitable for retail, helping users create personalized avatars for AR and VR, anthropometry, and virtual try-on.

You can learn more about SelfRecon by visiting its website. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
