
Neural Network Making Animatable 3D Models from Videos

Reconstruct an animatable 3D model from a video with BANMo.

BANMo is a neural network that reconstructs an animatable 3D model from videos capturing a deformable object, without requiring a specialized sensor or a pre-defined template shape.

BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) from many casual monocular videos in a differentiable rendering framework.

The creators aim to merge three schools of thought:

  1. classic deformable shape models that make use of articulated bones and blend skinning,
  2. volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization,
  3. canonical embeddings that generate correspondences between pixels and an articulated model.
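To make the first ingredient concrete, here is a minimal numpy sketch of classic linear blend skinning: each canonical point is deformed by a weighted blend of per-bone rigid transforms. The function name and array shapes are illustrative assumptions, not BANMo's actual API.

```python
import numpy as np

def linear_blend_skinning(points, weights, bone_transforms):
    """Deform canonical points by a weighted blend of bone transforms.

    points:          (N, 3) canonical-space points
    weights:         (N, B) skinning weights, each row sums to 1
    bone_transforms: (B, 4, 4) rigid transform per bone
    """
    n = points.shape[0]
    # Homogeneous coordinates: (N, 4)
    homog = np.concatenate([points, np.ones((n, 1))], axis=1)
    # Apply every bone transform to every point: (B, N, 4)
    per_bone = np.einsum('bij,nj->bni', bone_transforms, homog)
    # Blend the per-bone results with the skinning weights: (N, 4)
    blended = np.einsum('nb,bni->ni', weights, per_bone)
    return blended[:, :3]
```

A point weighted half-and-half between a static bone and a bone translated by one unit along x ends up halfway between the two transformed positions.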

Building on these, the team introduces neural blend skinning models that allow for differentiable and invertible articulated deformations. Combined with canonical embeddings, such models make it possible to establish dense correspondences across videos that can be self-supervised with cycle consistency. BANMo can also render realistic images from novel viewpoints and poses.
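The cycle-consistency idea above can be sketched in a few lines: warp points forward through one deformation, warp them back through its inverse, and penalize any drift from the starting positions. The `forward_map`/`backward_map` callables here are hypothetical stand-ins for BANMo's learned canonical-to-camera and camera-to-canonical deformations.

```python
import numpy as np

def cycle_consistency_loss(points, forward_map, backward_map):
    """Mean squared drift after a forward/backward warp round trip.

    points:       (N, D) array of sample points
    forward_map:  callable mapping points to the deformed space
    backward_map: callable mapping deformed points back (ideally the inverse)
    """
    warped = forward_map(points)          # canonical -> deformed
    recovered = backward_map(warped)      # deformed -> canonical
    # Zero when backward_map exactly inverts forward_map on these points
    return np.mean(np.sum((points - recovered) ** 2, axis=-1))
```

With a true inverse pair the loss is zero; any mismatch between the two learned maps shows up directly as a positive penalty, which is what makes the correspondence self-supervised.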

Find out more about the project here and don't forget to join our new Reddit page and our new Telegram channel, follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
