Reconstruct an animatable 3D model from casual videos with BANMo.
BANMo is a neural method that reconstructs an animatable 3D model from videos of a deformable object, without requiring a specialized sensor or a pre-defined template shape.
From many casual monocular videos, BANMo builds high-fidelity, articulated 3D models (including shape and animatable skinning weights) in a differentiable rendering framework.
The creators aim to merge three schools of thought:
- classic deformable shape models that make use of articulated bones and blend skinning (see the sketch after this list),
- volumetric neural radiance fields (NeRFs) that are amenable to gradient-based optimization,
- canonical embeddings that generate correspondences between pixels and an articulated model.
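
To make the first ingredient concrete, here is a minimal sketch of linear blend skinning in NumPy: each point is deformed by a weighted blend of rigid bone transforms. The function name, the placeholder weights, and the transforms are illustrative assumptions, not BANMo's learned quantities.

```python
import numpy as np

def blend_skinning(points, weights, bone_transforms):
    """Deform rest-pose points by a weighted blend of rigid bone transforms.

    points:          (N, 3) rest-pose point positions
    weights:         (N, B) skinning weights, each row summing to 1
    bone_transforms: (B, 4, 4) homogeneous rigid transforms, one per bone
    """
    # Homogeneous coordinates: (N, 4)
    points_h = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    # Apply every bone transform to every point: (B, N, 4)
    per_bone = np.einsum('bij,nj->bni', bone_transforms, points_h)
    # Blend the per-bone results with the skinning weights: (N, 4)
    blended = np.einsum('nb,bni->ni', weights, per_bone)
    return blended[:, :3]

if __name__ == "__main__":
    pts = np.random.rand(5, 3)
    w = np.full((5, 2), 0.5)              # two bones, equal weights
    T = np.stack([np.eye(4), np.eye(4)])  # identity transforms
    assert np.allclose(blend_skinning(pts, w, T), pts)
```

Unlike classic pipelines where weights and bones are hand-authored, BANMo optimizes both the skinning weights and the articulations as part of the model.
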
Building on these three ideas, the team introduces neural blend-skinning models that allow for differentiable and invertible articulated deformations. Combined with canonical embeddings, these models establish dense correspondences across videos that can be self-supervised with cycle consistency. BANMo can also render realistic images from novel viewpoints and poses.
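
The cycle-consistency idea can be sketched as a round-trip check: warp camera-space points to the canonical frame and back, then penalize how far each point drifts. The `backward_warp` and `forward_warp` callables below stand in for BANMo's learned invertible deformations; their names and signatures are assumptions for illustration, not the paper's API.

```python
import torch

def cycle_consistency_loss(points_cam, backward_warp, forward_warp):
    """points_cam: (N, 3) 3D points sampled along camera rays."""
    points_canonical = backward_warp(points_cam)  # camera -> canonical
    points_back = forward_warp(points_canonical)  # canonical -> camera
    # Self-supervised signal: the round trip should return each point
    # to where it started.
    return torch.mean(torch.sum((points_back - points_cam) ** 2, dim=-1))
```

Because the loss needs no labels, it can be applied across all frames of all input videos, which is what lets the dense correspondences be learned without manual annotation.
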