NVIDIA's GANverse3D now allows the creation of an animated 3D model from a single 2D picture.
NVIDIA Research released a new deep learning engine that creates 3D object models from standard 2D images. Developed by the NVIDIA AI Research Lab in Toronto, the GANverse3D application inflates flat images into realistic 3D models that can be visualized and controlled in virtual environments. This capability could help architects, creators, game developers, and designers easily add new objects to their mockups without needing expertise in 3D modeling, or a large budget to spend on renderings.
A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, taillights, and blinkers.
To generate a dataset for training, the researchers harnessed a generative adversarial network, or GAN, to synthesize images depicting the same object from multiple viewpoints. These multi-view images were plugged into a rendering framework for inverse graphics, the process of inferring 3D mesh models from 2D images.
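To make the idea of a GAN-built multi-view dataset concrete, here is a minimal sketch of the general pattern: keep one latent code fixed (one object identity) and vary a camera/viewpoint input to render the same object from several angles. Everything below, from the `ViewConditionedGenerator` class to its shapes, is a hypothetical, randomly initialized stand-in for illustration, not the actual GANverse3D or StyleGAN code.

```python
import math
import torch
import torch.nn as nn

class ViewConditionedGenerator(nn.Module):
    """Toy generator: maps (latent code, camera azimuth) to a small RGB image."""
    def __init__(self, latent_dim=128, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 512),
            nn.ReLU(),
            nn.Linear(512, 3 * image_size * image_size),
            nn.Tanh(),
        )

    def forward(self, z, azimuth):
        # Condition the image on both the object identity (z) and the viewpoint.
        x = torch.cat([z, azimuth.unsqueeze(-1)], dim=-1)
        img = self.net(x)
        return img.view(-1, 3, self.image_size, self.image_size)

generator = ViewConditionedGenerator()
z = torch.randn(1, 128)  # one latent code == one object identity
azimuths = torch.linspace(0, 2 * math.pi, steps=8)  # 8 camera angles around the object

# Same object, different viewpoints -> the multi-view images described above.
multi_view_images = torch.cat([generator(z, a.view(1)) for a in azimuths], dim=0)
print(multi_view_images.shape)  # torch.Size([8, 3, 64, 64])
```

In the real pipeline these synthesized viewpoints become the supervision for the inverse-graphics step, which learns to go the other way: from a rendered 2D view back to the underlying 3D shape.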
Once trained on multi-view images, GANverse3D needs only a single 2D image to predict a 3D mesh model. This model can be used with a 3D neural renderer that gives developers control to customize objects and swap out backgrounds.
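As a rough illustration of that single-image inference step, the sketch below encodes one photo and regresses per-vertex offsets that deform a fixed template mesh. The class name, the sphere-like template, and all shapes are assumptions made for this example; they are not the GANverse3D internals.

```python
import torch
import torch.nn as nn

class SingleImageMeshPredictor(nn.Module):
    """Toy predictor: encodes an image and regresses offsets for a template mesh."""
    def __init__(self, num_vertices=642, image_size=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (image_size // 4) ** 2, 256),
            nn.ReLU(),
        )
        self.offset_head = nn.Linear(256, num_vertices * 3)
        # Placeholder template: points on a unit sphere; a real system would
        # use a category-specific template (e.g. a coarse car mesh).
        template = torch.randn(num_vertices, 3)
        self.register_buffer("template", template / template.norm(dim=-1, keepdim=True))

    def forward(self, image):
        features = self.encoder(image)
        offsets = self.offset_head(features).view(-1, self.template.shape[0], 3)
        return self.template + offsets  # predicted mesh vertices

predictor = SingleImageMeshPredictor()
single_photo = torch.rand(1, 3, 64, 64)  # one 2D image in...
vertices = predictor(single_photo)       # ...one 3D mesh out
print(vertices.shape)                    # torch.Size([1, 642, 3])
# A neural/differentiable renderer would then rasterize these vertices so the
# mesh can be textured, relit, and composited into a new background.
```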
You can learn more about NVIDIA's new deep-learning engine by visiting the NVIDIA Blog. Also, don't forget to join our new Telegram channel and our Discord, and to follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.