Neuralangelo generates lifelike virtual replicas of buildings, sculptures, and other real-world objects.
NVIDIA Research presented Neuralangelo, a new AI model that transforms 2D videos into detailed 3D structures, "generating lifelike virtual replicas of buildings, sculptures, and other real-world objects."
Artists can import the resulting assets into design apps and edit them for use in art, video game development, robotics, and industrial digital twins.
"The 3D reconstruction capabilities Neuralangelo offers will be a huge benefit to creators, helping them recreate the real world in the digital world," said Ming-Yu Liu, senior director of research and co-author of the research paper. "This tool will eventually enable developers to import detailed objects – whether small statues or massive buildings – into virtual environments for video games or industrial digital twins."
You can see how it works in the demo above, where NVIDIA showcased how the model can recreate Michelangelo’s David and reconstruct building interiors and exteriors.
Neuralangelo is based on instant neural graphics primitives, the technology behind NVIDIA's Instant NeRF, which turns 2D images into 3D models.
Given a 2D video of an object or scene filmed from multiple angles, the model selects several frames. Once the camera position of each frame has been determined, Neuralangelo creates a rough 3D representation of the scene and then optimizes the render to sharpen the details. The result is a detailed 3D object or large-scale scene that can be used in other software.
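The "rough first, details later" step is a coarse-to-fine optimization over multi-resolution feature grids. The toy sketch below illustrates that idea in 1D with NumPy (the setup, grid sizes, and schedule are all simplified assumptions for illustration, not NVIDIA's actual implementation): a target signal is fit by summing interpolated grid levels, with finer levels unlocked progressively so the coarse shape converges first and high-frequency detail is recovered afterwards.

```python
import numpy as np

# Toy coarse-to-fine fitting in 1D (an illustrative assumption, not
# Neuralangelo's real pipeline): a signal with coarse and fine components
# is approximated by a sum of multi-resolution feature grids, and finer
# grid levels are activated progressively during optimization.

x = np.linspace(0.0, 1.0, 256)
target = np.sin(2 * np.pi * x) + 0.3 * np.sin(16 * np.pi * x)  # coarse + fine detail

resolutions = [4, 16, 64]                       # coarse -> fine grid levels
grids = [np.zeros(r + 1) for r in resolutions]  # one feature value per grid node

def predict(grids, active_levels):
    """Sum linearly interpolated contributions of the active grid levels."""
    out = np.zeros_like(x)
    for level in range(active_levels):
        r = resolutions[level]
        pos = x * r
        i0 = np.clip(pos.astype(int), 0, r - 1)
        w = pos - i0
        out += (1 - w) * grids[level][i0] + w * grids[level][i0 + 1]
    return out

lr = 1.0
for step in range(900):
    active = 1 + min(len(resolutions) - 1, step // 300)  # unlock a finer level every 300 steps
    err = predict(grids, active) - target
    # Manual gradient of the mean squared error w.r.t. each active grid's nodes.
    for level in range(active):
        r = resolutions[level]
        pos = x * r
        i0 = np.clip(pos.astype(int), 0, r - 1)
        w = pos - i0
        g = np.zeros_like(grids[level])
        np.add.at(g, i0, (1 - w) * err)      # scatter-add gradients to left nodes
        np.add.at(g, i0 + 1, w * err)        # and to right nodes
        grids[level] -= lr * g / len(x)

final_mse = float(np.mean((predict(grids, len(resolutions)) - target) ** 2))
print(f"final MSE: {final_mse:.4f}")
```

Starting from the coarsest grid keeps early optimization from chasing high-frequency noise; the fine grid only refines the residual left by the coarse fit, which is the intuition behind Neuralangelo's sharpened details.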
Neuralangelo is one of the many projects by NVIDIA Research to be presented at the Conference on Computer Vision and Pattern Recognition (CVPR), held June 18-22. The lab's papers at the conference span topics including pose estimation, 3D reconstruction, and video generation.