
NVIDIA Released Source Code for Video-to-3D AI Tool Neuralangelo

It can generate lifelike virtual replicas of objects.

NVIDIA released the source code for Neuralangelo – an AI model that turns 2D videos into 3D structures, "generating lifelike virtual replicas of buildings, sculptures, and other real-world objects."

You can see a demonstration of its fantastic capabilities in the video above, where NVIDIA recreated Michelangelo’s David with the tool and reconstructed building interiors and exteriors.

Image credit: NVIDIA

Neuralangelo is built on instant neural graphics primitives, the technology behind NVIDIA's Instant NeRF, which turns 2D images into 3D models.
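In practice, the core idea behind instant neural graphics primitives is a multiresolution hash encoding: a 3D point is mapped to trainable features stored in small hash tables at several grid resolutions, and a tiny neural network turns those features into geometry and appearance. The following sketch is a simplified, hypothetical illustration of that encoding, not NVIDIA's actual implementation – the table sizes and level counts are made up, and it looks up a single grid corner instead of interpolating all eight.

# Minimal sketch of a multiresolution hash encoding (illustrative only).
import numpy as np

NUM_LEVELS = 4            # real models use many more levels
TABLE_SIZE = 2 ** 14      # entries per hash table at each level
FEATURES_PER_ENTRY = 2
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

# One trainable feature table per resolution level.
tables = [np.random.randn(TABLE_SIZE, FEATURES_PER_ENTRY).astype(np.float32)
          for _ in range(NUM_LEVELS)]

def hash_grid_encode(xyz):
    """Map a point in [0, 1]^3 to concatenated features from every level."""
    features = []
    for level, table in enumerate(tables):
        resolution = 16 * 2 ** level               # the grid gets finer each level
        corner = np.floor(xyz * resolution).astype(np.uint64)
        # Spatial hash of the grid corner -> index into this level's table.
        index = int(np.bitwise_xor.reduce(corner * PRIMES) % TABLE_SIZE)
        features.append(table[index])
    return np.concatenate(features)                # fed to a small neural network

print(hash_grid_encode(np.array([0.3, 0.7, 0.5])).shape)  # (8,)

The features for all levels are optimized jointly with the network, which is what makes training and rendering so fast compared with a large standalone network.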

Here is how it works. Neuralangelo selects several frames from a 2D video of an object or scene filmed from various angles. Once it has determined the camera position of each frame, it creates a rough 3D representation of the scene and then refines that representation to sharpen the details. The result is a 3D object or large-scale scene that can be used in other software.
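To make those steps concrete, here is a deliberately toy, hypothetical outline of such a video-to-3D pipeline. The function names and the single-number "surface" are inventions for illustration and do not come from NVIDIA's released code; real pipelines recover camera poses with structure-from-motion tools and optimize a neural surface representation rather than a scalar.

# A toy, hypothetical outline of the steps described above.
from dataclasses import dataclass
import random

@dataclass
class Frame:
    pixel: float          # stand-in for an image
    pose: tuple = None    # camera position, filled in later

def sample_frames(video_path, count=60):
    # Step 1: pick frames that view the object from many different angles.
    return [Frame(pixel=random.random()) for _ in range(count)]

def estimate_poses(frames):
    # Step 2: recover each frame's camera position (real pipelines use
    # structure-from-motion tools for this).
    for i, frame in enumerate(frames):
        frame.pose = (i, 0.0, 0.0)
    return frames

def optimize_surface(frames, steps=2000, lr=0.01):
    # Steps 3-4: start from a rough representation and iteratively refine it
    # so its "renderings" match the input frames. Here the whole scene is
    # reduced to one number fitted by gradient descent on a squared error.
    target = sum(f.pixel for f in frames) / len(frames)
    estimate = 0.0                                  # the rough initial guess
    for _ in range(steps):
        estimate -= lr * 2 * (estimate - target)    # sharpen against the frames
    return estimate

frames = estimate_poses(sample_frames("statue.mp4"))
print(optimize_surface(frames))   # final representation -> export as a 3D asset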

The open-source code will allow developers to implement the technology in their projects and create objects for art, video games, robotics, and industrial digital twins.

Find the code here and join our 80 Level Talent platform and our Telegram channel, follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.


