NeuralVDB: AI-Powered Version of OpenVDB Revealed

The new tool allows creators, developers, and researchers to interact with extremely large and complex datasets in real-time.

Among the many new tools and programs revealed by NVIDIA during SIGGRAPH 2022, one of the most prominent is without a doubt the newly released NeuralVDB, an AI-powered version of OpenVDB, the industry-standard library for simulating and rendering sparse volumetric data such as water, fire, smoke, and clouds.

According to the team, NeuralVDB adds machine learning to the library, thus introducing compact neural representations that dramatically reduce its memory footprint by up to 100x compared to NanoVDB, a GPU-accelerated version of OpenVDB introduced by NVIDIA last year.

Thanks to AI, 3D data can now be represented at an even higher resolution and at a much larger scale, allowing users to easily handle massive volumetric datasets.

On top of that, NeuralVDB allows the weights of one frame to be used for the subsequent one: by reusing the network results from the previous frame, users can accelerate training and achieve temporal coherency across an animated sequence.
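To make these two ideas more concrete, here is a minimal, illustrative sketch in PyTorch. It is not the NeuralVDB API, and every name in it (VolumeMLP, fit_frame, the toy density fields) is an assumption made purely for this example: a small coordinate MLP is fit to one frame of volume samples, acting as a compact neural stand-in for a dense grid, and its trained weights are then reused as the starting point for the next frame.

```python
# Illustrative sketch only, NOT the NeuralVDB API.
# Idea 1: a small coordinate MLP serves as a compact neural representation
#         of a volumetric field (a few thousand weights instead of a grid).
# Idea 2: the weights fitted to frame N warm-start the fit for frame N+1,
#         giving faster training and temporally coherent results.
import torch
import torch.nn as nn

class VolumeMLP(nn.Module):
    """Maps an (x, y, z) coordinate to a scalar density value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)

def fit_frame(model, coords, values, steps=200, lr=1e-3):
    """Fit the MLP to one frame of (coordinate, density) samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), values)
        loss.backward()
        opt.step()
    return loss.item()

# Toy "frames": random sample points and a smoothly varying density field
# that changes only slightly between consecutive frames.
coords = torch.rand(4096, 3)
frame_0 = torch.sin(coords.sum(dim=1, keepdim=True) * 3.0)
frame_1 = torch.sin(coords.sum(dim=1, keepdim=True) * 3.0 + 0.1)

model = VolumeMLP()
print("frame 0 loss:", fit_frame(model, coords, frame_0))

# Warm start: frame 1 training begins from frame 0's weights instead of from
# scratch, so far fewer steps are needed for a similar field.
print("frame 1 loss:", fit_frame(model, coords, frame_1, steps=50))
```

Because the second fit starts from weights that already describe a very similar field, it needs far fewer steps and the resulting representations vary smoothly from frame to frame, which is the temporal coherency the NVIDIA team describes.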

"Hitting this trifecta of dramatically reducing memory requirements, accelerating training, and enabling temporal coherency allows NeuralVDB to unlock new possibilities for scientific and industrial use cases, including massive, complex volume datasets for AI-enabled medical imaging, large-scale digital twin simulations, and more," commented NVIDIA.

Click here to learn more about NeuralVDB. Also, don't forget to join our Reddit page and our Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
