NVIDIA's New AI-Powered System For Real-Time 3D Rendering

The result is achieved "with a combination of algorithmic and system level innovations".

The NVIDIA Research team has recently shared Real-Time Neural Appearance Models, a new research paper that describes the team's novel method for real-time rendering of 3D scenes and objects with complex appearance, achieved "with a combination of algorithmic and system level innovations".

Powered by AI, NVIDIA's new appearance model utilizes learned hierarchical textures that are interpreted using neural decoders, which produce reflectance values and importance-sampled directions. According to the team, the decoders feature two graphics priors, one for accurate reconstruction of mesoscale effects and the other for efficient importance sampling, enabling the model to support anisotropic sampling and level-of-detail rendering.
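The paper does not ship reference code, but as a rough, hypothetical sketch of the general structure, the snippet below (plain NumPy, with invented names such as latent_pyramid, sample_latent, and decode_brdf, and untrained placeholder weights) shows how a latent code fetched from a hierarchical texture could be fed through a small MLP decoder to produce an RGB reflectance value for a pair of directions. It only illustrates the shape of the idea, not NVIDIA's actual model, layer sizes, or training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned hierarchical texture: a small pyramid of latent feature maps.
# Resolutions, channel count, and values are placeholders, not the paper's.
latent_pyramid = [rng.standard_normal((res, res, 8)).astype(np.float32)
                  for res in (64, 32, 16)]

def sample_latent(uv, level):
    """Nearest-neighbor lookup of the latent code at a UV coordinate and pyramid level."""
    tex = latent_pyramid[level]
    x = min(int(uv[0] * tex.shape[1]), tex.shape[1] - 1)
    y = min(int(uv[1] * tex.shape[0]), tex.shape[0] - 1)
    return tex[y, x]

# Toy, untrained weights for a two-layer MLP decoder (stand-ins for learned parameters).
W1 = rng.standard_normal((8 + 6, 32)).astype(np.float32) * 0.1
W2 = rng.standard_normal((32, 3)).astype(np.float32) * 0.1

def decode_brdf(latent, wi, wo):
    """Decode an RGB reflectance value from the latent code and two directions."""
    x = np.concatenate([latent, wi, wo]).astype(np.float32)
    h = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    return np.exp(h @ W2)         # exp keeps the reflectance positive

uv = np.array([0.3, 0.7])
wi = np.array([0.0, 0.0, 1.0])    # incoming direction
wo = np.array([0.5, 0.0, 0.866])  # outgoing direction
print(decode_brdf(sample_latent(uv, level=0), wi, wo))
```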

"By exposing hardware accelerated tensor operations to ray tracing shaders, we show that it is possible to inline and execute the neural decoders efficiently inside a real-time path tracer," reads the paper. "We analyze scalability with increasing number of neural materials and propose to improve performance using code optimized for coherent and divergent execution. Our neural material shaders can be over an order of magnitude faster than non-neural layered materials. This opens up the door for using film-quality visuals in real-time applications such as games and live previews."

You can read the full paper here. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
