NVIDIA Will Present a Record 16 Research Papers at SIGGRAPH 2022

The papers cover advancements in neural content creation tools, display and human perception, the mathematical foundations of computer graphics, and neural rendering.

NVIDIA announced it is presenting a record 16 research papers at SIGGRAPH 2022, the annual computer graphics conference.

"The papers span the breadth of graphics research, with advancements in neural content creation tools, display and human perception, the mathematical foundations of computer graphics and neural rendering."

The papers were created in collaboration with 14 universities, including Dartmouth College, Stanford University, the Swiss Federal Institute of Technology Lausanne, and Tel Aviv University. NVIDIA states it has produced a reinforcement learning model that smoothly simulates athletic moves, ultra-thin holographic glasses for virtual reality, and a real-time rendering technique for objects illuminated by hidden light sources.

Here are the topics of the research that will be presented.

Neural Tool for Multi-Skilled Simulated Characters

Reinforcement-learning agents usually master just one skill at a time, but the researchers have created a framework that enables a simulated character to learn multiple skills. The character can reuse previously learned skills to respond to new scenarios, improving efficiency and reducing the need for additional motion data.

NVIDIA will also present 3D neural tools for surface reconstruction from point clouds and interactive shape editing, as well as 2D tools that help AI better understand gaps in vector sketches and improve the visual quality of time-lapse videos.

Bringing Virtual Reality to Lightweight Glasses

Researchers from NVIDIA and Stanford have created the technology needed to put 3D holographic images into a wearable display just a couple of millimeters thick. The optics were designed with an AI-powered algorithm and can create holograms right in front of the user’s eyes.

Another paper proposes a new computer-generated holography framework that improves image quality while optimizing bandwidth usage. A third paper measures how rendering quality affects the speed at which users react to on-screen information.

New Levels of Real-Time Lighting Complexity

One paper introduces a path resampling algorithm that enables real-time rendering of scenes with complex lighting, including hidden light sources. It relies on statistical resampling during rendering to approximate complex light paths in real time. The researchers applied the algorithm to an indirectly lit set of teapots made of metal, ceramic, and glass.
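The announcement does not detail the algorithm itself, but the resampling idea it builds on (resampled importance sampling, the building block of ReSTIR-style real-time renderers) can be sketched briefly: draw many cheap candidate samples from a simple source distribution, weight each by a target function proportional to its lighting contribution, and keep one candidate in proportion to those weights. Below is a minimal, illustrative Python sketch with a toy 1D integrand standing in for a light path's contribution; it is not NVIDIA's implementation.

```python
import random

def resampled_importance_sampling(target, source_sample, source_pdf, num_candidates=32):
    """Resampled importance sampling (RIS): draw cheap candidates from a simple
    source distribution, then keep one in proportion to an (unnormalized) target
    function, so the survivor is approximately target-distributed.
    Returns the chosen sample and its unbiased contribution weight."""
    candidates = [source_sample() for _ in range(num_candidates)]
    # Importance weight of each candidate: target value over source density.
    weights = [target(x) / source_pdf(x) for x in candidates]
    total = sum(weights)
    if total == 0.0:
        return None, 0.0

    # Select one candidate proportionally to its weight.
    r, acc = random.random() * total, 0.0
    chosen = None
    for x, w in zip(candidates, weights):
        acc += w
        if w > 0.0 and r < acc:
            chosen = x
            break
    if chosen is None:  # floating-point edge case: fall back to the heaviest candidate
        chosen = candidates[weights.index(max(weights))]

    # Unbiased contribution weight: mean candidate weight divided by the
    # target value at the chosen sample.
    return chosen, (total / num_candidates) / target(chosen)

# Toy check: estimate the integral of f(x) = x^2 on [0, 1] (exact value 1/3),
# resampling toward a target proportional to f itself.
f = lambda x: x * x
samples = [resampled_importance_sampling(f, random.random, lambda _: 1.0) for _ in range(20000)]
estimate = sum(f(x) * w for x, w in samples if x is not None) / len(samples)
print(estimate)  # should be close to 0.3333
```

In a renderer, the candidates would be light samples or path vertices, the target function would approximate their contribution to the pixel, and the surviving samples could be reused across pixels and frames.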

Other work in this area includes a new sampling strategy for inverse volume rendering, a novel mathematical representation for 2D shape manipulation, software for creating samplers with improved uniformity for rendering and other applications, and a way to turn biased rendering algorithms into more efficient unbiased ones.

Neural Rendering: NeRFs, GANs Power Synthetic Scenes

The researchers will present StyleGAN-NADA, a model that generates 2D images in specific styles based on a user’s text prompts, without requiring example images for reference.
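For context, StyleGAN-NADA steers a pretrained generator with a loss computed in CLIP's joint image-text embedding space: the change from the source prompt to the target prompt should match the change from the original image to the generated one. A rough Python sketch of that directional loss, using random stand-in embeddings rather than real CLIP encoders, is below; the actual method fine-tunes the generator's weights against the real encoders.

```python
import numpy as np

def directional_clip_loss(src_img_emb, gen_img_emb, src_txt_emb, tgt_txt_emb):
    """Directional loss in a joint image/text embedding space: the change from
    the source image to the generated image should align with the change from
    the source text prompt to the target text prompt.
    All inputs are assumed to be 1D embedding vectors (e.g. from CLIP)."""
    delta_img = gen_img_emb - src_img_emb   # how the image actually moved
    delta_txt = tgt_txt_emb - src_txt_emb   # how the prompts asked it to move
    cos = np.dot(delta_img, delta_txt) / (
        np.linalg.norm(delta_img) * np.linalg.norm(delta_txt) + 1e-8)
    return 1.0 - cos  # 0 when the two directions are perfectly aligned

# Toy usage with random stand-in embeddings (real ones would come from CLIP's
# image and text encoders; the generator is optimized to minimize this loss).
rng = np.random.default_rng(0)
emb = lambda: rng.normal(size=512)
print(directional_clip_loss(emb(), emb(), emb(), emb()))
```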

As for 3D, the scientists are developing tools that can support the creation of large-scale virtual worlds. NVIDIA's paper behind the popular Instant NeRF tool will also be presented at SIGGRAPH. 
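Instant NeRF's speed comes largely from the paper's multiresolution hash encoding: instead of feeding raw coordinates to a large network, each 3D point looks up small trainable feature vectors from hashed voxel grids at several resolutions, and a tiny MLP does the rest. The simplified NumPy sketch below illustrates only the lookup step; the real implementation trains the tables jointly with the MLP in fused CUDA kernels.

```python
import numpy as np

def hash_coords(ix, iy, iz, table_size):
    """Spatial hash of integer 3D grid coordinates into a feature-table index
    (XOR of coordinates times large primes, in the spirit of Instant NGP)."""
    h = (ix * 1) ^ (iy * 2654435761) ^ (iz * 805459861)
    return h % table_size

def encode(point, tables, base_res=16, growth=1.5):
    """Multiresolution hash encoding of a 3D point in [0, 1]^3: at each level,
    trilinearly interpolate feature vectors stored at the hashed corners of the
    surrounding voxel, then concatenate the results across levels."""
    features = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        scaled = point * res
        base = np.floor(scaled).astype(np.int64)
        frac = scaled - base
        feat = np.zeros(table.shape[1])
        for corner in range(8):  # 8 corners of the enclosing voxel
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            weight = np.prod(np.where(offset, frac, 1.0 - frac))
            idx = hash_coords(*(base + offset).tolist(), table.shape[0])
            feat += weight * table[idx]
        features.append(feat)
    return np.concatenate(features)

# Toy usage: 4 levels, 2^14-entry tables, 2 features per entry. In training,
# the tables are optimized jointly with a small MLP that maps this encoding
# to density and color for volume rendering.
rng = np.random.default_rng(0)
tables = [rng.normal(scale=1e-4, size=(2**14, 2)) for _ in range(4)]
print(encode(np.array([0.3, 0.7, 0.1]), tables).shape)  # (8,)
```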

Another paper to be presented compresses 3D neural graphics primitives, which can help users store and share 3D maps and entertainment experiences across small devices like phones and robots.

SIGGRAPH 2022 will take place on August 8-11. You can learn more about NVIDIA's research on its website.
