Real-Time Noise Filtering For Light Simulations
Take a look at an advanced filter that takes 1-sample-per-pixel path-traced input and reconstructs a temporally stable 1920×1080 image in just 10 ms. Insets compare the input, the filtered result, and a reference in two regions, and show the impact of filtered global illumination over direct illumination alone. The input is noisy, but the reconstructed image recovers glossy reflections, global illumination, and direct soft shadows.
The technique was introduced in the paper “Spatiotemporal Variance-Guided Filtering: Real-Time Reconstruction for Path-Traced Global Illumination” by Christoph Schied, Anton Kaplanyan, Chris Wyman, Anjul Patney, Chakravarty R. Alla Chaitanya, John Burgess, Shiqiu Liu, Carsten Dachsbacher, Aaron Lefohn and Marco Salvi. Check out a quick overview from the Two Minute Papers channel to get started:
Abstract
We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one path per pixel of global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter. This hierarchy allows us to distinguish between noise and detail at multiple scales using local luminance variance.
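To make the filtering step more concrete, here is a minimal sketch of a single variance-guided à-trous wavelet iteration in Python/NumPy. This is not the paper's implementation: the function name, the parameter sigma_l, and the exact weighting are simplified assumptions, and the depth- and normal-based edge-stopping terms the paper also uses are omitted for brevity. It only illustrates the core idea of weighting neighbors by luminance differences scaled by the local standard deviation.

```python
# Simplified sketch of one a-trous wavelet iteration guided by luminance
# variance (assumed parameters; depth/normal edge-stopping weights omitted).
import numpy as np

def atrous_pass(color, variance, step, sigma_l=4.0, eps=1e-8):
    """One edge-aware a-trous iteration.

    color    -- (H, W, 3) noisy color estimate
    variance -- (H, W) per-pixel luminance variance estimate
    step     -- hole size of the a-trous kernel (1, 2, 4, ...)
    """
    h, w, _ = color.shape
    # 5-tap B3-spline kernel applied separably -> 5x5 weights
    b3 = np.array([1/16, 1/4, 3/8, 1/4, 1/16])
    kernel = np.outer(b3, b3)

    lum = color @ np.array([0.2126, 0.7152, 0.0722])  # per-pixel luminance
    std = np.sqrt(np.maximum(variance, 0.0))

    out_color = np.zeros_like(color)
    out_var = np.zeros_like(variance)
    weight_sum = np.zeros((h, w))

    for dy in range(-2, 3):
        for dx in range(-2, 3):
            k = kernel[dy + 2, dx + 2]
            # Gather the neighbor at offset (dy*step, dx*step), clamped at edges.
            sy = np.clip(np.arange(h) + dy * step, 0, h - 1)
            sx = np.clip(np.arange(w) + dx * step, 0, w - 1)
            c_q = color[sy][:, sx]
            v_q = variance[sy][:, sx]
            l_q = lum[sy][:, sx]
            # Luminance edge-stopping weight, scaled by the local std. deviation,
            # so strongly varying (noisy) regions are smoothed more aggressively.
            w_l = np.exp(-np.abs(lum - l_q) / (sigma_l * std + eps))
            w_pq = k * w_l
            out_color += c_q * w_pq[..., None]
            out_var += v_q * w_pq**2       # variance of a weighted average
            weight_sum += w_pq
    out_color /= weight_sum[..., None]
    out_var /= weight_sum**2
    return out_color, out_var
```

Several such iterations with doubling step sizes form the wavelet hierarchy described in the abstract; the filtered variance from one pass can drive the weights of the next.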
Physically based light transport is a long-standing goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically based Monte Carlo global illumination does not meet their 30 Hz minimal performance requirement. Looking ahead to fully dynamic real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real time. Compared to prior interactive reconstruction filters, our work gives approximately 10x more temporally stable results, matches reference images 5–47% better (according to SSIM), and runs in just 10 ms (±15%) on modern graphics hardware at 1920×1080 resolution.