
SSRTGI: Toughest Challenge in Real-Time 3D

UNIGINE’s Davyd Vidiger talked about the way Screen Space Ray-Traced Global Illumination is used to improve the image quality in real-time graphics.

My name is Davyd Vidiger, and I have been working as a Lead 3D Artist at UNIGINE for several years. Today I am going to tell you how UNIGINE managed to improve image quality in real-time 3D graphics using Screen Space Ray-Traced Global Illumination, what challenges we faced, and why this technology is so important.

Introduction

Everyone who has worked with real-time 3D graphics knows the dilemma: choose either realistic static (baked) lighting or less realistic dynamic lighting. We faced this dilemma while working on the Superposition Benchmark project. Unlike typical real-time applications, we have over 900 dynamic objects in a single room. All of these objects can be moved freely, which significantly affects lighting. This scene exposed a set of problems typical of modern 3D engines.

Here are the challenges we faced:

1. With baked lighting we can’t move dynamic objects

Suppose we pre-calculate lightmaps for each object offline, using ray tracing, and then bake them into textures. We would surely get completely realistic and convincing lighting, but all objects would have to remain static – we couldn't move them. Apart from looking plain and boring in VR, this would be considered a technologically outdated solution.

2. Mixing dynamic and static lighting gives us an inconsistent image

It's possible to bake lighting for static geometry (lightmaps or static AO), but this baked lighting won't affect dynamic objects. As a result, we can end up with overly bright objects sitting on dark shelves. Even if we try to improve the scene by adjusting the content – inserting a dark environment probe into each bookcase – it still won't look realistic. The fakeness of real-time graphics will still be evident.

3. Using environment probes gives sharp transitions of ambient lighting on adjacent surfaces with different normals

We used environment probes for ambient lighting, applying them to objects' surfaces according to the following scheme: if a surface normal points to the left, we use the lighting baked from the left side; if it points to the right, we use what was baked from the right. This is a classic method, widely used for ambient lighting in real-time rendering. However, it produces sharp transitions of ambient lighting on adjacent surfaces with different normals, which looks totally unnatural.
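To make the scheme concrete, here is a minimal sketch of such a direction-based probe lookup, often called an ambient cube. The types and data layout are hypothetical, not UNIGINE's actual implementation:

```cpp
#include <array>
#include <cstdio>

// Minimal sketch of the probe lookup scheme described above: ambient
// lighting baked for six axis directions and blended by the surface
// normal (an "ambient cube"). Hypothetical types, not UNIGINE code.
struct Vec3 { float x, y, z; };

// Hypothetical baked probe: one RGB color per axis direction
// (+X, -X, +Y, -Y, +Z, -Z).
using AmbientCube = std::array<Vec3, 6>;

Vec3 sampleAmbient(const AmbientCube& cube, Vec3 n) {
    // Squared normal components sum to 1 for a unit normal, giving a
    // valid blend of the three faces the normal points towards.
    float wx = n.x * n.x, wy = n.y * n.y, wz = n.z * n.z;
    const Vec3& cx = n.x >= 0 ? cube[0] : cube[1];
    const Vec3& cy = n.y >= 0 ? cube[2] : cube[3];
    const Vec3& cz = n.z >= 0 ? cube[4] : cube[5];
    return { cx.x * wx + cy.x * wy + cz.x * wz,
             cx.y * wx + cy.y * wy + cz.y * wz,
             cx.z * wx + cy.z * wy + cz.z * wz };
}

int main() {
    AmbientCube cube = {{ {1, 0, 0}, {0, 1, 0}, {0, 0, 1},
                          {1, 1, 0}, {0, 1, 1}, {1, 0, 1} }};
    // Two adjacent faces with different normals receive visibly
    // different ambient colors: the sharp transition described above.
    Vec3 left = sampleAmbient(cube, {1, 0, 0});
    Vec3 top  = sampleAmbient(cube, {0, 0, 1});
    std::printf("left face: %.2f %.2f %.2f\n", left.x, left.y, left.z);
    std::printf("top face:  %.2f %.2f %.2f\n", top.x, top.y, top.z);
}
```

Since each surface picks its ambient color purely from its own normal, two faces meeting at an edge jump between probe sides with no gradient in between, which is exactly the artifact described above.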

4. Providing equally high-quality lighting for both medium shots and close-ups is a real challenge

Close-ups are still one of the most difficult challenges in real-time 3D graphics. It is in close-ups that people notice even the slightest imperfection, whether in the content or in the renderer. The reason is that we deal with close-up views of human-sized objects every day, so when the camera is near an object, deviations from reality such as low-detailed content or unnatural shadows and reflections become far more noticeable than they are at a distance. Extreme detail is one of the most important requirements for close-ups, but at the same time it is the main problem.

Why don't offline renderers suffer from these problems?

If we compare a ray-tracing renderer with a real-time one, we'll see that the difference between the two lies in the lighting, which is what makes a scene realistic and plausible. Diffuse lighting and specular reflections are best served by ray tracing, as it mimics the behavior of photons bouncing off surfaces until they hit the eye. That's natural: it's how we see the world around us.

Shadows are areas reached by few photons. A sheet of paper that looks greenish when lying next to a bright green wall does so because photons reflect off the wall before hitting the sheet (the Global Illumination effect). Multiple photon bounces between surfaces provide the realistic smoothness of lighting.

Tracing algorithms are quite resource-intensive, and even the most powerful of the currently available GPUs cannot properly process a Full HD picture at 60+ FPS without visible noise. Calculating noiseless lighting for a very dark scene containing a small bright light source (e.g. light shining through a narrow slit in a doorway into a dark room) takes an extremely long time, since only a few of the traced rays ever reach the light.

Research and implementations

While thinking about the problem of incorrect ambient lighting, I came up with an idea: if it is possible to bake static AO into a texture with smooth transitions between faces, why not try to pre-calculate and bake normals the same way? I was in touch with some people from the 3D-Coat team, so I asked one of their programmers to bake such normal maps according to an algorithm I had devised. The result was fantastic.


Support for such a normal map indeed made some objects look more realistic to a certain extent. However, it was of almost no use for the scene as a whole, because a normal map of this kind works only for a single separate object (exactly like ambient occlusion baked into a texture). We couldn't bake a single map for all objects, since the objects had to remain dynamic. So this sort of implementation was no good for us, but it pointed us toward the underlying idea of such normals. After searching the web for a while, we discovered that a similar technology already existed, but no one really knew how to use it or what its advantages were; this was clear from the questions asked on various forums. The technology was called "Bent Normals": a bent normal points along the average unoccluded direction above a surface rather than along its geometric normal. It wasn't really popular due to the lack of appropriate baking software, so we had to make some modifications, which subsequently led us to the creation of our own technology.

Some time later, when we were almost done creating the content for our project, we faced the challenge of insufficiently realistic lighting again: objects in bookcases were still too bright, and when zooming in, we could see sharp transitions of ambient lighting between dynamic objects. So we made another attempt to improve the lighting.

Our R&D team came up with the following idea: if the popular Screen Space Reflections technique is so good at dealing with unnecessary reflections from environment probes, why not try to calculate SSAO using a similar algorithm? We tried, and got an amazing result.

We instantly realized that everything we had before did not look like realistic lighting at all. We used the same approach to implement secondary diffuse light bounces, self-illumination of emissive objects, and Bent Normals again, but this time without baking, since everything was calculated in real time. Surprisingly, the whole thing worked perfectly. We called this technology Screen Space Ray-Traced Global Illumination (SSRTGI).


Ordinary screen-space effects used to simulate global illumination, such as SSAO and SSGI, do not treat objects as obstacles for light rays. Roughly speaking, light passes straight through objects, which is why all the popular approaches are unable to provide realistic lighting and offer just a rough imitation of the interaction between photons and objects. Our approach is true screen-space ray tracing that takes obstacles into account.

Technical details:

  1. Basic settings

a. Number of rays. The higher the value, the more rays are traced for the effect, giving a smoother picture;

b. Number of steps. The higher the value, the farther the tracing is performed, resulting in wider radial gradients;

c. Step size. The lower the value, the more precisely obstacles are detected, resulting in smoother gradients on small objects (see the sketch below);
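Here is a minimal sketch of how these three settings could drive a single ray marched through the depth buffer; the names and the toy depth buffer are assumptions for illustration, not engine code. The depth comparison at each step is the obstacle test that ordinary screen-space effects skip:

```cpp
#include <cstdio>
#include <vector>

struct DepthBuffer {
    int width, height;
    std::vector<float> depth;  // linear view-space depth per pixel
    float at(int x, int y) const { return depth[y * width + x]; }
};

// Marches a ray starting at pixel (x, y) with screen-space direction
// (dx, dy) and depth slope dz. If the depth stored in the buffer is
// closer than the ray's own depth, the ray has hit an obstacle.
bool traceOccluded(const DepthBuffer& db, float x, float y, float z,
                   float dx, float dy, float dz,
                   int numSteps, float stepSize) {
    for (int i = 0; i < numSteps; ++i) {
        x += dx * stepSize;
        y += dy * stepSize;
        z += dz * stepSize;
        int px = (int)x, py = (int)y;
        if (px < 0 || py < 0 || px >= db.width || py >= db.height)
            return false;       // ray left the screen: no hit found
        if (db.at(px, py) < z)  // scene geometry is in front of the ray
            return true;        // -> obstacle hit
    }
    return false;               // trace distance exhausted without a hit
}

int main() {
    // Toy 8x1 depth buffer with a near obstacle at x = 5.
    DepthBuffer db{8, 1, {10, 10, 10, 10, 10, 2, 10, 10}};
    bool hit = traceOccluded(db, 0, 0, 5, 1, 0, 0,
                             /*numSteps=*/8, /*stepSize=*/1.0f);
    std::printf("ray hit obstacle: %s\n", hit ? "yes" : "no");
}
```

More rays per pixel smooth the result, more steps let rays travel farther, and a smaller step size makes it less likely that a thin obstacle falls between two consecutive samples.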

2. Tracing

We used blue noise to sample the directions of the rays to be traced. Comparing the standard Photoshop noise pattern to blue noise, we can see that the picture becomes less noisy while the number of samples stays the same.
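The article does not say how the blue-noise pattern was generated; one classic way to approximate blue noise is Mitchell's best-candidate algorithm, sketched below under that assumption. Each new sample is placed as far as possible from the samples accepted so far:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>
#include <vector>

struct P { float x, y; };

// Mitchell's best-candidate algorithm: for every new sample, draw
// several random candidates and keep the one farthest from its nearest
// accepted neighbor. The result approximates a blue-noise distribution.
std::vector<P> bestCandidate(int count, int candidatesPerPoint, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    std::vector<P> points;
    while ((int)points.size() < count) {
        P best{};
        float bestDist = -1.0f;
        for (int c = 0; c < candidatesPerPoint; ++c) {
            P cand{uni(rng), uni(rng)};
            // Squared distance from the candidate to its nearest point.
            float nearest = 1e30f;
            for (const P& p : points) {
                float dx = cand.x - p.x, dy = cand.y - p.y;
                nearest = std::min(nearest, dx * dx + dy * dy);
            }
            if (nearest > bestDist) {  // farthest-from-everything wins
                bestDist = nearest;
                best = cand;
            }
        }
        points.push_back(best);
    }
    return points;
}

int main() {
    // 8 well-spread sample positions in the unit square.
    for (const P& p : bestCandidate(8, 32, 42))
        std::printf("%.3f %.3f\n", p.x, p.y);
}
```

The even spacing is what reduces the visible noise: no region of the sampling domain is hit twice while another is left empty.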

3. Ray tracing gives us the following information (see the sketch after this list):

  1. The number of rays that hit obstacles, used for the Ambient Occlusion calculation;

  2. The arithmetic mean of the vectors that didn't hit any obstacles, used for the Bent Normals calculation;

  3. The final color of the picture (without post-effects), used to determine the secondary bounce color and the self-illumination of objects.
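As a minimal sketch of how these three outputs can fall out of a single tracing pass (the structures and names are assumptions for illustration): each ray either hits an obstacle and contributes occlusion plus the screen color at the hit point, or escapes and contributes its direction to the bent normal:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct RayResult {
    bool hit;
    Vec3 dir;       // traced direction
    Vec3 hitColor;  // screen color at the hit point (before post-effects)
};

struct GIOutputs {
    float ao;         // fraction of rays that hit obstacles
    Vec3 bentNormal;  // mean unoccluded direction
    Vec3 bounce;      // mean color gathered from hit points
};

GIOutputs resolve(const std::vector<RayResult>& rays) {
    GIOutputs out{0.0f, {0, 0, 0}, {0, 0, 0}};
    int hits = 0, misses = 0;
    for (const RayResult& r : rays) {
        if (r.hit) {  // occluded: contributes AO and bounce color
            ++hits;
            out.bounce.x += r.hitColor.x;
            out.bounce.y += r.hitColor.y;
            out.bounce.z += r.hitColor.z;
        } else {      // unoccluded: contributes to the bent normal
            ++misses;
            out.bentNormal.x += r.dir.x;
            out.bentNormal.y += r.dir.y;
            out.bentNormal.z += r.dir.z;
        }
    }
    out.ao = rays.empty() ? 0.0f : (float)hits / (float)rays.size();
    if (misses > 0) {  // normalize the mean unoccluded direction
        float len = std::sqrt(out.bentNormal.x * out.bentNormal.x +
                              out.bentNormal.y * out.bentNormal.y +
                              out.bentNormal.z * out.bentNormal.z);
        out.bentNormal = {out.bentNormal.x / len, out.bentNormal.y / len,
                          out.bentNormal.z / len};
    }
    if (hits > 0)
        out.bounce = {out.bounce.x / hits, out.bounce.y / hits,
                      out.bounce.z / hits};
    return out;
}

int main() {
    std::vector<RayResult> rays = {
        {true,  {1, 0, 0}, {0, 1, 0}},  // hit a green wall
        {false, {0, 0, 1}, {0, 0, 0}},  // escaped straight up
        {false, {0, 1, 1}, {0, 0, 0}},  // escaped up and to the side
    };
    GIOutputs gi = resolve(rays);
    std::printf("ao=%.2f bent=(%.2f %.2f %.2f) bounce=(%.2f %.2f %.2f)\n",
                gi.ao, gi.bentNormal.x, gi.bentNormal.y, gi.bentNormal.z,
                gi.bounce.x, gi.bounce.y, gi.bounce.z);
}
```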

4. Denoising

This post-processing step removes the noise that appears when the number of samples is insufficient. It is especially important for self-luminous objects, since they produce very bright individual pixels that immediately catch the eye (see the sketch below).
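The article does not describe the filter itself, so purely as an illustration of what this step has to do, here is a minimal firefly-suppression sketch: a pixel much brighter than its 3x3 neighborhood is clamped toward the neighborhood mean (the names and the single channel are assumptions):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> v;  // single channel for brevity
    float at(int x, int y) const {
        x = std::clamp(x, 0, w - 1);  // clamp reads to the edges
        y = std::clamp(y, 0, h - 1);
        return v[y * w + x];
    }
};

// Clamps any pixel that is more than maxRatio times brighter than the
// average of its 3x3 neighborhood -- the isolated bright pixels
// ("fireflies") produced by a low sample count.
Image suppressFireflies(const Image& in, float maxRatio) {
    Image out = in;
    for (int y = 0; y < in.h; ++y)
        for (int x = 0; x < in.w; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (dx != 0 || dy != 0)
                        sum += in.at(x + dx, y + dy);
            float mean = sum / 8.0f;
            out.v[y * in.w + x] =
                std::min(in.at(x, y), mean * maxRatio + 1e-4f);
        }
    return out;
}

int main() {
    // 3x3 image with one firefly in the middle.
    Image img{3, 3, {0.1f, 0.1f, 0.1f,
                     0.1f, 5.0f, 0.1f,
                     0.1f, 0.1f, 0.1f}};
    Image filtered = suppressFireflies(img, 4.0f);
    std::printf("center after filtering: %.2f\n", filtered.v[4]);  // 0.40
}
```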

5. Upscaling

SSRTGI can be rendered at full, half, or quarter resolution. Reducing the resolution gives a noticeable performance gain, but at the cost of a drastic drop in image quality: the main problem is that the edges of objects become blurred at low resolution. To make the contours sharper, we take depth and normals at both the lower and the full resolution, compare them, and restore the non-matching pixels.
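A minimal sketch of such a depth-aware upsample is shown below; the real pass compares normals as well, but depth alone keeps the sketch short, and all names and the 2x factor are assumptions:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Buf {
    int w, h;
    std::vector<float> v;
    float at(int x, int y) const { return v[y * w + x]; }
};

// For a full-resolution pixel, pick the half-resolution GI sample whose
// depth matches the full-resolution depth best, so GI from a foreground
// object does not bleed across an edge onto the background.
float upsample(const Buf& loGI, const Buf& loDepth, const Buf& hiDepth,
               int hx, int hy) {
    float targetDepth = hiDepth.at(hx, hy);
    int lx = hx / 2, ly = hy / 2;  // assumed 2x upscale
    float best = loGI.at(lx, ly);
    float bestErr = std::fabs(loDepth.at(lx, ly) - targetDepth);
    const int off[4][2] = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
    for (auto& o : off) {  // check neighbors for a better depth match
        int nx = lx + o[0], ny = ly + o[1];
        if (nx < 0 || ny < 0 || nx >= loGI.w || ny >= loGI.h)
            continue;
        float err = std::fabs(loDepth.at(nx, ny) - targetDepth);
        if (err < bestErr) { bestErr = err; best = loGI.at(nx, ny); }
    }
    return best;
}

int main() {
    // 2x1 low-res GI: bright foreground cell (depth 1) next to a dark
    // background cell (depth 10).
    Buf loGI{2, 1, {1.0f, 0.2f}};
    Buf loDepth{2, 1, {1.0f, 10.0f}};
    // Full-res pixel 1 is background (depth 10) but falls into low-res
    // cell 0 (foreground); the filter restores it from the matching
    // neighbor instead.
    Buf hiDepth{4, 1, {1, 10, 10, 10}};
    std::printf("GI at the edge pixel: %.2f\n",
                upsample(loGI, loDepth, hiDepth, 1, 0));  // prints 0.20
}
```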

6. Temporal filter

Every frame we change the noise pattern used for ray sampling, then reproject the previous frame using the velocity buffer and blend the result with the current frame.
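A minimal sketch of this temporal filter, with assumed names and blend factor:

```cpp
#include <cstdio>
#include <vector>

struct Frame {
    int w, h;
    std::vector<float> v;
    float at(int x, int y) const {
        if (x < 0 || y < 0 || x >= w || y >= h) return 0.0f;
        return v[y * w + x];
    }
};

// Fetches the previous frame's value at the position this pixel occupied
// last frame (reprojection via the velocity buffer) and blends it with
// the current, noisier frame.
float temporalResolve(const Frame& history, const Frame& current,
                      int x, int y, int velX, int velY, float blend) {
    float prev = history.at(x - velX, y - velY);  // reprojected history
    float cur = current.at(x, y);
    // Exponential moving average: most of the result comes from history,
    // so the per-frame noise pattern averages out over time.
    return prev * (1.0f - blend) + cur * blend;
}

int main() {
    Frame history{4, 1, {0.5f, 0.5f, 0.5f, 0.5f}};
    Frame current{4, 1, {0.9f, 0.1f, 0.9f, 0.1f}};  // noisy new frame
    // Pixel 2 moved one pixel to the right since the last frame.
    float out = temporalResolve(history, current, 2, 0, 1, 0, 0.1f);
    std::printf("resolved: %.2f\n", out);  // stays close to history: 0.54
}
```

Because the noise pattern rotates every frame, the exponential average over reprojected history converges toward the noise-free result even while the camera and objects move.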

Practical application

Today, SSAO, HBAO, and SSDO cannot reach the level of image quality and realism offered by SSRTGI, even at increased resolution. However, using SSRTGI at its full capacity requires more advanced hardware. The key point is that you can configure it either to take more time and produce a more realistic image, or to run faster and produce a result that does not differ much visually from those techniques. The flexibility of the SSRTGI settings lets you adjust the balance between quality and performance.

Using a GTX 1070, we configured SSRTGI with the following parameters:

low = rays 4, steps 2, step size 2

medium = rays 16, steps 4, step size 2

high = rays 32, steps 8, step size 1

 

Effect cost per frame (ms) on the GTX 1070:

Resolution    low     medium    high
1280×720      0.3     0.6       1.3
1920×1080     0.85    1.3       3.2
2560×1440     1.47    3.0       6.2
3840×2160     3.4     7.8       16.5

At the highest presets of the Superposition Benchmark we ran SSRTGI at its full capacity and turned off baked lighting even for static objects. As a result, static objects look consistent with the dynamic ones.

Significance of the technology

With SSRTGI it is possible to make real-time lighting consistent and realistic. It is a powerful tool in the hands of an experienced artist, offering an exceptional level of flexibility. A realistic bounce of warm light (we used a lamp in a dark room to highlight the effect) adds realism to the whole scene and serves as a good illustration of what this technology is capable of.

Summary

  • Working per pixel, SSRTGI gives a consistent and realistic picture at any distance, from close-ups to outdoor locations;

  • SSRTGI is a perfect solution for large open-space worlds, where it is impossible to bake lighting;

  • With this technology, we bridge the gap that exists between real-time and offline renderers regarding image quality;

  • SSRTGI has great potential: it can replace ordinary Screen Space Global Illumination effects at comparable performance and, at higher presets, drastically improve image quality;

  • This effect is already included in the UNIGINE 2 SDK. It is used in released projects based on UNIGINE 2 and will be used in future projects as well;

  • The technology was presented and very well received at the latest SIGGRAPH conference in Los Angeles (the Real-Time Live! show).

You can download the Superposition Benchmark, where we used SSRTGI, from the UNIGINE Benchmark website, completely free. In addition to the desktop mode, it also works perfectly well in VR.

Davyd Vidiger, UNIGINE

Interview conducted by Kirill Tokarev


Comments

  • ArK

    Hi there guys! Amazing work!

    Could you explain a bit the difference between SSRTGI and basic raymarched SSAO? Specifically, is there any difference performance-wise? (http://gautron.pascal.free.fr/publications/I3D2011/i3d2011Poster.pdf)
    Cheers!

  • Anonymous user

    ArK,
    Raymarched SSAO remains AO, while SSRTGI is an approach to achieving GI. The idea is the same as in any other ray-tracing effect (like the SSR already mentioned in the article).

    SSRTGI is basically a combination of 3 effects (SSAO, Bent Normals, SSGI) calculated simultaneously.

  • 000.graphics

    Fantastic, is there a paper coming soon?

  • Anonymous user

    000.graphics,
    I'm afraid not :(

  • Patapom

    Great result! (But this is indirect lighting rather than global illumination :D)
