River Editor: Water Simulation in Real-Time

We’re extremely happy to present an astonishing study of water systems and real-time water challenges by Jean-Philippe Grenier. Don’t forget to check out the artist’s website and follow him on Twitter.

INTRO

I’ve always been interested in water in games. I played From Dust a lot when it came out, and it fascinated me. More recently, I was also very impressed with the mud and water systems of Spintires: MudRunner and Uncharted 4. So I decided to try to implement a small water system to better understand the different challenges real-time water represents.

WAVE PARTICLES

I was watching Carlos Gonzalez’s SIGGRAPH talk Rendering Rapids in Uncharted 4, and something really caught my attention: a small demo of an idea called Wave Particles.

Image Credit: Carlos Gonzalez-Ochoa (Naughty Dog), Rendering Rapids in Uncharted 4

I was amazed at the complexity that could arise from this simple idea of stacking simple moving wave particles on top of each other and had to try it for myself. I set up a small D3D12 app, and even without any lighting, it was obvious that a very intuitive and fun-to-use system could be built around this:
Rendering of water surface using wave particles
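
For illustration, the accumulation step can be as simple as summing a radial bump per particle into a height texture. Below is a minimal compute-shader sketch of that idea; the particle layout, resource names, and the cosine falloff kernel are my own assumptions rather than the code used here.

// Hypothetical sketch: accumulate wave particles into a height texture.
// The particle layout, resource names and the cosine bump are assumptions.
struct WaveParticle {
  float2 Position;  // position on the 2D water plane
  float  Amplitude; // height contribution at the particle center
  float  Radius;    // influence radius
};

StructuredBuffer<WaveParticle> Particles : register(t0);
RWTexture2D<float>             HeightMap : register(u0);

cbuffer SplatConstants : register(b0) {
  float2 GridOrigin; // world-space position of texel (0,0)
  float  TexelSize;  // world-space size of one texel
  uint   ParticleCount;
};

[numthreads(8, 8, 1)]
void SplatWaveParticles(uint3 id : SV_DispatchThreadID) {
  float2 worldPos = GridOrigin + (float2(id.xy) + 0.5) * TexelSize;

  float height = 0.0;
  for (uint i = 0; i < ParticleCount; ++i) {
    WaveParticle p = Particles[i];
    float d = length(worldPos - p.Position);
    if (d < p.Radius) {
      // Smooth radial bump: half a cosine period, falling to zero at the radius.
      height += p.Amplitude * 0.5 * (1.0 + cos(3.14159265 * d / p.Radius));
    }
  }
  HeightMap[id.xy] = height;
}

A real implementation would typically splat the particles with additive blending rather than loop over every particle per texel; the brute-force loop just keeps the sketch short.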

One thing I tried early on was to combine the idea of Wave Particles with the Wave Packets presented by Stefan Jeschke at SIGGRAPH. A Wave Packet, roughly speaking, is similar to a wave particle, but it carries multiple wavefronts with it:

A single Wave Packet

The hope was that these complex particles could capture subtle water surface behaviors that would not be possible with simple wave particles. While it was an interesting idea, I did not have much luck with them, mainly because it was difficult to gather the longitudinal forces of the waves, but also because they prevented the use of an important optimization that plain wave particles offer. So I abandoned this effort early and stuck to plain wave particles:

Rendering Wave Particles with a simple lighting model

Once I had the lighting and modeling in a decent state, I turned my attention to generating an interesting vector field which could drive the motion of these waves.

VECTOR FIELD

I first looked at the distance field approach popularized by Alex Vlachos in the Portal 2 SIGGRAPH presentation. Unfortunately, this approach cannot create interesting vortices, which was a property I was hoping to have.

Image Credit: Alex Vlachos (Valve), Water Flow in Portal 2
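
For reference, the core of that flow-map technique usually boils down to scrolling a detail texture along the flow direction in two phases, half a cycle apart, and blending between them to hide each reset. A minimal pixel-shader sketch follows; texture names and parameters are placeholders, not Valve’s code.

// Hypothetical flow-map sampling in the spirit of the Portal 2 talk.
// The flow map itself would be authored from a distance field to the shoreline.
Texture2D    FlowMap    : register(t0); // RG = 2D flow direction, remapped to [-1, 1] below
Texture2D    NormalMap  : register(t1);
SamplerState LinearWrap : register(s0);

float3 SampleFlowedNormal(float2 uv, float time, float flowSpeed) {
  float2 flow = FlowMap.Sample(LinearWrap, uv).rg * 2.0 - 1.0;

  // Two phases, half a cycle apart, so one sample is always far from its reset.
  float phase0 = frac(time * flowSpeed);
  float phase1 = frac(time * flowSpeed + 0.5);

  float3 n0 = NormalMap.Sample(LinearWrap, uv - flow * phase0).xyz * 2.0 - 1.0;
  float3 n1 = NormalMap.Sample(LinearWrap, uv - flow * phase1).xyz * 2.0 - 1.0;

  // Weight goes to 1 (use n1) exactly when phase0 snaps back, and vice versa.
  float w = abs(1.0 - 2.0 * phase0);
  return normalize(lerp(n0, n1, w));
}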
 

I also looked at Jos Stam’s Stable Fluids method. Since this is an actual fluid solver, it captures some interesting fluid properties:

Stable Fluid Solver
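
As a reminder of what that solver does, the advection step alone can be written as a simple semi-Lagrangian backtrace; the full method also diffuses and pressure-projects the field to keep it divergence-free. The sketch below only shows the advection pass, and the resource names, units, and dissipation factor are my own assumptions, not the solver shown above.

// Minimal semi-Lagrangian advection step in the spirit of Stam's Stable Fluids.
// Velocity is assumed to be stored in texels per second.
Texture2D<float2>   SrcVelocity : register(t0);
RWTexture2D<float2> DstVelocity : register(u0);
SamplerState        LinearClamp : register(s0);

cbuffer AdvectConstants : register(b0) {
  float2 InvGridSize; // 1 / simulation resolution
  float  DeltaTime;
  float  Dissipation; // slight damping, e.g. 0.99
};

[numthreads(8, 8, 1)]
void Advect(uint3 id : SV_DispatchThreadID) {
  float2 uv = (float2(id.xy) + 0.5) * InvGridSize;

  // Trace backwards along the velocity and sample where the fluid came from.
  float2 vel    = SrcVelocity[id.xy];
  float2 prevUV = uv - vel * DeltaTime * InvGridSize;

  DstVelocity[id.xy] = Dissipation * SrcVelocity.SampleLevel(LinearClamp, prevUV, 0);
}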

But then I found out about Lattice Boltzmann Methods (LBM). The more I looked into them, the more interesting properties they revealed. They seemed straightforward to parallelize and port to the GPU, they captured vorticity effects really well, and, on top of that, they could be used to solve the Shallow Water Equations. Some configurations even appeared to converge to a stable state, which seemed like a desirable property if the vector field were to be stored offline. For more information on how to solve the Shallow Water Equations using the LBM, I recommend this fantastic book by Dr. Jian Guo Zhou.
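
To give an idea of the structure, a single update of a D2Q9 shallow-water LBM can be written as one stream-and-collide compute pass between the two ping/pong buffers. The sketch below follows the general formulation from Zhou’s book; the resource layout, names, and the purely periodic boundary handling are my own simplifications rather than the implementation discussed here.

// Hypothetical D2Q9 stream-and-collide pass for the shallow water LBM.
// Slice i of the texture arrays stores distribution function f_i.
Texture2DArray<float>   SrcF : register(t0); // previous step (ping)
RWTexture2DArray<float> DstF : register(u0); // next step (pong)

cbuffer LbmConstants : register(b0) {
  float Tau;      // relaxation time (> 0.5 for stability)
  float Gravity;  // g
  float E;        // lattice speed dx/dt
  uint2 GridSize;
};

// Lattice directions: 0 = rest, 1-4 = axis-aligned, 5-8 = diagonal.
static const int2 c[9] = {
  int2( 0, 0), int2( 1, 0), int2( 0, 1), int2(-1, 0), int2( 0,-1),
  int2( 1, 1), int2(-1, 1), int2(-1,-1), int2( 1,-1)
};

// Local equilibrium for the shallow water equations (Zhou's formulation).
float Feq(uint i, float h, float2 u, float g, float e) {
  float uu = dot(u, u);
  float e2 = e * e;
  if (i == 0)
    return h - 5.0 * g * h * h / (6.0 * e2) - 2.0 * h * uu / (3.0 * e2);

  float eu = e * dot(float2(c[i]), u);
  if (i < 5) // axis-aligned directions
    return g * h * h / (6.0 * e2) + h * eu / (3.0 * e2)
         + h * eu * eu / (2.0 * e2 * e2) - h * uu / (6.0 * e2);

  // diagonal directions
  return g * h * h / (24.0 * e2) + h * eu / (12.0 * e2)
       + h * eu * eu / (8.0 * e2 * e2) - h * uu / (24.0 * e2);
}

[numthreads(8, 8, 1)]
void StreamAndCollide(uint3 id : SV_DispatchThreadID) {
  if (any(id.xy >= GridSize))
    return;

  // Streaming (pull): f_i at this cell comes from the neighbor at -c_i.
  float f[9];
  for (uint i = 0; i < 9; ++i) {
    int2 src = (int2(id.xy) - c[i] + int2(GridSize)) % int2(GridSize); // periodic wrap
    f[i] = SrcF.Load(int4(src, i, 0));
  }

  // Macroscopic quantities: water depth h and velocity u.
  float  h  = 0.0;
  float2 hu = 0.0;
  for (uint j = 0; j < 9; ++j) {
    h  += f[j];
    hu += float2(c[j]) * E * f[j];
  }
  float2 u = hu / max(h, 1e-6);

  // BGK collision: relax toward local equilibrium and write for the next step.
  for (uint k = 0; k < 9; ++k)
    DstF[uint3(id.xy, k)] = f[k] - (f[k] - Feq(k, h, u, Gravity, E)) / Tau;
}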

I always like to start implementing these complex systems on the CPU, since that lets me set breakpoints and inspect the simulated data much more easily than on the GPU. But even at a very low resolution and frame rate, I could see that this method was promising:

CPU LBM Solver, 32px X 64px

I then ported the code to a compute shader, which allowed me to run the simulation at a much higher resolution:

GPU LBM Solver, 512px X 1024px

I was sold on the benefits of the LBM solver very early. The main drawback is the memory required to run it. The method needs to operate on 9 float components (so far, 16F per component seems to be enough). To make it worse, I’m currently using a ping/pong scheme, which doubles the amount of data. For example, running the simulation on a 2D grid that’s 256×128 costs (256×128 pixels) × (9 components) × (16 bits) × (2 for ping/ponging), which comes out to roughly 1.2 MB.

The vector field is then used to advect some data, mostly foam related. At the moment, the foam amount and two sets of UV coordinates are advected (so 5 more float components). We need two sets of UVs because we fetch the foam texture with each set and continually blend back and forth between the two samples as they stretch under the advection.
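
As a concrete (hypothetical) example of that blending, each UV set can be reset to un-advected coordinates on a fixed period, with the two resets staggered by half a period, and the blend weight chosen so that whichever set just snapped back contributes nothing:

// Hypothetical foam sampling with two advected UV sets. Names and the
// triangle-wave weighting are assumptions, not the code used here.
Texture2D    FoamTex    : register(t2);
SamplerState LinearWrap : register(s0);

float SampleAdvectedFoam(float2 advectedUV0, float2 advectedUV1,
                         float time, float blendPeriod, float foamAmount) {
  float foam0 = FoamTex.Sample(LinearWrap, advectedUV0).r;
  float foam1 = FoamTex.Sample(LinearWrap, advectedUV1).r;

  // Set 0 resets at phase 0, set 1 at phase 0.5. The weight of a set is zero
  // at the moment it resets and peaks when the other set resets.
  float phase = frac(time / blendPeriod);
  float w = abs(2.0 * phase - 1.0); // weight of foam1

  return foamAmount * lerp(foam0, foam1, w);
}

Choosing the blend period is exactly the pulse-versus-stretch trade-off illustrated below: a short period hides the stretching but pulses more, a long one stretches more before each reset.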

Pulse VS Stretch. Choosing a good blend period can be difficult.

Advecting the wave heights and normal maps didn’t prove very successful: a very visible and annoying pulsing appeared. While there is a lot of research on this subject, I haven’t seen anything that comes across as a silver bullet. Advected Textures by Fabrice Neyret advects three sets of UV coordinates and chooses, per sample, the set that yields the least distortion. It gives very good results but requires storing and advecting two extra UV sets. Lagrangian Texture Advection by Qizhi Yu advects a small set of particles in the velocity field and then splats out a UV set with very limited stretching. It yields incredible results, but the particle splatting phase is very expensive due to the large amount of overdraw required.

The pulsing could be minimized by using some Perlin noise to add variation, but it was a difficult battle to fight:

Annoying pulsing arising from advecting a heightfield and its normals.

Fortunately, around the same time I was tackling these issues, Stefan Jeschke presented yet another new idea called Wave Profile Buffers at SIGGRAPH 2018. As soon as I saw the presentation, I started reading the paper. One key property of the wave profiles is that they don’t require UV coordinates: to sample them, you only need a world position, a time value, and a wave direction. This means that the wavefronts generated by sampling these wave profiles are spatially and temporally coherent, i.e. no stretching or tearing!

Surface detail added using Wave Profile Buffers yields no visible tearing or pulsing
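
A sketch of what the sampling might look like is shown below. The profile buffer is a small 1D periodic texture rebuilt every frame (which is where time comes in), and the surface displacement is accumulated over a handful of wave directions. The direction count, period, and where the per-direction amplitudes come from are my own assumptions based on the paper, not this project’s code.

// Hypothetical displacement from a Wave Profile Buffer, in the spirit of
// Jeschke et al. 2018. No UVs are advected: the lookup only needs a world
// position and a direction; time is baked into the buffer each frame.
#define NUM_DIRECTIONS 16

Texture1D<float4> ProfileBuffer : register(t3); // periodic displacement profile, rebuilt per frame
SamplerState      LinearWrap    : register(s0);

static const float ProfilePeriod = 20.0; // world-space period the buffer was integrated over

// Per-direction amplitudes would come from an advected amplitude field in the
// paper; here they are simply passed in to keep the sketch self-contained.
float3 SampleWaveProfiles(float2 worldPos, float amplitudes[NUM_DIRECTIONS]) {
  float3 displacement = 0.0;
  for (uint i = 0; i < NUM_DIRECTIONS; ++i) {
    float  angle = (i + 0.5) * 6.2831853 / NUM_DIRECTIONS;
    float2 dir   = float2(cos(angle), sin(angle));

    // Project the world position onto this direction and wrap by the period.
    float  p       = frac(dot(worldPos, dir) / ProfilePeriod);
    float4 profile = ProfileBuffer.SampleLevel(LinearWrap, p, 0);

    // profile.x = horizontal (Gerstner-like) offset along dir, profile.z = vertical offset.
    displacement += amplitudes[i] * float3(profile.x * dir, profile.z);
  }
  return displacement;
}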

A NOTE ON LIGHTING

The lighting of the water is fairly standard. It basically boils down to a specular term for the water surface and a scattering term that simulates the light scattered in the water volume back towards the eye. For the specular term, I simply sample a cube map generated using an atmospheric shader. There’s a really good blog series on this subject written by Alan Zucconi, but the specular term could be anything your pipeline is currently using.
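
A minimal version of that specular term might look like the following, with the cube map reflection weighted by a Schlick Fresnel term using water’s F0 of roughly 0.02 (names are placeholders):

// Reflect the view vector, sample the sky cube map (here assumed to be the one
// generated by the atmospheric shader), and weight by a Schlick Fresnel term.
TextureCube  SkyCube     : register(t5);
SamplerState LinearClamp : register(s1);

float3 WaterSpecular(float3 N, float3 V) { // N = surface normal, V = direction towards the eye
  float3 R   = reflect(-V, N);
  float3 sky = SkyCube.Sample(LinearClamp, R).rgb;

  const float F0 = 0.02; // Fresnel reflectance of water at normal incidence
  float fresnel  = F0 + (1.0 - F0) * pow(1.0 - saturate(dot(N, V)), 5.0);
  return sky * fresnel;
}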

For the volumetric light scattering approximation, I use a nice trick that was published by Patapom. The idea is to approximate the average incoming luminance of a point in a volume by lighting it with two imaginary infinite planes which act as large area lights. I define the luminance of these two planes as:

float3 TopPlane    = WaterColor  * dot(N,L) * SunColor; // luminance of the imaginary plane above the water volume
float3 BottomPlane = GroundColor * dot(N,L) * SunColor * exp(-depth * ExtinctionCoeff); // plane at the river bed, attenuated by the water above

The two exponential integrals that arise can then be solved using the closed-form approximation:

// From patapom.com
float Ei(float z) {
  const float EulerMascheroniConstant = 0.577216f;
  const float z2 = z  * z;
  const float z3 = z2 * z;
  const float z4 = z3 * z;
  const float z5 = z4 * z;
  // Series expansion of the exponential integral: Ei(z) = gamma + ln|z| + sum(z^n / (n * n!)).
  // abs() keeps the log defined for the negative arguments used below, and the
  // small clamp avoids log(0) right at the volume boundaries.
  return EulerMascheroniConstant + log(max(abs(z), 1e-4f)) + z + z2/4.f + z3/18.f + z4/96.f + z5/600.f;
}

float3 ComputeAmbientColor(float3 Position, float ExtinctionCoeff){
  float Hp = VolumeTop - Position.y; // Height to the top of the volume
  float a = -ExtinctionCoeff * Hp;
  float3 IsotropicScatteringTop = IsotropicLightTop * max( 0.0, exp( a ) - a * Ei( a ));
  float Hb = Position.y - VolumeBottom; // Height to the bottom of the volume
  a = -ExtinctionCoeff * Hb;
  float3 IsotropicScatteringBottom = IsotropicLightBottom * max( 0.0, exp( a ) - a * Ei( a ));
  return IsotropicScatteringTop + IsotropicScatteringBottom;
}

// Evaluate the ambient scattering at a point halfway between the river bed and
// the water surface, nudged by the foam turbulence.
float3 HalfPoint = lerp(RiverBedPos, WaterSurfacePos, 0.5f + FoamTurbulance);
float3 ScatteringTerm = ComputeAmbientColor(HalfPoint, ExtinctionCoeff);

Light Scattering in water volume

When the sun gets below a certain grazing-angle threshold, I also refract a ray to fetch a sky luminance value and then inject it as extra water scattering on the wave tips.

Adding additional light scattering in wave tips
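
The exact details aren’t spelled out above, so the following is just one plausible reading: past a grazing-sun threshold, refract a ray at the surface, fetch the sky luminance along it, and add it as extra scattering weighted by the foam turbulence. The threshold, the choice of refracted ray, and the weighting are all guesses.

// SkyCube and LinearClamp as declared in the specular sketch above.
float3 WaveTipScattering(float3 N, float3 V, float3 L, float3 SunColor, float FoamTurbulance) {
  const float Eta = 1.0 / 1.33; // air to water

  // How close the sun is to grazing the surface (0 = overhead, 1 = at the horizon).
  float grazing = 1.0 - saturate(dot(L, float3(0, 1, 0)));
  float weight  = saturate((grazing - 0.8) / 0.2); // only kicks in near grazing angles
  if (weight <= 0.0)
    return float3(0, 0, 0);

  // Refract the view ray into the water and look up the sky luminance it would
  // see through a thin, translucent wave tip.
  float3 refracted = refract(-V, N, Eta);
  float3 sky = SkyCube.Sample(LinearClamp, refracted).rgb;

  return sky * SunColor * FoamTurbulance * weight;
}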

Something I haven’t tried yet but might be interesting would be to support self-occlusion of the water surface by doing some screen-space shadow tracing, but for now, the wave heights I’m toying with are too small to see any interesting benefit. For larger waves, supporting some occlusion seems worth investigating:

Notice the impact of self-occlusion around some of the larger wave crests. Image Credit: Rutter.ca

FUTURE WORK

There is a lot of work left to do for this to be production-ready. The next step will be to implement a system that lets me paint large river canals. I have been thinking about using some kind of virtual texture setup where the velocity field could be baked along with some initial conditions, which could be used to bootstrap the advection for a tile as the camera approaches a certain area. This would eliminate the cost of running the simulation at runtime.

Also, I’m interested to see how far one could go with pre-generating Wave Profile Buffers, which would make them much more feasible to use on a tighter GPU budget.

Jean-Philippe Grenier, 3D Programmer at Ubisoft Montréal

