Olivers Pavicevics was kind enough to talk about tackling different liquid-rendering challenges inside Unity.
I have a design background (I studied New Media Design in Milan, Italy), but in 2001 I started to take an interest in real-time graphics, which eventually led me to work in the VR industry. At that time I began collaborating with Virtools, one of the first solutions for VR and real-time graphics, somewhat similar to what Unity is today. I mostly worked on VR solutions in industrial design for large Italian companies, focusing on next-gen graphics and shader programming. When Virtools development stopped, I started using Unity and actively developing assets for the Asset Store. The first asset I made was VR Panorama Pro, a tool used to render 360° movies from Unity scenes. The second, CScape, is an ultra-optimized city builder that pushes the boundaries of real-time rendering of large worlds. Both of my assets are best sellers with five-star ratings. At the moment, I also work as a professor at University Cattolica in Milan, teaching VR- and AR-related subjects. I work in the event industry as well, mostly doing real-time graphics for events, and I've won a few BEA (Best Event Award) prizes for clients such as the MTV EMA Awards and Fiat.
When starting any project of this kind, I start from an actual problem that I have. In this case, there was a whole list of problems with every water system I'd had a chance to explore before:
- Almost all systems use some sort of transparent shader. Transparent shaders cause serious problems with image effects: we lose the ability to render shadows and depth-based image effects like SSR, AO, or DOF.
- More realistic systems use one or two additional cameras, requiring the scene to be rendered multiple times (the normal scene without water, a reflection plane, and a refraction view). In VR this means a really big performance impact, as the scene has to be rendered 4-6 times (because we also have to render a stereoscopic view).
- Most water systems are made to simulate ocean water, while there are almost no realistic solutions for calm water.
- Most water systems suffer from blurry, muddy-looking water.
I must say that at this first stage I wasn't interested in dynamic (interactive) water; I was mainly interested in making a good shader that runs fast in VR and, eventually, on mobile systems.
The references were actually real life. I spent a lot of time on a lake in Finland, looking at what happens to the water at different times of day and in different weather. Spending all that time on the lakeside was the inspiration to start working on NordLake, where I wanted to capture some of the aspects that weren't covered by other water systems.
A key to realism
This is a little bit of a paradox, but I think the key to realism here actually isn't a physically perfect simulation. At the very least, we don't yet have enough CPU and GPU performance to calculate all possible fluid dynamics in real time. What I'm attempting is more of an artistic take on the problem than an exercise in complex real-world physics equations. It mostly comes down to looking at the real world and making a painting of it, with one difference: instead of oil and canvas, I'm using shaders.
One of the main problems with this type of surface is that it can be seen from very different distances, which means it has to look convincing both up close and far away.
What seems like clear, wavy water from up close looks like an almost non-reflective surface from afar.
Also, another aspect of water is that it's almost never completely clear. There is always something floating on it: plants, insects, fish making small ripples. That was exactly one of the aspects of water I tried to capture. But again, I couldn't use a simple tiled texture, so I use information from the water depth to place various features on the water surface. This is done using texture arrays. Texture arrays are a powerful way to pack a lot of textures into a single object that the shader (and the graphics card) sees as one texture.
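The depth-driven placement idea can be sketched in a few lines: each pixel's water depth picks a slice of the texture array. This is only an illustration of the technique, not NordLake's actual code; the layer names and depth thresholds below are invented for the example.

```python
import numpy as np

# Hypothetical layers of a texture array; names and thresholds are assumptions.
LAYERS = ["lily_pads", "surface_film", "open_water"]  # slice 0, 1, 2
DEPTH_EDGES = np.array([0.5, 2.0])  # metres: shallow < 0.5, mid < 2.0, deep beyond

def layer_index(depth):
    """Map water depth (metres) to a texture-array slice index."""
    return np.searchsorted(DEPTH_EDGES, depth)

depth_map = np.array([[0.2, 1.0],
                      [3.5, 0.6]])
indices = layer_index(depth_map)  # shallow pixels get plants, deep ones stay clear
```

In a shader the same lookup would happen per fragment, with a blend between adjacent slices instead of a hard cut.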
After trying different ways of making interactive water, like building a painter for flow maps, I came to the conclusion that those approaches were limited and difficult for an end user. Flow maps are pretty hard to author when you have to paint a large surface, and painting them inside your application requires a steady hand and artistic skill, which can be pretty tedious. At one point I found myself so frustrated by this approach that I decided to throw away all the finished flow-map work I'd done.
Then I took the simplest possible approach: the water dynamics are generated by simple particles. This turned out to be the easiest way, and it's easy to control artistically without impacting rendering performance. I render the particles into a render texture (as a depth map) and then apply a shader that creates a normal map from this depth texture. The resulting waves aren't physically correct, but they look convincing. The approach is pretty flexible, as it lets you use particles as wave sources and, optionally, image masks as blockers for areas where you don't want any particles.
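The depth-to-normal-map step he describes is a standard trick: take the height (depth) field the particles wrote, differentiate it, and build a normal per texel. A minimal NumPy sketch of that conversion, using central finite differences (a common substitute for the shader pass; not NordLake's actual code):

```python
import numpy as np

def normals_from_height(height, strength=1.0):
    """Derive a tangent-space normal map from a height/depth field.

    Uses central finite differences to approximate the surface slope,
    then normalizes (-dh/dx, -dh/dy, 1) per texel.
    """
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    n = np.dstack((-dx, -dy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n

# A single ripple bump in the middle of a flat field
h = np.zeros((5, 5))
h[2, 2] = 1.0
n = normals_from_height(h)  # flat areas stay (0, 0, 1); the bump tilts neighbours
```

The `strength` factor plays the role of the artistic wave-height control a shader would expose.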
One important thing to say is that I'm not rendering micro-perturbations with particles; those micro-perturbations are hard-coded into the water surface shader and get activated based on input from the depth texture. This gives a big boost in performance, as the depth texture holding the wave information can be pretty small, since it only has to capture the larger waves. These waves also get rendered into the scene only up to a desired distance, usually covering just the surrounding 50 meters. I found that this distance is enough for an FPS type of view.
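The gating logic can be pictured as a single blend factor: the baked micro-ripples are scaled by the large-wave intensity sampled from the small depth texture, and faded out past the cutoff distance. A hedged sketch of that idea; the function name, the linear fade, and the constants are illustrative assumptions, not the shader's real code:

```python
import numpy as np

RIPPLE_DISTANCE = 50.0  # metres; beyond this, micro-ripples are not rendered

def micro_ripple_strength(wave_intensity, camera_distance):
    """Blend factor for the hard-coded micro-perturbations.

    wave_intensity: 0..1 sample from the small wave depth texture.
    camera_distance: distance (metres) from the camera to the texel.
    """
    distance_fade = np.clip(1.0 - camera_distance / RIPPLE_DISTANCE, 0.0, 1.0)
    return np.clip(wave_intensity, 0.0, 1.0) * distance_fade

# A strong wave next to the camera gets full micro-detail;
# the same wave at 60 m contributes nothing.
```

A shader would typically use a smoother falloff than this linear one, but the structure is the same: detail strength = wave input x distance fade.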
In the end, to get waves or ripples, the user only has to place particle systems in the scene wherever they want waves to appear. They can sit under static objects or be attached to moving objects (like a shark, in my case). And since I'm using Unity particles, the waves can also be bounced by colliders.
The workflow in Unity is pretty straightforward:
- You add a NordLake water plane to a scene that contains terrain (a Unity terrain or a mesh).
- You click a button to generate the material; in a few seconds all the necessary maps for that lake are generated, and you are ready to set the material parameters to your liking (water color, Fresnel factors, wave speeds).
- One of the most important things for realistic rendering is to use post-processing effects. NordLake works best with screen-space reflections, so it's important to activate SSR (though it isn't strictly necessary, as NordLake can also use reflection probes). Another must-have effect is bloom: as with all reflective surfaces, bloom helps simulate what actually happens in our eyes when we look at a water surface reflecting bright lights.
Stronger waves would require a different approach that can't be as optimized as this one. Strong waves need vertex displacement, which means using tessellation shaders, and those aren't available on all platforms. That doesn't mean they are hard to implement (actually, it should be pretty easy), but a few aspects would need to be solved first: stronger waves have different properties, since they can generate foam and internal refraction.
Liquids are more or less 50% shading and 50% dynamics. Don't underestimate either of those two components: if your shading goes up to 60%, your dynamics can only go to 40%.
Put simply, in my approach I treat all liquids as particle systems. One of the questions to ask before building any particle system is: do I want this system to be physically correct, or do I want it to look plausible? I'm of the opinion that a human being isn't capable of distinguishing a physically correct particle flow from a near-physically-correct (let's call it plausible) one. Let me explain: we humans are good at perceiving natural movement, since we see it every day, but what we generally perceive is macro movement, macro dynamics. The other 'attraction' is how light behaves with that movement, and from that standpoint, for micro-movement, I think we should look at what is going on with reflections and lighting.

One of the good tools for motion analysis is a simple phone camera that can capture 60 fps or more. I like to take video captures and overlay them on particle systems, just to get the timing and lighting to match the real world. One of the tests I run on myself is to start with a simple screenshot of the water surface: one single frame. I look into that frame and try to understand whether it works or not. Only when I find that the frame matches a photo do I start working on the water movement. This can be applied to water as well as to particles; it doesn't matter. But you have to find the right balance so that the visual FX doesn't look out of place.
Olivers Pavicevics, 3D Artist
Interview conducted by Artem Sergeev