Jean-Christophe Cavadini shared his experience from the CGMA course Introduction to FX with Houdini, during which he mastered the node workflow and created a splendid water cascade.
Hi everyone! My name is JC “wHiteRabbiT” Cavadini and I am a Technical Artist / Sr. Unity Developer. I started my 3D journey with DirectX 6 and 3ds Max, did a lot of programming and shaders, made my own 3D engine (a 128-bit quadruple-precision floating-point 3D engine in C#/C++) and some Android games, and, recently, I discovered Houdini.
Houdini is a union of everything I always liked, from modeling to programming, and everything I always wanted to achieve, from real-time purposes to deep baked simulations.
As I was willing to have a good overview of its capabilities and learn fast, I took the CGMA course Introduction to FX with Houdini.
I had experimented with various simulations (such as destruction, cloth, etc.) in 3ds Max and Blender, and played a little with After Effects, but to this day, none of them makes it so straightforward and easy, and above all, non-destructive.
Houdini can stay fully procedural and always offers the possibility to tweak any value of the setup, from start to end, at any time during production.
Which, in addition to being very powerful, is extremely handy, as it allows you to always keep artistic control.
This project was split into 2 different tasks.
The first week was focused on the fluid simulation and the white water (and foam) generation.
The second was about how to render it with Mantra, Houdini’s renderer.
First of all, because this is a physics simulation, we have to make sure we are working at the right scale in our scene. In my case, I didn’t want to make a huge waterfall because I knew that my computer wouldn’t take it. So I created a rocky cascade container around 6 m long, which I converted to a Volume Collision in order to get good collision interaction in the fluid simulation.
Then we have to create a Fluid Source, which is the domain the fluid will be generated from. It will simply try to keep this area full at all times. For example, if there were no forces at all (no gravity, wind, etc.), this fluid would stay still during the whole simulation.
For a continuous flow, we have to create this area slightly above the height of the future water level.
Adding some noise to the velocity gives the fluid a more realistic, less predictable behavior.
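The idea behind the velocity noise can be sketched outside Houdini too. Below is a minimal, purely illustrative Python sketch (not Houdini's actual noise operator, and the amplitude and base flow values are made-up): the mean flow direction is kept, but each emitted point gets a slightly different push, which breaks up the uniform look.

```python
import random

def noisy_velocity(base_velocity, amplitude, seed=None):
    """Perturb a base flow vector with bounded random noise.

    Mimics the effect of adding noise to the Fluid Source velocity:
    the overall flow direction survives, but individual points get
    slightly different pushes.
    """
    rng = random.Random(seed)
    return tuple(v + rng.uniform(-amplitude, amplitude) for v in base_velocity)

# Hypothetical base flow pushing the water along +Z:
base = (0.0, 0.0, 1.5)
samples = [noisy_velocity(base, amplitude=0.3, seed=i) for i in range(4)]
```

Each sample stays within ±0.3 of the base flow on every axis, so the stream still moves forward as a whole while its particles spread naturally.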
At this step, everything is ready to be used in the FLIP simulation.
The simulation will happen inside a DOP Network, where we will reference our Fluid Source and Volume Collision.
Its result is then converted to a polygon surface, which will be our water rendered geometry.
The whitewater and foam generation is quite straightforward from here. We just need to use a Whitewater Source node.
With this data (the velocity field and surface from the FLIP simulation) we can now simulate our whitewater inside a new DOP Network.
The output of this node will give us two point groups: “foam” and “spray”.
As this geometry node wasn’t for rendering purposes, I then merged everything at the end just to have a simple preview in the viewport.
The water, the whitewater, the foam and the rocks were then separated into their own Geometry node, each corresponding to a new Mantra Renderer node.
By doing so, we can tweak each renderer so that it is as precise and fast as possible for each kind of material.
For instance, because the rocks are not moving, they don’t need as many Pixel Samples as the water surface does. The same applies to the ray limits for reflection, refraction, diffuse, etc.
Except for the water surface, for which I used a Material Builder node, the shading part is quite simple and only uses the PBR Principled Shader material.
For compositing purposes and flexibility at the end, I used different AOV (Arbitrary Output Variables) components for each render.
These image planes make it possible to tweak any component afterwards, like “diffusion from the environment light”, “refraction of the first light”, etc.
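Why this is so flexible can be sketched in a few lines: for per-component light exports, the beauty image is (approximately) the sum of the component planes, so any one component can be rebalanced in compositing without re-rendering. The pixel values below are made-up single floats standing in for RGB planes:

```python
def recombine_aovs(planes):
    """Recombine per-component image planes into a beauty value.

    If the beauty pass is the sum of its component planes, then
    scaling one plane in compositing rebalances the final image
    without touching the renderer.
    """
    return sum(planes.values())

# Hypothetical contributions for one pixel:
pixel_planes = {"diffuse": 0.20, "reflect": 0.35, "refract": 0.30}
beauty = recombine_aovs(pixel_planes)

# Rebalance in comp: halve only the refraction contribution.
tweaked = recombine_aovs({**pixel_planes, "refract": 0.15})
```

The second call produces a darker result with weaker refraction, exactly the kind of after-the-fact tweak the AOVs are exported for.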
We have to set up the fluid simulation inside a DOP Network. Most of the work will be computed by a FLIP Solver node, which will need our Fluid Source and will react to our Volume Collision.
I didn’t change much in this node, just enabled Droplets and Add Vorticity Attribute (for further shader purposes).
The tweaking part mostly consists of playing with 3 properties of the Fluid Source: its size, its location, and the Particle Separation value.
Finding the right Particle Separation value will affect the precision and details of the simulation, and therefore directly impact the simulation time cost.
Adjusting the size and location of the Fluid Source will change the flow.
Then come iterations of setting those parameters, re-simulating, and creating a Flipbook to get a good preview of the simulation behavior, until we reach the right result.
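The cost of that iteration loop is dominated by Particle Separation, and a back-of-the-envelope estimate shows why: the particle count scales with the inverse cube of the separation, so halving it multiplies the count by roughly 8. The sketch below is a rough illustration under the simplifying assumption of one particle per separation-sized cell (Houdini actually seeds several particles per voxel):

```python
def estimated_particle_count(volume_m3, particle_separation, particles_per_cell=1):
    """Rough FLIP particle-count estimate for a fluid volume.

    Dividing the volume into cubic cells of side `particle_separation`
    shows the inverse-cube scaling that makes small changes to this
    parameter change simulation time so drastically.
    """
    cells = volume_m3 / particle_separation ** 3
    return round(cells * particles_per_cell)

coarse = estimated_particle_count(volume_m3=1.0, particle_separation=0.1)
fine = estimated_particle_count(volume_m3=1.0, particle_separation=0.05)
```

For one cubic meter, the coarse estimate is 1,000 cells versus 8,000 at half the separation, which is why tuning is done at a coarse setting before committing to the final resolution.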
The other DOP Network concerns the whitewater generation. Everything will happen inside a Whitewater Solver node, and besides enabling Spray and disabling Bubble, I didn’t change any other values.
I used a Material Builder node for the shader of the water to speed up the renderer and have full control of what was processed.
For the color, I chose to use the Velocity and Vorticity attributes to alter the diffusion and transmittance color: the more the water is moving, the whiter it becomes.
This diffusion color is then used directly by a PBR Diffuse node, whose result becomes the base layer of a new PBR Non-Metallic node, which has reflection and refraction abilities.
The next step is to find the right Refract and At Distance values, which control the amount of refraction (and, because it is a PBR shader, the contribution of the diffuse part) and the transmission color influence.
I also used an Is Shadow Ray node to get a better projected shadow look.
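The velocity-driven whitening described above boils down to a linear blend toward white. A minimal conceptual sketch follows; the base color and the speed at which the water becomes fully white are illustrative assumptions, not the actual shader values:

```python
def water_color(base_color, speed, white_at_speed=4.0):
    """Blend the water's base color toward white as speed increases.

    `speed` stands in for the length of the velocity (or vorticity)
    attribute: the faster the water moves, the whiter it gets.
    """
    t = min(max(speed / white_at_speed, 0.0), 1.0)  # clamp blend factor to [0, 1]
    return tuple(c + (1.0 - c) * t for c in base_color)

calm = water_color((0.05, 0.25, 0.30), speed=0.0)   # keeps the deep blue-green
rapid = water_color((0.05, 0.25, 0.30), speed=4.0)  # blends to white
```

Still water keeps the deep base color, while fast-moving water at the cascade reads as white foam, matching the visual intent of the shader.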
An interesting and very useful tip from the first week was about collisions: using a proxy Volume and choosing Volume Sample rather than the default Ray Intersect mode in a Static Object node results in a more accurate collision response in the simulation.
For the rendering part, the main tip would be to not rely solely on the Principled Shader but to build our own from scratch, to ensure better performance and a material that is easier to tweak.
Houdini & real-time production
This is not really applicable to my project because this is more of a baked simulation, but Houdini can be very powerful for real-time space creation.
In addition to having this ability to create plenty of different assets through procedural generation, Houdini is now compatible with Unity and Unreal, and you can use Houdini Digital Assets (HDAs) directly in them.
When dealing with these kinds of simulations and renderings, the hardest part is making the simulation and material shaders as realistic as possible.
This means that at some point you will have to increase the solver substep count, the simulation resolution, the Pixel Samples, etc., so the simulation time will increase drastically, and the rendering time too, especially if you have a rather out-of-date computer like mine (knowing my computer’s limitations, the choice to make a pretty small simulation was clearly intentional).
So we have to tweak the values on a less precise simulation, but keep in mind that sometimes, when we go back to the high-resolution one, the physics can respond in different and unexpected ways.
To speed up the shading process, I always use a low-res Mantra node with lightweight settings and only one light at a time.
This CGMA course was overall a really good experience. Having the possibility to exchange ideas with other passionate students, and having short deadlines, helped to see the projects through to the end.
Also, it is always very appreciated to get professional tips that have been tested in real production.
JC Cavadini, Technical Artist
Interview conducted by Kirill Tokarev