Utilizing Procedural Toolkit for Gamedev

29 May, 2018
Environment Art
Environment Design

Luiz Kruel talked about some of the cool tools in Houdini, which are aimed at helping developers build video games faster and more efficiently.


I’m originally from Rio de Janeiro, Brazil, but currently live in South Florida. I’m a Sr. Tech Artist at SideFX Software (makers of Houdini).

I started my career at EA Tiburon working on Madden and NCAA Football, then I worked at Sony Online Entertainment on DC Universe Online, Avalanche Studios on Just Cause 3 and Certain Affinity on Doom, Halo Master Chief Collection and a couple of Call of Duty projects. I’ve been a Tech Artist my whole career, but have done something different in each company.

I started playing with Houdini when I was working on Just Cause 3, because we had a small team that needed to make a lot of content, and since then I’ve been sold on procedural and non-destructive workflows.

Texturing in Houdini

We have two main ways of texturing in Houdini: COPs, which is our compositing network, and MAT, which is our material context.

COPs was designed for film compositors as you mentioned, and the basic idea is that you have two images and you blend the two together based on a third (or an alpha channel).

This is what COPs looks like – it’s node based, like the rest of Houdini, and you have all of the basic functionality of blending images together, pulling keys, blurring, etc.
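The blend described above is the classic "over" compositing operation. As a minimal sketch (plain Python for illustration, not Houdini's actual COP code), compositing one premultiplied-alpha pixel over another looks like this:

```python
def over(fg, bg):
    """Composite a premultiplied-alpha foreground pixel over a background pixel.
    Pixels are (r, g, b, a) tuples with premultiplied color channels."""
    r, g, b, a = fg
    br, bgreen, bb, ba = bg
    inv = 1.0 - a  # how much of the background shows through
    return (r + br * inv, g + bgreen * inv, b + bb * inv, a + ba * inv)

# A half-opaque red pixel over an opaque blue background:
pixel = over((0.5, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0))
# → (0.5, 0.0, 0.5, 1.0)
```

A real COP applies this per pixel across the whole image; the per-pixel math is all there is to it.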

It’s pretty standard practice to have multiple passes when you’re rendering, so you can control each pass individually during compositing. These can be different elements in the scene (backplate, character, CG elements) or just rendering passes (different lights, shadows, etc).

These are referred to as Image Planes in Houdini. And they’re essentially layers on an image file. So what I did was use that system to blend materials together.

Instead of having key light, rim light and fill light as my image planes I had albedo, normal, and roughness.

So the nice thing here is that when you blend two of these images that have multiple planes in them, each plane gets blended properly.

So I just made a simple material node (just an image with predefined image planes and individual controls for each) which allowed me to set up a basic texturing pipeline where I had my Megascans materials, and I could blend them based on masks.
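Conceptually, blending two multi-plane materials by a mask is just the same lerp applied to every plane. A rough sketch of the idea (plain Python; the material dicts and values here are made up for illustration):

```python
def blend_materials(mat_a, mat_b, mask):
    """Blend two materials plane-by-plane with a single mask value in [0, 1].
    Each material is a dict of image planes, e.g. {'albedo': ..., 'roughness': ...};
    every plane gets the same lerp, mirroring how multi-plane images blend in COPs."""
    def lerp(a, b, t):
        if isinstance(a, tuple):  # color planes are (r, g, b) tuples
            return tuple(lerp(x, y, t) for x, y in zip(a, b))
        return a * (1.0 - t) + b * t
    return {plane: lerp(mat_a[plane], mat_b[plane], mask) for plane in mat_a}

# Hypothetical scanned materials, reduced to single values per plane:
rock = {"albedo": (0.4, 0.4, 0.4), "roughness": 0.9}
sand = {"albedo": (0.8, 0.7, 0.5), "roughness": 0.6}
blended = blend_materials(rock, sand, 0.5)
```

In practice the mask is itself an image, so the lerp runs per pixel, but the plane-by-plane structure is the same.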

COP nodes for GameDev

I’m going to add a couple of nodes like that Material node in the GameDev tools in the coming weeks. This allows you to plug in a premade material (like a Megascans asset) and use it in a material pipeline.

But at its core, it mainly uses the Over, Rename, and a modified ChromaKey node. Again, it’s a very simple but powerful workflow.

So in this barrier example, we’re just using the over node to blend things on top of each other:

One thing that we’re investigating is the ability to bring in mesh data directly from the geometry into textures, without having to bake. We do this with terrain data, and Mike Lyndon did this for our Vertex Animation Textures tool, which he explains in his webinar.


One of the nice features of our terrain tools is that ability to go in and out of COPs pretty easily. There is a SOP Import node that can sample from the heightfield directly, which bypasses the need to bake textures as you are iterating on the terrain.  

We also have a Mask by Feature node in the terrain, which lets you pull some nice masks based on slope, sun direction, and height. That combined with the debris map from the erosion gave me everything I needed.

I combined a base noise map, a height map (with the tips of the mountains isolated), a slope map and the debris map:
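A slope mask like the one Mask by Feature produces can be approximated from the heightfield's gradient. A minimal sketch (plain Python with nested lists standing in for the heightfield; not the node's actual implementation):

```python
import math

def slope_mask(heights, cell_size=1.0):
    """Approximate a per-cell slope mask (0 = flat, approaching 1 = steep) from a
    2D heightfield, using central differences for the gradient."""
    rows, cols = len(heights), len(heights[0])
    mask = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Central differences, clamped at the borders.
            dx = (heights[y][min(x + 1, cols - 1)] - heights[y][max(x - 1, 0)]) / (2 * cell_size)
            dy = (heights[min(y + 1, rows - 1)][x] - heights[max(y - 1, 0)][x]) / (2 * cell_size)
            grad = math.hypot(dx, dy)
            mask[y][x] = grad / (1.0 + grad)  # remap gradient magnitude into [0, 1)
    return mask

flat = [[0.0, 0.0], [0.0, 0.0]]   # no slope anywhere
ramp = [[0.0, 2.0], [0.0, 2.0]]   # rises along x
```

Height and sun-direction masks follow the same pattern: evaluate a per-cell feature of the heightfield and remap it into a 0–1 mask you can combine with others.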

Which gives me something like this:

The darkest spots are where cliffs are going to show up (high slope and high elevation) and the lightest spots are where the debris “fingers” will show up, basically where a sandy texture will show up.

I run this map through a lookup node, which basically uses a colored ramp to colorize this image.

By using a ramp that looks like this:

I was able to generate a map that looks like this:
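A lookup node of this kind maps each grayscale value through the ramp's color stops. A sketch of that remapping (plain Python; the ramp colors below are made-up stand-ins for cliff rock, grey rock, and sandy debris):

```python
def ramp_lookup(value, keys):
    """Colorize a grayscale value through a color ramp, like a COP Lookup node.
    `keys` is a sorted list of (position, (r, g, b)) stops; values between stops
    are linearly interpolated."""
    if value <= keys[0][0]:
        return keys[0][1]
    for (p0, c0), (p1, c1) in zip(keys, keys[1:]):
        if value <= p1:
            t = (value - p0) / (p1 - p0)
            return tuple(a * (1 - t) + b * t for a, b in zip(c0, c1))
    return keys[-1][1]

# Hypothetical ramp: dark cliff rock at 0, grey rock mid, sandy debris at 1.
ramp = [(0.0, (0.1, 0.1, 0.1)), (0.5, (0.4, 0.4, 0.4)), (1.0, (0.8, 0.7, 0.5))]
color = ramp_lookup(0.25, ramp)
```

Running every pixel of the combined mask through this lookup is what turns the grayscale feature map into the colored material-ID map.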


I also layered in the area where I specifically wanted powdery materials (under the lava):

Which gave me a result like this in the game:

And I can use a regular terrain style shader to blend in 4 materials in the appropriate areas. And get something like this:

The material workflow 

The material workflow is something that you’d expect from an offline renderer, where you can blend materials together and tweak your settings. So instead of dealing with things in texture space, you’re actually dealing with the model geometry + shaders.

So now you can texture a very high-res mesh and bake it down into a game asset. With our baker, not only can we bake the geometry information (Normals, AO, Curvature), we can also bake material information (Roughness, Metalness, Albedo). So your high-res model is already textured before you start the baking process, which allows for faster iterations when your model changes.

Edge wear

Now we have two main nodes for doing edge wear: The Dirt Mask node and the Rounded Edge Shader node.

The dirt mask node gives us a nice mix of AO and Curvature. By default, it’s a great way to mask areas where grime would accumulate, but you can also reverse the normals before calculating it, which gives you a nice edge wear mask.
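One way such a mask could combine the two signals is a weighted mix of exposure (AO) and convex curvature. This is only an illustrative sketch of the idea, not the Dirt Mask node's actual formula:

```python
def wear_mask(ao, curvature, ao_weight=0.5):
    """Sketch of an edge-wear style mask: bright where a surface is exposed
    (ao near 1.0 = unoccluded) and convex (positive curvature).
    ao is in [0, 1]; curvature is in [-1, 1], negative = concave."""
    convex = max(curvature, 0.0)  # keep only convex (edge) curvature
    m = ao_weight * ao + (1.0 - ao_weight) * convex
    return min(max(m, 0.0), 1.0)

# A sharp exposed edge scores high; an occluded crevice scores low.
edge = wear_mask(ao=0.9, curvature=0.8)
crevice = wear_mask(ao=0.2, curvature=-0.6)
```

Flipping the normals before computing AO, as described above, effectively inverts which areas read as "exposed", turning a grime mask into a wear mask.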

And then the rounded edge shader is designed to give you the look of a small bevel where the normal is sharp enough. But as a nice side benefit it can also output a mask of where it generated that beveling.

That mask is also a great way of localizing noise on areas of high curvature, either concave or convex. And on the texturing without UVs, I should’ve been a bit clearer. But we have a Triplanar projection node, which is pretty standard nowadays. The idea here is that you’re using the XY, ZY, ZX positions as your UVs. So you’re essentially projecting the texture onto the model and fading between the transitions.

This is perfect for noisy masks because where the blurry transitions happen it’s usually not very noticeable once everything is combined.
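The "fading between the transitions" part of a triplanar projection is usually driven by how strongly the surface normal faces each axis. A compact sketch of those blend weights and the three samples (plain Python; the checker `texture` is a made-up stand-in for a real tiling texture):

```python
def triplanar_weights(normal, blend_sharpness=4.0):
    """Blend weights for triplanar projection, one per axis-aligned projection.
    Raising |n| to a power tightens the transition zones between projections."""
    w = [abs(n) ** blend_sharpness for n in normal]
    total = sum(w)
    return tuple(c / total for c in w)

def triplanar_sample(texture, position, normal):
    """Sample a tiling texture three times, using YZ, XZ, and XY as UVs,
    and blend the samples by the normal-based weights."""
    x, y, z = position
    wx, wy, wz = triplanar_weights(normal)
    return texture(y, z) * wx + texture(x, z) * wy + texture(x, y) * wz

# Toy 'texture': a checker pattern defined purely by UV coordinates.
checker = lambda u, v: float((int(u * 4) + int(v * 4)) % 2)
```

Because the weights are normalized, a face pointing straight down one axis gets a single clean projection, and only tilted faces see the (usually unnoticeable) blurry blend.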

This is a simple material just using that texture and the triplanar node, but also passing it to the bump (which will turn into normals) and into a color mix node to give slight variation to the albedo:

But the nice thing about having a triplanar shader is that you can apply it to any model and it should look the same. And it’ll work on models without UVs: 

Using imperfections 

The imperfections are great for all of the man-made materials. Basically all of your metals, plastics, rubbers. For the most part, the main channel that makes the most difference in these materials is the roughness. The albedo might be a simple blend of two colors, but the roughness is where all of that nice detail comes in. Like in the example above.

They were used both in the roughness channels and as a blend texture between lighter and darker versions of a color.

Automated Photogrammetry

The game-res node helps hugely with creating game-ready assets. I go a little bit more in-depth about it in this presentation:

But the idea is that we can make a single node that has multiple nodes inside of it. In this case, the node will decimate your mesh, automatically UV it and then bake the maps from the original mesh to the auto-generated one in a single shot.

By baking the materials as well as the geo-information, this means that you can go from your high-res model with materials, straight to the game-res fully procedurally.

The idea is not to necessarily have perfect final quality assets (although we’re getting close), but to be able to iterate quickly and see your changes fast in your game engine. So instead of making assumptions on a turntable render on a 5 million poly mesh, you can get it into the game in minutes and make way better decisions.


I started with the COPs approach because it was more familiar, but I ended up preferring the MAT approach a lot better. At the end of the day they’re both outputting unique textures, so it’s not really a question of hero assets vs tileables. With both techniques you can just output masks, so you can still use them with tileable textures.

I guess my recommendation would be the following: if your model is changing a lot, use the material approach, as you’ll be able to iterate without having to bake maps; but if your model is already locked and you’ve baked out all of your geometry maps, then COPs will probably be faster because you’re not having to wait for the renders.

Luiz Kruel, Sr Technical Artist at SideFX

Interview conducted by Kirill Tokarev
