Great breakdown from Pim Hendriks about his amazing material in Substance Designer.
This leads to a graph that will start growing naturally, until you're either satisfied or it turns into a monster. The graph below still has a quite acceptable size compared to some of the much bigger ones out there. In total, there are around 275 nodes.
To start building the graph, I take a square Shape and multiply it with a pyramid Shape to chamfer the edges. The Slope Blur that comes after that cuts away a little bit more. I then blur those effects and overlay the same Perlin Noise at 0.05 to get some very minor height variance. At this point, having this height variance is not necessary at all. I used it as a passing test, to see if it would even work, and kept it because I liked it.
At the end of the pipe for the Fake Sculpt, I subtract a Histogram Scanned Dirt noise to create a sort of acne for the tiles. In most of the reference images I found for terracotta, these baked tiles often still contain air bubbles. The ones at the surface break and leave little holes.
As in my previous tutorial, I like building my damage masks and overlays myself. There are some nice tools out there that are very powerful. One that comes to mind immediately is Evan Liaw's Stone Carver, and while this is a great tool, I like having the control of creating totally custom damage so I don't get someone else's signature on my work. I prefer learning from a tool and mimicking it, rather than just taking it and essentially learning nothing.
A Tile Generator does the heavy lifting by providing random dot generation. The Distance node fills the gaps by growing each value from its point of origin until it touches the next. The Edge Detect node then traces the borders between those grown cells, which become the crack lines. The results are warped several times to avoid those typically straight lines. I then feed the crack system into a Pixel Processor, where I offset the pixel data based on the luminance value of a second Tile Generator that carries the same attributes as the one creating the 3×3 tile setup.
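The dots → distance → edge-detect chain can be sketched in NumPy. This is only an illustrative approximation, not the nodes' internals: the grid size, seed count and brute-force nearest-seed search are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
size, n_seeds = 64, 12

# "Tile Generator": scatter random seed points (the dots)
seeds = rng.integers(0, size, size=(n_seeds, 2))

# "Distance" node: grow each seed until it meets its neighbours,
# i.e. label every pixel with its nearest seed (a Voronoi-style fill)
ys, xs = np.mgrid[0:size, 0:size]
d2 = (ys[..., None] - seeds[:, 0]) ** 2 + (xs[..., None] - seeds[:, 1]) ** 2
labels = d2.argmin(axis=-1)

# "Edge Detect": a crack pixel is any pixel whose label differs from a
# neighbour's -- the boundaries between the grown cells become the cracks
cracks = np.zeros((size, size), dtype=bool)
cracks[:-1, :] |= labels[:-1, :] != labels[1:, :]
cracks[:, :-1] |= labels[:, :-1] != labels[:, 1:]
```

The subsequent Warps then bend these straight cell borders into more natural crack lines.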
In this instance, there are two interesting nodes that I would like to expand on a little bit. In this graph I am driving the Transform node with a Function so I can dynamically change the width of the oak beams. This function is very handy for driving a lot of Transform 2D nodes with the same information, something you would normally try to do with cloned nodes. Unfortunately there is no node cloning (yet) in Substance Designer, so it has to be done with a function.
This is a little advanced but very interesting nonetheless, as it will help automate your graph a great deal. After that, I will cover the Pixel Processor, where I explain in some detail what makes the luminance-based pixel offset work. However, for the next couple of pages, there's simply too much structured information to create a set of separate paragraphs with all the images cut out and positioned alongside that specific text, so I decided to keep the original images. Below each image is a duplicate (sometimes reworded) text which may be preferable to read.
Everything starts by using Google to find a solution to your problem, and so did mine. Big thanks to Fabian F. on the Allegorithmic forums for setting this up and making it quite readable. I recreated the Function from scratch to better understand each node and how the nodes work together.
There are a couple of steps to finding out which types need to be created. First, create a Transform 2D node. Inside the Parameters of the Transform 2D is a tab called Transform Matrix, which has some greyed-out text and icons on the right side. This is what we need to expose its internals. A pop-up will ask for the name of the new exposed input, but it can be named anything. For the purpose of the tool, I named it Transformation_2D_Function.
Before we continue, we need to know which type of value each transformation is. In this case:
- Rotation is a Float1
- Scale (uniform) is a Float1
However, we will also need non-uniform transformations so:
- Scale (non-uniform) is a Float2
- Skew (non-uniform) is a Float2
The next step is to create all the Input Parameters necessary to run the custom transformation. This is done by double-clicking the background of the Graph, going into its Attributes panel and 'adding a new input tweak'. What this essentially does is add input channels to the newly created graph that will enable it to take information. To test this, we can drag the empty Function Graph into the exposed Transform 2D graph. As you will see, the node has a cool logo but is otherwise completely useless, as it has no input channels. Time to change this.
Add a Rotation Input Parameter to enable the ability to append information; this will give the node a little green ball. This indicates that it is ready to receive information; however, at this point it still has no idea what to do with it. It has to be told what to do with whatever is coming into the input "Rotate – Float1", so let's go back into the Function Graph and add a Get Attribute node to interpret the Float1 information that is Rotation.
In this case the node is called a GetFloat1 and will carry a large exclamation mark, indicating it needs data. This may feel a little backward, but the function will only calculate when it receives data from user input. The GetFloat1 receives information from the Function Graph’s Input Parameters, which in turn receive their data from the graph in which the Transform 2D is located. When double clicking the GetFloat1 node, there’s a little dropdown. Select the identifier, in this case ‘Rotation’ and the exclamation mark will be replaced with the name of the identifier. To round up the interpretation of the Rotation manipulation, all that is needed now is to add a Rotation Matrix that will return a Float4 that spins as its Rotation changes. This node is necessary for the manipulation to actually rotate.
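The Rotation Matrix step can be sketched in Python. This is a hedged approximation of what that node returns: a 2×2 rotation matrix flattened into a Float4, assuming the angle comes in as turns (how Substance typically expresses rotation); the function name and row-major layout are my own choices for illustration.

```python
import math

def rotation_matrix(angle_turns: float):
    """Return the 2x2 rotation matrix as a Float4
    (row-major: [a, b, c, d] stands for [[a, b], [c, d]])."""
    # one full turn = 2*pi radians
    t = angle_turns * 2.0 * math.pi
    return [math.cos(t), -math.sin(t),
            math.sin(t),  math.cos(t)]
```

As the incoming Rotation value changes, the four components "spin" through sine and cosine, which is what makes the downstream transformation actually rotate.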
Without falling into too much repetition, it is possible to finish this graph by following the principle of Getting an Attribute, manipulating its specific data type and feeding it through its designated Matrix. However, due to the non-uniform transformations that the Transform 2D node handles like a champ, there's some additional work to avoid double transformations. A good way to do this is by separating the uniform from the non-uniform transformations, as they are not the same type of data. A uniform operation behaves identically on both axes, where a non-uniform operation does not. So a non-uniform operation carries Float2 values, and a GetFloat2 is necessary to facilitate that. I start out by taking both types of data and creating Input Parameters for both of them in the Function Graph. In addition to that, I know that I will need to build a switch to prevent the two types of data from stacking, so I will set that up too. A switch is a Boolean, by the way: True or False.
I then set up a GetFloat2 node, which I output into Swizzle nodes. These nodes are great for transposing data, to divide or share data across different channels. In this case, in the dropdown menu of the Swizzle node, I set one to X and the other to Y. This separates the first value from the second. The following step is to create a small switch network that cancels one type of information when set to True and cancels the other type of information when set to False. I do this by creating a Get Boolean and setting its dropdown to my Input Parameter, in this case "Switch". That information can now be fed into an If/Else node that has a white input ball, indicating it's looking for a conditional input. After that, connect Scaling_f1 and the X of Scaling_f2 to inputs 1 and 2. Create another If/Else to connect Scaling_f1 and the Y of Scaling_f2 to its corresponding inputs.
The graph should now have two If/Else nodes that are fed by a GetBoolean, a GetFloat1 and two Swizzle nodes. To add in the Skewing part, it's important to add another Input Parameter, set to a Float2 identifier, and recreate the Float2 Scaling network of three. Essentially, just copy it and have the GetFloat2 refer to the Skewing Input Parameter. These values can now be freely merged using Vector Float nodes. Personally I like keeping the X and Y values separated so I know which information is merged at a specific node but, as far as I know, this doesn't change the results of the Function Graph.
I merge the Scaling .X If/Else with the Skew .X Swizzle by taking a Vector Float2 and assigning them from first to last. The Skew .Y Swizzle is then connected to a Vector Float3, which takes a Float2 in the first input. Then the last node, a Vector Float4, takes the Vector Float3 output in its first input and the Scaling .Y Swizzle in its last input. At the very end is a Matrix Multiply node that allows us to combine the outputs of both the Rotation Matrix and the transformation data; it is set, like in the Transform 2D node, to act as an Output node. This was the last step in the creation of this Function Graph, but it still needs to function in the Substance Graph.
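The whole Function Graph, condensed, can be sketched as plain Python. This is a minimal sketch under my own assumptions: rotation in turns, a row-major Float4 matrix layout, and a component order of [Scaling.X, Skew.X, Skew.Y, Scaling.Y] for the Vector Float4 chain; the function names are hypothetical.

```python
import math

def mat2_mul(a, b):
    """Multiply two 2x2 matrices stored as row-major Float4s
    (the Matrix Multiply node at the end of the graph)."""
    return [a[0]*b[0] + a[1]*b[2], a[0]*b[1] + a[1]*b[3],
            a[2]*b[0] + a[3]*b[2], a[2]*b[1] + a[3]*b[3]]

def transform_matrix(rotation, scale_f1, scale_f2, skew_f2, use_uniform):
    # The If/Else switch: pick uniform Float1 scale or non-uniform
    # Float2 scale, so the two types of data never stack
    sx = scale_f1 if use_uniform else scale_f2[0]
    sy = scale_f1 if use_uniform else scale_f2[1]
    # The Vector Float2/3/4 chain: [Scaling.X, Skew.X, Skew.Y, Scaling.Y]
    scale_skew = [sx, skew_f2[0], skew_f2[1], sy]
    # The Rotation Matrix (rotation given in turns)
    t = rotation * 2.0 * math.pi
    rot = [math.cos(t), -math.sin(t), math.sin(t), math.cos(t)]
    # Matrix Multiply combines both and is set as the Output
    return mat2_mul(rot, scale_skew)
```

With rotation at 0 and the switch set to uniform, the result is simply a uniform scaling matrix, which matches what the preview shows later on.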
In the Substance Graph it's now time to make sure the information we've set up in all the GetFloats and GetBooleans will actually be provided. Double-click the background of the graph and go into the Attributes panel. Scroll all the way down and add the same input tweaks to this graph as were added to the Function Graph. The menu is a little different and there are a few more options present. Keep these values on the low side.
After setting up the Input Parameters, it's possible to go into preview mode of the available parameters to see whether or not they behave as expected. Use the 'Switch to Preview Mode' folder icon to get a preview. Each Input Parameter is separated by a (surprise!) separator but will otherwise resemble any other regular node. The Switch will return a True or False signal that will either disable the Uniform or the Non-Uniform scaling. Skewing will continue to function, as it's not fed through a condition node.
The final step in driving the Transform 2D node with the Input Parameters is to again go inside the Transform Matrix and edit the input variables to run the transformations. So, for repetition's sake, go to the Transform Matrix of the Transform 2D, drop down the menu and select Edit with the cool logo. This brings you into the Transform Matrix that we've left empty. Drag the Function Graph into the Matrix of the Transform 2D node. Visible are the 5 Input Parameters that were set up for the Rotation, Scaling and Skewing manipulations, as well as their non-uniform sisters.
There is also something to note with the colors of these inputs, as I haven’t discussed this yet.
Set the Function Graph as Output, add 2 GetFloat1s, 1 GetBoolean and 2 GetFloat2s, and set the identifier for each to the matching Input Parameter of the Substance Graph. Connect the newly created nodes to their corresponding inputs on the Function Graph and voilà, that's it. You've created a global transformation function for each Transform node you wish to link to this system. The easiest way to duplicate these badboys is by simply copy-pasting the original node around.
This luminance based UV offset is useful for objects that should not share damage. Reasons could be that the surfaces are separated by other objects or that they don’t share the same physical strain as others. I’ve got to preface this section by saying I’ve only just touched the surface of this deeper level of data management in Designer, but I’m starting to understand why something goes where and when.
I start off by defining that I'm working with 2 floats (U and V) by creating a Float2 Variable and setting it to $pos. I can use the second image in the Pixel Processor by setting the dropdown attribute to Image1. This should give me Float2 values, one for each of the UV coordinates. It is possible to run this through a Swizzle node, which allows you to shuffle your values around, but I didn't think I'd need that. Then I feed those float values into the input of a Multiplication node and add a constant Float to multiply their values with. Now, I'm not sure about this, but I think the Multiplication node will just take the first float of a Float2 and ignore the second. Anyway, I multiply the U coordinate with a random constant and recreate the vector with a second Vector Float2. I have to do this because after the multiplication I'm left with only a Float1. The U coordinate will continue acting as the U coordinate, but the V coordinate is now U*π. I then add those values to the already accessed UV coordinates from the $pos variable and set that as the new UV space coordinates for image0. This effectively transposes the existing UV coordinates to the new ones, creating a "new" pattern.
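The core idea of the Pixel Processor, sampling image0 at a position shifted by the values of a second input, can be sketched in NumPy. This is deliberately simplified and not the node's actual behavior: it offsets only the U axis, quantizes to whole pixels, and wraps for tiling; the function name and strength value are my own.

```python
import numpy as np

def offset_by_luminance(image0, image1, strength=0.2):
    """Shift each pixel's sampling position in image0 by the
    luminance of image1 at that pixel (U axis only, wrapped)."""
    h, w = image0.shape
    ys, xs = np.mgrid[0:h, 0:w]              # roughly the $pos variable
    # luminance of the second input drives the offset
    shift = (image1 * strength * w).astype(int)
    new_x = (xs + shift) % w                 # add offset, wrap for tiling
    return image0[ys, new_x]                 # resample image0 at new UVs
```

Where image1 is flat, the pattern is untouched; where it varies (the second Tile Generator), the crack pattern gets displaced per tile, which is what breaks up the shared damage.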
Done with the technical stuff! From now on it's just artsy-fartsy details. The bake points, as referred to before, are leftovers of when terracotta tiles are baked at high temperatures. Sometimes the air bubbles at the surface break and other times they don't; these imitate the ones that baked down into little blunt stumps. They are added with a Blend node without a mask. Next, some scratches. I had a hard time finding a simple and quick solution for setting up a quick scratch pattern, and I had recently loaded Grunge Map 19 into my Substance Package, so I figured why not. This material is specifically not made to be used in real-time, so having this and a couple more heavy grunge maps was okay for me.
I also felt that the shape of the tiles was too offset from the ground; compared to the reference, they stuck out of the ground too much, especially at the edges. A slight elevation falloff would be nice. To accommodate this, I pulled a mask from the transformed 3×3 Tile Generator and blurred it to the point where the value falloff would facilitate a nice and smooth hill-like effect.
Considering mortar for floors is usually pretty settled in the cavities of the tiles, I didn't think I'd need a big network to convey the image of mortar. I use a Black and White Noise that I remap to a simpler color scheme. I want to remove the bigger noise shapes because I want to control those myself, by blending the result with 2 Perlin Noise Zoom nodes. This way I can decide how big the lumps in between the tiles will be. Afterwards, I use a Levels node to decrease the range so I won't have spikes in the mortar once I merge it with the already existing luminance values of the tiles.
The mask of the mortar is pulled from the inverted Fake Sculpt. The reason for this is that I want to work with my original setup and control the influence of any of the later-added details like scratches and bake points. I then pull another inverted mask from the Damage setup because, from much of my reference, it's clear that in the process of laying these tiles (physical labor), the mortar is swiped over the tiles as a finishing layer. As a result, some of the mortar gets caught in the cracks, which means they'll have to carry at least some of the mortar height data. In the last Blend node, I add the Damage mask to the inverted tile mask and set its blending opacity to 0.6. The rest of the Damage is masked out by a blurred version of the original Damage mask. This is to convey some height influence to the mortar intersection.
This is, again, going to be a little more information than can fit on a single image, so I’m going to take the liberty to just dive into this. I really enjoyed figuring out and setting up this pattern but it’s definitely not the only way to achieve a result like this. There are many instances of awesome looking grain on ArtStation, all using different methods. Take from it what you want!
I start off by defining that the base of a grain pattern is a set of lines that are then warped, pushed and blurred around. In my mind it was better to start off with lines that were already manipulated a little bit, so I use a Tile Generator that I Blur and Transform to give them some variation. I then feed that information into a Safe Transform to increase the amount of data available and Warp it using two Mask setups. Following this, I warp the blended result a couple more times to force some more swirls into the grain pattern. Then a Transform 2D scales it down non-uniformly and I sharpen the grain a little bit. The Gradient Map then remaps all information to different luminance values to get another level of fine grain detail in there. This is a nice little trick to quickly diversify the available information.
This is a step I find missing in most grain breakdowns: the height edging of the grain, meaning that when the wood is sawn off, you will find that each grain lies in a little crease. This crease itself also has a little height difference toward its edges. The difference in the grain pattern is minimal but noticeable. Then, because I want the edges of the wooden planks to be a little worn and torn, I thought it was a good idea to single out the top and bottom grains at a relatively clean state, push their values quite harshly and use them to cut out the ends of each beam.
The deep grain in this case isn't supporting anything related to the actual wood grain pattern; it's there to feed into a Slope Blur to bevel the beams along their long sides. Then, finally, I build a mask with pointy ends by subtracting a rotated Square shape. I then multiply the grain pattern on top of the pure white values of the frame mask so I get all the values from my grain back. After that I split the pipe into two tracks so the top beam is different from the bottom beam. A Slope Blur proves the usefulness of the Deep Grain by beveling only the sides. Immediately after beveling, the Clipping layer is multiplied on top. The Safe Transform nodes move the image 0.5 units to one side and enable proper tiling. I then blend the beams together by adding their values and Warp this one last time to have them bend a tiny bit. In the end I re-use the Bake Points from the terracotta tiles to subtract some small impact damage.
Simplicity over complexity. I don’t like doing very difficult merges of materials and I also don’t think it’s necessary to make these types of combine operations difficult if your material doesn’t need it. In this specific case I’m just copying the image data to input A / B and masking it by Histogram Scanning (over-exposing) the beams.
To finalize my height setup, before I go into creating the roughness or diffuse pass, I like to set up a little Normal / AO network to preview all the decisions I made and hopefully highlight the ones that make the material less convincing. In this case the Levels node acts as a Null node; it's like a 'dot' node in Nuke, as it's just allowing data to pass through. This information goes directly into the Height input as well as into a Normal map node and an Ambient Occlusion node. The Normal map node will help support micro and grain detail to an extent the height channel cannot maintain. The Ambient Occlusion is just there to help visualize height information in the OpenGL viewport.
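What a height-to-normal conversion does can be sketched in NumPy. This is a rough approximation of the idea, not the Normal node's exact filter: it uses wrapped central differences (consistent with a tiling texture), and the intensity handling is my own simplification.

```python
import numpy as np

def height_to_normal(height, intensity=1.0):
    """Derive a [0, 1]-encoded tangent-space normal map from a
    height field using central differences on a tiling image."""
    # slope of the height field along each axis (wrapped for tiling)
    dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    # a smaller z makes the slopes count for more (stronger normals)
    nz = np.ones_like(height) / max(intensity, 1e-6)
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5   # remap [-1, 1] to [0, 1] for display
```

A flat height field gives the familiar uniform (0.5, 0.5, 1.0) lavender-blue of a neutral normal map, which is a handy sanity check when previewing.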
As mentioned in my other breakdown, I don't do a lot of spectacular stuff here. Most of it is Gradient Maps that apply color information against height data. I'll go over the ones that stand out most, like the very first Gradient Map, which pretty much defines the base color to a baked-cake type of feel. Then I use a Curvature to Edge Wear node to replace the Edge Detect that was there before; it seems Edge Detect is unstable in SD6. To enable some more color variation due to heat residue, I take a random Grunge Map (015) and a rainbow-like Gradient Map. Then I add the mortar color, masking that in, and to top it off I multiply the Bake Points on top.
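The "color against height" idea of a Gradient Map can be sketched in NumPy. This is a simplified model under my own assumptions: evenly spaced color keys and linear interpolation between them, whereas the real node lets you place keys freely.

```python
import numpy as np

def gradient_map(height, key_colors):
    """Remap a grayscale height field to RGB by interpolating
    between evenly spaced key colors, Gradient-Map style."""
    keys = np.asarray(key_colors, dtype=float)    # (n, 3) RGB key stops
    pos = np.linspace(0.0, 1.0, len(keys))        # even key positions
    out = np.empty(height.shape + (3,))
    for c in range(3):                            # interpolate per channel
        out[..., c] = np.interp(height, pos, keys[:, c])
    return out
```

Low height values pick up the first keys and high values the last, which is why the same height data can be dressed as terracotta, mortar or wood just by swapping the key colors.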
The wood diffuse is a little easier as it also starts with just one Gradient Map overlaid with a Constant Color and then overlaid by a couple of Knot Masks. The masks are set up to provide information where the color and roughness data should differ, essentially achieving the flame-like brightness and saturation variation we’re used to seeing in darker woods like this. To finalize, I add a little bit of scratching information, change the HSL and copy it on top of the tile diffuse.
The roughness is also not very exotic, as it really is just a culmination of inverted height information that is masked out by, consequently, combined masking operations. The base of the roughness is laid by inverting the terracotta height image and multiplying the Pixel Processor'd information of Grunge Map 015 on top after a 90-degree rotation. Then, before adding anything from the wood section, I balance the roughness response with a Levels node so I don't have to sacrifice any control.
On top of the blended materials, I multiply a global dirt called Grunge Map 016, which will provide some glossier stains. Then the second Grunge Map (001) is masked to just affect the tiles as a little extra information, though hardly visible when viewing from a distance.
And that’s pretty much it. The breakdown is over. If you made it this far, that’s awesome! Feedback is very welcome and if you have questions, please leave a comment or send a message! I’d be happy to answer any questions!