Deep Dive: Terrain Shader Generation Systems
20 June, 2019
Environment Design
Interview

Jason Booth gave a talk on terrain engines and his shader generation systems for Unity, MicroSplat and MegaSplat. Remember that these tools are supported by SineSpace – to learn how to quickly get started, read our previous article.

Introduction

Hello. I’ve been in the game industry for about 25 years. I was going to Berklee College of Music in Boston and got interested in 3D animation and the Amiga computer.

I spent the last year of my schooling teaching myself Imagine and eventually Lightwave, and after driving around the country that summer, I bumped into some people who were starting a company with the idea of doing some kind of large-scale online game. This would eventually become Turbine Entertainment, with Asheron’s Call being the first game, which shipped several years later. I spent ten years there, working as a creative director, designer, engineer, 3D artist, and musician. The next 10 years were spent at Harmonix, where I worked on the original Guitar Hero and Rock Band titles, and eventually was in charge of redesigning the core engine and tools used to build games there. I also helped start Conduit Labs, sold to Zynga, and was at Disruptor Beam for 4 years doing the client and graphics architecture, focusing on cramming high-end graphics into low-end mobile chipsets.

Currently, I’m doing freelance graphics coding and optimization for various companies, mostly focused on hard rendering problems and refactoring systems for speed. Across my career, I’ve pretty much done every job there is to do, from Creative Director to 3D artist, musician, and engineer. In my spare time, I’ve been heavily involved in the music scene, have raised two amazing kids, and travel a lot.

Terrain Engines

Terrain has changed a lot over the years. I was heavily involved with the terrain engine of Asheron’s Call, which was the first game with a continuous open-ended world with no zones. The game had to run on a Pentium 75 with no 3D graphics card, so everything was tight. As an example of how this worked, we couldn’t really afford to blend textures together. Instead, what we did was render the same texture across the entire terrain, but adjust its palette for each tile. Basically, I would map sections of the texture to separate sections of the palette: the first 12 colors might represent the top-left corner, the next 12 the top-right corner, and so on. Each palette also contained lighting information, which was common for early 3D, so lighting and texturing were essentially handled via a single palette per tile. If I recall, there were 13 sections in each palette, so that we could create different terrains on each corner, along with roads and paths across them. Thus, terrain could be drawn with a single texture, varying the palettes to represent all kinds of different terrains.
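The scheme described above can be sketched as follows. This is a toy illustration, not the actual Asheron’s Call code: the section count (13) and section size (12 colors) come from the interview, but the function names and layout are hypothetical.

```python
SECTION_SIZE = 12   # colors per palette section, per the interview
NUM_SECTIONS = 13   # corner terrains plus road/path variants

def build_tile_palette(section_colors):
    """Flatten per-section color lists into one tile palette.

    section_colors: list of NUM_SECTIONS lists, each holding up to
    SECTION_SIZE RGB tuples. In the real engine, lighting was baked
    into these entries too, so one palette handled shading and texture.
    """
    palette = []
    for colors in section_colors:
        block = list(colors)[:SECTION_SIZE]
        # Pad short sections so indexing stays uniform.
        block += [(0, 0, 0)] * (SECTION_SIZE - len(block))
        palette.extend(block)
    return palette

def shade(texel_index, palette):
    # The shared texture stores indices; the tile's palette decides
    # which terrain color (and lighting) that index resolves to.
    return palette[texel_index]
```

The key property is that the index texture is shared by every tile; swapping the palette per tile is what makes one texture render as many different terrains.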

The terrain engine in Unity is pretty old, but with enough shader work, you can get it to look pretty decent. The basic idea is to have a texture which contains weights for four terrain textures, then blend the four texture sets together based on those weights. Need more textures? You draw the terrain a second time with new textures and weights.
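The weighted blend at the heart of this approach is just a per-pixel weighted sum over the four layers. A minimal NumPy sketch (the function name and array shapes are illustrative, not Unity’s API):

```python
import numpy as np

def blend_splat(weights, layers):
    """Blend terrain texture layers by per-pixel splat weights.

    weights: (H, W, 4) control map; channels should sum to ~1 per pixel.
    layers:  (4, H, W, 3) RGB samples from each of the four textures.
    """
    # Normalize so the weights sum to 1 even if the control map doesn't.
    w = weights / np.clip(weights.sum(axis=-1, keepdims=True), 1e-6, None)
    # out[y, x] = sum_i w[y, x, i] * layers[i, y, x]
    return np.einsum('hwl,lhwc->hwc', w, layers)
```

A second pass for the next four textures would compute the same sum with a new control map and composite it over the first pass’s result.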

Modern terrain engines often use virtual caching systems for textures. The basic idea is that you render chunks of the terrain into a cache which can be reused on subsequent frames without redoing the full shader calculations. This kind of concept is being extended beyond terrain, though, as part of the full rendering pipeline, into GPU-centric rendering approaches for the full frame. The actual palette of techniques is growing wider as GPU capabilities increase and hardware platforms change. For instance, things like deferred rendering get less practical as resolution increases, because the GBuffer data required is just massive. Trying to provide a one-size-fits-all rendering solution, or even terrain texturing solution, thus becomes harder.
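The caching idea reduces to a standard render-once-reuse-later pattern. A minimal sketch, assuming an LRU eviction policy (the class and its interface are hypothetical, just to illustrate the concept):

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache standing in for a virtual texturing cache.

    Expensive shading results are rendered once per tile and reused on
    later frames; least-recently-used tiles are evicted when full.
    """
    def __init__(self, capacity, render_fn):
        self.capacity = capacity
        self.render_fn = render_fn    # the expensive full-shader render
        self.tiles = OrderedDict()
        self.misses = 0

    def get(self, tile_id):
        if tile_id in self.tiles:
            self.tiles.move_to_end(tile_id)      # mark recently used
        else:
            self.misses += 1
            self.tiles[tile_id] = self.render_fn(tile_id)
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)   # evict least recent
        return self.tiles[tile_id]
```

On the GPU, the "cache" is a physical texture atlas and an indirection table rather than a dictionary, but the hit/miss economics are the same: frames that revisit the same tiles pay almost nothing.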

Vertex Painter

The vertex painter started because I couldn’t find one I liked, so I wrote my own one Saturday. I was doing a lot of optimization and build-system work at my job, so having some graphical outlet was fun. I released it as open source and started adding some fun shaders to it and generally extending its capabilities. But the open-source scene for Unity is not that large, since everyone looks on the Asset Store. So one night I started messing around with a vertex-based splat mapping technique which used texture arrays and posted a video of it on YouTube. A few weeks later I released it as MegaSplat on the Asset Store and documented the next 26 versions of it as I built up the toolset and shader system.

The vertex painter is free to use and available on my GitHub.

MegaSplat & MicroSplat

MegaSplat and MicroSplat are both essentially shader generation systems. Unlike most shaders on the store, they actually rewrite their code as you change options. This is because the way Unity handles options in a shader doesn’t allow for that many of them, so writing my own generation system lets me offer hundreds of options, compiling every feature out when it’s not in use. The core technique used in each product is very different, as is the tooling around them.

MegaSplat is based on a mesh-based technique that allows hundreds of textures to be blended, and it wasn’t explicitly designed for terrains at first. But shortly after release, I realized that demand was very high for using the technique on Unity terrains, and adapted it to work there. However, this means it’s very different from Unity’s technique, requiring its own toolset for everything from painting to procedural texturing. This is cool if you want to really invest the time into it, but what most people want is something much simpler which just makes things look better. And because the technique was so different, you couldn’t just put it on your terrain and have it look the same but better – which is all a lot of people wanted.

As MegaSplat aged, it became harder to work on, because it has to support a wider range of topology (not just height-field-based terrains) and has a massive toolset behind it. Additionally, Unity upgrades would constantly break things, because it used vertex/fragment shaders, and every time Unity changed their lighting model, I’d have to trawl through their shader code and figure out what had changed – which is no fun, since they never document that code, and it’s particularly nasty, with bad macro habits.
The other tech they share is a texture packing system for texture arrays. Originally you had to supply MegaSplat’s textures packed in these custom formats, which many users had no experience doing. So I decided to write a texture packing system that would handle all of that for users, generating any missing textures they might not have (smoothness, normals, etc.).

Right around this time, CTS shipped, and the guys who wrote it were getting reports that it was running very slow on some machines (mine included). They asked if I could help them identify the problem, so I started looking at their shader and thinking about how I would do a Unity Terrain style shader that wouldn’t have these problems. For me, optimization is a problem to be solved first, by making an architecture which is fast – not optimizing every operation. After sending over my analysis, I hacked together a prototype of the technique I came up with and started writing the new texture packer for MegaSplat in the same project. It was refreshing to work in a small code base again, and the technique turned out to be blazingly fast. I started expanding the shader because it was so fun to have a fresh clean codebase again. And this solved a lot of my issues with MegaSplat, as I wouldn’t have to write as many tools, and the technique was immediately compatible with all the other tools out there.

So I started thinking about releasing it, along with other things I wanted to solve. Limiting the shader generator to only produce surface shaders meant my shaders would automatically update with new Unity versions, instead of being broken every time Unity made a change to the lighting model. Releasing it as separate modules allowed users to reason about the worth of each feature, only purchasing what they needed, but allowing me to charge more for the full package. It also made extending the system easier, since the code had to conform to the module architecture, and each module is reasonably self-contained.
In the end, if you’re working on terrains, MicroSplat offers many more features than MegaSplat with faster performance. It runs circles around everything out there. However, if you need mesh painting, MicroSplat currently can’t do that. I have a Mesh module in the works, but it isn’t vertex-based and will have very different tradeoffs than MegaSplat.

Sine Wave has added support for Jason Booth’s fantastic MicroSplat terrain shading system. Read a quick tutorial to get started with MicroSplat in SineSpace here:

A snippet from the article:

There are a couple of really simple things you can do to make a big difference. The first is to add any other texture map data you may have. If you’re using PBR materials (and you should – they look great), you may have more than the basic albedo and normal map supported in Unity terrain. MicroSplat lets you add the maps you have, and will generate the other channels automatically based on the maps you do have.

To locate your texture maps, click on the Diffuse map icon. In the image above, I clicked on the grass texture as shown in the red square in the upper right, and it highlighted the material in yellow (see the red arrow in the lower left).

[…]

Whether you’re using materials you made or something from a library, they are probably stored in the same folder and share similar names (like mine do). Just drag and drop the maps into the empty slots.

A smoothness map may also be labeled as glossiness. You can also use a roughness (the opposite of smoothness) map, but if you do then check the Invert box (I circled the box you need to check if you use a roughness map). If you have one, your AO map may also be called ambient occlusion or simply occlusion. If you don’t (like I don’t in my region), don’t worry – MicroSplat will generate what it needs and still get pretty great results.
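The two conversions described above are simple enough to write down. This is a hedged sketch of the idea, not MicroSplat’s actual code; the function names are hypothetical, and treating a missing AO map as neutral white is my assumption of the usual convention (white means "no occlusion"):

```python
import numpy as np

def to_smoothness(roughness):
    # Roughness is the inverse of smoothness, so the "Invert" checkbox
    # just flips the map: smoothness = 1 - roughness.
    return 1.0 - roughness

def default_ao(shape):
    # Assumption: a missing ambient-occlusion map is replaced by neutral
    # white (fully unoccluded), so lighting still looks reasonable.
    return np.ones(shape, dtype=np.float32)
```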

Once you finish adding maps, click the Update button (I’ve outlined it in red on the right in the image above). Once it finishes processing, you’ll notice that your terrain blending improves quite a bit (see the arrows). It now uses the height information along with the shape of the terrain to blend the edges more naturally.

Speed Advantage

The main advantage of speed is that you can then do more in a given frame. With MicroSplat, you can have dynamically flowing lava and water pouring down your terrain, wind storms on your sand, have meshes melt into your terrain with no seams, use stochastic height sampling or texture clusters to defeat all tiling artifacts, procedurally choose textures at runtime based on terrain topology (height, slope, erosion, cavity), etc, etc, and it’s all very fast.

Unlike most shaders, MegaSplat and MicroSplat both run at a fixed sampling cost which is not based on the number of textures used. Let’s consider the standard Unity technique. If you have 16 textures in use on your terrain, you draw the terrain 4 times, with each pass sampling a control map, 4 diffuse, and 4 normal textures. That means you sample textures 36 times. Now if you want to add something like Triplanar texturing, you have to sample each of the diffuse/normal maps 3 times, once for each projection. So now you’re at 100 texture samples. Now let’s add something like Distance Resampling (AKA UV Mixing or mixmapping), this technique samples the textures twice at two different UV scales and mixes them based on distance – now you’re at 196 samples per pixel.
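The arithmetic above can be captured in a small helper, reproducing the 36, 100, and 196 figures. The function name and parameters are illustrative, modeling the classic 4-textures-per-pass approach as described:

```python
def unity_terrain_samples(num_textures, triplanar=False, distance_resample=False):
    """Texture samples per pixel for the classic 4-textures-per-pass scheme.

    Each pass samples 1 control map plus 4 diffuse and 4 normal maps.
    Triplanar triples every diffuse/normal sample (3 projections);
    distance resampling doubles them again (2 UV scales).
    """
    passes = num_textures // 4                  # 4 texture sets per pass
    mult = (3 if triplanar else 1) * (2 if distance_resample else 1)
    control = passes                            # 1 control map per pass
    diffuse_normal = passes * 8 * mult          # 4 diffuse + 4 normal per pass
    return control + diffuse_normal
```

Note the multiplicative blowup: every feature that resamples the texture set multiplies the entire diffuse/normal cost, which is exactly why the fixed-cost approach below pays off.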

Now let’s consider the MicroSplat technique. The basic idea is to take advantage of a simple truth: on most pixels, a very small subset of the textures will actually be used. So if we could somehow skip over textures that don’t contribute to the final color of the pixel, we wouldn’t need to sample them, right? The problem is that you can’t just branch around things like this on a GPU. So what I do instead is sample the control maps (4 control maps, producing 16 weights for 16 texture sets), then sort the weights and texture indexes so we have the top 4 texture sets which contribute to this pixel. Then we only need to sample those 4 texture sets. So with triplanar texturing and distance sampling, we end up with 4 control texture samples, 4*3*2 diffuse, and 4*3*2 normal samples. So now we’re down to 52 total samples from 196 for the exact same effect.
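The sort-and-select step looks like this on the CPU side (a sketch of the idea; in the real shader this runs per pixel in HLSL, and the chosen indices pick slices out of the texture arrays):

```python
def top_layers(weights, keep=4):
    """Keep the `keep` strongest layer weights and renormalize them.

    Mirrors the MicroSplat idea of sampling only the texture sets that
    actually contribute to a pixel: the returned indices tell us which
    array slices to fetch, and the renormalized weights how to blend them.
    """
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    picked = order[:keep]
    total = sum(weights[i] for i in picked) or 1.0
    return [(i, weights[i] / total) for i in picked]
```

Renormalizing matters: after discarding the weak layers, the surviving weights must still sum to one, or the blended color would darken wherever more than four layers overlapped.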

But wait, we can go further – since the distance resampling is often in the distance, maybe we don’t need to sample all 4 layers on that effect. By limiting the samples to 2 on that effect, we now get 4 + 2 * (4*3 + 2*3) samples per map type. So now we’re down to only 40 samples for the whole effect. And what if we only sample albedo with distance resampling? Now we’re at 4*3 + 2 * 3 + 3 * 2 + 4, or 28 samples per pixel. And guess what, if we go to 32 textures, we only have to sample 4 more control maps; the rest of the sample counts are the same. Further, the user can select to have only the top 2 or top 3 textures sampled, trading lower quality for faster speed. Because I have my own shader generation system, I can easily provide all these as options to the user (Blend Quality controls how many samples per pixel, distance resampling comes in several flavors), and at the bottom of the GUI, you can see exactly how many samples are being used.
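The fixed-cost budget above can also be written as a small calculator, reproducing the 52- and 40-sample figures for a 16-texture terrain. The function is my own illustrative model of the counting, not MicroSplat code:

```python
def microsplat_samples(top_n=4, triplanar=True, distance_layers=None):
    """Per-pixel samples with top-N layer selection on a 16-texture terrain.

    4 control fetches produce the 16 weights; only the top_n layers are
    then sampled for diffuse and normal. distance_layers, if set, adds a
    second UV scale sampled for that many layers only.
    """
    projections = 3 if triplanar else 1
    per_map_type = top_n * projections          # base UV scale
    if distance_layers is not None:
        per_map_type += distance_layers * projections   # far UV scale
    return 4 + 2 * per_map_type                 # 2 map types: diffuse + normal
```

Crucially, `top_n` is the only sampling knob the texture count touches: going from 16 to 32 textures adds 4 control fetches and nothing else.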

Another example would be texture packing formats (optionally packing a full PBR spec into 2 or 3 texture arrays, as a quality-vs-speed tradeoff), which can reduce sample cost as well. Basically, memory operations, like on the CPU, are incredibly slow – so design around memory access, not algorithmic intensity. There are many other tricks I use like this to really reduce the cost of operations, and when there is a question of quality vs. speed, it’s usually given as an option to the user.
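Channel packing in this style can be sketched as below. The layout is an illustrative choice of mine, not MicroSplat’s actual format: the point is that one RGBA fetch per array replaces several separate single-map fetches.

```python
import numpy as np

def pack_pbr(albedo, height, normal_xy, smoothness, ao):
    """Pack a PBR texture set into two RGBA arrays (hypothetical layout).

      array 0: albedo RGB + height in alpha
      array 1: normal X/Y + smoothness + AO

    Two fetches then recover up to five maps' worth of data.
    """
    # np.dstack promotes (H, W) maps to (H, W, 1) and concatenates channels.
    a0 = np.dstack([albedo, height])
    a1 = np.dstack([normal_xy, smoothness, ao])
    return a0, a1
```

The normal map’s Z channel is omitted because it can be reconstructed in the shader from X and Y (a unit vector has only two degrees of freedom), which is what makes room for smoothness and AO in the second array.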

Video Tutorials

For my tech, each MicroSplat module has a video which should give you some idea of how to use its various features. MegaSplat has about 25 videos in developer-log format – these are not as easy to follow, because the interface changes as you go through versions, but I tend to drop a lot of detail when I talk, which could be useful as well.

Here’s my YouTube channel, where you can find a bunch of videos on MicroSplat – usually at least one per module.

Future Plans

I have several half-done modules for MicroSplat, and some other ideas I’ve toyed with. I recently released a stochastic height sampling node for Unity’s shader graph, as well as Amplify’s, which is really cool. I don’t think too far ahead; this isn’t really a business for me in the traditional sense. I can make far more money per hour on freelance gigs, so all of this is kind of like a hobby that pays me a little for my time. As such, I mainly focus on what interests me, not what will necessarily make a profit.

Jason Booth, Game Developer

Interview conducted by Kirill Tokarev

You can find more details and features of these assets on the Unity Asset Store.
