Santa Monica's Senior Programmer on How God of War Ragnarök's Snow System Was Made

Paolo Surricchio, a Senior Staff Rendering Programmer at Santa Monica Studio, told us why the team redesigned the snow system for God of War Ragnarök, provided some insightful details regarding the creation of the system, and spoke about optimizing snow for PS4 and PS5.

Image Credit: Sony, God of War Ragnarök

Introduction

Hi, my name is Paolo Surricchio, and I'm a Senior Staff Rendering Programmer at Santa Monica Studio.

I started my career in the game industry in 2012: I graduated from DigiPen Institute of Technology with an MS in Computer Science, and immediately after, I was hired by High Moon Studios, working on Deadpool on Xbox 360/PS3. Since then, I've always worked as an engine and rendering programmer, and I've been lucky to join teams on all sorts of interesting projects. After Deadpool, I worked on Call of Duty: Advanced Warfare.

After that, I moved to San Francisco and joined Campo Santo as a rendering programmer on Firewatch. After that project, a friend of mine from DigiPen asked me if I wanted to join him at Santa Monica Studio since they were working on a cool, unannounced project.

At the end of 2015, I joined Santa Monica Studio on what was going to be the new God of War (2018). I worked on God of War (2018) and God of War Ragnarök, with a short break in between, where I worked at Respawn Entertainment on Apex Legends.

Image Credit: Sony, God of War Ragnarök

God of War Ragnarök's Snow System

I go into a lot of detail on why we redesigned the snow system for Ragnarök in the presentation I gave this year at GDC 2023 – "Advanced Graphics Summit: Reinventing the Wheel for Snow Rendering", but the summary is scalability and ease of use.

The main factors behind this were both how much bigger, geometrically speaking, God of War Ragnarök is than its predecessor and the art goal of having snow in more places than before. One of the art goals for God of War Ragnarök was to show how realms were affected by Fimbulwinter, and in Midgard, this manifested as constant snowfall. This meant the team wanted deep snow the player could interact with everywhere.

The old system was great at representing details and was very malleable, but both the level and material setup were time-consuming and error-prone, so all that control came at too high a cost. Not only that, but the rendering technique we chose for the previous system had some drawbacks and visual artifacts that couldn't really be mitigated.

It was clear that while we could use that technique in other areas (we render the water waves and ripples in both God of War (2018) and God of War Ragnarök with a technique similar to the old snow system), for our terrain, we had to change the core rendering part. Instead of using Screen Space Parallax Mapping (a version of parallax mapping created by our then-Technical Director), we decided to use geometry displacement with hardware tessellation. The main difference is that parallax mapping is a pixel shader technique, while geometry displacement acts at the vertex/geometry level. This eliminates all the screen space limitations and artifacts we had encountered with our version of parallax mapping.
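
As a rough illustration of the difference (not the studio's actual shader code), here is a minimal C++ sketch of the two ideas; sampleSnowHeight() is a hypothetical stand-in for a heightmap texture read that would happen on the GPU:

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical heightmap lookup: 1 = undisturbed snow surface, 0 = carved all
// the way down to the hard ground. A real implementation samples a texture.
float sampleSnowHeight(float u, float v)
{
    return 1.0f; // placeholder: flat, undisturbed snow
}

// Geometry displacement (vertex/domain-shader style): the surface itself moves,
// so silhouettes, shadows, and collisions all see the real displaced shape.
Vec3 displaceVertex(Vec3 position, Vec3 normal, float u, float v, float maxDepth)
{
    float h = sampleSnowHeight(u, v);         // 0..1 snow height
    float offset = (h - 1.0f) * maxDepth;     // carved areas are pushed down
    return { position.x + normal.x * offset,
             position.y + normal.y * offset,
             position.z + normal.z * offset };
}

// Parallax-style mapping (pixel shader technique): the geometry never moves;
// the shader only shifts which texel it reads based on the view direction,
// which is why it suffers from screen-space limitations and artifacts.
void parallaxShiftUV(float& u, float& v, Vec3 viewDirTangent, float maxDepth)
{
    float depth = (1.0f - sampleSnowHeight(u, v)) * maxDepth;
    u += viewDirTangent.x / viewDirTangent.z * depth;
    v += viewDirTangent.y / viewDirTangent.z * depth;
}
```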

Lastly, as the game and the team grow, our job on the engineering side is to always re-evaluate our assumptions to make sure they scale with what the art team is trying to achieve. Snow scalability and tooling were brought up as a concern from the beginning by the art department, and therefore, it was one of the first things we tackled for the project.

1. Regular mesh with no displacement

Goals and Requirements

While achieving a great look with great performance was paramount, the main goal from the beginning was ease of use. The point was to make the process as simple as possible: as close as we could get to "Select the mesh you want displaced, check a checkbox, done".

As we achieved that, we started working on ways to improve the controls and add detail without requiring content to be re-authored, while keeping the goal of scalability in mind. Every solution was systematic, meaning we could easily control its behavior for the entire game with just a few options.

We obviously left room for areas to be customized, but the mantra of the system was: "You don't need to tweak parameters, unless you want to".

2. Only displacement is applied

The Research Behind the New System

We knew from the beginning of the project what the limitations of the old system were, and so we had a phase during pre-production where we tried different approaches. The key at this stage was to focus on what the must-haves of the new technology should be, and use those as the driver of what avenues were worth trying first. Rendering research can take as much time as you are willing to give it, plus more. We knew we wanted to change the way displacement was rendered, so we prioritized research that focused on using geometry to do that.

The other key factor was to constantly iterate with artists and to make them part of the research process. We asked them to give us a breakdown of how they'd like the system to work, what kind of features they needed, and what the most important features were. This allowed us to focus our research on the solutions that took all our requirements into account and was key to having a short but effective R&D process. You can learn more about the research process by checking out my GDC 2023 session.

3. A normal map is applied in the displaced area

Long-Term Planning

When I mention scalability and ease of use as the main goals, the intent is to build on this technology for years to come. As I mentioned in my GDC presentation, this is achieved mainly by structuring the systems in a modular way, where each system is developed so that its output can be changed if need be. We obviously optimized for our use case, but we did so while leaving ourselves open to changing parts of the systems and building on them in the future.

That doesn't mean that next time we can re-use this technology as-is without changing a line of code. The idea is to get as close to that as possible, so we won't need to reinvent the wheel again and can instead capitalize on most of the research and tooling built for this system, only changing small parts or optimizing for new scenarios without having to rewrite everything.

The presentation goes into more detail about this, but the main goal is to build the components of your systems in a modular way while balancing abstraction, performance, and the visual goal for the project. It's an approach that requires more care and, at first, can take a bit longer, but it pays for itself tenfold in the long run. What's funny is that during the development of God of War Ragnarök, we ended up re-using big parts of the snow system for other systems (mainly the geometry-enhancing tessellation options that are active on PS5), so this approach started paying dividends even before the project was done.

4. Another detail normal map is applied, read from a top-down alpha mask that is generated at runtime and written to by VFX every time a character moves around

Finding the Acceptable Compromises

When it comes to compromises we had to make during the development, it's important to remember that most solutions have compromises. The key is to find the compromises that work for the team, the tools, and the goals you have.

I feel like there's an analogy to the thought experiment of the "tree falling in a forest, but no one is there to hear it": if you compromise in an area you don't care about, is that really a compromise? We knew from the beginning we couldn't compromise on performance; we ended up with a system that performs as well as, and often faster than, the previous one. We couldn't compromise on usability; in fact, we needed to improve it, and we did. We also couldn't compromise on visual quality, and in fact, the art team pushed the bar even higher for what the system was supposed to do and how it was supposed to scale on next-gen hardware on PS5.

The first iteration of the system tackled usability and ease of use first, but it didn't look as good. Once we had a good foundation, we rebuilt the visual features on top of it and brought the quality back up to the high bar the art team had set. By the end, we had a system that gave artists all the control they wanted without compromising how easy it was to set up and use in the generic case, and that could easily scale from the base PS4 to Quality Mode on the PS5.

The key to doing this was, again, to involve the art team from the beginning and to iterate with them on how and what the system had to expose to cover all use cases.

5. VFX cards and VFX models are spawned and they bounce and collide with the displaced geometry (GPU collision)

Making the Collisions Work

Design and artistic intent were key here: given the solution is used to represent soft materials, there was no expectation that gameplay physics objects would collide with the top of the snow. In fact, from the beginning, one of the goals was to have physics objects carve snow as they fell through it. This was achieved by attaching the same capsules that we use on characters to our physics rigid bodies. From a rendering perspective, there is no difference between a character and a generic object as far as the snow is concerned: you can have carving shapes attached to anything in the world, and they will push the snow down depending on how the object intersects with the snow plane, and where the hard ground below the snow is.
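
A rough illustration of that carving idea, written as CPU-side C++ for readability (the types and names are hypothetical; in the game this kind of update would run on the GPU against a top-down displacement map):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Capsule {   // carving shape attached to a character or rigid body
    Vec3 a, b;     // axis segment endpoints
    float radius;
};

// Horizontal (XZ-plane) distance from a point to the capsule's axis segment.
static float distanceXZToSegment(float x, float z, const Vec3& a, const Vec3& b)
{
    float abx = b.x - a.x, abz = b.z - a.z;
    float apx = x - a.x,   apz = z - a.z;
    float len2 = abx * abx + abz * abz;
    float t = len2 > 0.0f ? std::clamp((apx * abx + apz * abz) / len2, 0.0f, 1.0f) : 0.0f;
    float dx = x - (a.x + abx * t), dz = z - (a.z + abz * t);
    return std::sqrt(dx * dx + dz * dz);
}

// Snow height for one texel column of the top-down map: each carver lowers the
// snow toward the underside of its capsule, and the hard ground is the floor.
float carvedSnowHeight(float x, float z, float snowTop, float groundHeight,
                       const std::vector<Capsule>& carvers)
{
    float height = snowTop;
    for (const Capsule& c : carvers) {
        float d = distanceXZToSegment(x, z, c.a, c.b);
        if (d >= c.radius)
            continue;                                  // column not under this carver
        float axisY = std::min(c.a.y, c.b.y);          // simplified: use the lower endpoint
        float underside = axisY - std::sqrt(c.radius * c.radius - d * d);
        height = std::min(height, underside);          // push the snow down
    }
    return std::max(height, groundHeight);             // never carve below the ground
}
```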

At the same time, we added details to decorate the snow and simulate small, light chunks rolling on top. Since all the information lived on the GPU, we had to develop a GPU system to simulate that. We added a feature to our VFX system that reads the displacement information, which is carved every frame at runtime. This allows particles to collide and bounce on snow without any screen-space limitations like depth-buffer collision.
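
As a sketch of the idea (hypothetical names, CPU-side C++ for clarity; in the game this is GPU work inside the VFX system), the key point is that collision reads the runtime displacement map rather than the screen-space depth buffer:

```cpp
struct Vec3 { float x, y, z; };

struct Particle {
    Vec3 position;
    Vec3 velocity;
};

// Placeholder for a lookup into the runtime-carved displacement map.
float sampleDisplacedHeight(float x, float z)
{
    return 0.0f; // placeholder: flat snow surface at height 0
}

void updateParticle(Particle& p, float dt, float restitution)
{
    p.velocity.y += -9.81f * dt;            // gravity
    p.position.x += p.velocity.x * dt;
    p.position.y += p.velocity.y * dt;
    p.position.z += p.velocity.z * dt;

    // Bounce off the displaced snow surface; this keeps working even when the
    // particle is off-screen, because nothing depends on what the camera sees.
    float surface = sampleDisplacedHeight(p.position.x, p.position.z);
    if (p.position.y < surface) {
        p.position.y = surface;
        p.velocity.y = -p.velocity.y * restitution;   // damped bounce
    }
}
```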

6. Lastly, dynamic persistent models are spawned around the displacement with rules decided by art

On top of that, we developed a fully GPU-driven asset spawning system that allows artists to select a set of meshes and have them spawn around the camera on meshes with this displacement tech: there's a set of rules that describe the size and location of meshes with respect to where the displacement is, and how far the mesh is from it. This allowed us to use the same technology with just a few data changes to represent deep snow in Midgard, and deep sand in Alfheim. This is another one of those powerful pieces of technology that we'll surely build on for future titles.
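
The rules themselves are just data. A hypothetical sketch of what one such rule might contain (field names are illustrative, not the actual tooling) shows how the same system can describe deep snow in one realm and deep sand in another purely through data changes:

```cpp
// Illustrative, data-driven spawn rule for scattering meshes around the camera
// relative to the carved displacement. All fields are hypothetical.
struct DisplacementSpawnRule {
    int   meshSetId;            // which set of debris/clump meshes to scatter
    float minDistFromCarve;     // spawning starts this far from a carved edge
    float maxDistFromCarve;     // ...and stops this far from it
    float minScale, maxScale;   // size range for spawned instances
    float maxDistFromCamera;    // only spawn in a radius around the camera
    float density;              // rough instances-per-square-meter budget
};
```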

Optimization

Performance was key in shipping God of War Ragnarök, so we had to develop a system that would perform well on the base PS4, and that could scale its features on next-gen hardware on PS5.

There are two main things to keep in mind for the performance of this system: tessellation parameters and the number of triangles we tessellate. Since we use hardware tessellation for geometry displacement, we have to find the right balance between quality (tessellation factor) and performance.

It's hard to give exact numbers for hardware tessellation since there are many control variables that can change the performance profile: what your output vertex struct size is, how many triangles you are tessellating, or whether you are vertex or fragment shader bound. The rule of thumb on most hardware is that hardware tessellation has a fixed cost to use: this means you should use it only when it's really worth it, and try to tessellate fewer triangles more heavily rather than tessellating a lot of triangles just a little bit. Make your system and its numbers tunable and find the best balance for the hardware you are targeting.
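
A minimal sketch of what "make it tunable" can look like in practice (illustrative C++ with hypothetical parameter names, not the engine's actual settings): one function derives the tessellation factor from a handful of exposed parameters, so the quality/performance balance can be re-tuned per platform.

```cpp
#include <algorithm>

struct TessellationTuning {
    float maxFactor;        // e.g. higher in PS5 Quality Mode, lower on base PS4
    float minFactor;        // floor so displaced patches never go completely flat
    float fullDetailDist;   // within this distance, always use maxFactor
    float falloffDist;      // fade from maxFactor to minFactor over this range
};

float tessellationFactor(float distanceToCamera, const TessellationTuning& t)
{
    float fade = (distanceToCamera - t.fullDetailDist) / t.falloffDist;
    fade = std::clamp(fade, 0.0f, 1.0f);
    return t.maxFactor + (t.minFactor - t.maxFactor) * fade;  // lerp toward minFactor
}
```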

The second and most important part of shipping this system was finding a way to draw as few triangles as possible with hardware tessellation given its cost. This was particularly important given that meshes with this feature are usually big terrain meshes.

I have a detailed breakdown of how we achieved this in the GDC presentation, but the idea is to use a mesh-shader-like approach (without using actual mesh shaders, since we had to implement this on PS4, which doesn't support them): divide the mesh into sub-chunks, or meshlets, and then use indirect draw calls to limit the number of chunks that are drawn with hardware tessellation. Each terrain mesh was, in fact, two draw calls: one with hardware tessellation, and one with the regular vertex shader pipeline. A compute shader performed the camera culling and decided which chunks fell into the displacement area and which didn't, and only the needed chunks were then drawn with hardware tessellation. This was fundamental to pushing the quality as high as possible on the base PS4.
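
As a rough illustration of that split (hypothetical types, CPU-side C++ for readability; the real classification happens in the compute shader that fills the indirect draw arguments):

```cpp
#include <vector>

// Each terrain mesh is pre-split into meshlets; every frame, each visible
// meshlet is routed to one of two draw lists, so hardware tessellation is only
// paid for where displacement is actually happening.
struct Meshlet {
    float minX, minY, minZ;            // world-space AABB of this terrain chunk
    float maxX, maxY, maxZ;
    unsigned firstIndex, indexCount;   // range in the terrain's index buffer
};

// Placeholders for the two tests the compute shader performs.
bool intersectsFrustum(const Meshlet&)         { return true; }
bool overlapsDisplacementArea(const Meshlet&)  { return false; }

void classifyMeshlets(const std::vector<Meshlet>& meshlets,
                      std::vector<Meshlet>& tessellatedDraw,   // hardware tessellation
                      std::vector<Meshlet>& regularDraw)       // plain vertex pipeline
{
    for (const Meshlet& m : meshlets) {
        if (!intersectsFrustum(m))
            continue;                          // camera culling
        if (overlapsDisplacementArea(m))
            tessellatedDraw.push_back(m);      // pay for tessellation only here
        else
            regularDraw.push_back(m);          // cheap path for untouched terrain
    }
}
```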

Paolo Surricchio, Senior Staff Rendering Programmer at Santa Monica Studio

Interview conducted by Arti Burton
