Today we are going to look at the latest game from the French developer Asobo Studio. I first saw footage from this game when a colleague shared the 16-minute gameplay trailer last year. The rats vs. light gameplay caught my attention, but I didn't really consider playing the game. That was until it got released and a lot of people started saying that it looks like it's made with Unreal, even though it's not. I was curious to see how the rendering works and how much it is really inspired by Unreal. Another interesting aspect is how the swarm of rats is rendered, because it looks really convincing in the game and it's one of the key gameplay elements.
When I started trying to capture the game, I thought I would need to give up because nothing seemed to work. Even though the game uses DX11, which probably enjoys the best support of all the APIs right now, I wasn't able to get any of the tools to cooperate. The game crashed on startup if I tried to use RenderDoc, and the same happened with PIX. I still don't know why, but fortunately I managed to get some captures using NSight Graphics. As always, I put all settings to the maximum and started looking for frames to analyze.
After taking a couple of captures I decided to use one from the very beginning of the game for the frame breakdown. There doesn’t seem to be much difference between the captures and this way I can make sure to avoid any spoilers.
As always, let’s start with the final result of the frame:
The first thing I noticed was that the balance of rendering events in this title is completely different from what I've seen in other games before. There are a lot of draw calls, which is normal, but surprisingly few of them are for post-processing. While in other games the frame goes through many steps after the colors are rendered to reach the final result, in A Plague Tale: Innocence the post-process stack seems to be very small and optimized down to just a few draw/compute events.
The game starts out by rendering a GBuffer with 6 render targets. Interestingly, the render target formats are all 32-bit unsigned integer formats (except for one) instead of RGBA8 colors or other data-specific formats. This posed a challenge because I had to decode every channel manually using the Custom Shader feature of NSight. I spent a lot of time trying to figure out what values had been encoded into the 32-bit targets, but there is a chance I still missed something.
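To give a feel for what that manual decoding involves, here is a minimal Python sketch of pulling packed fields out of a 32-bit word: either four 8-bit channels, or a 24+8 split like the first target appears to use. The channel order and bit layout here are my assumptions for illustration, not confirmed details of the game's formats.

```python
def unpack_rgba8(word: int) -> tuple:
    """Split a 32-bit unsigned integer into four 8-bit channels.
    Channel order (R in the low byte) is an assumption; the game's
    actual packing may differ."""
    r = (word >> 0) & 0xFF
    g = (word >> 8) & 0xFF
    b = (word >> 16) & 0xFF
    a = (word >> 24) & 0xFF
    return r, g, b, a

def unpack_24_8(word: int) -> tuple:
    """Split a 32-bit word into a 24-bit value and an 8-bit value,
    mirroring what the first target seems to do (shading values in
    24 bits, hair data in 8 bits)."""
    low24 = word & 0xFFFFFF
    high8 = (word >> 24) & 0xFF
    return low24, high8
```

In the actual frame this decoding happens in NSight's custom visualization shaders rather than on the CPU, but the bit twiddling is the same idea.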
The first target contains some kind of shading values in 24 bits and some other values for the hair in 8 bits.
The second target looks like a traditional RGBA8 target with different material control values in each channel. My understanding is that the red channel is metalness (not sure why some of the leaves are marked as well), the green channel looks like a roughness value, while the blue channel is a mask of the main character. The alpha channel wasn’t used in any of the captures I took.
The third target again looks like an RGBA8 with the albedo in the RGB channels and the alpha was fully white in every capture I took, so I’m not sure what that was supposed to do.
The fourth target is an interesting one, as it's almost fully black in all my captures. The values look like a mask for some of the foliage and all of the hair/fur. Maybe something related to translucency.
The fifth target is probably some kind of encoding of the normals, because I haven’t seen them anywhere else and the shader looks like it’s sampling the normal maps and eventually ends up outputting into this target. With that said, I haven’t figured out how to visualize them properly.
GBuffer 5 depth
GBuffer 5 mask
This last target is an exception because it uses a 32-bit float format. The reason for this is that it contains the linear depth of the image, and the sign bit encodes another mask, again marking the hair and some of the foliage.
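This trick works because linear depth is never negative, so the sign bit of the float is free to carry one bit of extra data. A Python sketch of how such a packing could look (my own reconstruction of the idea, not the game's shader code):

```python
import struct

def encode_depth_mask(depth: float, mask: bool) -> int:
    """Store non-negative linear depth in a 32-bit float's bit pattern,
    using the otherwise-unused sign bit as a 1-bit mask (hair/foliage)."""
    bits = struct.unpack('<I', struct.pack('<f', depth))[0] & 0x7FFFFFFF
    if mask:
        bits |= 0x80000000
    return bits

def decode_depth_mask(bits: int):
    """Recover the depth and the mask bit from the packed word."""
    mask = bool(bits & 0x80000000)
    depth = struct.unpack('<f', struct.pack('<I', bits & 0x7FFFFFFF))[0]
    return depth, mask
```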
After the GBuffer is finished, the depth is downsampled in a compute shader, and then the shadow maps are rendered (directional CSM from the sun and depth cubemaps for point lights).
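The depth downsample can be imagined as a simple 2x2 reduction per output pixel. Below is a toy Python version over a flat array; I'm using `max` as the reduction here, but whether the game takes the min, the max, or both per block is something I didn't verify.

```python
def downsample_depth(depth, w, h):
    """Halve a w*h depth buffer (flat row-major list) by reducing each
    2x2 block with max. The game does this in a compute shader; the
    choice of reduction operator here is an assumption."""
    out = []
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            block = [depth[(y + dy) * w + (x + dx)]
                     for dy in (0, 1) for dx in (0, 1)]
            out.append(max(block))
    return out
```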
With the shadow maps done, the lighting can be calculated, but before that god rays are rendered into a separate target.
During the lighting phase a compute shader is dispatched to calculate SSAO.
Lighting is added from cube maps and the local lights. All these different light sources in combination with the targets rendered above, end up creating the lit HDR image.
The forward elements are added on top of the lit opaques but in this scene they are not very visible.
After all the color has been accumulated we are almost done; there are only a few post-process steps and the UI left.
The color is downsampled in compute shaders and then consecutively upsampled to create a very nice soft bloom effect.
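The shape of that downsample/upsample chain can be sketched in a few lines. This is a deliberately simplified 1-D toy model in Python (pair-averaging instead of proper 2-D filters, duplication instead of filtered upsampling), just to show how accumulating every level on the way back up produces a wide, soft glow:

```python
def halve(img):
    """Downsample a 1-D 'image' by averaging adjacent pairs
    (stand-in for the game's 2-D filtered downsample)."""
    return [(img[i] + img[i + 1]) / 2 for i in range(0, len(img), 2)]

def upsample_add(small, big):
    """Upsample by duplication and add onto the next larger level."""
    up = [v for v in small for _ in range(2)]
    return [a + b for a, b in zip(up, big)]

def bloom(img, levels=3):
    """Progressively downsample, then walk back up accumulating each
    level; the repeated blurring-and-summing is what softens the bloom.
    len(img) must be divisible by 2**levels."""
    chain = [img]
    for _ in range(levels):
        chain.append(halve(chain[-1]))
    acc = chain[-1]
    for level in reversed(chain[:-1]):
        acc = upsample_add(acc, level)
    return acc
```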
After compositing all the previous results, adding some camera dirt, color grading and finally tonemapping the image, we arrive at the scene colors. Overlaying the UI gives us the final image from the beginning of this article.
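I couldn't identify which tonemapping operator the game uses, so purely as an illustration of the step, here is the classic Reinhard operator, which maps unbounded HDR values into [0, 1):

```python
def reinhard(hdr):
    """Classic Reinhard tonemap: c / (1 + c), per channel.
    Illustrative only; the operator the game actually uses is unknown."""
    return [c / (1.0 + c) for c in hdr]
```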
There are a couple of interesting things about the rendering that are worth mentioning:
- Instancing is used only for certain meshes, seemingly only for foliage. All other objects are rendered with separate draw calls.
- The objects seem to be sorted roughly front to back, with some exceptions.
- There doesn’t seem to be any effort to batch draw calls by material parameters.
As I mentioned at the beginning, one of the reasons I wanted to look into this game was to see how the swarm of rats has been rendered. The solution is somewhat disappointing, because it looks like it’s mostly brute force. Here I will be using screenshots from another scene in the game, but I believe there aren’t any spoilers to be afraid of.
As with other objects, there doesn’t seem to be any instancing for rats — that is, until we reach the distance where the game switches to the last LOD. Let’s see how it works.
Rats have 4 LOD levels. Interestingly the third level has the tail curled next to the body and the last level doesn’t have a tail at all. This probably means that animations are only active on the first two levels. Unfortunately, NSight Graphics seems to lack the tools to verify this.
In the scene captured above the number of rats rendered:
- LOD0 – 200
- LOD1 – 200
- LOD2 – 1258
- LOD3 – 3500 (instanced)
This suggests that there is a hard limit on how many rats can be rendered in the first two LODs.
In the capture I took I couldn’t figure out any logic regarding which rats are in which LOD. Sometimes rats that are close are not very detailed and sometimes rats that are barely visible have higher detail.
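One naive way to get hard per-LOD caps like the counts above is to sort by camera distance and fill fixed budgets for the detailed LODs. The sketch below does exactly that in Python; the budget values match my capture, but the far-distance threshold is made up, and as noted, the game's real selection logic didn't look strictly distance-based.

```python
def assign_lods(distances, budgets=(200, 200), far=50.0):
    """Assign a LOD index (0 = most detailed) to each rat.
    The nearest rats fill fixed budgets for LOD0 and LOD1 (the 200/200
    counts observed in the capture); everything else gets LOD2, or the
    instanced LOD3 beyond the 'far' threshold (a hypothetical value)."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    lods = [2 if d < far else 3 for d in distances]
    cursor = 0
    for lod, budget in enumerate(budgets):
        for i in order[cursor:cursor + budget]:
            lods[i] = lod
        cursor += budget
    return lods
```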
A Plague Tale: Innocence is a really interesting game rendering-wise. The results are undeniably impressive and they serve the gameplay really well. As with every proprietary rendering engine, it would be great to have a more detailed breakdown from the developers, especially since I wasn’t able to verify some of my theories. I hope this will reach someone at Asobo Studio and they will see that there is interest.
As always, if there is something more you would like to know, or you would like to see your favorite game analyzed, please leave a comment or let me know via Twitter.