Part 3: Types of Normal Maps
Like many things in this industry, normal maps have evolved throughout the years. There are several types of normal maps, and they can look quite different from one another. Here's a compilation of the ones I can remember, but there might be others as well.
Tangent space normal map: this is the most common normal map nowadays, and the one we have been talking about in the previous parts of this tutorial. It modifies the normal direction of the model based on the normal direction of its vertices (so we must control the vertex normals of our low-poly).
Mikk tangent space normal map: not every program calculates the tangent basis (the per-vertex frame used to interpret a tangent space normal map) the same way. This can lead to differences in how a normal map looks in different engines, so we should bake the normal map using the same method as the rendering program will use (this is called using a "synced workflow").
Mikk (MikkTSpace) is a proposed way to calculate the tangent basis that aims to be universal, so that every program interprets the normal map in the same way. Workflow-wise, this means that we can use a low-poly with all its vertex normals averaged (one smoothing group or all edges smooth), bake a normal map using the Mikk tangent space, and it will look just like the high-poly without having to deal with smoothing errors or separating the hard edges in the UVs. I will upload a tutorial on how to do this in the future.
Keep in mind that this is still a tangent space normal map; the difference is that the tangent basis is calculated in a way that is universal and interchangeable between programs.
2-channel tangent space normal map: it turns out that, using the information stored in two of the three channels of a normal map, the computer can calculate the third one, reducing memory usage at the cost of a little extra processing. Since memory is usually the bigger concern, this optimization is commonly used and some engines apply it automatically (e.g. Unreal Engine when we set a texture's compression settings to "Normalmap"). Freeing up a channel on our normal map allows us to reduce the texture size or use the channel for metalness/roughness/opacity.
Usually, the discarded channel is the blue one, so these textures look yellow. Since some engines apply this optimization automatically, you might come across these textures from time to time in your project.
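As a sketch of how this reconstruction works (the function name is mine; a real engine does this on the GPU in the shader): since a normal is unit length, the third component follows from the other two.

```python
import math

def reconstruct_z(r, g):
    """Rebuild the discarded blue (Z) channel of a 2-channel normal map.

    r and g are the stored 8-bit red/green values (0-255). Each channel
    maps to the [-1, 1] range, and because normals are unit length,
    z = sqrt(1 - x^2 - y^2).
    """
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    # Clamp to 0 in case compression pushed x^2 + y^2 slightly above 1.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return round((z + 1.0) * 0.5 * 255.0)
```

For the flat color (128, 128), this recovers a blue value of 255, matching the familiar look of a full 3-channel map.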
World space normal map: instead of modifying the direction of the vertex normals, this normal map ignores them completely and changes how the low-poly bounces light in world space (when baking, the vertex normals are treated as if they were aligned with the world axes).
Think of the tangent space normal map as "you should reflect light to your right" and a world space normal map as "you should bounce light to the east".
These normal maps are more colorful and have more prominent gradients; they were used because one didn't have to worry about the low-poly vertex normals, but they have a drawback - you can't move the model as it will look strange (we are setting a face so it always bounces the light to the east. If you rotate it, the face will keep bouncing the light to the east).
World space normal maps are very rarely used in games nowadays, but they can be used to create some nice textures, i.e. the blue channel shows how your model should bounce the light that comes from the top of the model. You can use this to add a painted light to the texture.
Keep in mind that the world coordinates change between applications: in Unreal, 3D Studio Max, and Blender the Z-axis is up, while in Maya, Modo, and Cinema4D, Y is up. This means that world space normal maps can break when changing between different applications.
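As an illustration of why this breaks (the mapping below is one common convention, not a universal rule, and the function name is mine): moving a direction from a Z-up app to a Y-up app requires swapping axes, and a baked world space normal map never gets this conversion.

```python
def z_up_to_y_up(n):
    """Remap a direction vector from a Z-up convention to a Y-up one.

    One common mapping is (x, y, z) -> (x, z, -y); the exact swap and
    signs depend on each application's axes and handedness. A world
    space normal map stores raw directions in one convention, so it
    looks wrong in an app that uses another convention.
    """
    x, y, z = n
    return (x, z, -y)
```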
Object space normal map: this is an upgraded version of the world space normal map, and it works very similarly. The idea is that when the model moves in the world, the normal map reorients itself relative to the object.
Think of it as "this face will bounce light to the right of the model". If you rotate the model in the world, the normal map would reflect this change. However, this doesn't work with deforming meshes, as it only takes into account the object transform. This is the reason that tangent space normal maps are more widely used today.
Bent normal maps: they combine the information of an AO map and a normal map, bending the normal directions so that light tends to bounce towards the parts of the model that are exposed to the light.
These are mostly used for improving ambient occlusion and avoiding an effect called "light leaking", where a model could bounce light from parts that light shouldn't reach. I have never personally used them, but I would investigate them if I had a noticeable "light leak". You can find more info here, here and here.
16-bit normal maps: sometimes, if we have a very smooth gradient in our normal map, we can see some banding. This banding comes from the lack of enough colors to represent the smooth gradient, usually from texture compression.
Even then, sometimes we have a large and smooth surface and these problems appear even with an uncompressed texture. In this case, we can use 16-bit normal maps, usually as .tga files, which have more colors and are larger in size than the usual 8-bit normal maps.
You can learn much more about 16-bit normal maps from the god of tutorials himself, Earthquake.
Keep in mind that there are other techniques that can be used to mitigate this problem, such as removing the normal map altogether (use only geometry to represent this smooth surface), making the low-poly more similar to the high-poly so that the gradients are less noticeable, or using dithering.
So, which one should we use?
Mikk tangent space normal maps are the best option 90% of the time. Unlike with world and object space normal maps, your model will be able to deform and the normal directions will remain correct.
You should bake your normal map using the same tangent space as the rendering program. The most used tangent space is Mikk, so you should use it when possible.
And, if your normal map is showing banding, consider using 16-bit normal maps or one of the other solutions mentioned above.
Those are basically all the normal maps I have encountered that I can remember. If you know about some other types of normal maps, let me know so I can include them in this tutorial!
P.S.: Thanks to Shnya for his feedback and help.
Part 4: Normal Map Troubleshooting
Here's a compilation of normal map problems I have seen throughout the years, and some of the solutions I know to fix them.
Problem: there are "black lines" or "insets" at the edges of my model.
This happens when you have hard edges in your model: at a hard edge, each vertex has split normals, one perpendicular to each connected face, and this can cause the baker to miss some details (leaving those black lines on your model).
Solution: normal map bakers take this problem into consideration when creating normal maps and try to mitigate it by calculating a little extra information beyond the vertex normals, but in order to store it, they need a gap between the polygon UVs.
Here's a more detailed explanation, but the rule of thumb is very simple: whenever you have a hard edge in your model, separate the faces connected by it in your UVs.
Problem: my normal map looks VERY wrong, especially from some angles.
This problem can appear for multiple reasons, let's discuss some of them:
1. You are using the wrong tangent space: The normals on your low-poly that we are trying to bend using a normal map can be calculated differently in the baking program compared to the program you are using to render the model. If this calculation differs, your normal map can look very strange, especially from some angles.
Alternatively, it is possible that you are using a world space normal map as a tangent space normal map. In this case, make sure you are baking a tangent space normal map and using it as such.
Solution: always try to use the Mikk tangent space basis to calculate your normal maps. This is a standardized way of calculating normals that was made to avoid these problems. If your normal map baking program can't use Mikk, try using a program such as Handplane to switch between one tangent space and the other.
2. You are using gamma correction on your normal map: normal maps are not regular images with color information. They carry surface normal information and don't behave as color images. Gamma correction is an adjustment to the colors of an image and can change the color of your normal map in unwanted ways. To remove the gamma correction on your normal map, change the color space of your normal map to linear/linear color/raw, or untick the sRGB option in Unreal Engine.
3. You are not using a tangent space normal map as a tangent space normal map: make sure your engine is not using your tangent space normal map as an object space normal map, a bump map, displacement map, etc.
4. Your low-poly normals are different in your baking program from the low-poly normals in your rendering program: this can happen if you lose smoothing groups/hard edge information during the export/import when you are using custom/weighted normals and your rendering program doesn't support them or discards this information.
In this case, compare the low-poly in both apps and, if they look different, try changing the import/export settings or the file format you are using (OBJ files often lose custom normal information), and check your rendering program's compatibility with custom normals.
Problem: how do I make a normal map of a spiky cone?
Solution: you... don't. You don't need a normal map for everything.
The spiky cone is a classic example of this, but there are many other places where normal mapping just isn't the solution.
We use normal maps to change the direction of our low-poly normals. Sometimes the direction of our normals is perfectly fine and doesn't need any adjustments; other times, the normals of our low-poly are extremely bent (such as in the case of a spike) and details from the high-poly don't project onto the low-poly surface properly. In these cases, I simply erase the normal map details using this color:
This color is 50% red, 50% green and 100% blue, and doesn't change the normal direction of a tangent space normal map, so you can use it to erase details where the projection isn't good.
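To see why this color is neutral, here is a minimal decoder (the function name is mine): each channel maps 0-255 to the -1 to 1 range, so (128, 128, 255) decodes to a vector pointing straight along the unmodified surface normal.

```python
def color_to_normal(r, g, b):
    """Decode an 8-bit tangent space normal map color into a normal.

    Each channel maps [0, 255] to [-1, 1]. The neutral color
    (128, 128, 255) decodes to (0, 0, 1): no sideways bend at all,
    which is why painting it erases normal map detail.
    """
    return tuple(round(c / 255.0 * 2.0 - 1.0, 2) for c in (r, g, b))
```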
The spiky cone is just an example of one case where normal mapping might not solve your problems. What's important to remember is that there are some cases where a normal map is not the best solution. Normal maps are limited and we can't expect them to do what we need for every situation. Sometimes, we spend a lot of time trying to make a normal map work when we could just add the details to the diffuse texture or the low-poly, and not rely on the normal map for that specific detail.
Problem: the details in my model look inverted.
This is a very common problem and can be seen in a lot of video games, even AAA.
As we saw in the first part of this tutorial, normal maps are textures that use the red, green and blue channels of a texture to change how light reflects from the surface of the model when it comes from the side, top and front respectively (keep in mind this is a simplified explanation and not 100% correct).
The problem is, some apps consider that the green channel should show the model as lit from below, and others consider that it should show the model lit from above. This is sometimes referred to as normal map "handedness":
- OpenGL apps (right-handed, positive green channel): Blender, Maya, Modo, Toolbag, Unity.
- DirectX apps (left-handed, negative green channel): 3DStudio Max, CryEngine, Source Engine, Unreal Engine.
- Substance Painter can work with both and export both types of normal maps.
Solution: invert the green channel of your normal map. Most game engines will have the option in the textures to invert the normal map, or you can manually invert the green channel of your texture in Photoshop (navigate to the channels tab, select the green channel and press Ctrl+I).
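If you'd rather script the flip, the operation is trivial (a sketch over a flat list of RGB tuples; a real pipeline would do this on image data with an image library):

```python
def invert_green(pixels):
    """Convert a normal map between OpenGL and DirectX conventions
    by inverting the green (Y) channel of every (r, g, b) pixel."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]
```

Note that red and blue are untouched; only the up/down component flips.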
Problem: some parts appear flat/missing some detail.
When baking normal maps, imagine that the baking program casts rays from the surface of your low-poly, following your low-poly normals, until the rays hit the high-poly. The baking program then records the surface direction of the high-poly at each hit point and stores this information in the normal map.
The rays that have been cast can't travel forever, because they could hit a faraway part of your high-poly and bend incorrectly, so the baking program limits how far away these rays can be cast and, sometimes, the rays could be stopped before they even hit the high-poly at all. In this case, we lose details and our normal map has zones of flat color.
Solution: depends on how your baking program lets you control the baking distance:
- Some programs will only look for details outside your low-poly and ignore what's "inside" it (though most modern bakers will look in both directions). In this case, adjust your models so that the low-poly completely fits inside your high-poly.
- Other programs such as Max will use a cage, an "extruded" version of your low-poly that you can modify to precisely control the limit of the baking process.
- Other programs let you set the baking distance using a number (max frontal and rear distance in Substance Painter).
You can also try to make the low-poly and/or the high-poly more similar to each other so that the rays can get every detail of your model. Another option is to bake two normal maps using different cage distances and mix them in different parts of your textures. Some normal map purists might scream at you, so tighten your headphones.
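The distance-limited rays described in this section can be sketched as a toy function (the names and the direct hit inputs are mine; a real baker intersects each ray with actual geometry):

```python
FLAT = (128, 128, 255)  # the neutral tangent space color

def bake_texel(hit_distance, captured_color, max_distance):
    """Toy model of a single baking ray.

    If the high-poly hit lies within the allowed distance, the
    captured normal is baked; otherwise the ray is cut off and the
    texel falls back to the flat color - producing the zones of
    flat color described above.
    """
    if hit_distance is None or hit_distance > max_distance:
        return FLAT  # ray missed or stopped short: detail is lost
    return captured_color
```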
Problem: my normal map has distorted details.
This is a very typical problem. It happens when our low-poly normals don't align properly with the high-poly details, so they appear bent (in reality they are perfectly aligned if you look from the vertex normal direction). This usually happens because you have some faces forming an extreme angle.
Solution: I wrote more extensively about this topic in the second part of the tutorial, but the general solutions are:
- Soften your extreme angle by adding a bevel.
- Convert the edge of your extreme angle into a hard edge/separate the faces into different smoothing groups.
- Use custom normals/weighted normals.
Problem: my normal map looks pixelated or has bands.
Earthquake (AKA the god of normal maps) wrote a very good explanation of this problem here.
If your low-poly and high-poly are very similar, most of your normal map will have the base normal map color, with a different color where your low-poly differs from your high-poly.
If we have the opposite situation and your low-poly and high-poly are very different, the normal map will have much higher color variety, and gradients will start to appear:
These soft gradients are troublesome because we need a lot of colors to represent them, and the most common ways of compressing textures are based on reducing the total number of colors.
1. Make your low-poly more similar to the high-poly: this way, the normal map has to do less work, and it will look more similar to the first image, avoiding these large and soft gradients. Modifying the normals of your low-poly so that they align better with the high-poly could also help.
2. Use 16-bit normal maps: by default, most images use 8-bit color depth. This means that each color channel of your texture can only use 256 different values (2^8) to cover the range between 0 and 1.
When we have soft gradients we might see bands in our model, because the image doesn't have enough values per channel to represent such a small change of color.
16-bit images can use 65,536 different values (2^16) for each channel, which provides a lot more range for soft gradients. Be aware that 16-bit images are larger in size than 8-bit ones (because they carry more information). Also, keep in mind that bit depth is sometimes counted across all channels: an 8-bit RGB image is often called a 24-bit image, or a 32-bit image when it has an alpha channel.
There are also images with higher bit depth, but they are not used for normal maps as 16-bit is more than enough.
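To see the difference in numbers (a small sketch; the quantize helper is mine): quantizing a smooth 0-1 ramp at 8 bits collapses it into 256 steps, while at 16 bits every sample stays distinct.

```python
def quantize(t, bits):
    """Snap a 0-1 value to the nearest representable channel value."""
    levels = 2 ** bits - 1
    return round(t * levels) / levels

# A smooth ramp of 1000 samples:
ramp = [i / 999 for i in range(1000)]
steps_8 = len({quantize(t, 8) for t in ramp})    # 256 distinct values: bands
steps_16 = len({quantize(t, 16) for t in ramp})  # 1000 distinct values: smooth
```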
3. Use dithering: lack of colors in our textures is a problem that has been around for decades, and one solution that appeared long ago was to use dithering. The idea is that we alternate pixels in our texture to represent the gradient, and it works fine when you zoom out on the image. You can usually activate it when exporting your texture.
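A naive version of the idea (real exporters use better noise patterns, and the function name is mine): adding sub-step noise before quantizing turns a hard band edge into alternating pixels.

```python
import random

def dither_channel(values, bits=8):
    """Quantize 0-1 values with random dithering.

    A hair of noise (less than one quantization step) is added before
    rounding, so a value sitting between two representable levels
    lands on both of them across neighboring pixels, instead of
    snapping to one level and forming a visible band.
    """
    levels = 2 ** bits - 1
    out = []
    for v in values:
        noise = (random.random() - 0.5) / levels
        v = min(max(v + noise, 0.0), 1.0)
        out.append(round(v * levels))
    return out
```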
4. Make sure your normal map is correctly compressed: when textures are compressed, the computer takes zones of similar color and merges them to create a "patch" of color, reducing the number of colors in your image. This is usually fine for regular images but terrible for normal maps: not only does it destroy your gradients, but it can also merge the information in your color channels. There are special compression algorithms designed for normal maps. Make sure your game engine is interpreting the image as a normal map (usually by selecting an option in your texture asset to mark it as a normal map) and the compression settings will be configured automatically.
Problem: there are some visible pixels on some parts of my model.
The obvious solution would be to increase the size of your UV island for that part of your model or use larger textures, but let's take a look at some less obvious solutions:
1. Bake your final normal map at double resolution and then reduce the size of your image: if you are using a 512x512 texture, bake your normal map at 1024x1024 resolution and then convert the image to 512x512. This way, each pixel of your final texture will take information from 4 pixels, creating a sort of "antialiasing" and reducing the pixelation. This works for other baked images as well, and you will also keep a high-res version of your textures in case you need to increase the detail in some zones later.
Notice how in this image the normal maps have the same resolution, but the one we baked at 1024 looks more rounded and similar to the high-poly because it stored some extra information during the reduction process.
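The averaging step can be sketched like this (grayscale values for brevity; the function name is mine, and image editors do the same job with better filters):

```python
def downsample_2x(img):
    """Box-filter a 2N x 2N image (rows of grayscale values) to N x N.

    Each output pixel averages a 2x2 block of the double-resolution
    bake - the 'free antialiasing' that makes the reduced map look
    smoother than a map baked directly at the small size.
    """
    n = len(img) // 2
    return [
        [(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
          + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
         for x in range(n)]
        for y in range(n)
    ]
```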
2. You can stack your UV islands on top of each other so that they use the same normal map information on different parts of your model. Just make sure you move one side of the model 1 unit outside the UV space so that the baker doesn't try to get details from both sides at the same time. You can go even further and use trim textures or decals for some details to optimize your texture usage.
3. Textures use a pixel grid, and pixels are square. If you have some details that form a line, try to align this line vertically or horizontally. This way, the pixel grid and your texture details will align.
Problem: my model is symmetrical, but the normal map looks different depending on the side.
When applying symmetry to your model, the normal directions can change because the way the faces are connected has changed. Sometimes, this means that you can see a seam right at the center of your model. To avoid it, make sure your low-poly normals right at the center are aligned and adjust the smoothing if needed.
Another possible cause is triangulation: when importing models to a game engine, they are always triangulated and sometimes, this process can change the low-poly normals and some artifacts will appear at the diagonal of your low-poly faces. To avoid this, triangulate the model before baking, bake the normal map and then apply the symmetry modifier.
Finally, here's a small tutorial by Earthquake that helped me understand a little bit more about vertex normals and normal mapping. I talked about the same topics throughout this tutorial series, but I wanted to include it here.
Also, check the polycount wiki for more information about normal mapping.