UE5 Triplanar Deep Dive: From WorldAlignedTexture to High-Quality Normals – Part 1

Alina Ledeneva shared a comprehensive breakdown of how WorldAlignedTexture and WorldAlignedNormal operate under the hood in Unreal Engine 5.

Introduction

Usually, applying a texture to a model requires creating UVs manually. This can take quite some time, especially when you need to build an environment or a prototype quickly. Unreal Engine offers a faster way: triplanar projection.

This technique works as though the surface of an object were lit by three projectors from different directions: from above, from the front, and from the side. Each of them casts its own copy of the texture onto the visible part of the model.

These projections overlap, and where the surfaces turn, the texture from one side smoothly blends into another. As a result, it looks as though the texture naturally wraps around the object from all sides, without seams or stretching.

This approach allows materials to fit objects of any shape naturally and greatly speeds up environment creation (it's most often used with procedural landscapes and modular wall-floor systems).

This article provides a complete deep dive into how triplanar mapping works in UE5, covering both textures and normals from basic principles to practical improvements and final implementation.

My focus is on visual understanding. The math is presented with almost no formulas; the key ideas are illustrated through videos and clear visuals.

In this article, I will:

  • Break down Unreal Engine's built-in WorldAlignedTexture and WorldAlignedNormal functions.
  • Explain the difference between TransformVector and TransformPosition, and why it matters for normals and local space.
  • Modernize the triplanar mapping to work in Local Space, add a shared projection anchor for multiple meshes, take scale into account, and synchronize normals with textures.
  • Add a custom rotation for the projected textures.
  • Explain where the rotation matrix comes from and how it affects projection directions.

By the end, you'll gain not only a full understanding of triplanar logic and ready-to-use material nodes, but also practical recommendations for integrating them into your project.

How the WorldAlignedTexture Function Works

In the video below, you can see the function's behavior on cubes. As they move, the tiled texture appears anchored to world space, producing a seamless result. When the cubes rotate, one texture projection smoothly blends into another. This is a triplanar projection: the texture chooses the most favorable angle and wraps evenly around the shape.

If you look inside the WorldAlignedTexture function, you can see how this logic is implemented in nodes. The function converts the world coordinates of each point into three independent projections and uses them to compose the final color. The system works automatically, regardless of an object's shape, scale, or orientation.

Conceptually, the whole process can be divided into three main steps:

  • Preparing projection coordinates.
  • Creating masks.
  • Final blending.

I'll explain each step in detail with clear visuals.

Preparing Projection Coordinates

At this stage, three sets of UV coordinates are created. Each set corresponds to its own plane and is used for a separate projection.

How it looks in the material graph:

Let's break down what's happening here.

The UV input of a Texture Sample node defines the coordinates the shader uses to sample color or data from the texture. In other words, it's the address of a specific pixel, expressed not in pixels, but in fractions of the texture's width and height, where U is the horizontal coordinate (along the X-axis of the texture), and V is the vertical one (along the Y-axis). If nothing is connected to the UVs input, the engine uses the model's default UV coordinates. If you connect something to this input, you're explicitly telling the shader: "use these coordinates to read color from the texture."

In the screenshot, you can see that the UVs input receives the result of several math operations performed on the Absolute World Position node (the three-dimensional XYZ coordinates of each point in the scene). Now I'll explain why this is done and how it works.

To achieve the triplanar effect, you need to generate UVs for each point with coordinates (X, Y, Z) as if the texture were tiled across the world planes. Simply put, the same texture tile is projected onto the three main world planes, and each of them is treated as a separate UV space.

The TextureSize input parameter controls the texture's scale: by adjusting its value, you change the tiling density.

However, if you map the coordinates directly, (x, y) → (u, v), (x, z) → (u, v), (y, z) → (u, v), the texture appears flipped on the YZ and XZ planes:

To fix this, the axes are inverted by multiplying by –1:

The process of projecting and tiling the texture across world space planes is shown below:
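For reference, here's the same step as a rough HLSL sketch (not the exact engine graph). WorldPos stands for Absolute World Position and TextureSize for the tiling parameter; the minus signs play the role of the "multiply by –1" fix, and the exact sign choice for each plane depends on the orientation you want (for the YZ plane it matches the U = -Y, V = -Z convention used later in this article):

```hlsl
// Build one UV set per world plane; dividing by TextureSize controls the tiling density.
void MakeTriplanarUVs(float3 WorldPos, float TextureSize,
                      out float2 UV_XY, out float2 UV_XZ, out float2 UV_YZ)
{
    UV_XY = float2( WorldPos.x,  WorldPos.y) / TextureSize; // top/bottom projection (along Z)
    UV_XZ = float2( WorldPos.x, -WorldPos.z) / TextureSize; // front/back projection (along Y)
    UV_YZ = float2(-WorldPos.y, -WorldPos.z) / TextureSize; // side projection (along X)
}
```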

Creating Masks

Triplanar projection works by sampling three versions of the color for every point on the surface, one from each projection. To determine which one should be more visible, you need to know the polygon's facing direction. If a surface is oriented more strongly along one axis, the texture from that axis should be more prominent.

This is where masks come in: they control the influence of each projection. In other words, masks act like filters that automatically distribute the texture's intensity depending on the surface angle, controlling from which side and how strongly the texture is applied.

To determine the facing direction, the shader uses the polygon's normal (a vector perpendicular to the surface) from Tangent Space.

Tangent Space is defined by three axes:

  • Tangent X (direction tangent to the surface at that point; it matches the U direction in UV space).
  • Binormal/bitangent Y (complements the tangent to form a left-handed coordinate system; it matches the V direction in UV space).
  • Normal Z (the surface normal, perpendicular to the surface at that point).

Thus, the vector N = (0, 0, 1) in Tangent Space always points along Z and stays unchanged regardless of the object's position or rotation. That's convenient as a reference, but useless for triplanar projection: it doesn't tell you which way the surface actually faces in the scene.

However, if you view this vector in World Space, its components change with the object's rotation. The normal effectively "spreads" across the world coordinate axes. These component values represent each axis's contribution and form the basis for the masks. In the video below, you can see how normal vectors change in World Space as the object rotates. For clarity, the vector's components are visualized as RGB channels (red X, green Y, and blue Z).

To account for both positive and negative directions (for example, X and –X), take the absolute value of each component. Otherwise, the texture would appear only from one side. As a result, you get three masks, one per world axis:

  • X mask → shows how much the surface faces X (controls the strength of the YZ projection).
  • Y mask → reflects the contribution of Y (in practice, unused, because the XZ intensity is derived from the remaining value after X mask).
  • Z mask → corresponds to Z (controls the strength of the XY projection).

If Tangent Space happens to align perfectly with World Space, each mask becomes pure and passes only one projection. In that case, the texture is projected strictly from that side.

When the object rotates, the normal vector decomposes onto multiple axes, which means several projections start to overlap. The blending is smooth: the mask with the higher value contributes more, while the others fade out. Visually, it looks like soft transitions between projections, with emphasis on the dominant axis.

In the video below, note how the intensity of each channel (the brightness of R, G, and B letters) changes depending on the object's rotation angle. The overall face color represents the combined channel values, clearly showing how "pure" or "blended" each mask is at any moment.

You can optionally control the character of these transitions. In WorldAlignedTexture, the masks are derived from the components of the world normal and blended through two sequential Lerps. First, the X component of the normal selects the dominant plane between YZ and XZ. Then, the Z component blends XY into the result. This order produces two noticeable effects:

1. Cosine ≠ Equal Share

Normal components vary with the cosine of the angle, so a raw component value is not an equal share within the pair. At 45°, you might expect a 50/50 split between two projections, but cosine-based masks don't produce that.

In the video below, using the YZ projection as an example, two masking methods are compared:

  • CosineMask: the mask value is taken directly from the raw normal component, N.x.
  • LinearMask: the mask value is computed as the proportion of one axis within the pair, |N.x| / (|N.x| + |N.y|).

You can see that with LinearMask, equal intensity (0.5/0.5) is actually reached at 45° on all faces, while with CosineMask, the 0.5 value first appears on one face and then "shifts" to the other.
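Expressed as code, the two masks compared in the video might look roughly like this (a sketch: N stands for the world-space surface normal, and the names follow the video's labels):

```hlsl
// Cosine-style mask: the raw (absolute) normal component, which is what the built-in masks effectively use.
float cosineMaskYZ = abs(N.x);

// Linear-style mask: the share of X within the X/Y pair, so 45 degrees gives exactly 0.5.
float linearMaskYZ = abs(N.x) / max(abs(N.x) + abs(N.y), 1e-5);
```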

This isn't a drawback; in fact, cosine-based masks produce a more organic transition on spheres and curved shapes. In the next video, you can see how the smoothness of blending changes when adjusting the Contrast parameter. In the linear version, the brightness of both projections changes simultaneously, and their blending zone stays stable. With cosine masks, the faces "fade" one after another, resulting in a more natural look.

2. The "Stickiness" of the Third Projection

This blending order makes the last projection slightly sticky: during the first Lerp, the YZ projection gets a bit more influence area than it would under a strict ratio-based mask. Then, during the second Lerp, the XY projection gains a larger share relative to the mixed XZ/YZ result. This effect is most noticeable on spheres (as seen in the previous video) and often works in favor of landscapes, as the top projection behaves more stably on hills and plateaus.

The chosen blending method directly affects how transitions between projections feel: linear masks look technically even and precise, while cosine masks appear more lively and organic.

Final Blending

The final stage of triplanar projection is the blending of three texture samples using the generated masks. The entire process can be broken down into consecutive steps.

Take any point on the object, for example, Axyz(200, 400, 200). You can think of it as a vector going from the origin to the position (200, 400, 200). If you project this vector onto the world planes, you get three 2D representations: 

  • Axy is the projection onto the XY plane.
  • Axz is the projection onto the XZ plane.
  • Ayz is the projection onto the YZ plane.

Each of these projections is already converted into UV coordinates, which define which pixel of the texture will be sampled.

If you repeat the same for every point on the object, each world plane will contain a region that matches the object's shape, and the texture will be sampled from that region.

Thus, each point on the object has three color candidates. This is where the masks come into play: they determine which of the three projections should be more visible. If the normal points closer to the X axis, the texture from the projection related to X will appear stronger. If it's closer to Z, the projection associated with Z will dominate, and so on. The higher the mask value, the greater the contribution of that projection to the final color.

After blending, a single color value remains for each surface point, one that already accounts for all three projections and their smooth transitions. Visually, this appears as if the object is wrapped in three textures simultaneously, while the masks smoothly distribute their visibility across the shape.
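Putting the three steps together, the whole color path can be sketched roughly like this (Tex, TexSampler, the UV sets from the first step, and WorldNormal are assumed inputs; the engine function's exact clamping and contrast handling may differ):

```hlsl
// Sample the three color candidates, one per projection.
float3 colXY = Tex.Sample(TexSampler, UV_XY).rgb;
float3 colXZ = Tex.Sample(TexSampler, UV_XZ).rgb;
float3 colYZ = Tex.Sample(TexSampler, UV_YZ).rgb;

// Masks come from the absolute components of the world-space normal.
float3 n = abs(normalize(WorldNormal));

// Two sequential Lerps: X picks between the XZ and YZ projections,
// then Z blends the top (XY) projection over that result.
float3 sideBlend = lerp(colXZ, colYZ, saturate(n.x));
float3 result    = lerp(sideBlend, colXY, saturate(n.z));
```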

WorldAlignedTexture Summary

WorldAlignedTexture is effective because it provides smooth, seamless coverage for any object. All three projections are applied simultaneously, while the masks control their contribution, ensuring smooth transitions without artifacts and a stable visual appearance.

However, this approach has a significant limitation: it is always dependent on World Space and doesn't account for local transformations. To make the triplanar world-independent and achieve predictable results when working with multiple objects, you need to convert calculations to Local Space.

Furthermore, it's sometimes necessary to manually control the texture orientation (for example, to rotate a pattern or set the artistic direction of a projection). In the next section, I'll add support for local coordinates and custom projection rotation.

WorldAlignedTexture Modernized: Local Space + Texture Rotation

WorldAlignedTexture is hard-locked to world coordinates, which makes it ideal for environments: rocks, buildings, and landscapes stay visually continuous.

But try moving or rotating an object in the scene. The mesh moves, while the texture stays in place. You get a sliding effect that can be confusing, especially when you're experimenting with asset placement. It can also look odd if objects are rotated at angles where the blended projection zones start to look unnatural.

How to avoid this? You can detach from World Space and switch to Local Space, that is, compute projections in the object's own local coordinate system instead of the scene's world coordinates. There are two ways to do this, each with a different result. I'll walk through both options and show how to pick the right one for your task.

First, I'll explain the difference between two commonly confused functions: TransformVector and TransformPosition. Both work with a Vector3, which can be interpreted in two different ways:

  • As a point in space with coordinates (x, y, z)
  • As a direction vector, an arrow from the origin to that same point

If you treat Vector3 as a direction (TransformVector), only its orientation and length matter, not where it starts. When you transform such a direction into a new coordinate system, you effectively move the arrow so that its starting point aligns with the origin of the new system, and then observe where it points. The direction and length are recalculated relative to the new basis.

If you treat Vector3 as a point (TransformPosition), the actual position of that point matters. When you switch to a new coordinate system, you're essentially redrawing the arrow from the origin of the new system to the same world-space point. The arrow's direction and length change, but the final point in world space remains the same.
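In matrix terms, the difference between the two is only the translation part; a minimal sketch (WorldToLocal stands for whatever world-to-local matrix is available, column-vector convention assumed):

```hlsl
// Same matrix, different w component: 1 keeps the translation (a point), 0 drops it (a direction).
float3 localPoint = mul(WorldToLocal, float4(worldPoint, 1.0)).xyz; // behaves like TransformPosition
float3 localDir   = mul(WorldToLocal, float4(worldDir,   0.0)).xyz; // behaves like TransformVector
```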

Local Space in Triplanar Mapping

When Absolute World Position is used inside WorldAlignedTexture, the texture is projected directly from world coordinates. But if you first pass those coordinates through TransformPosition or TransformVector (from World Space to Local Space), the result changes.

It's important to understand that each point of the object (Absolute World Position) will go through a transform before getting its color from the texture. Previously, the path was: Coordinates (World) → Color. Now it becomes: Coordinates (World) → NewCoordinates (Local) → Color.

Think of it as two cubes:

  • The first is the real object in the scene.
  • The second is a "virtual" one whose coordinates are reinterpreted in Local Space. The triplanar logic then uses the virtual cube's coordinates to texture the real one.

With TransformPosition, the real and virtual cubes coincide. The positions of points in space stay the same; only their description changes to the local coordinate system. So, when the object moves (Local Space moves relative to World Space), the cube stays locked to Local Space, and there is no texture sliding.

The downside: if you place another object nearby, its projection won't blend seamlessly with the first one, because each object has its own local coordinate system.

With TransformVector, the real and virtual cubes do not coincide in position. The virtual cube is positioned relative to the origin of Local Space in the same way the real cube is positioned in World Space. As a result, when the object rotates, the object and texture behave in sync (Local Space rotates together with the object). But when the object translates, the texture starts sliding, staying tied to the world. This preserves seamlessness across different objects that share the same rotation.

Since an object's local scale may differ from the world scale, you need to compensate for scale before computing UVs. The idea is simple: stretch the UV coordinates per axis according to the object's scale. This behaves similarly to the TextureSize parameter, but auto-adapts to the object's size.

So, TransformPosition gives the effect of a texture being firmly glued to the object, while TransformVector keeps the texture partially tied to the world and makes it blend more naturally with neighboring objects.

With normals used for masks, the situation is a bit different. Normals are treated specifically as directions that define a surface's orientation. Unlike a point's position, a normal has no meaning as a "place in space", so what matters is its angle relative to the axes, not the point it starts from. Therefore, it's more correct to transform them with TransformVector.

If an object has a non-uniform scale, the normal will be "stretched": its components will no longer correspond to a unit vector of length 1. In that state, the normal no longer represents the correct surface direction because its length has changed. For that reason, always normalize your normals after TransformVector. This restores length 1 and guarantees that triplanar masks reflect the correct surface orientation.

The videos below show how TransformPosition and TransformVector behave differently when objects move and rotate:

You can see how strongly the texture shifts when the object rotates under TransformVector. This effect intensifies the farther the object is from the world origin. Earlier, I described how TransformVector "builds a virtual object" in Local Space, mirroring the real one in World Space. The texture is sampled by the coordinates of this virtual object but displayed on the real one. That's why the offset occurs.

To control this offset, adjust the reference point. Subtract the vector of a chosen anchor point in the world (call it AWP2) from the real object's coordinates (AWP). The transformation then receives this difference vector (AWP - AWP2). As a result, the texture stops depending on the Local Space position relative to the world origin and starts orienting itself relative to the selected anchor.

The video shows how strongly the choice of anchor point affects TransformVector. Two edge cases stand out. In the first, the anchor coincides with the world origin, so the vector passed into the transform is simply AWP. In the second, the anchor is moved to the Local Space origin, and the vector becomes zero.

Here's a simple trick: if you subtract ObjectPosition from Absolute World Position, the texture becomes fully attached to the object's Local Space. The effect matches TransformPosition: the texture is permanently glued to the surface.
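A rough sketch of this anchoring, combined with the per-axis scale compensation mentioned earlier (AnchorWorldPos stands for ObjectPosition or any shared anchor you pass in, ObjectScale for the object's per-axis scale, WorldToLocal for the world-to-local matrix; all of these are assumed inputs):

```hlsl
// Subtract the anchor first, reinterpret the offset in Local Space,
// then stretch it per axis so tiling density stays stable under non-uniform scale.
float3 offsetWS   = WorldPos - AnchorWorldPos;              // AWP - AWP2
float3 offsetLS   = mul((float3x3)WorldToLocal, offsetWS);  // TransformVector, World -> Local
float3 projCoords = offsetLS * ObjectScale;                 // per-axis scale compensation
// projCoords then feeds the same UV construction as before (divide by TextureSize, etc.).
```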

You can take this a step further by defining a shared projection anchor for a group of objects. For example, take the ObjectPosition of one mesh and pass it to others via Custom Primitive Data or a Blueprint. Then they all share a common projection basis. As a result, their textures look as if they belong to one continuous surface: projections align, seams disappear, and when the group rotates, it behaves like one monolithic object.

This approach is useful for modular walls, large composite assets, and procedural assemblies where maintaining a unified pattern across multiple meshes is important.

Rotation and Mirroring

To add even more control, you can pass the coordinates through a CustomRotator before feeding them into UV space. You can expose the rotation angles as material parameters for quick manual tweaking, send them via Custom Primitive Data, or drive them directly from Blueprints.

There's an important nuance. Because the projection goes straight through the world planes, a rotation on one side produces a mirrored rotation on the opposite side. This is especially noticeable on spheres.

To compensate for this, the shader needs to know which direction a point belongs to. At the masking stage, capture the signs of the world-space (geometric) normal components and store them as R_Direction, G_Direction, B_Direction:

You can then use these values as needed:

  • If the texture is symmetric, flip the sign of the rotation angle based on the face's orientation (see the sketch after this list).
  • If the texture is asymmetric and the pattern must be preserved, mirror it in advance during UV preparation.
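A minimal sketch of the sign capture and of the conditional angle flip for a symmetric texture (the R/G/B_Direction names follow the article; RotationAngle is an assumed parameter, and the plain 2D rotation stands in for CustomRotator):

```hlsl
// Store which side of each world axis the face points to (+1 or -1).
float R_Direction = sign(WorldNormal.x);
float G_Direction = sign(WorldNormal.y);
float B_Direction = sign(WorldNormal.z);

// Symmetric texture: flip the rotation angle on the opposite side
// so the rotation doesn't appear mirrored there.
float angleYZ = RotationAngle * R_Direction;

// Rotate the YZ projection's UVs by that angle (a stand-in for CustomRotator).
float s = sin(angleYZ), c = cos(angleYZ);
float2 rotatedUV_YZ = float2(c * UV_YZ.x - s * UV_YZ.y,
                             s * UV_YZ.x + c * UV_YZ.y);
```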

WorldAlignedTexture_Modernized Summary

Starting from the base WorldAlignedTexture, a predictable setup was assembled. It supports both World Space and Local Space, switches between TransformPosition and TransformVector, compensates for nonuniform scale, and provides a shared anchor for aligning patterns across meshes. For artistic control, each projection can be rotated, and mirroring can be handled per face normal so motifs remain consistent on opposite sides.

The result is a flexible triplanar setup suitable for rapid environment blocking and for production materials where control and predictability matter. The final video below shows the extended version in action. Next, the same principles are applied to normals to keep lighting consistent with the texture projections.

How WorldAlignedNormal Works

Now that WorldAlignedTexture is clear, it's time to move on to WorldAlignedNormal, which ships alongside it. It's important that they work in sync: if the color is seamless but the normals aren't, the lighting will immediately reveal the seams.

The overall principle is similar: three projections along X, Y, and Z, each blended with its own mask. However, normals have their own nuances, and simply copying the texture logic does not produce correct results.

The reason is that in a Normal Map, the same RGB channels carry a completely different meaning.

If a texture is imported as Color, the engine interprets each RGB channel as a color component in the [0…1] range. It contributes to albedo (the visible color of the surface) and doesn't affect lighting, normals, or light direction. For example, a pixel with color (100, 149, 237) becomes approximately (0.39, 0.58, 0.93) after normalization and is perceived as "cornflower blue". Put simply, red, green, and blue add up to a visible image without any physical interpretation of those numbers.

If the texture is imported as a Normal Map, its data are interpreted differently: the engine reads them as direction vector components in Tangent Space and remaps them to [-1…1] (because a normal can point in the negative direction along any axis). Thus, the pixel (100, 149, 237) ("cornflower blue") after processing becomes approximately (-0.24, 0.19, 0.95) in Tangent Space, that is, a vector pointing slightly left and slightly forward from the surface. This vector acts as the imagined perpendicular to the surface at that point. The engine then assumes this part of the surface is slightly tilted, and lighting produces a soft highlight or shadow, as if there were micro-relief. This lets a flat mesh appear to have detail that isn't present in the real geometry.

In the video below, you can see how the same numeric values turn from an ordinary color into a normal vector. On the left, the texture is imported as Color, so a pixel (127, 127, 255) normalizes to (0.5, 0.5, 1.0) and looks like a light violet tint. On the right, the same texture is imported as a Normal Map; for (127, 127, 255) this yields roughly (0, 0, 1) in Tangent Space, a normal strictly perpendicular to the surface. So the "blue" on the right isn't a color; it's a direction, and the shader uses it to compute highlights and shadows.
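The difference in interpretation comes down to a simple remap. Roughly (ignoring the engine's texture compression details):

```hlsl
// Imported as Color: the value stays in [0..1] and is used as-is.
float3 asColor  = float3(100, 149, 237) / 255.0;      // ~ (0.39, 0.58, 0.93), "cornflower blue"

// Imported as Normal Map: the same value is remapped to [-1..1]
// and treated as a direction in Tangent Space.
float3 asNormal = normalize(asColor * 2.0 - 1.0);     // ~ (-0.24, 0.19, 0.95)
```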

Now I'll explain WorldAlignedNormal step by step, just like I explained WorldAlignedTexture.

Preparing Projection Coordinates

This stage fully mirrors WorldAlignedTexture: the Normal Map is sampled on the three World Space planes (XY, XZ, YZ), forming a dedicated UV projection for each.

Creating Masks

Masks are computed exactly as in WorldAlignedTexture from the geometric (world-space) surface normals: the closer the normal is to a given axis (X, Y, or Z), the brighter the corresponding mask. In practice, a face that "looks" along X gets a white NormalMaskX; as it tilts away, the mask darkens. The same applies to Y and Z.

There is, however, an additional step. Before taking absolute values (making all mask components positive), the function also stores the signed component values (NormalMaskR, NormalMaskG, NormalMaskB). This distinguishes sides (e.g., +X vs -X). Thanks to that, in the next stage, you can correctly flip normals for opposite directions, so that bulges on one side don't turn into dents on the other, and the boundaries between projections remain consistent.

Final Blending

At first glance, blending normals seems to mirror WorldAlignedTexture: the brighter the mask, the stronger that projection's contribution. But there's a key difference: a normal isn't a color, it's a direction.

With color, the process was simple: for a point on the object, you computed its UVs and then blended the sampled pixel colors from different projections by masks. That doesn't work for normals. In a Normal Map, the vectors are stored in Tangent Space, while projection and blending happen in World Space.

If you simply take the normal values from the three projections "as is", they'll be identical on all faces. On a cube, this is obvious: the same pixel lands on all six faces, yielding the same normal everywhere. This breaks the lighting: shading no longer matches the actual orientation of each face.

For normals to work correctly, you must transform them properly into World Space. Normally, you'd just do a TransformVector from Tangent to World, but here it's more involved: each projection (XY, XZ, YZ) has its own local coordinate frame defined back in step one, and you must also account for whether the face points in the positive or negative direction of an axis. In other words, first reorient the vector from Tangent Space so that Tangent axes align with the projection's local UV axes, and then perform a manual TransformVector into World Space with the face's direction taken into account.

Mathematically, this is an orthogonal transform. Because the angles between these frames are multiples of 90°, all sines and cosines collapse to 0, 1, or -1. Practically, you don't need heavy linear algebra; you can permute components and flip their signs as needed. It's not a full "rotation" so much as a remapping: take the numbers X, Y, Z from Tangent Space and say "this is no longer X_tangent, it's Z_world", and so on, adding a minus when the direction is opposite. You can think of it as "reassembling the vector" in a new frame: same numbers, new meaning.

Let's use the YZ projection as an example.

At the very beginning, this projection redefined the normal map's UVs so that:
U(newUV) = -Y(world) and V(newUV) = -Z(world)

That establishes the local frame for the YZ projection. So first, reorient the normal from Tangent Space into this local frame:

  • Tangent Space is built such that X(old_tangent) = U(oldUV) and Y(old_tangent) = V(oldUV)
  • Rotate/remap the vector so Tangent X/Y align with the new UV axes:
  • X(new_tangent) = U(newUV) = -Y(world)
  • Y(new_tangent) = V(newUV) = -Z(world)
  • Lock this alignment: this becomes the starting frame for the normal

Next, do the manual TransformVector into World Space by finding parallel directions between this starting frame and World Space: X(new) ↔ Z(old),   Y(new) ↔ X(old),   Z(new) ↔ Y(old). This defines the axis order in World Space and the new coordinates of the original vector.

One detail remains: opposite sides of the object must yield normals that point in opposite directions. That's why, in the previous step, you stored the signed normal components. It might seem like the sign alone would suffice to detect which way a polygon faces, but in practice, you use the actual component value. It encodes both sign (positive/negative direction) and orientation strength (how strongly the surface faces that axis). I'll break this down in more detail in the next part.

Continuing the YZ example, you get X(new) = Z(old) or −Z(old) depending on the object's side:

X(new) = Z(old)

X(new) = -Z(old)
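Combining both steps (the reorientation to the projection's UV axes and the manual TransformVector) into a single swizzle, a rough sketch for the YZ projection could look like this (t is the tangent-space sample from the normal map; sideX is the signed X component of the geometric world normal stored during the masking step):

```hlsl
float3 RemapNormal_YZ(float3 t, float sideX)
{
    float3 n;
    n.x = t.z * sign(sideX); // Z(old) -> X(new), flipped on the opposite side of the object
                             // (the article notes the actual function uses the signed
                             //  component value itself; sign() keeps the sketch minimal)
    n.y = -t.x;              // X(old) -> Y(new); the minus undoes U = -Y(world)
    n.z = -t.y;              // Y(old) -> Z(new); the minus undoes V = -Z(world)
    return n;
}
```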

After remapping, all normals are correctly oriented in World Space and expressed via world axes, so you can proceed to blending. It's crucial to normalize immediately after blending. This step is mandatory because linear blending (Lerp) mixes vectors together, and their length almost never remains equal to one.

For example, blend up (0, 0, 1) and right (1, 0, 0) equally: you get (0.5, 0, 0.5). That vector is shorter than 1; if you leave it as is, lighting looks wrong, and the surface becomes too dim. Normalization restores unit length while preserving direction, which makes the result physically correct.

As a final step, convert the resulting World Space normal back to Tangent Space via TransformVector, because the Normal input in the material expects data in Tangent Space.
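Roughly, the tail end of the function then looks like this (nYZ, nXZ, nXY are the remapped world-space normals, maskX and maskZ come from the masking step, and WorldToTangent stands for the world-to-tangent transform; all assumed inputs):

```hlsl
// Blend the three world-space normals with the same two-Lerp logic as the color version.
float3 blended = lerp(nXZ, nYZ, maskX);
blended        = lerp(blended, nXY, maskZ);

// Lerping vectors almost never preserves unit length, so normalize right after blending.
// Example from the text: lerp((0,0,1), (1,0,0), 0.5) = (0.5, 0, 0.5), length ~0.707.
blended = normalize(blended);

// The material's Normal input expects Tangent Space, so transform back at the very end.
float3 tangentNormal = mul((float3x3)WorldToTangent, blended); // TransformVector, World -> Tangent
```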

WorldAlignedNormal Summary

With the base WorldAlignedNormal explained, the discussion can move on. Up to this point, nothing was changed; the logic was examined step by step to show how triplanar projection for normals is achieved and why it produces correct results within the standard setup.

Next, the same improvements used for triplanar textures are introduced: execution in Local Space and custom normal rotation so that lighting remains aligned with the texture directions. After that, WorldAlignedTexture_Modernized and WorldAlignedNormal_Modernized operate in full sync and can be used together as a single triplanar pair.

WorldAlignedNormal Modernized: Local Space + Texture Rotation + Rotation Matrix

Now I'll show how to add the same changes I applied to WorldAlignedTexture so that WorldAlignedTexture_Modernized and WorldAlignedNormal_Modernized can be used together in one material.

Local Space in Triplanar Normals

TransformVector, TransformPosition, choosing an anchor projection point in the world, and mirroring textures on opposite sides all work exactly as in WorldAlignedTexture, so I'll jump straight to scale.

If you simply add UV scaling upfront (as I did in WorldAlignedTexture_Modernized), the result will be incorrect.

To explain why this happens, let me briefly recap how TransformVector behaves when converting between coordinate systems: it already accounts for the scale of the target space. For example, when converting a vector from Space1 with scale (1, 1, 1) to Space2 with scale (1, 1, 2), the old vector (0.5, 1, 1) becomes (0.5, 1, 0.5) in the new space: the component along the stretched axis becomes smaller.

In WorldAlignedNormal_Modernized, the normal is first taken from the texture in Tangent Space, then manually reassembled in Local Space without considering scale, and finally converted back to Tangent Space with scale automatically applied. If the local space is non-uniformly scaled, the vector is already distorted at the first step, and the normal's direction drifts.

In the screenshot below, the vector taken from Tangent Space (0.5, 1, 1) is first treated as (0.5, 1, 1) inside a stretched Local Space (its direction changes), and then converted back to Tangent Space with scale, becoming (0.5, 1, 2).

To avoid distortion, the scale must be taken into account twice in advance. First, scale the texture sample coordinates (same principle as in WorldAlignedTexture_Modernized):

Second, compensate the normals before the vector goes into TransformVector and before mask blending. Divide the vector's components by the object's per-axis scale. This cancels the stretch/squash of the local coordinate system and restores the correct direction.

Next, blend the normals from the three projections by their masks. At this stage, the vector length almost never remains 1 (linear blending changes length), so the result must be normalized.

However, if you normalize inside a non-uniformly scaled Local Space, the vector will only be "correct" there; a subsequent TransformVector will distort it again. The proper order is: divide to compensate scale, blend by masks, convert to Tangent Space (TransformVector), and normalize at the very end. This guarantees the length returns to 1 and the normal behaves correctly.
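The order of operations described above, as a rough sketch (ObjectScale is the per-axis local scale, LocalToTangent stands for the final TransformVector back to Tangent Space; all assumed inputs):

```hlsl
// 1. Compensate the local scale first, so the remapped normals point the right way
//    inside the non-uniformly scaled space (component-wise division per axis).
nYZ /= ObjectScale;  nXZ /= ObjectScale;  nXY /= ObjectScale;

// 2. Blend by masks.
float3 blended = lerp(nXZ, nYZ, maskX);
blended        = lerp(blended, nXY, maskZ);

// 3. Convert to Tangent Space first...
float3 tangentNormal = mul((float3x3)LocalToTangent, blended);

// 4. ...and normalize only at the very end, so no later transform distorts the unit length again.
tangentNormal = normalize(tangentNormal);
```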

With non-uniform scale changes, the image no longer distorts:

Rotation and Mirroring

Now let's look at rotation. The logic here is the same as with scale: in WorldAlignedTexture_Modernized, it was enough to rotate the UVs through a CustomRotator, but for normals, that's not sufficient. When the texture rotates, the normals must rotate by the same angle.

I'll implement this through a rotation matrix and, at the same time, explain how it works and where it comes from. I'll demonstrate it visually in a video, as clearly as possible and without formulas (you'll only need to remember sines and cosines).

Let's start with understanding the term vector rotation. For clarity, it is convenient to introduce two ways to interpret rotation: active and passive.

In the active interpretation, the vector itself rotates. For example, a vector A1 with coordinates A(x1, y1) becomes a vector A2 with coordinates A(x2, y2), rotated by an angle φ.

But what happens if you take the two vectors, rotated relative to each other, and mentally merge them into a single one? Each of them keeps its own coordinate system, and those systems are now rotated relative to each other by the same angle φ.

This is the passive interpretation: the vector stays fixed, while the coordinate system rotates. As a result, the vector's coordinates are redefined in this new rotated system. You can see that when φ = 0, the coordinates (x1, y1) match (x2, y2), and when φ = 90°, x2 = y1 and y2 = x1 (up to sign, which is handled below). This shows there's a consistent dependency between them, which can be expressed with formulas.

For normal rotation, the active interpretation is what we need, as the vectors must rotate along with the texture. However, to explain and derive the rotation matrix visually, I'll use the passive interpretation, since it's easier to see how the axes shift, flip, and how vector components swap and change sign.

In the video below, you can see how the rotation matrix for vector A is formed. Let (Ax1, Ay1) be its coordinates before rotation, and (Ax2, Ay2) after rotation. The goal is to express Ax2 and Ay2 through Ax1 and Ay1. The first step in relating these coordinate systems is to project the old coordinates onto the new X2 and Y2 axes using sines and cosines and record the results. Then, by constructing congruent triangles, you can see the basic relationships between component vectors:

Ax2 = Ax1 ∙ cos(φ) + Ay1 ∙ sin(φ)

Ay2 = Ax1 ∙ sin(φ) + Ay1 ∙ cos(φ)

Viewed in vector form, this is straightforward: the projections always add up, since direction already encodes sign. In scalar form, though, signs must be explicitly marked. They depend on whether the coordinate system is right- or left-handed and on the direction of rotation.

In general form:

Ax2 = Ax1 ∙ cos(φ) ∓ Ay1 ∙ sin(φ)

Ay2 = ±Ax1 ∙ sin(φ) + Ay1 ∙ cos(φ)

Which in matrix form exactly corresponds to the rotation matrix entry:
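The matrix being referred to is the standard 2D rotation matrix:

$$\begin{pmatrix} A_{x2} \\ A_{y2} \end{pmatrix} = \begin{pmatrix} \cos\varphi & \mp\sin\varphi \\ \pm\sin\varphi & \cos\varphi \end{pmatrix} \begin{pmatrix} A_{x1} \\ A_{y1} \end{pmatrix}$$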

The upper signs correspond to counterclockwise rotation in a right-handed coordinate system.

Another important detail is the order of operations. In general, it doesn't matter at which stage the vector is rotated, right after it's taken from Tangent Space or after it's converted to the local projection space. The only difference is that in the latter case, the rotation matrices must be applied carefully for each projection (as their orientations differ and rotations will be performed relative to different axes, as explained in the previous section):

And if mirroring was applied earlier, the face direction must also be taken into account:

Once the normal vectors are rotated, they produce correct lighting results.
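As a rough sketch, rotating the tangent-space normal's XY components by the same angle used for the UVs could look like this (whether the angle needs its sign flipped depends on your UV rotation convention and on the mirroring handling described earlier):

```hlsl
// Rotate the normal in the plane of the texture; the Z component
// (the part pointing out of the surface) stays untouched.
float3 RotateTangentNormal(float3 t, float angle)
{
    float s = sin(angle), c = cos(angle);
    return float3(c * t.x - s * t.y,
                  s * t.x + c * t.y,
                  t.z);
}
```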

WorldAlignedNormal_Modernized Summary

Now WorldAlignedTexture_Modernized and WorldAlignedNormal_Modernized fully correspond to each other and work as a pair. The projections are bound to the object's Local Space, they can be freely rotated in the scene, and lighting stays accurate on rotations. Objects sharing the same angle preserve their texture alignment to the world, maintaining seamless transitions.

However, this approach works best for models with mostly flat surfaces, those whose faces align with local axes or have small rotation angles. On curved shapes like spheres or cylinders, the limitation becomes visible. In the video below: a sphere with WorldAlignedNormal_Modernized, compared to a cube with a reference normal map (UV) and a reference sphere.

You can see that lighting on the sphere with WorldAlignedNormal_Modernized behaves as if it were a cube, not a sphere. This happens because the projections are effectively transferred from local planes onto the surface without accounting for the orientation of individual polygons. Increasing mask contrast only amplifies the issue: the sphere loses readable form, the relief breaks, and lighting becomes unnatural.

In fact, UE includes two implementations: the basic WorldAlignedNormal and the more accurate but performance-expensive WorldAlignedNormal_HighQuality (which is used by default). I'll cover that one in the next section. A small spoiler: it looks like this:

For now, here's the final result using WorldAlignedNormal_Modernized on cubes: the local attachment behaves exactly as expected and clearly emphasizes the geometry.
