
Deep Dive: UE5 Camera vs SceneCapture, Maintain Axis, Frustum Math, Projection Pipeline

Alina Ledeneva shared a deep dive about why a gameplay camera and SceneCaptureComponent2D can produce different framing, explaining the workflows in UE5 and giving two solutions.

Introduction

This article is for anyone who has run into a mismatch between a SceneCaptureComponent2D image and the main camera when the aspect ratio changes. SceneCaptureComponent2D renders the scene into a Render Target and is used in a wide range of setups: mirrors, portals, alternate worlds, predictive visuals, and 3D previews in UI.

When the SceneCapture projection is not synchronized with the camera, the effect breaks as soon as the window size changes. The frame can look stretched, shifted, or scaled incorrectly. In this piece, I show how to make the synchronization fully predictable:

  • The relationship between FOV, aspect ratio, and Maintain Axis, and why the same FOV does not guarantee identical framing
  • Formulas for Maintain Y and Major Axis, and how to apply them in practice
  • A breakdown of the UE5.6 projection pipeline for the gameplay camera and SceneCapture, with pointers to the engine source code
  • Two working solutions: a fast Blueprint fix and a cleaner C++ approach using CustomProjectionMatrix, so SceneCapture follows the same Maintain Axis rule as the camera

Framing in Unreal Engine

Imagine the following setup: you want to show a "window" into another part of the level on screen, for example, a future prediction mode or a ghost world. To do that, you use the player's main camera, Camera_A, and a SceneCapture2D actor, Camera_B. Camera_B copies Camera_A's position and rotation, then applies an additional world space offset. It has a SceneCaptureComponent2D that writes the image into a TextureRenderTarget2D.

In the post process, you blend the two images. Inside a defined region, you show the "real" world from Camera_A, where the character actually exists and moves, while the rest of the screen becomes "virtual" and pulls the image from the Render Target. For the effect to look correct, the gameplay camera and the SceneCapture view must produce identical framing at any resolution and aspect ratio.

At first glance, it seems straightforward. If two views look in the same direction, their images should match. In practice, an unexpected issue shows up: the frames line up only at one specific viewport aspect ratio, and drift apart as soon as the window changes shape.

Make the viewport wider or narrower, and the Render Target suddenly feels too zoomed in or too zoomed out. This breaks any parallel world effect because the two images no longer match.

Since the images match only at one specific aspect ratio, the problem almost certainly lies in the projection math. To find the exact source of the mismatch, it helps to start by looking at how Unreal Engine builds the frame. I will start with the core projection logic, and later in the article, I will walk through the full pipeline with direct references to the engine source.

When a camera renders the scene, it relies on a projection matrix: a set of transforms that maps 3D world coordinates onto a 2D image. This matrix defines what part of the world ends up on screen, how strong the perspective looks, how the image is scaled across width vs. height, and what gets clipped. The simplest mental model is: it’s the camera's optics.

The projection matrix is driven by several parameters: FOV (Field of View), aspect ratio, the Maintain Axis rule, and depth or clipping settings. Even if two cameras share the same FOV, a small difference in any other input produces a different projection matrix, and the framing stops matching. Next, I will briefly define what each parameter means and how it affects the final image.

Near Clip and Depth Clipping

The near clip plane (and related depth/clipping settings) defines the camera's usable depth range: how close an object can get to the camera before it starts rendering, and where geometry gets clipped. These settings also affect how precision is distributed in the Z-buffer, which can contribute to depth-related artifacts.

I will not go deep on clipping in this article. The near clip plane and other depth settings are usually configured centrally at the project level to keep shadows, depth behavior, and intersections consistent. Because of that, when you're trying to synchronize a gameplay camera with a SceneCapture view, these values usually stay the same, and they typically aren’t what causes the framing mismatch.

FOV (Field of View)

FOV describes how wide an angle of the world the camera can see. There are two distinct FOV angles: vertical FOV and horizontal FOV. Vertical FOV determines how much you see in height, while horizontal FOV determines the width. Together they define the view frustum, a truncated pyramid that represents the camera's visible volume.

In Unreal Engine, the camera exposes a single FOV value. The final frustum shape is derived from that value using the current aspect ratio and the Maintain Axis rule. As a rough mental model, imagine starting with a square view defined by the FOV, then cropping it to match the target aspect ratio.

The video below shows how the image changes as the FOV changes. Increasing FOV makes the camera feel like it pulls back and captures more of the scene. Decreasing FOV narrows the frame and makes the scene appear closer.

Aspect Ratio

Aspect ratio is the relationship between a frame's width and height. It defines the overall shape of the image, whether it feels wide, tall, or close to square. It is important to understand that changing the window shape is not just stretching a finished picture. It changes the projection. With the same FOV, the camera can show a different amount of the world depending on the aspect ratio.

The video below shows how the image changes as the aspect ratio changes. In this example, Constrain Aspect Ratio is enabled, so if the viewport's aspect ratio does not match the camera's aspect ratio, black bars appear.

Maintain Axis

At this point, a natural question comes up: what happens when Constrain Aspect Ratio is disabled, but the viewport's aspect ratio doesn't match the camera's? And if a gameplay camera and a SceneCaptureComponent2D share the same FOV and the same aspect ratio in their settings, why do their images still drift apart as the window is resized?

This behavior is controlled by the Aspect Ratio Axis Constraint, the Maintain Axis rule that tells Unreal which axis to preserve as aspect ratio changes, and which part of the framing should remain stable. This parameter is not stored as a single value inside the projection matrix. Instead, it is a rule Unreal uses to compute the values that end up in the matrix.

So when Constrain Aspect Ratio is disabled, resizing the viewport effectively follows one of two modes:

  • Maintain Y Axis: the vertical FOV stays constant, and only the horizontal coverage changes. Visually, objects keep the same height while the frame expands or squeezes horizontally. (This is the top example in the video.)
  • Maintain X Axis: the horizontal FOV stays constant. As the aspect ratio changes, the vertical coverage is recalculated, and objects appear larger or smaller. (This is the bottom example in the video.)

This is the core of the mismatch. By default, the gameplay camera uses Maintain Y Axis, while SceneCaptureComponent2D builds its projection as if horizontal FOV is preserved (effectively Maintain X Axis).

As a result, the two frames only match at one aspect ratio and drift apart as soon as the viewport shape changes. The quickest way to make them match is to switch the gameplay camera to Maintain X Axis, so both views derive their projection using the same rule.

In most projects, Maintain Y Axis is the default because it produces a more stable, familiar-looking frame. Switching away from it is often not an option. At the same time, SceneCaptureComponent2D does not provide a direct way to change the Maintain Axis behavior.

So if you need the images to match, there is only one path left: compute the SceneCapture projection yourself, and make sure both views build their projection under the same logic. Below, I'll show two approaches: one in Blueprints and one in C++.

Frustum Geometry and FOV Recalculation

Let's start with the FOV recalculation logic for a camera set to Maintain Y Axis. In the diagram, green highlights the camera frustum and its base setup: x is the frame width, y is the frame height, and ARcam = x / y is the camera aspect ratio stored in the camera settings (the aspect ratio used to interpret the FOV value).

Let the vertical field of view (YFOV) be β, and the horizontal field of view (XFOV) be α. From the frustum geometry:

ARcam = tan(α/2) / tan(β/2)

Here is the key point: in Unreal Engine's perspective camera, the single FOV value from the camera settings is interpreted as the horizontal FOV α when building the projection. So α is known (it's the main FOV), and with a known ARcam, we can compute the matching vertical angle β. Later in the article, I will show exactly where this happens in the engine source.

tan(β/2) = tan(α/2) / ARcam

Now, the blue color highlights the actual on-screen frame. Let its aspect ratio be AR1. It can differ from ARcam because the viewport (or the render target) may have a different shape. In the diagram, AR1 is wider than the camera's base framing.

With Maintain Y Axis, the vertical FOV is fixed (β=const), so when the aspect ratio changes, only the horizontal FOV is recalculated. So we want the horizontal angle α1 for the actual AR1, while keeping β unchanged.

Using the same relationship:

α1/2 = atan(AR1 · tan(β/2))

Here, AR1 comes from the current viewport dimensions, and tan(β/2) can be substituted from the previous step (expressed through the main FOV α and ARcam). In other words, with Maintain Y, you keep the vertical FOV stable, and the horizontal FOV adapts to the current aspect ratio.
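
To make the relationship concrete, here is the same calculation with example numbers: take α = 90° and ARcam = 16:9 ≈ 1.78. Then tan(β/2) = tan(45°) / 1.78 ≈ 0.5625, so β ≈ 58.7°. Widen the viewport to AR1 = 21:9 ≈ 2.33, and α1/2 = atan(2.33 · 0.5625) ≈ 52.7°, so α1 ≈ 105.3°. The vertical angle stays fixed at roughly 58.7°, and only the horizontal coverage grows.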

Recalculating FOV in Blueprints

Now let's bring these calculations into Blueprints to synchronize a SceneCaptureComponent2D with the gameplay camera. The idea is simple: SceneCaptureComponent2D builds its projection in a Maintain X style, so we feed it a FOV value that produces the same final framing as the main camera, which runs in Maintain Y.

Because this FOV needs to be updated whenever the viewport changes (window resize, switching fullscreen -> windowed, resolution changes, etc.), you should trigger the logic from a viewport resize event. For a quick prototype, running it on Tick is fine, but it's better to cache the inputs and recompute only when the viewport size or camera parameters actually change.

Step by step (a C++ sketch of the same flow follows the list):

  • Get the current viewport size
  • Compute the actual frame aspect ratio AR1
  • Read the main FOV and ARcam from the camera settings
  • Recalculate the SceneCapture FOV using the method described above
  • Apply the result to SceneCaptureComponent2D
  • Resize the TextureRenderTarget2D to match the current viewport size (Resize Render Target 2D)
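
If you would rather keep this logic in code, the same steps might look roughly like the sketch below. This is a minimal illustration, not part of the original setup: the owning actor, the MainCamera and Capture members, and the place you call it from (ideally a viewport-resize handler such as FViewport::ViewportResizedEvent) are all assumptions.

C++ // Sketch: recompute the SceneCapture FOV so a Maintain X capture matches a Maintain Y camera
void AMyPortalActor::SyncCaptureFOV()
{
    if (!MainCamera || !Capture || !Capture->TextureTarget)
    {
        return;
    }

    FVector2D ViewportSize;
    GEngine->GameViewport->GetViewportSize(ViewportSize);
    if (ViewportSize.X <= 0.f || ViewportSize.Y <= 0.f)
    {
        return;
    }

    // Actual on-screen aspect ratio (AR1)
    const float AR1 = ViewportSize.X / ViewportSize.Y;

    // Main FOV (interpreted as horizontal) and the camera's base aspect ratio (ARcam)
    const float HalfAlpha = FMath::DegreesToRadians(MainCamera->FieldOfView) * 0.5f;
    const float ARcam = MainCamera->AspectRatio;

    // tan(β/2) = tan(α/2) / ARcam, then α1/2 = atan(AR1 · tan(β/2))
    const float TanHalfBeta = FMath::Tan(HalfAlpha) / ARcam;
    const float Alpha1 = 2.f * FMath::RadiansToDegrees(FMath::Atan(AR1 * TanHalfBeta));

    // Apply the recalculated FOV and keep the render target the same shape as the viewport
    Capture->FOVAngle = Alpha1;
    Capture->TextureTarget->ResizeTarget(
        FMath::RoundToInt(ViewportSize.X), FMath::RoundToInt(ViewportSize.Y));
}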

C++ Solution: Matching SceneCapture to the Gameplay Camera

This section has two goals:

  • Explain Unreal Engine's render order from the very beginning: when the gameplay camera and SceneCaptureComponent2D are updated, what data they use, and where their pipelines diverge
  • Show a C++ solution: a custom SceneCaptureComponent2D that can follow the camera's Maintain Axis rule via CustomProjectionMatrix

Render Order: Entry Points for the Gameplay Camera and SceneCapture

Let's begin at the very start. We need to identify where the main view is assembled, and where SceneCaptureComponent2D gets updated during rendering.

On the game thread, the frame update begins in UGameEngine::Tick() (GameEngine.cpp). After World->Tick(), the engine kicks off rendering via RedrawViewports().

C++ // void UGameEngine::Tick()
Context.World()->Tick( LEVELTICK_All, DeltaSeconds );
if (bIsRenderingScene && !bRenderingSuspended)
{
    RedrawViewports();
}

UGameEngine::RedrawViewports() checks whether a game viewport is available, figures out how many local players are active and how their views are laid out on screen, and then enters the viewport draw path by calling GameViewport->Viewport->Draw().

C++ // void UGameEngine::RedrawViewports()
GameViewport->LayoutPlayers();
if ( GameViewport->Viewport != NULL )
{
    GameViewport->Viewport->Draw(bShouldPresent);
}

Inside UGameViewportClient::Draw() (GameViewportClient.cpp), Unreal creates an FSceneViewFamilyContext called ViewFamily. It is a container for per-frame settings shared across all views in a view family. This matters for split screen and multiple local players: each player gets their own view, but the shared rendering configuration lives at the FSceneViewFamily level.

Next, for each ULocalPlayer, Unreal builds an FSceneView via ULocalPlayer::CalcSceneView(). This is the final snapshot of what that player sees for the current frame. Later, the scene renderer is created from these views and performs the actual rendering.

C++ // UGameViewportClient::Draw()
FSceneViewFamilyContext ViewFamily(...)
...
for (FLocalPlayerIterator Iterator(GEngine, MyWorld); Iterator; ++Iterator)
{
    ULocalPlayer* LocalPlayer = *Iterator;
    FSceneView* View = LocalPlayer->CalcSceneView(&ViewFamily, ... );
}

Inside ULocalPlayer::CalcSceneView(), the camera pipeline effectively begins. This is where Unreal gathers view parameters and computes the projection. Let's step into it briefly to find the exact entry point, then return to UGameViewportClient::Draw(). At a high level, CalcSceneView() works like this.

First, it creates the input set used to initialize the view, FSceneViewInitOptions ViewInitOptions, and fills in the pieces that describe view geometry and rendering context. This happens through CalcSceneViewInitOptions(), which calls GetProjectionData(). GetProjectionData() reads the current gameplay camera state: it builds an FMinimalViewInfo called ViewInfo (a structure that holds camera parameters such as FOV, AspectRatio, post-process settings, and more) and populates it via GetViewPoint().

The filled ViewInfo is then passed into CalculateProjectionMatrixGivenView(), which builds the projection matrix while applying the Maintain Axis rule. The AspectRatioAxisConstraint parameter is passed along as well. This is the function I will use later as the starting point for the deeper camera pipeline breakdown.

C++
bool ULocalPlayer::GetProjectionData(FViewport* Viewport, FSceneViewProjectionData& ProjectionData, int32 StereoViewIndex) const
{
    ...
    FMinimalViewInfo::CalculateProjectionMatrixGivenView(ViewInfo, AspectRatioAxisConstraint, Viewport, /*inout*/ ProjectionData);
    ...
}

After that, CalcSceneView() calls GetViewPoint() one more time, but this time not for ProjectionData. It uses it to fill the remaining ViewInitOptions fields that depend on the camera, but are not part of ProjectionData and are not directly tied to the projection geometry.

Finally, the fully populated ViewInitOptions becomes the basis for creating the final FSceneView.

The resulting FSceneView contains everything the renderer needs and defines how the scene should be rendered for the current frame: the camera position and orientation, the view rectangle, the matrices, visibility settings, post-process parameters, and other data. From there, UGameViewportClient::Draw() hands off to the renderer module:

C++ // UGameViewportClient::Draw()
GetRendererModule().BeginRenderingViewFamily(SceneCanvas, &ViewFamily);

BeginRenderingViewFamily() (in SceneRendering.cpp) is essentially a thin wrapper around BeginRenderingViewFamilies(). Inside BeginRenderingViewFamilies(), the render-preparation phase begins.

It creates a SceneRenderBuilder, which collects everything that needs to be rendered: the set of view families, additional passes, custom passes, and more. Before the main frame renderers are created, the engine updates all deferred SceneCaptures via SceneCaptureUpdateDeferredCapturesInternal().

C++ // void FRendererModule::BeginRenderingViewFamilies()
FSceneRenderBuilder SceneRenderBuilder(Scene);
bool bShowHitProxies = (Canvas->GetHitProxyConsumer() != nullptr);
if (!bShowHitProxies)
{
    SceneCaptureUpdateDeferredCapturesInternal(Scene, ViewFamilies, SceneRenderBuilder);
}

You can treat the call to SceneCaptureUpdateDeferredCapturesInternal() as the entry point for the deferred SceneCapture pipeline. It iterates over the list of deferred capture components and, for each one, calls UpdateSceneCaptureContents(), which ultimately results in the scene being rendered into the Render Target.

C++ // void SceneCaptureUpdateDeferredCapturesInternal()
for (TWeakObjectPtr<USceneCaptureComponent> Component : SceneCapturesToUpdate)
{
    if (Component.IsValid())
    {
        ...
        Component->UpdateSceneCaptureContents(Scene, SceneRenderBuilder);
        ...
    }
}

SceneCapture can actually be updated in two modes: immediate, where calling CaptureScene() triggers the capture right away without going through UGameViewportClient::Draw(), and deferred, where the capture is scheduled for the next frame.

The chain described above (BeginRenderingViewFamilies() -> SceneCaptureUpdateDeferredCapturesInternal()) applies specifically to deferred updates. It runs inside the renderer module after the main ViewFamily has been assembled, but before the main scene renderers are created.

This timing ensures Render Targets are updated, and any additional passes or custom render passes are registered in time for the main render to consume them (for example, in materials or post-process).
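
For reference, which path a component takes is driven by how you trigger the capture. A rough sketch of the two options, assuming a Capture pointer to a USceneCaptureComponent2D (the property names come from USceneCaptureComponent and should be checked against your engine version):

C++ // Sketch: the two update modes
// Deferred: the capture is queued and rendered on the next frame via the path described above
Capture->bCaptureEveryFrame = true;   // or bCaptureOnMovement for event-driven updates
// Immediate: render into the Render Target right away, without waiting for the next frame
Capture->bCaptureEveryFrame = false;
Capture->CaptureScene();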

Now that we have identified the entry points for the gameplay camera and the SceneCapture pipelines, we can compare the two branches and see where Maintain Axis enters the flow. In Unreal, Maintain Axis is represented by EAspectRatioAxisConstraint (EngineTypes.h).

It defines how FOV should be constrained as the aspect ratio changes: preserve the vertical angle (Maintain Y), preserve the horizontal angle (Maintain X), or preserve the angle along the major axis (Maintain Major Axis, which picks X or Y depending on whether the frame is wider or taller).

C++ // Enum describing how to constrain perspective view port FOV
UENUM()
enum EAspectRatioAxisConstraint : int
{
    AspectRatio_MaintainYFOV UMETA(DisplayName="Maintain Y-Axis FOV"),
    AspectRatio_MaintainXFOV UMETA(DisplayName="Maintain X-Axis FOV"),
    AspectRatio_MajorAxisFOV UMETA(DisplayName="Maintain Major Axis FOV"),
    AspectRatio_MAX,
};

Maintain Axis in the Camera Pipeline

We reached the point where FMinimalViewInfo ViewInfo (which holds the camera parameters) is passed into CalculateProjectionMatrixGivenView() together with AspectRatioAxisConstraint from the LocalPlayer. What happens next?

CalculateProjectionMatrixGivenView() is a preparation step before the projection matrix is built. It accounts for asymmetric crop (which can change the effective aspect ratio), asks the viewport to compute the view rectangle for that aspect via CalculateViewExtents(), and finally calls CalculateProjectionMatrixGivenViewRectangle(), passing along both ViewInfo and the AspectRatioAxisConstraint parameter.

C++
void FMinimalViewInfo::CalculateProjectionMatrixGivenView(FMinimalViewInfo& ViewInfo, TEnumAsByte<enum EAspectRatioAxisConstraint> AspectRatioAxisConstraint, FViewport* Viewport, FSceneViewProjectionData& InOutProjectionData)
{
    const float CropAspectRatio = (ViewInfo.AsymmetricCropFraction.X + ViewInfo.AsymmetricCropFraction.Y) / (ViewInfo.AsymmetricCropFraction.Z + ViewInfo.AsymmetricCropFraction.W);
    const float AspectRatio = ViewInfo.AspectRatio * CropAspectRatio;
    FIntRect ViewExtents = Viewport->CalculateViewExtents(AspectRatio, InOutProjectionData.GetViewRect());
    CalculateProjectionMatrixGivenViewRectangle(ViewInfo, AspectRatioAxisConstraint, ViewExtents, InOutProjectionData);
}

CalculateProjectionMatrixGivenViewRectangle() builds the ProjectionMatrix while applying Maintain Axis and any camera overrides. In perspective mode, it computes the projection parameters and then constructs the projection matrix using FReversedZPerspectiveMatrix().

C++
template<typename T>
FORCEINLINE TReversedZPerspectiveMatrix<T>::TReversedZPerspectiveMatrix(T HalfFOVX, T HalfFOVY, T MultFOVX, T MultFOVY, T MinZ, T MaxZ)
    : TMatrix<T>(
        TPlane<T>(MultFOVX / FMath::Tan(HalfFOVX), 0.0f, 0.0f, 0.0f),
        TPlane<T>(0.0f, MultFOVY / FMath::Tan(HalfFOVY), 0.0f, 0.0f),
        TPlane<T>(0.0f, 0.0f, ((MinZ == MaxZ) ? 0.0f : MinZ / (MinZ - MaxZ)), 1.0f),
        TPlane<T>(0.0f, 0.0f, ((MinZ == MaxZ) ? MinZ : -MaxZ * MinZ / (MinZ - MaxZ)), 0.0f)
    )
{ }

The purpose of the projection matrix is to transform a point from view space into clip space. After that, the GPU performs the perspective divide, and you end up with screen space coordinates. The first two coefficients control the X and Y scale, adjusted for the frame shape. The third and fourth rows handle depth and the perspective projection itself, including the perspective divide. Since we care about Maintain Axis, the rest of this section focuses on how the first two coefficients are computed.

The image below shows the camera frustum (a truncated pyramid that represents the region captured by the frame). If you slice this pyramid with planes perpendicular to the view direction (cross sections at different distances z from the camera), you can see that the farther the slice is, the larger the visible rectangle becomes.

This is intuitive: rays leaving the camera at the FOV angles diverge over distance. At 1 meter, they spread slightly, while at 10 meters, they spread much more. As a result, the frustum's visible width and height grow with distance. This is the core of perspective: an object of the same physical size occupies more of the frame when it is close to the camera, and less when it is farther away.

Working directly in world units is inconvenient for this reason. The same x offset, for example, 1 meter to the right of center, produces a completely different screen offset at different depths z. That is why graphics pipelines use normalized screen-space coordinates, usually called NDC (Normalized Device Coordinates).

In NDC, positions become comparable across depth: the center of the frame is 0, and the visible boundaries map to -1 and +1 (left/right for x, bottom/top for y). So instead of asking "how many meters from the center," you ask "what fraction of the maximum visible range at this depth does this point occupy."

That is what normalization means here. For a given distance z, you compute where the visibility boundary lies and divide the point's coordinate, for example, y, by that boundary, ymax(z). A point on the boundary always gives yndc = 1, no matter how close or how far it is.

The problem reduces to computing axis scale factors (call them Sx and Sy) that project points correctly at any depth. This falls straight out of basic frustum geometry. Take a vertical slice. At distance z from the camera, the half-height of the visible area is:

ymax(z) = z * tan(β/2)

To map a point on the frustum boundary to yndc = 1, you normalize:

yndc = y / ymax(z) = y / (z * tan(β/2)) = (1 / tan(β/2)) * (y / z)

This is where the key coefficient appears: Sy = 1 / tan(β/2). It converts the FOV angle into a linear scale. In normalized coordinates, it effectively behaves like a focal length: a smaller angle produces a larger scale (more zoom), while a larger angle produces a smaller scale. The horizontal slice is analogous, with Sx = 1 / tan(α/2).
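
As a quick numeric check: β = 90° gives Sy = 1 / tan(45°) = 1, while β = 60° gives Sy = 1 / tan(30°) ≈ 1.73. Narrowing the vertical angle increases the scale, which is exactly the zoom-in you see when the FOV decreases.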

In Unreal Engine, this is implemented by feeding a single reference half-FOV into the matrix, while the difference between axes is handled through multipliers. These are corrective factors that account for the frame aspect ratio and the chosen Maintain Axis rule. As a result, the axis scales can be written as

Sy = MultFOVY / tan(HalfFOVY) and Sx = MultFOVX / tan(HalfFOVX).

These multipliers define the X and Y scale inside the projection matrix. In essence, they compensate for the current aspect ratio so that when the frame shape changes, the intended FOV axis stays fixed and the frustum remains consistent with the selected Maintain Axis rule.

The job of CalculateProjectionMatrixGivenViewRectangle() is to compute the correct inputs for FReversedZPerspectiveMatrix(): the reference half-FOV and the per-axis multipliers (MultFOVX and MultFOVY).

These values depend on the ViewRect dimensions, the chosen Maintain Axis mode, and any camera overrides. At a high level, the sequence looks like this. First, it initializes the axis multipliers and reads the current view size, SizeX and SizeY:

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
float XAxisMultiplier;
float YAxisMultiplier;
const FIntRect& ViewRect = InOutProjectionData.GetViewRect();
const int32 SizeX = ViewRect.Width();
const int32 SizeY = ViewRect.Height();

Next, it resolves AspectRatioAxisConstraint using ViewInfo (the snapshot of the camera parameters). If the camera does not override it, the value from the LocalPlayer is used. If an override is present, it replaces the LocalPlayer value.

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
AspectRatioAxisConstraint = ViewInfo.AspectRatioAxisConstraint.Get(AspectRatioAxisConstraint);

Then, based on the final AspectRatioAxisConstraint, it derives the bMaintainXFOV flag as true or false. That flag is used to compute the axis multipliers that were initialized at the start.

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
const bool bMaintainXFOV =
    ((SizeX > SizeY) && (AspectRatioAxisConstraint == AspectRatio_MajorAxisFOV)) ||
    (AspectRatioAxisConstraint == AspectRatio_MaintainXFOV);
if (bMaintainXFOV)
{
    // If the viewport is wider than it is tall
    XAxisMultiplier = 1.0f;
    YAxisMultiplier = SizeX / (float)SizeY;
}
else
{
    // If the viewport is taller than it is wide
    XAxisMultiplier = SizeY / (float)SizeX;
    YAxisMultiplier = 1.0f;
}

Next, to build the projection matrix, Unreal has to decide which half-FOV to use as the reference angle (MatrixHalfFOV) when computing the projection scale terms.

Quick reminder: in Unreal Engine, the camera FOV setting is stored and interpreted as a horizontal FOV (HFOV). So when Maintain Y is selected, the engine first reconstructs the vertical half-FOV that corresponds to this HFOV at the camera's "base" aspect ratio ARcam (ViewInfo.AspectRatio).

That vertical half-FOV becomes MatrixHalfFOV when building the matrix, and the final frame width is then adapted to the current viewport through the per-axis multipliers. Given HFOV and ARcam, the vertical half-FOV can be computed as β/2 = atan(tan(α/2) / ARcam), which is the same relationship derived earlier.

That is exactly what the code does: it takes the user-provided FOV as HalfXFOV, computes the corresponding HalfYFOV, and in Maintain Y mode chooses MatrixHalfFOV = HalfYFOV. This value is then passed into the projection matrix constructor together with the axis multipliers.

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
float MatrixHalfFOV;
if (!bMaintainXFOV && ViewInfo.AspectRatio != 0.f && !CVarUseLegacyMaintainYFOV.GetValueOnGameThread())
{
    const float HalfXFOV = FMath::DegreesToRadians(FMath::Max(0.001f, ViewInfo.FOV) / 2.f);
    const float HalfYFOV = FMath::Atan(FMath::Tan(HalfXFOV) / ViewInfo.AspectRatio);
    MatrixHalfFOV = HalfYFOV;
}

In the else branch, which covers Maintain X, legacy behavior, or the AR = 0 case, the function takes the half-FOV directly from ViewInfo.FOV and converts it to radians:

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
else
{
    MatrixHalfFOV = FMath::Max(0.001f, ViewInfo.FOV) * (float)UE_PI / 360.0f;
}

In the end, the projection matrix constructor receives the reference half-FOV, the two axis multipliers, and the depth/clipping parameters.

C++ // void FMinimalViewInfo::CalculateProjectionMatrixGivenViewRectangle()
InOutProjectionData.ProjectionMatrix = FReversedZPerspectiveMatrix(
    MatrixHalfFOV,
    MatrixHalfFOV,
    XAxisMultiplier,
    YAxisMultiplier,
    ClippingPlane,
    ClippingPlane
);

Maintain Axis in the SceneCapture Pipeline

For the SceneCapture pipeline, we stopped at the entry point in SceneCaptureUpdateDeferredCapturesInternal(), which iterates over the deferred capture components and calls UpdateSceneCaptureContents() for each one.

From here, we can follow the flow and see where AspectRatioAxisConstraint shows up, if it shows up at all. UpdateSceneCaptureContents() is declared as virtual in SceneCaptureComponent.h and is overridden for SceneCaptureComponent2D:

C++
void USceneCaptureComponent2D::UpdateSceneCaptureContents(FSceneInterface* Scene, ISceneRenderBuilder& SceneRenderBuilder)
{
    Scene->UpdateSceneCaptureContents(this, SceneRenderBuilder);
}

In practice, Scene is the renderer-side representation of the world (FScene) from the Renderer module. It is stored as an FSceneInterface* in UWorld->Scene. SceneCaptureComponent2D acts as the source of capture parameters, while the actual work is performed through FSceneInterface.

In FScene::UpdateSceneCaptureContents() (SceneCaptureRendering.cpp), we finally reach the part we were looking for: this is where FOV is evaluated, and the projection matrix is built. This is also where the key difference from the gameplay camera pipeline becomes clear: the projection is built purely from the CaptureComponent parameters, and AspectRatioAxisConstraint does not appear anywhere in this path.

C++ // void FScene::UpdateSceneCaptureContents()
const float UnscaledFOV = CaptureComponent->FOVAngle * (float)PI / 360.0f;
const float FOV = FMath::Atan((1.0f + CaptureComponent->Overscan) * FMath::Tan(UnscaledFOV));

if (CaptureComponent->bUseCustomProjectionMatrix)
{
    ProjectionMatrix = CaptureComponent->CustomProjectionMatrix;
}
else
{
    if (CaptureComponent->ProjectionType == ECameraProjectionMode::Perspective)
    {
        const float ClippingPlane = (CaptureComponent->bOverride_CustomNearClippingPlane) ? CaptureComponent->CustomNearClippingPlane : GNearClippingPlane;
        BuildProjectionMatrix(CaptureSize, FOV, ClippingPlane, ProjectionMatrix);
    }
}

What happens here is straightforward:

  • FOV is read from CaptureComponent->FOVAngle
  • If bUseCustomProjectionMatrix is enabled, the engine uses the matrix you provide
  • Otherwise, it calls BuildProjectionMatrix()

Now let's follow the code into BuildProjectionMatrix() in SceneCaptureRendering.cpp. This function computes the axis multipliers from the Render Target size and builds the projection matrix.

C++ // void BuildProjectionMatrix()
float const XAxisMultiplier = 1.0f;
float const YAxisMultiplier = InRenderTargetSize.X / float(InRenderTargetSize.Y);

if ((int32)ERHIZBuffer::IsInverted)
{
    OutProjectionMatrix = FReversedZPerspectiveMatrix(
        InFOV,
        InFOV,
        XAxisMultiplier,
        YAxisMultiplier,
        InNearClippingPlane,
        InNearClippingPlane
    );
}

The key lines here are:

float const XAxisMultiplier = 1.0f;

float const YAxisMultiplier = InRenderTargetSize.X / float(InRenderTargetSize.Y);

In other words, BuildProjectionMatrix() does not offer a Maintain X / Maintain Y choice like CalculateProjectionMatrixGivenViewRectangle() does. SceneCapture always builds a projection the same way: the horizontal component stays fixed, and the vertical axis is adjusted to match the Render Target aspect ratio.
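
To put numbers on the mismatch, reuse the earlier example: a camera with FOV 90° at ARcam = 16:9 has β ≈ 58.7°. Stretch the viewport to 21:9. The gameplay camera (Maintain Y) keeps β and widens the horizontal angle to about 105.3°, while a SceneCapture with FOVAngle = 90 keeps 90° horizontally and shrinks the vertical angle to about 46.4° instead, so its image looks noticeably zoomed in compared with the main view.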

So the next step is to add what SceneCapture is missing. In the next section, I will create a custom SceneCaptureComponent2D that exposes a Maintain Axis switch, and I will show exactly where it needs to affect projection-matrix construction.

Creating a Custom MaintainAxisCaptureComponent2D

Let's create a UMaintainAxisCaptureComponent2D derived from USceneCaptureComponent2D, so SceneCapture can support Maintain X, Maintain Y, and Major Axis the same way a regular gameplay camera does. For SceneCapture, the key question is: where do we intervene in the projection?

There are two main options here: recompute the input FOVAngle before the matrix is built (similar to the Blueprint approach), or build the matrix directly and override the default one via bUseCustomProjectionMatrix.

Choosing the Approach

Overriding FOVAngle works, but it comes with a downside: a parameter that should represent a camera setting, something a person chooses, turns into a derived technical value that depends on the current render target size and the selected Maintain Axis mode. As a result, FOVAngle keeps changing over time, becomes harder to debug, and stops behaving like a stable input.

A custom projection matrix lets you separate these roles. FOVAngle stays as the component's original input, while all synchronization logic lives separately and is expressed directly in the projection matrix, in the exact form the renderer consumes.

The result is more transparent and easier to control: you override the matrix that SceneCapture passes into FSceneViewInitOptions, and you can match the camera behavior precisely without changing what the component's user-facing parameters mean.

What to Keep in Mind

  • It is important to keep in mind that in the regular camera pipeline, the vertical HalfYFOV is derived from the camera aspect ratio:

const float HalfYFOV = FMath::Atan(FMath::Tan(HalfXFOV) / ViewInfo.AspectRatio);

SceneCaptureComponent2D does not have an equivalent parameter. It only knows the Render Target aspect ratio, so if you want to maintain Y behavior, you need one extra parameter. I will call it ReferenceAspectRatio, and it should match the aspect ratio the main camera uses to interpret its FOV.

  • There is one more nuance. bUseCustomProjectionMatrix comes with a warning that it "does not currently affect culling." At first glance, this sounds like the engine would still cull objects using the default frustum and ignore your custom matrix. But the actual pipeline reveals an important detail: when bUseCustomProjectionMatrix is true, FScene::UpdateSceneCaptureContents() (SceneCaptureRendering.cpp) takes ProjectionMatrix directly from CaptureComponent->CustomProjectionMatrix and passes it as is into CreateSceneRendererForSceneCapture().

C++
void FScene::UpdateSceneCaptureContents(USceneCaptureComponent2D* CaptureComponent, ISceneRenderBuilder& SceneRenderBuilder)
{
    ...
    if (CaptureComponent->bUseCustomProjectionMatrix)
    {
        ProjectionMatrix = CaptureComponent->CustomProjectionMatrix;
        ...
        FSceneRenderer* SceneRenderer = CreateSceneRendererForSceneCapture(..., ProjectionMatrix, ...);
        ...
    }
    ...
}

That matrix then ends up in FSceneViewInitOptions::ProjectionMatrix. ViewMatrices are built from it, and the view frustum is derived from the resulting view-projection matrix in FSceneView::SetupViewFrustum(). In other words, the main frustum planes are computed from ViewMatrices.GetViewProjectionMatrix(), which does take your custom projection into account.

That said, the Epic warning should be read carefully. Some secondary visibility heuristics and culling-related decisions may still rely on scalar parameters such as FOVAngle, which is why the feature is marked as use with caution. If you run into artifacts or incorrect results, you may need to dig deeper and audit the places where the engine uses scalar camera parameters separately from ProjectionMatrix, then bring them back in line with the custom projection.

So if you go with CustomProjectionMatrix, there are two ways to implement Maintain Axis selection.

  • You can fully mirror the gameplay camera logic: feed a vertical half FOV into the matrix and compensate the other axis through a multiplier.

(HalfYFOV = atan(tan(HalfXFOV)/ARcam), MatrixHalfFOV = HalfYFOV, XAxisMultiplier = 1/ARrt, YAxisMultiplier = 1)

  • Or you can keep the default SceneCapture multipliers and adjust only the FOV. This approach is closer to the intuitive definition of Maintain Y: the vertical angle stays fixed and only the horizontal coverage changes. First, reconstruct HalfYFOV from ARcam, then compute a new horizontal HalfXFOV_new for the current ARrt. After that, you can use the standard SceneCapture multipliers (as in BuildProjectionMatrix).

(HalfYFOV = atan(tan(HalfXFOV)/ARcam), HalfX_new = atan(ARrt * tan(HalfYFOV)), MatrixHalfFOV = HalfX_new, XAxisMultiplier = 1, YAxisMultiplier = ARrt)

The first option is closer to the camera code and slightly cheaper (one atan instead of two). The second option, however, has a few practical advantages.

  • It stays closer to the engine's default math. Only HalfXFOV changes, and everything else remains aligned with BuildProjectionMatrix(). That reduces the chance of mismatches and becomes even more important once AdjustProjectionMatrixForRHI, reversed-Z, jitter, and similar steps enter the path.
  • It keeps a clear meaning for MatrixHalfFOV. It always represents the horizontal half-FOV that matches the current render target, regardless of the selected Maintain Axis mode.
  • It is easier to reuse and extend. If you later need clamping or a minimum FOV, dynamic overscan, cinematic camera synchronization, or support for multiple render targets, this approach stays simpler because you always work with a concrete output value, the horizontal FOV, that is easy to compare and print.
  • It also conflicts less with the engine's scalar parameters. It gives you a natural way to keep the scalar FOV and the projection matrix consistent for logs, heuristics, and debugging.

In the end, I went with the second option.
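
It is worth noting that, on paper, both options produce the same matrix. With option one, Sx = (1/ARrt) / tan(HalfYFOV) and Sy = 1 / tan(HalfYFOV). With option two, tan(HalfX_new) = ARrt · tan(HalfYFOV), so Sx = 1 / (ARrt · tan(HalfYFOV)) and Sy = ARrt / (ARrt · tan(HalfYFOV)) = 1 / tan(HalfYFOV), which are the same values. The difference is purely in which intermediate quantities you carry around and debug.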

What Else to Keep in Mind

To avoid doing unnecessary work every tick, you can cache the input parameters (Render Target size, FOV, overscan, Maintain Axis mode, near clip) and recompute the matrix only when something actually changes.
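
A minimal sketch of that caching idea, assuming the cached members (CachedSize, CachedFOV, CachedOverscan, CachedMaintainAxis) are added to the component; it simply compares the current inputs against the cached ones and only then calls the UpdateCustomProjection() shown in the next section:

C++ // Sketch: recompute the custom projection only when an input changes
void UMaintainAxisCaptureComponent2D::RefreshProjectionIfNeeded()
{
    if (!TextureTarget)
    {
        return;
    }

    const FIntPoint Size(
        FMath::RoundToInt(TextureTarget->GetSurfaceWidth()),
        FMath::RoundToInt(TextureTarget->GetSurfaceHeight()));

    const bool bDirty =
        Size != CachedSize ||
        !FMath::IsNearlyEqual(FOVAngle, CachedFOV) ||
        !FMath::IsNearlyEqual(Overscan, CachedOverscan) ||
        MaintainAxis != CachedMaintainAxis;

    if (bDirty)
    {
        CachedSize = Size;
        CachedFOV = FOVAngle;
        CachedOverscan = Overscan;
        CachedMaintainAxis = MaintainAxis;
        UpdateCustomProjection();
    }
}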

Also, keep in mind that the base SceneCaptureComponent2D has a Main View Camera mode, where it copies matrices from the main view. In that branch, the engine takes ProjectionMatrix from the main view and does not check bUseCustomProjectionMatrix, so you should not override the matrix in this mode.

C++ // FScene::UpdateSceneCaptureContents()
if (CaptureComponent->ShouldRenderWithMainViewCamera() && CaptureComponent->MainViewFamily)
{
    const FSceneView* MainView = CaptureComponent->MainViewFamily->Views[0];
    ViewLocation = MainView->ViewMatrices.GetViewOrigin();
    ViewRotationMatrix = MainView->ViewMatrices.GetViewMatrix().RemoveTranslation();
    ProjectionMatrix = MainView->ViewMatrices.GetProjectionMatrix();
}
else
{
    ...
    if (CaptureComponent->bUseCustomProjectionMatrix)
        ProjectionMatrix = CaptureComponent->CustomProjectionMatrix;
    else
        BuildProjectionMatrix(...);
}

Implementing UpdateCustomProjection()

Below is the core UpdateCustomProjection() function, which updates the projection using the following logic:

  • Compute the render target aspect ratio, AspectRT, from the render target size
  • Decide which axis to preserve (via bMaintainX / the selected Maintain Axis mode)
  • Depending on the mode: for Maintain X, MatrixHalfFOV = HalfX; for Maintain Y, reconstruct the "camera" HalfY using the reference aspect ratio (ARcam, ReferenceAspectRatio), then compute a new horizontal half-FOV HalfX_new for the current AspectRT
  • Keep XAxisMultiplier and YAxisMultiplier the same as in the default SceneCapture path
  • Build an FReversedZPerspectiveMatrix and write the result into CustomProjectionMatrix

C++ // MaintainAxisCaptureComponent2D.cpp
void UMaintainAxisCaptureComponent2D::UpdateCustomProjection()
{
    const int32 SizeX = TextureTarget->GetSurfaceWidth();
    const int32 SizeY = TextureTarget->GetSurfaceHeight();
    if (SizeX <= 0 || SizeY <= 0)
    {
        return;
    }

    const bool bOverrideNear = bOverride_CustomNearClippingPlane;
    const float NearClip = bOverrideNear ? CustomNearClippingPlane : GNearClippingPlane;
    const float AspectRT = float(SizeX) / float(SizeY);

    // Resolve the selected Maintain Axis mode into a single flag
    switch (MaintainAxis)
    {
    case EAspectRatioAxisConstraint::AspectRatio_MaintainXFOV:
        bMaintainX = true;
        break;
    case EAspectRatioAxisConstraint::AspectRatio_MaintainYFOV:
        bMaintainX = false;
        break;
    case EAspectRatioAxisConstraint::AspectRatio_MajorAxisFOV:
        // wide -> maintain X, tall -> maintain Y
        bMaintainX = (SizeX >= SizeY);
        break;
    }

    // Treat FOVAngle as the horizontal half-FOV and apply Overscan, as the engine does
    const float HalfX_Unscaled = FMath::DegreesToRadians(FMath::Max(0.001f, FOVAngle) * 0.5f);
    const float HalfX = FMath::Atan((1.0f + Overscan) * FMath::Tan(HalfX_Unscaled));
    float MatrixHalfFOV = HalfX;

    if (!bMaintainX)
    {
        // Maintain Y: reconstruct the camera's vertical half-FOV from ARcam,
        // then recompute the horizontal half-FOV for the current render target
        const float ARcam = (ReferenceAspectRatio > 0.0f) ? ReferenceAspectRatio : AspectRT;
        const float HalfY = FMath::Atan(FMath::Tan(HalfX) / ARcam);
        MatrixHalfFOV = FMath::Atan(AspectRT * FMath::Tan(HalfY));
    }

    // Same multipliers as the default SceneCapture path (BuildProjectionMatrix)
    const float XAxisMultiplier = 1.0f;
    const float YAxisMultiplier = AspectRT;

    const FMatrix Proj = FReversedZPerspectiveMatrix(
        MatrixHalfFOV, MatrixHalfFOV,
        XAxisMultiplier, YAxisMultiplier,
        NearClip, NearClip
    );
    CustomProjectionMatrix = Proj;
}

With the added Maintain Axis and ReferenceAspectRatio (ARcam) parameters, the component can follow Maintain X/Y/Major Axis just like the gameplay camera and interpret FOV the same way in both views.
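
For completeness, the corresponding declarations might look roughly like this. The header below is a sketch: the property specifiers, defaults, and the cached bMaintainX member are assumptions based on the .cpp shown above, not a verbatim listing.

C++ // MaintainAxisCaptureComponent2D.h (sketch)
#include "Components/SceneCaptureComponent2D.h"
#include "MaintainAxisCaptureComponent2D.generated.h"

UCLASS(ClassGroup = Rendering, meta = (BlueprintSpawnableComponent))
class UMaintainAxisCaptureComponent2D : public USceneCaptureComponent2D
{
    GENERATED_BODY()

public:
    // Which FOV axis to preserve as the render target aspect ratio changes,
    // mirroring the camera's AspectRatioAxisConstraint
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Projection")
    TEnumAsByte<EAspectRatioAxisConstraint> MaintainAxis = AspectRatio_MaintainYFOV;

    // The aspect ratio the main camera uses to interpret its FOV (ARcam)
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Projection", meta = (ClampMin = "0.0"))
    float ReferenceAspectRatio = 1.777778f;

    // Rebuilds CustomProjectionMatrix using the selected Maintain Axis rule
    UFUNCTION(BlueprintCallable, Category = "Projection")
    void UpdateCustomProjection();

private:
    bool bMaintainX = false;
};

In use, bUseCustomProjectionMatrix needs to be enabled on the component, and UpdateCustomProjection() should be called whenever the render target size or the relevant parameters change, for example from the caching helper sketched earlier.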

Conclusion

In this article, I did more than just "fix a framing mismatch." I made it testable. You can see exactly where UE5 builds projection for the LocalPlayer, where it builds projection for SceneCapture, and where AspectRatioAxisConstraint enters (or doesn't enter) the matrix construction.

By tracing both pipelines into the engine source and unpacking the frustum math, we can explain why the same FOVAngle doesn't guarantee the same composition across aspect ratios. The practical result is UMaintainAxisCaptureComponent2D: a SceneCaptureComponent2D variant that adds Maintain X/Y/Major Axis and synchronizes projection via CustomProjectionMatrix, while keeping FOVAngle as a user-facing setting.

This matters anywhere an image is reused and must match the main camera: portals and in-world screens, UI renders, camera-aligned masks and matching, compositing, virtual optics, and SceneCapture-driven post-process effects.

After this breakdown, you can debug framing issues. Trace the pipeline, check the reference half-FOV and multipliers, and you'll know exactly why two "identical" cameras don't match. Once both views build projection under the same rules, the framing stays consistent, no matter how the viewport changes.
