Using RealityCapture & Unreal Engine to Create a Valencian Scenery

Pontus Ryman has told us about his photogrammetry workflow, spoken about the right weather conditions for scanning, and explained why RealityCapture is his software of choice for turning images into a 3D model.

Introduction

Hi! I am Pontus Ryman. I've been working as an Environment Artist in the AAA game development industry for close to 12 years now.

I have had an interest in video games and 3D graphics since an early age, and since getting hooked, it was always a goal of mine to work with 3D art. I started early, making Half-Life 1 maps in the old Hammer editor, moving on to making textures and modding for Battlefield 1942, and eventually working on the Black Mesa game when it was still a mod in the Source engine.

Eventually, I studied Digital Graphics, and soon after graduating, I managed to land a position as a 3D Artist at DICE.

I spent my first 8 years at DICE, working on games such as Battlefield 3, 4, and 5 and Star Wars Battlefront 1 & 2. I've been fortunate enough to gather a lot of experience throughout the years, and with the games relying heavily on photogrammetry, I have had the chance to be a part of developing cutting-edge tech and workflows in the field.

Since fully photogrammetry-based environments were established on Battlefront 1, the core workflows and principles have not really changed; only the tools and specific gear have evolved to improve individual steps. Photogrammetry environments in games still benefit from the rule of curating a small but well-built set of content that can cover an entire biome's core visual look across multiple maps. Mix in a few map-specific elements and you can spice up the same content to create sub-variants of the same environment.

The Forests of Valencia Project

The Forests of Valencia project came about as a follow-up to my last project, Summer Archipelago. At the time, I just needed something to do while I was on parental leave. Spending time with our baby has been incredible, but I had to have something productive to do in the evenings when my daughter was asleep. My father had just bought a house in Spain, and we were going there for vacation. I thought: why not combine the trip with a photogrammetry project that differs in theme from the last one but follows the same framework?

In this project, I wanted to recreate the natural forests of the Valencia region where I captured the content. While I was out and about doing the actual scanning, I also gathered a lot of references, not just of vegetation but also of lighting, composition, placement of assets, and even sound.

I would later get a lot of use out of these references when accurately assembling the content into a believable world. Some of my shots are even close to 1:1 recreations of the reference images; if the composition and lighting are striking enough, the references can almost act as concept art.

The last project was rendered in Unreal Engine 4; this time, I wanted to switch over to Unreal Engine 5 to test the new features, especially Lumen, which has produced some amazing results.

Equipment

The equipment I used was a fairly basic set of cameras and supporting gear, based on the kit we used on photogrammetry trips for the AAA games I have worked on.

The main kit includes:

  • Canon 6D MKII;
  • X-Rite ColorChecker Passport;
  • 24mm lens with image stabilizer for the majority of the scanning;
  • 70-200mm lens with image stabilizer for references and scanning at a distance;
  • Tripod.


I also had a large white cloth and a small blue cloth, both of non-reflective fabric, sturdy and easy to clean.

This time, I also took some extra gear which I do not always bring but can be handy at times:

  • Canon 750D;
  • 18-55mm lens with image stabilizer;
  • Polarizing lens filter;
  • GorillaPod tripod.

In-engine collection of some of the gear

This is the kit I use, but photogrammetry these days can be done with any type of camera, and the cost of getting into the technology is significantly lower than it was a few years ago.

If there is one piece of gear I would recommend getting, it is the color chart: having a correct color reference for your scanned assets is incredibly valuable once you start to build up a library of similar assets.

A simple but effective kit

Lighting Conditions

When scanning, the rule of thumb is to always avoid direct sunlight and rain or wet weather. While scanning in sun is possible, it introduces a whole lot of cleanup work in the content creation phase, and some of it simply cannot be cleaned up in a good way and will leave artifacts. So avoid the sun altogether if possible and stick to the shadows!

Wet weather can cause reflections from different angles and should be avoided, since it can throw off the photogrammetry software's alignment. A wet surface also reads darker in the albedo than a dry one, since it has soaked up water, which does not accurately represent the values you want for a balanced scene.

The best possible scenario is an overcast sky: it gives even lighting without any color bounce from nearby surfaces. You can also scan in shade or during temporary cloud cover on a sunny day, but there can be shifts in the color cast on objects from indirect lighting. Generally, this is not a big issue, and a color chart image can help calibrate it correctly, but it's good to keep in mind.

Capturing Content

Capturing photogrammetry content is a fairly straightforward process, and the basics are easy to understand: take a lot of images of a real-life object from every possible angle so that the photogrammetry software can match up the images, create a 3D point cloud, and generate a mesh and textures from it.

In my process, I used the equipment discussed earlier, and RealityCapture is always my software of choice for turning the images into a 3D model.

When capturing content there are a few key things to be mindful of, and while they are easy in concept, a mix of circumstances can make it challenging to uphold some of them.

These key things are:

  • Keep your images as sharp as possible, with high shutter speed and high f-stop values, preferably low ISO;
  • Cover the asset with as many images and angles as possible;
  • Make sure the asset captured is not moving in any way between images;
  • Check the histogram so you do not hit absolute black or absolute white; there is no information to recover in those parts of an image, and you cannot compensate for it when calibrating the images before running them;
  • Always take an image of a color chart with your asset.

Sharpness is incredibly important for the alignment of the images and will also affect the detail quality of the generated high poly mesh. Generally, this means having the correct focus point when scanning and keeping your f-stop away from the lower numbers; I usually try to stay between f/9 and f/11 on my Canon 6D. It can be hard to keep the f-stop high, however, as lower numbers give you a brighter image in low-light situations, and balancing it with a higher ISO can introduce noise, which is not preferable for your image.

A lower shutter speed can also give you a brighter image, since it lets more light into the lens, but it introduces a risk of sharpness loss because of micro-movements of your hand while shooting. In very low-light situations, it is recommended to use a tripod to stabilize the camera at low shutter speeds. Scanning an object can take longer this way, but it's important to stick to the sharpness rule as much as possible.
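As a rough illustration (not part of the original workflow), the tradeoff can be put into numbers with the standard photographic exposure value formula EV = log2(N²/t), adjusted for ISO:

```python
import math

def exposure_value(f_stop: float, shutter_s: float, iso: float = 100.0) -> float:
    """Exposure value relative to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_stop**2 / shutter_s) - math.log2(iso / 100.0)

# A wide-open handheld setting: f/4 at 1/200s, ISO 100...
open_ev = exposure_value(4.0, 1 / 200)
# ...versus stopping down to f/11 at the same shutter speed for sharpness.
closed_ev = exposure_value(11.0, 1 / 200)

stops_lost = closed_ev - open_ev
print(f"f/4 -> f/11 costs {stops_lost:.1f} stops of light")   # ~2.9 stops
# Recovering those stops with ISO alone lands around ISO 750, where
# sensor noise starts creeping into the scan textures.
print(f"ISO needed to compensate: {100 * 2**stops_lost:.0f}")
```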

With a lower shutter speed, there is a risk of movement blurring

For hand-held scanning, I avoid going below 1/200s in shutter speed, and if I need to move the camera away from a position where I can look through the viewfinder (when photographing hard-to-reach places high above my head or low to the ground), I try to keep it even higher, as keeping the camera steady becomes even harder.
It is possible to make up for less sharp images with a lot of image coverage/overlap, at least when it comes to alignment, but the smaller details will still not come out right if many of the images are not sharp.

It's good to keep in mind that if a feature or shape of the scanned asset does not exist in any photo, it will never exist on the generated 3D mesh; each feature needs at least a few images for it to be reconstructed. Nothing unique will be "generated" through extrapolation or procedural solutions in RealityCapture or other photogrammetry software; at most, stretched textures and flat geometry will bridge empty pockets of missing information.

Scanning vegetation is its own beast and requires a different approach. I lay the vegetation out on a large white cloth, arranged the way it is intended to appear in the engine as alpha cards, in order to either scan it directly or use the photos as references for creating the alpha cards.

Sometimes a blue cloth sheet can be efficient, as it's easier to mask out in Photoshop, but the blue color can also bounce problematic blue light up onto the vegetation placed on it. Through trial and error, I have opted for white over blue or black sheets, but it's really a matter of taste. A black sheet is easier to mask and does not bounce blue onto the subject, but it can instead become very dark, making capturing harder.
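As a rough illustration of why a colored sheet is easy to key out (this is not the Photoshop process described above; the file name and threshold are hypothetical), a simple blue-dominance mask in Python might look like this:

```python
import numpy as np
from PIL import Image

# Hypothetical input: a top-down photo of leaves laid out on a blue sheet.
img = np.asarray(Image.open("leaves_on_blue.jpg")).astype(np.float32) / 255.0
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Background pixels are blue-dominant; vegetation is green/brown.
# The margin controls how aggressively blue-tinted edges are cut away.
margin = 0.08
is_background = (b > r + margin) & (b > g + margin)

# Keep the photo's RGB, write the mask into the alpha channel.
alpha = np.where(is_background, 0.0, 1.0)
rgba = np.dstack([img, alpha])
Image.fromarray((rgba * 255).astype(np.uint8)).save("leaf_card_rgba.png")
```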

Larger leaves and thicker sticks are easier to scan directly, as the photogrammetry software has clearer details to align to, so I try to get a 3D mesh scan out of those directly; it saves a lot of time and the quality is always better. For vegetation with thinner details, however, such as pine-like trees, bushes, or grass, the risk of movement in the wind while scanning, or simply the thinness of the elements, can create a lot of issues in the final 3D model. In these cases, I use just a top-down image as a reference and model the high poly branch in Blender. Sometimes the branch can be scanned but the thinner ends do not turn out well; in those cases, a manually made high poly model can be created, baked down, and combined with the scanned part of the branch.

Running Your Assets Through RealityCapture and Cleanup

When creating content from a photogrammetry scan, there are a few steps to clean the asset and prepare it for in-engine use.

The amount of cleaning and prepping depends on how good the scan is and the conditions in which it was taken. In the best-case scenario, there is barely any cleanup to do at all.

The first step is to calibrate the images based on your color chart. The most straightforward way is to simply white balance towards the white points on your color chart; this is a quick and easy way to get your asset into a good ballpark. However, if you want very exact calibration, calibrating against the entire set of color chart swatches is preferred.

Exact calibration can be valuable if you have complex colors that cross multiple assets, such as rocks with strong color gradients, or if the assets were captured in different lighting conditions (even in shadow, the surroundings can cast colored bounce light).

Calibrating and aligning colors is also a good opportunity to pull back some of the brightest and darkest parts of an image; this helps even out the albedo toward a more mid-gray value.
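As an illustration of the quick white-point approach (a minimal sketch, not the exact calibration tooling; the file name and patch coordinates are hypothetical, and a production pipeline would work on linear raw data rather than an 8-bit JPEG):

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("scan_0001.jpg")).astype(np.float32) / 255.0

# Average RGB over a small region covering the chart's white patch.
# In practice, you would pick these coordinates per shoot.
patch = img[1200:1250, 800:850].reshape(-1, 3).mean(axis=0)

# Scale each channel so the white patch lands on a neutral gray.
# Targeting ~0.9 instead of 1.0 keeps the patch below clipping.
target = 0.9
balanced = np.clip(img * (target / patch), 0.0, 1.0)

Image.fromarray((balanced * 255).astype(np.uint8)).save("scan_0001_wb.png")
```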

When the images have been calibrated, it's time to run the asset through your photogrammetry tool, in this case, RealityCapture.

For the most part, I just run with medium image overlap in the align settings. If for some reason not all the images align, testing the different align options can be useful, as can the down-res option for the images; it may help the alignment find matches between images and can sometimes resolve issues and connect detached components.

If this still does not work, and by some lucky shot you happen to have extra images of the same asset in the background from scanning another asset nearby, try adding those extra images into the mix; it might be just what RealityCapture needs to find and align the components.

I run my models on High Detail when going from point cloud to 3D mesh to get the most out of the asset. The quality difference between Normal and High varies depending on the scan, but I usually run at High just to be on the safe side.

When the High model is done, it's time to trim and filter out excess geometry that will not be needed. There are two reasons to do this:

  1. You don't need that geometry cost when the mesh is exported; filtering can shave off millions of triangles and also makes the mesh easier to work with outside of RealityCapture;
  2. If the model will be UV-mapped and textured, the more UV space dedicated to the area you will bake from, the better. This point does not matter, however, if you bake from Vertex Colors.

Once filtering is done, I bake the texture to the high poly, either as a UVed high poly baked to an actual image texture (at the highest resolution possible) or to Vertex Colors; it's case by case, but more often than not I use Vertex Colors. Sometimes baking before filtering can be useful if Vertex Color is the preferred method and I am unsure what to filter out; this leaves some flexibility to backstep if needed.

After that, I export it as a .ply and call it _HIGH, then decimate the model down to 200-500k triangles (higher numbers for assets with more complex shapes that need representation) and export that as _MID.ply. Further filtering can be done here if I know exactly what my low poly will look like and that skirt extensions will be added later.
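This decimation step happens in RealityCapture, but if you prefer scripting it, here is a rough Blender sketch (assuming Blender 4.x's PLY operators; the file paths and target count are hypothetical):

```python
import bpy

# Import the high poly scan; the import leaves the new mesh selected/active.
bpy.ops.wm.ply_import(filepath="/scans/rock_01_HIGH.ply")
obj = bpy.context.active_object

# Count the current triangles to derive a decimation ratio.
obj.data.calc_loop_triangles()
tri_count = len(obj.data.loop_triangles)

target_tris = 400_000  # within the 200-500k range mentioned above
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = min(1.0, target_tris / tri_count)
bpy.ops.object.modifier_apply(modifier=mod.name)

bpy.ops.wm.ply_export(filepath="/scans/rock_01_MID.ply")
```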

This _MID model is used as a reference for the low poly creation as the high poly mesh can be too big for other software to handle. 

At this stage, it's time to make a low poly to bake to. How you build the low poly over the _MID reference is up to you: you can opt for a procedural solution in Houdini, use a decimated mesh from RealityCapture that is then cleaned up and UV-mapped, build the low poly in your preferred 3D app, or take the more manual but more accurate approach in TopoGun, for example. There is no right or wrong choice of approach; a clean low poly that represents the shape we will bake to is what we are after. I tend to use a mix of RealityCapture decimation and cleanup in Blender, depending on the complexity of the mesh.

I usually stick to "game-friendly" budgets rather than movie/VFX budgets in terms of triangle count relative to object size. While the content in this project is not meant for any particular game or game type, I still kept within budgets learned from AAA experience, looking at the shape of each asset and giving more triangles where the geometry is more complex.

Once the low poly is created, it's worth taking a look at the asset to see if it needs a cap (filling in the empty spot underneath, if the asset is close to a solid shape) or a skirt extension, in the case of a ledge or rock wall.

These skirt extensions are meant to help merge the asset into the terrain and into other assets. The extended skirt area is later textured artificially by masking in a tiling surface texture. If that approach is used, the skirt can also work as a bridging area for Virtual Texture blending. I usually just extrude the skirt straight backward, or try to follow the angle at which the ground extended from the real-world asset. This area does not need perfect geometry, as it is usually covered by geometry from other assets, but it should not have any obvious mesh issues either.

With a UV-mapped low poly, it's on to baking the maps.

As mentioned before, I often use .ply for the models because I will be baking Vertex Color down to the low poly's diffuse. The highest texture quality comes from baking to a UVed texture on the high poly model, but I still go for a .ply with Vertex Color because I know that down the line I will manipulate the texture, size it down to a game-friendly standard, and layer detail textures on top; after that, the difference from the highest-resolution albedo bake is no longer really noticeable.

If you are after the highest possible resolution for a single asset, however, baking from a UVed high poly is the way to go.

I bake the Color, AO, Height, Cavity, and Normal maps (Tangent-Space, and sometimes an Object-Space Normal if needed for cleanup or specific in-engine cases). With a baked low poly, the asset is either ready for importing into the engine or requires some texture cleanup first.

There are two main things to look for when cleaning up.

The first is broken or blurry spots on the mesh where information was missing. These areas are often easy to clean by setting up a clone stamp layer in Substance 3D Painter, sourcing all the channels (Albedo, Height, AO, Normal, etc.) from another part of the mesh, and stamping them onto the broken or missing areas.

The second is lighting information removal. If there is heavy AO darkness in cavities, it can be countered by using the baked AO as a mask to brighten up the darkest spots. Alternatively, if the asset has an even color, you can invert and grayscale the color and use that as a brightening mask; it's generally not the best approach, but it can help if the AO doesn't match where the dark areas on the albedo are.
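The AO-masked brighten boils down to a few lines of image math; here is a minimal sketch of the idea (not the Substance 3D Painter setup itself; the file names and strength value are hypothetical):

```python
import numpy as np
from PIL import Image

albedo = np.asarray(Image.open("rock_albedo.png").convert("RGB")).astype(np.float32) / 255.0
ao = np.asarray(Image.open("rock_ao.png").convert("L")).astype(np.float32) / 255.0

# Brighten the albedo where the baked AO is dark: fully occluded areas get
# the full boost, fully unoccluded areas are left untouched.
strength = 0.6
boost = 1.0 + strength * (1.0 - ao)[..., None]
delit = np.clip(albedo * boost, 0.0, 1.0)

Image.fromarray((delit * 255).astype(np.uint8)).save("rock_albedo_delit.png")
```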

Strong lighting baked into your textures (if the asset was scanned in direct sunlight) is the hardest lighting artifact to get rid of and can require quite a bit of work; for this, I recommend Agisoft De-Lighter. It does a good job of finding and removing overly strong sunlight. The best way to counter this issue, however, is to not scan in strong sunlight at all.

Using the Content In-Game

After the cleanup is done, it's just a matter of exporting to Unreal Engine. At this stage of the project, I had master shaders and an import pipeline established from my last project, with some minor updates and optimizations added.

Overall, the important part here is to get to the point where a master shader covers the needs of all the assets in a set. In a natural setting, for example, all rocks will use the same set of detail textures and, if present, skirt extension materials. My nature shader also handles unpacking the textures and the contact shadow adjustment options.

With a master shader in place, the entire "Scan – RealityCapture – Cleanup – Import" pipeline becomes very efficient, and you can quickly produce a lot of content.

The normal map and diffuse texture usually get a slight detail-blurring pass where I smooth out the smallest details, because the detail textures will replace them. Leaving them in can cause quality conflicts, and blurring the texture actually makes it look better when combined with detail textures. The added win is that the texture can then be sized down to save on texture space.
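As a minimal stand-in for that pass (the blur itself would normally happen in an image editor; the radius and file names here are hypothetical):

```python
from PIL import Image, ImageFilter

tex = Image.open("rock_albedo_delit.png")

# Soften the smallest details that the tiled detail textures will replace anyway...
soft = tex.filter(ImageFilter.GaussianBlur(radius=1.5))

# ...then halve the resolution, since the finest detail is gone regardless.
half = soft.resize((tex.width // 2, tex.height // 2), Image.Resampling.LANCZOS)
half.save("rock_albedo_game.png")
```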

I do the texture packing and detail texture "slice" masks at this stage, packing the Color and mask into an RGB+A setup, and the Normal, Roughness, and Height into an RG+B+A setup.
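A sketch of that channel packing with numpy (the map file names are hypothetical; only the RGB+A and RG+B+A layouts come from the description above):

```python
import numpy as np
from PIL import Image

def load(path: str, mode: str) -> np.ndarray:
    return np.asarray(Image.open(path).convert(mode)).astype(np.uint8)

# Color RGB + slice mask in alpha.
color = load("rock_albedo_game.png", "RGB")
slice_mask = load("rock_slicemask.png", "L")
Image.fromarray(np.dstack([color, slice_mask]), "RGBA").save("rock_D.png")

# Tangent normal X/Y in RG, roughness in B, height in A.
normal_xy = load("rock_normal.png", "RGB")[..., :2]
rough = load("rock_roughness.png", "L")
height = load("rock_height.png", "L")
Image.fromarray(np.dstack([normal_xy, rough, height]), "RGBA").save("rock_NRH.png")
```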

The "slice" mask is basically just a black/white image where a grayscale value will map a detail texture from a texture array onto a specific surface. In my case for a ledge asset, white will mask where a top-down surface layer of pine will mask in, this is driven in the shader where it will clamp the highest white and consider anything below white as black and then mask in the pine surface. Bright white (not full-white, but close to it), gray and black will mask in smooth, rough, and very coarse rocky detail textures on an asset.

Once the asset is imported, it's just a matter of using it! When the asset is in, I usually test the shape a bit to see if I need to bend it slightly to make it more useful, or adjust any textures to align better with the other assets. The calibration at the start usually gets the textures so close to the correct values that this is not always needed, but a real-life rock might still look a bit off next to the majority of the other rocks scanned for the set, so bringing them closer together is worth doing, even if it can stray slightly from the "actual" colors.

Conclusion

When working with photogrammetry and sets of natural biomes in RealityCapture and Unreal, the most important aspect overall is to make sure that everything fits together.

That means scanning everything at the same location, making sure the assets are scanned in similar lighting conditions (overcast being preferred), always scanning every asset with a color chart for reference, and capturing assets of different sizes for the set.

When a good set of assets has been captured in the field, it's quite straightforward to establish a production line of sorts that moves assets through calibration, alignment, 3D meshing, and texturing in RealityCapture, then through low poly baking, and on to the in-engine result. Once the base setup and process for every step has been established on one asset, you can efficiently produce a large amount of content for a nature biome, making a whole environment efficient to produce.

Big thanks to 80 Level for letting me do this breakdown, I had a lot of fun making this project, and it's been great to be able to share a part of that process.

Pontus Ryman, 3D Environment Artist

Interview conducted by Arti Burton
