Creating Photogrammetry-Based Materials

Olivier Lau talked about using photogrammetry techniques and tools when creating realistic materials.

Introduction

I am a programmer from France who has worked in enterprise, networking, imagery, mobile, and audio software for various companies. I started my own company a few years ago and now work as an independent. I have touched many areas of development, from drivers to end-user apps, but even though I played with graphics software in my free time, I had no real knowledge of the CG industry and the associated skills. In 2017 I decided to switch to game development and related activities. I was fascinated by the artistic side and the interconnections between the many disciplines in this area. Before I could start anything I spent a long time training on the various tools and techniques involved, and to this day I still do!

The series

In 2017 I began to work on a game project with Unreal Engine 4. I've been practicing photography for a long time, and when I discovered photogrammetry works such as those of Megascans, I found them amazing. Since I already had the gear, I experimented a bit myself, then found Grzegorz Baran's excellent photogrammetry guide presenting a complete workflow to scan and process surface textures. This guide has been an enabler on my photogrammetry path. I would highly recommend Grzegorz's materials on photogrammetry, as his workflow is very efficient and he also provides assets such as Substance Designer graphs and pre-UVed meshes that are very useful to the process. As game development can take a long time, I thought I could start a side activity based on photogrammetry; it could be both a learning experience and a way to help fund my game project.

I chose an area I know well near Saint-Jean-de-Luz, on the south-west coast of France, close to the Spanish border, for this first environment pack, which I called The Plates Creek.

The place and the surrounding areas have amazing rock formations with folds, faults, strata rising from the ground, large eroded rocks, etc.

In some areas, strata are stacked together; these are commonly called “plate piles”, hence the name of the project.

In this project there were basically two main challenges: one was to implement the photogrammetry flow properly, from on-site photo shooting sessions to final assets. The other was to build a flexible, modular and optimized set of materials in Unreal Engine 4 which could be used in real time, along with a demo scene.

Details

I am using a Canon EOS 6D (full-frame sensor) camera, mostly with the 24-105mm kit lens. In photogrammetry, it helps to have good-quality pictures, and we must take extra care to avoid even slightly blurry shots. For this, we need either the lens or sensor stabilized (tripod or stabilizer) and/or a fast shutter speed, especially when shooting freehand. When using a tripod, I don't use the lens stabilizer and try to get a shutter speed no slower than 1/60s (it can be slower if needed since the camera is on a tripod, but just to be safe). Freehand, I don't go below 1/200s. I also ensure the aperture is small enough to avoid blurry out-of-focus areas, usually no wider than f/8.
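
Purely as an illustration, these rules of thumb could be encoded in a small check like the following sketch (a hypothetical helper, not part of any tool mentioned here; the thresholds are the ones quoted above, not universal values):

```python
def shot_is_sharp_enough(shutter_s: float, f_number: float, on_tripod: bool) -> bool:
    """Check a shot against the rules of thumb from the text:
    1/60s minimum on a tripod, 1/200s freehand, aperture no wider than f/8."""
    slowest_shutter = 1 / 60 if on_tripod else 1 / 200  # slowest acceptable speed
    return shutter_s <= slowest_shutter and f_number >= 8.0  # f/8 or smaller aperture

print(shot_is_sharp_enough(1 / 250, 8.0, on_tripod=False))  # True
print(shot_is_sharp_enough(1 / 100, 8.0, on_tripod=False))  # False: too slow freehand
```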

Since direct light is undesirable in photogrammetry (to avoid baked-in lighting), the speed and aperture requirements may leave the sensor with too little light overall (a full-frame sensor may help here: the larger the sensor, the more light it gathers). When this happens, I either bump the ISO or simply underexpose the shots. Underexposure is not a problem as long as it is done reasonably (around 1 EV) and the shots are taken in RAW, hence using the full dynamic range of the sensor (14 bits for my camera). This way underexposure can easily be compensated when processing the image without quality loss.
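
To make the arithmetic concrete, here is a minimal numpy sketch of that 1 EV push on linear RAW values (a simplified model: real RAW converters also demosaic, white-balance, and tone-map, which this ignores):

```python
import numpy as np

# A 14-bit linear sensor spans values 0..16383. An image underexposed by
# 1 EV only uses roughly the lower half of that range.
raw = np.random.randint(0, 2**13, size=(256, 256)).astype(np.float64)

ev = 1.0                                        # compensation in stops
pushed = np.clip(raw * 2.0**ev, 0, 2**14 - 1)   # one stop = a factor of two, back in range
```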

For surfaces, I usually use 100 to 200 pictures of a rectangular area with a good amount of overlap between shots. I didn't find it useful to take more pictures, as 3D reconstruction time increases proportionally with no obvious benefit to the final textures, at least in my experiments. I shoot a rectangular area in order to extract at least two variations of a given surface, and also to have more options when deciding what to keep and what to leave out.
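
For a rough idea of where such shot counts come from, here is a hypothetical back-of-the-envelope estimate based on the shot footprint and overlap fraction (the author only reports the 100-200 figure; the numbers below are illustrative):

```python
import math

def shot_count(area_w, area_h, shot_w, shot_h, overlap=0.6):
    """Estimate shots needed to cover a rectangle, given one shot's ground
    footprint (in the same units) and the desired overlap fraction (0..1)."""
    step_w = shot_w * (1.0 - overlap)  # horizontal advance between shots
    step_h = shot_h * (1.0 - overlap)  # vertical advance between rows
    return math.ceil(area_w / step_w) * math.ceil(area_h / step_h)

# e.g. a 4 m x 2 m surface, ~1.0 m x 0.7 m per shot, 60% overlap -> 80 shots
print(shot_count(4.0, 2.0, 1.0, 0.7))
```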

I am using DxO PhotoLab (formerly DxO OpticsPro) to process the pictures.

In particular, this software has a filter named Lens Sharpness, which only works on RAW pictures and brings the finest details of an already sharp shot to an amazing level of clarity.

This again shows the benefits of shooting in RAW format: RAW files use all of the sensor's capabilities (whereas JPEG clamps to 8 bits and uses lossy compression), enabling exposure fixes, after-shot white balance, and access to certain features such as this filter. That being said, it doesn't mean the final photogrammetry textures will necessarily benefit from everything we do to get sharp pictures. There are many steps in the process and most of them can degrade things to some degree. The idea is to maximize quality at each step so that the combined effort contributes to a good final render. Regarding lens distortion correction, I could not see any effect on the 3D reconstruction whether it was applied or not.

From PhotoLab I export the images as TIFF and import them into PhotoScan for 3D reconstruction. In my workflow, I generate 8K textures which are then reduced to 4K for production. I need to ensure the output mesh has enough vertices, that is around 67 million for 8K textures (8192 × 8192 ≈ 67 million, i.e. roughly one vertex per texel). A set of photos can also be used to generate multiple meshes/texture sets. For this, I generate a single dense cloud in PhotoScan, then create new chunks, each with a duplicate of the dense cloud. In each new chunk, I delete the parts of the dense cloud I don't need, then build an individual mesh from what is left.

In practice, I most of the time choose the highest quality setting for dense cloud generation (PhotoScan then works on the images at their actual resolution), unless I have too many pictures, in which case I opt for a lower setting, otherwise the calculation could take very long. This is especially true for 3D objects, for which calculation time is usually much longer than for surfaces (it can take days!).

Regarding 3D meshes, one difficulty can be reconstructing the underside of an object. In PhotoScan, I use one of the two following methods. The first relies on Apply Mask To Tie Points: I take a picture of the background without the object; the object can then be placed in almost any position, and the software automatically subtracts the background from the shots. This is a great option, but the environment doesn't always permit it. The other method, if the object can be manipulated, is to take two series of shots, one with the object seen from the top and another with the object seen from the bottom, generate two dense clouds, then merge them into a single one. This method is a bit tedious as aligning the two clouds is always a challenge. It also requires more clean-up work in Substance Painter at the junction between the two sides. But it works!

The generated very high poly mesh is so large it cannot be opened in most 3D modelers, so I duplicate it in PhotoScan and decimate the copy to about 2 million polys to work inside ZBrush. A low poly mesh needs to be generated for the bake: either a plane for surfaces or a proper low poly mesh when working on 3D objects. I usually use a combination of ZBrush's ZRemesher, Instant Meshes, and Blender (for the clean-up) to generate the low poly mesh.

Sometimes this 2M poly mesh can be used for the bake too. I had the case of a surface consisting of medium-sized pebbles. The photogrammetry process can generate artifacts on the reconstructed surface in the form of small bumps, holes, or bridges. These aren't usually a problem for the bake but may become very visible on smooth surfaces. For these pebbles, I baked my textures as usual, but when testing in a game engine, I noticed the pebbles looked rugged due to artifacts present in the normal map. I opened the 2M poly mesh back in ZBrush, reduced its poly count, then added subdivisions to smooth it. I then baked the normal map alone from this new high poly mesh and the problem was fixed.

In terms of tools, I am using Knald for baking textures. Among the baking solutions I have tested so far, I found it to have the best output quality (with 16x anti-aliasing) and the fastest render times, and it has a visual cage alignment system which helps a lot with 3D meshes.
For UV mapping (only for 3D meshes, since an already UV-ed ZBrush plane is enough for surfaces), I am now using RizomUV. I wish I had discovered this software sooner, as it saves me a lot of time when precisely generating my UVs and packing them optimally.
I use Substance Painter to fix the textures and make them tile, and Substance Designer to derive maps and homogenize them. This workflow is well described in Grzegorz's guide.

Regarding height maps, for surfaces, I derive them from bent normal maps. This is something I learned from Grzegorz and you can see this on his Artstation pages such as this one. These work very well when there is not too much relief.
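
The actual derivation happens in Substance Designer graphs from Grzegorz's guide; purely to illustrate the underlying idea of turning normals into height, here is a sketch of a frequency-domain integration (a Frankot-Chellappa-style solve, not the author's exact setup):

```python
import numpy as np

def height_from_normals(normals: np.ndarray) -> np.ndarray:
    """Integrate a tangent-space normal map (H, W, 3, components in [-1, 1])
    into a height map via an FFT least-squares solve. Illustrative only."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    nz = np.clip(nz, 1e-3, None)
    p, q = -nx / nz, -ny / nz          # surface gradients implied by the normals

    h, w = p.shape
    wx = np.fft.fftfreq(w)[None, :] * 2 * np.pi   # angular frequencies in x
    wy = np.fft.fftfreq(h)[:, None] * 2 * np.pi   # angular frequencies in y
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                  # avoid division by zero at DC

    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    H = (-1j * wx * P - 1j * wy * Q) / denom
    H[0, 0] = 0.0                      # mean height is arbitrary
    return np.real(np.fft.ifft2(H))
```

As the text notes, this kind of derivation holds up best when the relief is moderate; strong overhangs or deep cavities break the height-field assumption.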

One last thing to get the best quality output is to bake at a higher resolution than what is used in production. As mentioned earlier, I work in 8K all along and reduce the textures to 4K for production. I find that a sharpen filter usually performs best when applied at a higher resolution which is then downsampled: I sharpen the 8K textures then resize them to 4K, and I find the result more natural than sharpening directly in 4K.
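
As a sketch of that ordering, assuming Pillow and hypothetical file names (the article doesn't say which sharpener is used; UnsharpMask stands in here):

```python
from PIL import Image, ImageFilter

tex_8k = Image.open("basecolor_8k.tif")  # hypothetical 8192x8192 source

# Sharpen at full 8K resolution first, then downsample to 4K:
# the downsampling averages away sharpening halos, giving a more natural result.
sharpened = tex_8k.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
tex_4k = sharpened.resize((4096, 4096), Image.LANCZOS)

tex_4k.save("basecolor_4k.tif")
```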

Colors

I always begin a series of shots with a small neutral gray reference card in the scene. This way I don't have to worry about the white balance setting on the camera (I use automatic), since I will set the white balance later at processing time based on the card. Again, it is important to shoot RAW for this to work properly!
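
Conceptually, balancing from a gray card amounts to scaling the channels so the card reads neutral; here is a minimal numpy sketch (the RAW converter's white-balance eyedropper does something comparable, with far more care):

```python
import numpy as np

def white_balance(img: np.ndarray, card_patch: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) linear RGB; card_patch: a small (h, w, 3) crop of the gray card.
    Scales each channel so the card's average color becomes neutral gray."""
    card_rgb = card_patch.reshape(-1, 3).mean(axis=0)  # average card color
    gains = card_rgb.mean() / card_rgb                 # per-channel correction gains
    return img * gains                                 # broadcast over all pixels
```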

Most of the time I find the original pictures to have too much contrast and saturation. This is usually what is expected in traditional photography, but for photogrammetry, I prefer flatter tones, so I usually decrease the contrast and saturation. In PhotoLab there is a preset called “Neutral colors” which does exactly what I need: it decreases contrast, slightly reduces saturation, and boosts vibrancy (the saturation of less colored areas). Later in the processing, Substance Designer's Color Equalizer node can help bring homogeneity to the overall lighting. Original color is not a determining factor for me; textures that will work together in an environment need to be harmonized in a final pass, so they are likely to change tone anyway. I use Photoshop's Match Color function for base color textures that will work together in an environment.
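
As a rough sketch of what such flattening amounts to (the actual “Neutral colors” preset is certainly more elaborate than this):

```python
import numpy as np

def flatten(img: np.ndarray, contrast: float = 0.85, saturation: float = 0.9) -> np.ndarray:
    """img: (H, W, 3) float RGB in [0, 1]. Pull values toward the mean
    (less contrast) and toward gray (less saturation)."""
    mean = img.mean()
    img = mean + (img - mean) * contrast                 # reduce contrast
    luma = img @ np.array([0.2126, 0.7152, 0.0722])      # Rec.709 luminance
    gray = np.repeat(luma[..., None], 3, axis=-1)
    return gray + (img - gray) * saturation              # reduce saturation
```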

For this surface, the original material did not have the color tones you see on the render. In fact it didn't look very attractive; I made the photo session mostly because it was easily accessible. I left this material aside for a long time, and once I was working on tone matching between different rock textures, I included it and it revealed that bluish color which, mixed with other peculiarities of the surface, actually made it look cool! So color is, I think, a relative notion; I would not decide whether to select a material based on its original color, as things can change a lot during processing.

Lighting

So far I have only shot in conditions where there was no direct lighting, so I did not have to deal with light issues later on. For The Plates Creek project, most of the materials were at the bottom of cliffs. I held sessions early in the morning when the cliffs shadow the shore. When the weather was sunny, I had perfect light conditions: the sky was pure blue, but with the scene in the shadow of the cliff, I benefited from smooth light while having a good amount of ambient light to keep the lens aperture small and the ISO low. Other times I was less lucky and had to wait for a cloudy sky. I am aware the light conditions for my environment were close to ideal; this is not always the case for other environments such as forest undergrowth, or where there is a lot of reflection.

Rock plates

This piece was one of the first surfaces I shot in my “reference creek”, the place that inspired most of this project and where I shot several items. I immediately spotted this surface as it had many variations, layers, and cracks; it was just a perfect surface to begin with. From my experiments, the more chaotic a surface is, the easier it is to tile. This one was not so difficult to tile and required very few fixes. For other surfaces, I had much less manageable parts, sometimes simply because I could not cover a large enough area, and I had to rely heavily on cloning in Substance Painter in order to create new patterns. Grzegorz's guide comes with a great plane mesh specifically UVed so that the edges can be worked on to make them tile properly, and this is what I am using.

Testing scans

As the target for my asset pack was Unreal Engine 4, I mostly tested my scans in this environment. This way I could see them in context: how a specific surface fits onto a given mesh or set of meshes. I can also see how one material looks beside another and work on the tiling factor. I do not necessarily try to match the original sizes, but rather the size that looks best for a given material. It is sometimes surprising to see how different a material can look when tiled at different sizes. To break the repetition effect when a surface covers a large area, I sometimes use two texture sets and mix them together with a mask using UE4 materials (the main motivation for generating two variations of a given surface). The same technique can be used to transition smoothly from one texture set to another.
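
The mask blend itself is just a linear interpolation, the equivalent of a Lerp node in a UE4 material; here is a minimal numpy sketch of the idea (in-engine this runs per pixel in the shader, not on the CPU):

```python
import numpy as np

def blend_texture_sets(tex_a: np.ndarray, tex_b: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """tex_a, tex_b: (H, W, C) texture variations; mask: (H, W) in [0, 1],
    e.g. a noise or painted map. Returns the masked mix of the two sets."""
    m = mask[..., None]                       # broadcast mask over channels
    return tex_a * (1.0 - m) + tex_b * m      # lerp(tex_a, tex_b, mask)
```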

This is not as demanding on hardware as it would seem, since the various maps don't all need the same resolution, depending on how much they contribute to the details. Also, some maps can be derived from others (like roughness). For static renders such as the ones I posted on Artstation, I mostly use Marmoset Toolbag.

Thoughts on photogrammetry 

In photogrammetry, there is a lot of automation in the process, which means things can go fast when you have the proper hardware for the job. And the result is very good looking, provided the workflow is respected. There is still room for expression in the selection of materials, their potential combination, the phase where we tile and fix textures, as well as tone mapping. One obvious downside to photogrammetry is the need to go to a specific location to take pictures; what you produce is “limited” to what the scenery has to offer. This is not entirely true, as textures can be combined, and the Megascans tools are a great example of that. But the base materials are definitely limited by where you can go and what you can access. On the other hand, this can also be seen as a cool way to work: going out into the field is a very entertaining activity for us developers used to sitting behind a screen all day! Another difficulty in photogrammetry today is the quite heavy hardware required to produce high-quality materials in a reasonable time, though I haven't yet tested all possible solutions. On my hardware, it is not uncommon to have 10 hours or more of calculation for a single surface or 3D mesh.

While working on my photogrammetry project, I had some issues with mesh file formats. It is actually not so simple to find a format that combines support for vertex colors, proper import/export in all the software of the workflow chain, and very high poly counts (67 million polys or more). I am using FBX, but 67M polys was the maximum I could use in my experiments. With other formats I had interoperability issues.

I think there is also room for improvement in the processing tools. I would love to see an equivalent of Photoshop's Healing brush in Substance Painter to complement the Clone tool. Making a texture tile while preserving its original aspect can be a long manual process, and there aren't many tools to help in this area. Mixing texture sets is another area where I would like to see more options; it is very useful when shooting multiple variations of the same surface. Photogrammetry 3D objects often come out of the 3D reconstruction process with various artifacts and holes. PhotoScan has an option to close holes in the mesh, but these still need to be filled with valid information to make the object look good. There are great tips in this video on cleaning up single-sided meshes with ZBrush. I often use this technique, but it doesn't always work, depending on the shape of the mesh. Another option in ZBrush is to use Panel Loops to give thickness, then go into Dynamesh, fix the mesh, and project the hi-res mesh to get the details back. All of these work but take time, and the result can vary depending on the mesh. I think there could be many ways to make life easier for people working with photogrammetry, and I am eager to discover what software publishers will bring us in the coming years!

Olivier Lau, Developer and Technical Artist at Eyosido Software

Interview conducted by Kirill Tokarev
