Creating Materials Using Photogrammetry and Manual Work in RealityCapture

CB3DART has talked about material creation using photogrammetry combined with manual work, discussed the advantages of this approach, and shared some tips on capturing images.

Introduction

Hi, my name is Chris, but I usually go by CB3DART online. I have always loved games, movies, art, and design, which led me to start a Product Design degree in Edinburgh, where I made some lifelong friends and learned a lot about design that has helped me throughout my career. However, I always wanted to make video games, so after completing my degree, I began a Master's degree in game development in Liverpool. It was a great course, and it helped me land my first job making license-based racing games in Newcastle. The games had very short development cycles (often 6-12 months each): things like Fast and Furious, Ferrari Challenge, etc. I loved the people, but I really wanted to work on games where I got to do something more artistic, and I was fortunate enough to land a position at Crytek.

I worked as a lead environment artist on various Crysis games, Homefront 2, and a few projects that sadly never saw the light of day. I loved that we built both the games and the engine at Crytek; it really satisfied both my artistic and technical sides.

After 4-5 years of working with the awesome team at Crytek, I decided to try working for myself. Since the switch, I have been fortunate to create work for companies like Valve, Digital Extremes, and others. The larger pieces of work I do are generally more focused on indie titles like Everybody's Gone to the Rapture, Observation, and a few yet-to-be-announced projects that I am excited to see shared with the public in the future.

I really enjoy doing game jams and making game design prototypes in Unreal Engine. Recently, I have shifted focus a little and started working on a few game ideas I have had. I also started a YouTube channel that releases free tutorials on topics people might find useful, for example, I have one about creating “quick and hacky” modular meshes from photogrammetry models, another on creating a simple snow generator in UE5 geometry script, and some on Unreal's modeling tools. I would like to release some about my photogrammetry learnings in the future too, so if any of that sounds interesting to you, welcome to my YouTube channel.

The Materials

When new techniques or technologies appear, I like to see what you can do with them artistically (I love mixing art and technology). Because of this, I picked up photogrammetry pretty early on and could see straight away that it was going to change the way game art was created. I made lots of things, from small props to fully captured environments (which you can see in the images below; more images/info can be found here and here).

This was at a time when photogrammetry software was not as user-friendly and fluid as it is today. I enjoyed experimenting with it but felt frustrated by the lack of tools to help turn these models into textures/game-usable assets, so when new tools appeared, I would generally try them and see how things had moved forward. All the materials in my first post were captured using my mobile phone (while on holiday) and manually tiled using an early version of Quixel Mixer. At the time, this was the best thing I had found for manually editing all material passes at once (diffuse/albedo, normal, roughness, displacement, etc.) and seeing the results. It definitely had limitations, but it felt like a step in the right direction. After trying many other things, I kept going back to Mixer; however, for these latest materials, I moved over to ArtEngine (which used to be called Artomatix, I think).

I always felt tools that blended AI with manual work would be a perfect fit for photogrammetry-based materials, so I was very keen to see what they could do. ArtEngine is good, but I have encountered a lot of issues with it (crashes, bugs, usability issues, login issues, etc.). Over time, I figured out ways to work around these issues; sadly, I think development on it might have been put on hold, which is a shame, as I really feel there is a gap in the market for AI-based texturing/materials tools. If any programmer reading this might want to work on something like that with an artist (me), I would love to hear from you!

I have not used Substance 3D Sampler since it was called Alchemist, so I need to see how it compares.

The Seaside Materials

For these materials, I use a pretty simple/standard setup. I have a Nikon D5500, which I bought about 7 years ago, so it's fairly old and relatively affordable, but it had good picture quality for its time (as little image noise as possible and a decent resolution). I imagine cameras available now will have much better image quality and resolution.

I genuinely believe that you can produce good photogrammetry with most half-decent cameras. As I mentioned above, good mobile phone cameras also work, but phone lenses and image post-processing can impede quality. Ultimately, the better the image quality (sharpness, low noise, resolution), the better your generated photogrammetry model will be.

My General Equipment:

  • Nikon D5500
  • Datacolor SpyderCheckr 24 (more on this later)
  • Small Tape Measure

My Additional Equipment:

  • Riko 400 Ring Flash (more on this later)
  • Polariser Filters

I only recently got these additional items above, and out of all my materials shown on the ArtStation page, I have only used them for one material so far.

Disclaimer: I am self-taught, I started photogrammetry early and learned most of what I know by trial and error, so please bear that in mind.

Tips for Photogrammetry Equipment

1. Camera Image Quality

I personally think that having a camera with good, clear image quality is the most important thing. You can buy a 100MP+ camera, but it will probably be costly, and you will likely end up scaling the images down to help with processing time and image sharpness. I think RealityCapture has an image size limit too, but I could be wrong. So if you can get an affordable camera with very clear picture quality, that would be my preference. Software changes quickly in video games/movies, so you never know what might be the best option a year or two down the line, which might be another reason to pick a cheaper camera rather than a more expensive one.

2. Camera Shutter Speed (in low light)

A camera that can take sharp, clear images in low-light conditions is also key. I like to capture my scans handheld (i.e., without a tripod) as it's fast and fluid, so I need as fast a shutter speed as possible to keep the images sharp. My camera is definitely not the best at this, but it's fine and works in most overcast or shadowy conditions.

3. ISO Noise

I generally try to stay below ISO 200, but of course, you can go above this as long as your camera does not introduce too much image noise or artificial smoothing at higher ISO. I generally try to work at ISO 100, which is the lowest available on my camera. Something else to think about is that every camera has its own “ideal” settings, basically the point at which image quality is highest. This might be a certain lens (or lens setting) or a particular aperture or ISO; there are websites that test a lot of this stuff, so search for them and make the best decision within your budget.

4. Flash and Cross Polarisation

As I mentioned above, I recently purchased a Riko 400, which is a 400-watt ring flash. I have only used it on one of the materials shown (the first material in this link), but it allows me to capture scans in any conditions (except rain, which is unfortunate as I live in Scotland). I use cross polarisation to remove as much reflected/specular light as possible, as this gives cleaner scans (particularly on smoother/shinier surfaces); it has its limits but does help a lot with certain objects/surfaces.

My main issue with this light (the Riko 400) is that it's large and heavy to carry, and the flash is very bright, so it's not something you can use discreetly (e.g., in busy public spaces). If I am capturing very rough surfaces like stone walls, I generally just use shade, or if it's overcast, that works well too, but the flash gives you many more areas that you can capture. The other big advantage of using flash is that it gives you a flat-lit albedo texture; when capturing very rough surfaces, this is the biggest advantage a good flash provides.

If you are trying to do photogrammetry of something indoors with low light, a flash is essential to get good clean results in a decent amount of time. You could use longer exposure and a tripod, but that adds significantly more time to capture (and might add noise), so it’s something that needs to be kept in mind.

Tips for Capturing Images

1. Measure and Mark Out the Capture Area

I place a marker, usually an object I find lying around, like an obvious stone (remember to pick something that will not move in the wind, or take something with you). I mark the starting point of the wall/ground I am going to capture and then measure 2, 3, or 4 meters across, depending on the size of capture I want, then place another stone to mark the end. If I don't have my tape measure, I use a measuring app that came with my phone, which works reasonably well for this.

2. Datacolor Checker

I place my Datacolor checker on the surface, facing the same direction as the surface. For example, if capturing the ground, I place it flat on the ground facing up to the sky. I capture the side with the coloured squares, as this provides both white balance and colour correction references. I mainly use the white balance, but you can do a full colour calibration if you wish.

Try to find a surface with as little coloured bounced light affecting it as possible; walls above grass, for example, will often pick up green bounced light. Using a strong flash can negate this, but if you don't use flash, you will have to try to fix it in post. Remember that the colour falls off as the wall gets further away from the grass, meaning that the colour correction at the top of the wall will be different from the bottom, for example. So if the same wall is also over mud, you might find that you get a better result capturing that part.

3. Image Overlap

The most important thing with photogrammetry capture is image overlap, where each photo overlaps with the previous one. Basically, when you take a photo, then move to the left/right and take another one, only the parts that appear in both images can be converted into a model. Make sure to overlap your photos and try to get a single surface “feature” to appear in about 3-4 images (I normally aim for more than 4). This means that you can only move around 25% of the “image view” between each image. In simple terms, the more overlap between images, the more matched points and the better the result.

There is a balance between the number of images and the amount of processing time: the more images you take, the longer processing will take, so you need to find the “right” amount for the detail you need. If you don't have at least 2-3 images containing the same surface feature, it's likely your surface will not align/process at all.
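To make the overlap arithmetic concrete, here is a minimal sketch (in Python) of the step size and image count for a single row of captures; the frame width, surface width, and overlap values are made-up numbers for illustration, not part of my actual workflow.

# Rough overlap arithmetic for one row of captures along a flat surface.
def capture_plan(surface_width_m, frame_width_m, overlap=0.75):
    step = frame_width_m * (1.0 - overlap)             # move ~25% of the frame per shot
    images_per_feature = round(1.0 / (1.0 - overlap))  # shots that see any one feature
    num_images = int(surface_width_m / step) + 1
    return step, num_images, images_per_feature

step, count, seen = capture_plan(surface_width_m=3.0, frame_width_m=1.0)
print(f"Step: {step:.2f} m, images per row: {count}, each feature in ~{seen} shots")
# Step: 0.25 m, images per row: 13, each feature in ~4 shots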

4. Do the Snake

No, it's not a type of dance, but I do look very odd to anyone watching the process. To help make sure I get enough overlap between images and cover the whole surface, I like to capture images in a snake-like pattern. I usually start in the bottom left corner and capture the lowest part, then move up by 25% of the framed image, capture another image, and repeat until I have reached the top. Then I move across by 25% of the image and capture, then move down 25% and capture, continuing until I am at the bottom again; I repeat this until I reach the marked endpoint. This is just one example of many ways to capture a whole surface with enough overlap, some faster and some more accurate, but the key is to make sure you have enough overlap and cover the full surface.
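For anyone who prefers to see the pattern written out, here is a small sketch that generates that column-by-column, alternating-direction order; the frame size and overlap are assumed values, and this is only to illustrate the idea, not something I actually run in the field.

# Serpentine ("snake") capture order over a grid of positions.
def snake_positions(width_m, height_m, frame_m=1.0, overlap=0.75):
    step = frame_m * (1.0 - overlap)
    cols = int(width_m / step) + 1
    rows = int(height_m / step) + 1
    for c in range(cols):
        order = range(rows) if c % 2 == 0 else range(rows - 1, -1, -1)  # up one column, down the next
        for r in order:
            yield (round(c * step, 2), round(r * step, 2))  # (across, up) in meters

for pos in snake_positions(width_m=1.0, height_m=0.5):
    print(pos)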

This is a tiny list of recommendations to get people started, but there is so much more to this than what's covered above: equipment pros and cons, shooting techniques, image processing, photogrammetry software processes, etc. Sadly, it's way too much to cover in an interview, but I hope some of these points might help people starting out on their photogrammetry journey.

RealityCapture

With regards to image processing, I generally use Adobe Bridge; it's free and works well for my needs. I use the image with the Datacolor checker in it to get the correct white balance value (there is a white balance eyedropper tool in Bridge's Camera Raw). Then I have a preset that lightens the shadows, reduces the contrast very slightly, and tweaks the highlights, blacks, and whites a tiny bit. I run this on all the images and save the processed images at the highest quality JPG settings into a standardized folder structure, which helps with automation inside RealityCapture.

For most surfaces, this process could be completely automated, but the conversion/saving to JPGs takes quite a long time, so I like to spend 5 minutes manually checking that it’s all correct before converting them. 
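As a rough illustration of what I mean by a standardized folder structure, here is a hypothetical sketch that copies a batch of processed JPGs into a per-material folder; the paths and naming are invented for the example, not my actual layout.

# Hypothetical example of a standardized per-material folder layout.
from pathlib import Path
import shutil

def organize(processed_dir, output_root, material_name):
    """Copy processed JPGs into <output_root>/<material_name>/images/."""
    target = Path(output_root) / material_name / "images"
    target.mkdir(parents=True, exist_ok=True)
    for jpg in sorted(Path(processed_dir).glob("*.jpg")):
        shutil.copy2(jpg, target / jpg.name)
    return target

# organize(r"D:\captures\seaside_wall_processed", r"D:\photogrammetry", "seaside_wall_01")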

Then I move on to RealityCapture. It has streamlined the process so much over the years that you can basically drop the JPG images into a blank RealityCapture project, click the start button, and it will work (if you have taken the images correctly).

I personally like to run the various RealityCapture stages separately so I can fine-tune the settings for each. However, if I am doing surfaces, I can be confident they have been captured in a particular way, so I can make use of RealityCapture's command-line processing (example scripts can be found here). This basically allows me to write a simple script (.bat files) that I can run, and it will create a new project, add the images, do the alignment, rename the model, process a high-quality model, create the highest quality texture, and save to a specified folder structure. This allows me to process multiple materials overnight (or in the background on a second PC while I work on other things).
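To give a feel for what such a script does, here is a rough sketch of the same sequence written in Python (calling the RealityCapture executable) rather than as a .bat file, purely for readability. The flag names are from memory of the RealityCapture command-line interface and the paths are placeholders, so check everything against the official command-line reference before relying on it.

# Rough sketch of a batch-processing call to RealityCapture. Flag names are
# approximate (check the official CLI reference) and paths are placeholders.
import subprocess

RC_EXE = r"C:\Program Files\Capturing Reality\RealityCapture\RealityCapture.exe"

def process_material(image_folder, project_path):
    """Add images, align, build a high-quality model and texture, then save the project."""
    subprocess.run([
        RC_EXE, "-headless",
        "-addFolder", image_folder,   # the processed JPGs for one material
        "-align",                     # image alignment
        "-calculateHighModel",        # high-quality reconstruction
        "-calculateTexture",          # highest-quality texture
        "-save", project_path,        # save into the standardized folder structure
        "-quit",
    ], check=True)

# process_material(r"D:\photogrammetry\seaside_wall_01\images",
#                  r"D:\photogrammetry\seaside_wall_01\seaside_wall_01.rcproj")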

Manual Input

A lot of the manual work is making the material tile well. I find automated solutions work quite well for noisy surfaces like small stones, noisy tarmac, etc., but often look pretty poor for anything with a clear pattern or structure. In these cases, automated solutions either do a poor job or you end up spending a long time trying to find the magic settings to get a good result. I usually start by trying the automatic solution (if I get lucky, it might get me 70-80% of the way there), but more often than not, I do this completely manually inside ArtEngine or Mixer. It all comes down to the time you have and the quality of the final result you want.

The other thing that I often fix manually is very steep areas of the surface. Since a displacement texture can only push outward, if you have a brick with a flat, sharp, steep edge, you get a lot of stretching, so I use a mixture of methods to fix this. Some are completely manual (like clone stamping areas) and others are a bit more automated (like generating a new height map from the normal map). Often, this creates a more angled slope on any area with a steep edge, which is great; however, doing this creates a "fake" height map, so I (when needed) mix it back in with the original photogrammetry height map to get an optimal result. I will try to make a tutorial on this at some point on my YouTube channel if anyone is interested.
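As a toy illustration of that blend-back step, here is a minimal sketch that mixes a regenerated height map into the original only where the original is steep; it assumes both maps are float arrays in the 0-1 range and shows the concept only, not the actual ArtEngine/Mixer operation.

# Blend a regenerated ("fake") height map into the photogrammetry height
# only in steep areas, using the local slope of the original as a mask.
import numpy as np

def blend_steep_areas(original, regenerated, steepness_threshold=0.15):
    gy, gx = np.gradient(original)                              # local slope of the original height
    steepness = np.sqrt(gx**2 + gy**2)
    mask = np.clip(steepness / steepness_threshold, 0.0, 1.0)   # approaches 1.0 in steep areas
    return original * (1.0 - mask) + regenerated * mask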

In addition to this, I remove any large gradients to help with tiling. I squeeze as much detail out of the exported texture data and into the height/normal as possible and spend a decent bit of time working on the roughness maps (if needed). The roughness is built using the albedo in combination with a roughness generator node, masks, and levels until I have something that looks similar to the reference.
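As a very rough sketch of that idea, a starting roughness can be derived from the albedo's luminance and then remapped with a levels-style adjustment; the ranges below are assumptions, and this is only the concept in code form, not the actual generator/node setup.

# Crude starting point for a roughness map derived from albedo luminance.
import numpy as np

def rough_from_albedo(albedo, low=0.2, high=0.9, invert=False):
    """albedo: HxWx3 float array in 0..1; returns an HxW roughness guess."""
    lum = albedo @ np.array([0.299, 0.587, 0.114])          # simple luminance
    if invert:
        lum = 1.0 - lum                                     # flip if darker areas should read rougher
    return np.clip((lum - low) / (high - low), 0.0, 1.0)    # levels-style remap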

When out doing your photogrammetry, I would highly recommend taking a couple of reference shots of how the sun interacts with the surface (e.g., the roughness/glossiness amount), these can be really useful for making sure your roughness looks “correct”.

Presentation

For these images, I used Marmoset Toolbag. I set up a scene with a timeline where each frame has different views, models, and lighting setups. This allows me to quickly swap out the materials and then scrub the timeline to capture various material presentation images.

The scene is very simple: basically a half-sphere and some subdivided planes behind it. I use Displacement or Tessellation (depending on the Toolbag version) to give the surface depth and use a relatively simple lighting setup of a skydome and some directional lights to add light and shadow. Sometimes, I add an area or a spot light to help fill in dark areas.

Finding the right lighting balance is hard as it's very subjective: some people like more contrasty lighting, some love blue and orange, and some prefer more muted palettes. I like a bit of contrast, so I tend to work that way. I use the settings in the lights to soften the shadows (in Toolbag 3 I think it's called Width and Height, and in Toolbag 4 it's under Area/Shape; I use a sphere type and then adjust the diameter to my liking). I think this is important, as really harsh shadows can look odd depending on the type of lighting you are trying to create.

For two of the image presentation frames in the timeline, I added some basic props that are mainly there to cast object shadows onto the surface. A good friend of mine, Adam Dudley, had used a light gobo on his material presentation images, and I wanted to try it. I wanted to have more control, so rather than use a gobo texture, I decided to place actual objects in the scene (in most cases, they are just floating in space off-screen). Having shadows run over your surface helps break it up, but more importantly, it helps show off the height and shape of the surface as the shadow gets distorted by it.

Lastly, I generally add a bit of “film” noise in post-processing (but only in the dark areas). I like this, but I know it's not to everyone's taste. I also add a very subtle bit of sharpening and tweak the exposure if needed, but that's about it.

The Photogrammetry Workflow Advantages

For me, there are two main advantages: speed and quality/realism.

I don't spend that much of my day making materials, but even though I don't do it regularly, I have managed to get my material generation time down to around half a day to two-thirds of a day per material, including photo capture time (simple noisy materials without a pattern are usually even faster). That doesn't include RealityCapture processing time, which I run overnight or on a second PC, or travel time. Most of my materials are local, and I usually capture 8 or more materials at a time, so when you take 2-3 hours of traveling/walking and divide it by 8, it does not add up to much.

As for quality/realism, if captured correctly, materials are very accurate both in colour and shape. Basically, it’s a very close representation of its real-world counterpart, so the output quality and realism can be high.

However, compared to a Substance 3D Designer material, photogrammetry materials are more difficult to “art direct” or make variations of. Tools like ArtEngine can be used to generate variations (mutations) of a real-world material, but the quality of this output varies massively depending on the type of material you are trying to make. Like most tools in game and movie development, neither process is best 100% of the time, in my opinion. Sometimes, it will be better, faster, and easier to use photogrammetry, other times it would be better to invest a bit more time and have the flexibility of more generative software, like Substance 3D Designer.

As for bottlenecks, obviously, there is the capture and processing time, but these will change and improve as PCs and software evolve. I think that at the moment, the biggest issues are tiling, tidying up materials, and making variations. The software available now is great (a huge improvement on what we had 3-4 years ago), but there is still so much more that could be added to improve this part of the process and open up a whole world of new opportunities.

If anyone wants to see more of my work, watch my tutorials on YouTube or check out my ArtStation, Twitter, or Instagram.

Lastly, I wanted to say that I have enjoyed reading many articles on 80 Level over the years, so thank you to everyone from the community that has shared/contributed and thanks to all the team at 80 Level for their hard work!

CB3DART, 3D Artist

Interview conducted by Arti Burton
