Using Photogrammetry to Create a Hungarian Miska Jug in RealityCapture

Ivan Carvalho talked about the workflow behind the Hungarian Miska Jug project, shared how data is captured for photogrammetry, and discussed the texturing and rendering processes.

Introduction

My name is Ivan Carvalho and I'm the founder of The Vertex Guild Studios, a 3D art outsourcing studio specialising in real-time art production. Our artists have contributed to several AAA titles, as well as other projects spanning a range of budgets and styles. We are excited to share some of our workflows and hope you find them useful.

Photogrammetry

I've had an interest in photography for a long time, and as I'm a 3D Artist, photogrammetry is a natural product of those two interests. It's also just another tool in the toolbag, and you can never have too many tools. We use RealityCapture because it's the best one hands down, with the most control, flexibility, and compatibility of dataset inputs – DSLR photos, drone photos, laser scans, video input, etc.

The Hungarian Miska Jug Project

While developing our photogrammetry workflow, we started with some small pieces so we could go through the pipeline multiple times in a short amount of time and refine the process with each iteration.

These consisted of objects I had around the house, and the Hungarian Miska Jug specifically poses an interesting challenge I'd been wanting to tackle for some time: glossy reflective surfaces, which are notoriously uncooperative.

Here are some early examples of reflective surfaces that weren't good enough for further processing:

So we had to use a technique called cross-polarisation. Cross-polarisation is simply a way to filter light so that we avoid the highlights of the light source on our subject. It's achieved by using a polarising filter on both the light source and the lens.

When done correctly, it eliminates the reflections of the light source on our subject, and all we are left with is the reflected color of the object, with no lighting information – in other words, a near-perfect albedo of the surface.

Capturing Data

The first try was of an Adidas sneaker. We wanted to get a complete 360-degree image of the object, not just the usual "top scan" where the bottom is lost, so we needed a static background in order to flip the shoe and scan its bottom. We made one from a bunch of black t-shirts propped up on some wood.

We also used a small paint can as a turntable and just hand-rotated it every time. This obviously took a very long time, and the results were not ideal, since hand-rotating a paint can introduces inconsistencies in the dataset that are better avoided.

Our current setup is very simple: a black velveteen backdrop, a black velveteen-covered turntable (no more paint cans!), a polarised flash, and a polarising filter for whatever lens we're using.

For the camera, we use a Canon EOS M50 (Mark I) with either a zoom lens or a 50mm prime, depending on the subject.

The most important part of the whole process is, and I can't stress this enough, the dataset capture. If the dataset is bad, the scan is going to be bad and it's going to take large amounts of time to get it cleaned up. My motto here is "garbage in, garbage out".

To have a clean dataset, you need three things: enough uniform light, enough overlap between photos, and a focused subject. For the method we are using – cross-polarisation – our light source is a flash with a DIY polariser in front of it, so that's the light part taken care of.

To ensure a focused subject, we keep the f-stop on our camera fairly high, which gives a deep depth of field and minimal blur on our subject. We want these pictures as sharp as possible so the software has as much information as it can to accurately align the dataset.
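As a rough sanity check before a shoot, you can estimate the near and far limits of acceptable sharpness from the focal length, f-number, and focus distance using the standard thin-lens approximations. The sketch below is illustrative only: the 0.019 mm circle of confusion (a common APS-C value) and the example distances are assumptions, not figures from this workflow.

```python
import math

def depth_of_field(focal_mm, f_number, focus_mm, coc_mm=0.019):
    """Estimate near/far limits of acceptable sharpness (thin-lens approximation).

    coc_mm defaults to a common APS-C circle of confusion (~0.019 mm).
    """
    # Hyperfocal distance: focusing here keeps everything from H/2 to infinity sharp
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        far = math.inf
    else:
        far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far

# Example: 50mm prime at f/8, subject 1 m away
near, far = depth_of_field(focal_mm=50, f_number=8, focus_mm=1000)
print(f"sharp from {near:.0f} mm to {far:.0f} mm")  # roughly 945 mm to 1061 mm
```

Stopping down further (say to f/16) widens this band considerably, which is why a high f-stop is so forgiving for small tabletop subjects.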

For the overlap part, I recommend anywhere from 50% to 70% overlap to get the best results possible, like in the image below. 
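One quick way to turn an overlap target into a turntable plan is to derive the angular step from the lens's horizontal field of view. This is only a rough heuristic (the real overlap on the subject also depends on its size and distance from the camera), and the 22.3 mm APS-C sensor width is an assumption for illustration:

```python
import math

def shots_for_overlap(focal_mm, overlap, sensor_width_mm=22.3):
    """Rough number of turntable shots for a target overlap fraction.

    Heuristic: each step rotates by the non-overlapping fraction of the
    horizontal field of view. sensor_width_mm defaults to APS-C (~22.3 mm).
    """
    hfov_deg = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    step_deg = hfov_deg * (1 - overlap)  # rotate by the "new" portion only
    return math.ceil(360 / step_deg)

# 50mm lens at 60% overlap: one shot roughly every 10 degrees
print(shots_for_overlap(focal_mm=50, overlap=0.60))  # 36
```

Pushing the overlap toward 70% increases the shot count per orbit, which costs capture time but gives the alignment step more features to match.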

We shoot all photos in RAW to preserve as much info as we can and post-process the photos later in Lightroom, before importing them to RealityCapture.

RealityCapture

Before importing the dataset to RealityCapture, we run the photos through Lightroom to get them as flat as possible, so we're basically looking at the diffuse of the object, with as little lighting information as possible.

After that, we import the dataset to RealityCapture, align the images, and use control points if needed. I won't go into much detail about each step in RealityCapture since they have so much quality content on how to use their software.

After we get our initial point cloud, we inspect it to see if everything is working and correctly aligned. If so, we proceed to meshing, unwrapping, and exporting.

UVs

We wanted to find a workflow that would allow us to get these results really fast. This piece was made as a 1-day prop: it took around 8 hours from dataset to Unreal integration. The most time-consuming part is always the raw scan cleanup, but even that we manage to minimise with quality datasets.

Mesh cleaning was done in ZBrush, mainly to reduce the exaggerated noise on the surface of the mesh. The Smooth brush and Polish by Features usually do most of the job, together with Dam Standard and some hPolish.

For the low poly, we use ZRemesher with some polygrouping to control the topology, but the priority here is speed since Unreal can handle a ridiculous amount of geo – so no manual retopo.

For UVs, we relied again on automatic solutions, this time ZBrush's UVMaster, again with polygroups. We wanted to get some good quality results fast. Splitting the mesh into polygroups helps ZBrush make better UV shells.

Texturing

The texturing is really straightforward for these assets since we want to go from scan to real-time asset in a very short amount of time. The dataset capture is key here once again: blurry datasets produce blurry albedos, which means blurry textures – nobody likes that.

In Marmoset Toolbag, we bake Normal/AO/Curvature maps using the clean high-poly mesh we get from ZBrush, plus the Albedo map from the raw scan. We do this in Marmoset since you cannot bake albedo maps in Substance 3D Painter's native baking suite. We bake at 4K, since 8K is overkill for something this size.

Almost no hand painting is involved as we use the Albedo map we get from the scan to drive the Roughness and the Height in Substance 3D Painter.

It's just a matter of tweaking the Roughness and Height values with levels adjustments until you get the desired result. The speed at which you can get unique micro details is amazing.
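The levels adjustment described above is easy to sketch outside of Substance 3D Painter. The snippet below applies a Photoshop-style levels operation (black point, white point, gamma) to a grayscale version of the albedo to derive a roughness mask. The specific black/white/gamma values, and the assumption that the albedo's luminance should drive roughness directly, are illustrative only – in practice you tweak them per asset by eye:

```python
import numpy as np

def levels(x, black=0.0, white=1.0, gamma=1.0):
    """Photoshop-style levels: remap [black, white] to [0, 1], then apply gamma."""
    remapped = np.clip((x - black) / (white - black), 0.0, 1.0)
    return remapped ** (1.0 / gamma)

def roughness_from_albedo(albedo_rgb, black=0.2, white=0.8, gamma=1.2):
    """Derive a rough roughness mask from an albedo array of shape (H, W, 3) in [0, 1]."""
    # Rec. 709 luminance weights for the grayscale conversion
    luminance = albedo_rgb @ np.array([0.2126, 0.7152, 0.0722])
    return levels(luminance, black, white, gamma)

# Tiny synthetic 2x2 albedo just to show the shapes involved
albedo = np.array([[[0.9, 0.9, 0.9], [0.1, 0.1, 0.1]],
                   [[0.5, 0.4, 0.3], [0.7, 0.7, 0.7]]])
rough = roughness_from_albedo(albedo)
print(rough.shape)  # (2, 2)
```

The same levels transform, with different endpoints, can drive the Height channel; the point is that one scanned map feeds several channels with almost no hand painting.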

There are other ways to texture these assets that can provide better, more accurate results, but the speed here is practically unmatched. This texture method and a high-quality raw scan are what enable us to go from dataset to a real-time asset in less than a day.

Rendering

We use a simple lookdev scene where most of the lighting work is done by an HDRI. We do have one rectangular top light to get a bit more contact shadow overall. We use a color checker and some matte/chrome balls for more material context; it also keeps the scene from being just a floating prop.

These are the assets in the scene that belong to the jug: just the textures, one static mesh, and one material instance.

This is what the material instance looks like, just simple parameters to serve a general purpose – tiling, tint, etc.

This is the spaghetti driving the material instance – nothing fancy here, just a basic Metal/Roughness setup with a couple of controls.

Other than that, it's just the camera in the scene – a plane, the asset, and an HDRI, that's it. We render out the frames using the Movie Render Queue and run them through After Effects to get the final video.

Conclusion

To people getting into photogrammetry, I'd recommend starting outdoors until you can get some good results, since you don't have to invest in photo equipment. Start with simple objects; this will naturally lead to more challenging and complex assets and to learning about the problems and nuances of photogrammetry. You don't need a lot of gear – you can do it on your phone if you don't have a camera – and the results are usable as long as you remember to have enough overlap, enough light, and a focused subject.

Thanks for reading through! We hope you found it useful. You can find more projects on our ArtStation page.

Ivan Carvalho, Founder of The Vertex Guild Studios

Interview conducted by Arti Burton
