Stefan Oprisan gave a breakdown of his 3D-scanning project. He talked about the way he uses photogrammetry to capture natural objects and how he presents the final assets in Marmoset Toolbag 3.
Hi, my name is Stefan Oprisan. I’m originally from Romania but have lived in London for the past ten years. Currently studying Games Art in my final year at Teesside University, I am primarily a 3D Environment Artist. One of my most recent projects was Journeyman, in which I was Lead Artist. I was part of a group of students who created an Alice in Wonderland game using Unreal Engine 4. During the summer, I undertook a 6-week internship as a 3D artist at R8Games, a local studio.
Photogrammetry is something that has interested me for the past year or so, and as I’m planning to integrate this workflow into my final year project in conjunction with Megascans and procedurally generated textures, I saw it as the perfect opportunity to learn the software and workflow in advance. I’m hoping that by using a combination of these three applications, the results will be unbelievably realistic, especially if rendered in Unreal Engine 4 or Cryengine.
What makes photogrammetry so interesting is being able to capture real-life objects and place them into a scene; add some lighting and you have instant, great-looking results. If I were to recreate the same objects using multiple programs, the process would take substantially longer and wouldn't look anywhere near as good. This doesn't mean I'm completely avoiding Zbrush or Megascans; if you are able to blend them together, you've killed two birds with one stone. Learning the process of photogrammetry alongside how to render the results effectively shows a range of skills to a potential employer. Only some games adopt this workflow today, but by learning it early and developing my skills now, I'll be ready as more games pick it up over the next few years.
Given that I'm still a student, buying a whole camera rig is out of the question. However, I recently rented a Nikon D3300 from the university and followed a tutorial I found online. I then went to my local park and looked for trees and rocks that struck me as different or interesting. I use Agisoft Photoscan for all of my image processing.
Before I go out to shoot, I make sure the sky is overcast to eliminate harsh shadows from the sun. Once I have chosen a tree or other object, I try to capture as many images as possible, from every angle, at the highest quality. Keeping ISO, aperture and shutter speed in mind, I usually settle on a middle ground, although leaving the camera in auto mode also works well for me – the higher the resolution of the images, the better.
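The middle ground between ISO, aperture and shutter speed can be made concrete with the standard exposure-value formula. A minimal sketch – the function name and sample settings are illustrative, not part of the workflow above:

```python
import math

def exposure_value(aperture_f, shutter_s, iso):
    """EV normalised to ISO 100: EV100 = log2(N^2 / t) - log2(ISO / 100).

    Settings with the same EV expose identically, so you can trade a
    higher ISO (more sensor noise) for a faster shutter (less motion
    blur) when shooting handheld outdoors.
    """
    return math.log2(aperture_f ** 2 / shutter_s) - math.log2(iso / 100)

# f/8 at 1/125 s, ISO 100 and f/8 at 1/250 s, ISO 200 expose the same:
print(round(exposure_value(8, 1/125, 100), 3))  # 12.966
print(round(exposure_value(8, 1/250, 200), 3))  # 12.966
```

Doubling the ISO and halving the shutter time cancel out, which is why an overcast day still leaves room to pick a low-noise combination.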
After taking all of the images I need, I bring them into Agisoft Photoscan and process them until I get a clean high poly, then export the diffuse at 8k. I then use Zbrush to retopologise the asset down to a reasonable polycount without losing too much detail or its unique silhouette. Next, I import both the high and low poly into Maya and align and scale them properly to the grid – for some reason Agisoft has an odd viewport, and assets tend to come out tiny and at an odd angle, so I use Maya's navigation to get a clearer view of the asset before export. Following this, I unwrap the low poly and export both assets ready for baking. Quick tip: the high poly will have hard edges, which become visible if you zoom into the baked normal map; setting the edges to soft should fix that.
When it comes to my lighting in the scene, unfortunately, I have to work with what the weather gives me. Having all of the right equipment to remove shadows from the asset isn’t something I can do right now, but I’m hoping to be able to build my own rig after I graduate.
Once I have the diffuse baked onto the low poly, I use a combination of Photoshop and Bitmap2Material to create the albedo. The AO/light cancellation sliders often work well, but pushing them too far does more harm than good: values that are too high tend to desaturate the entire image and create obvious seams on the model, which is another reason careful unwrapping matters. Used in moderation, those sliders are more than enough to remove the AO.
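The idea behind those sliders can be sketched in plain Python. This is not Bitmap2Material's actual algorithm, just an illustration of the principle: estimate the low-frequency shading baked into the photo, divide it out, and blend by a strength value:

```python
def remove_shading(image, strength=0.5):
    """Crude AO/light cancellation on a grayscale image in [0, 1].

    Uses the image mean as a stand-in for a heavy blur (the
    low-frequency lighting estimate), divides it out, and blends by
    `strength`. Cranking the strength to the maximum flattens the
    texture, much like over-using the real sliders.
    """
    flat = [p for row in image for p in row]
    shading = sum(flat) / len(flat)  # low-frequency lighting estimate
    out = []
    for row in image:
        new_row = []
        for p in row:
            delit = min(p / max(shading, 1e-6), 1.0)  # fully delit pixel
            new_row.append((1 - strength) * p + strength * delit)
        out.append(new_row)
    return out

# A dark (occluded) corner brightens toward the surface's true albedo:
img = [[0.2, 0.8], [0.8, 0.8]]
print([[round(p, 3) for p in row] for row in remove_shading(img, 1.0)])
# [[0.308, 1.0], [1.0, 1.0]]
```

A real implementation would blur per-pixel rather than use a global mean, but the trade-off is the same: more cancellation removes more shadow and more genuine colour variation.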
When it comes to cleaning up the asset, it largely comes down to how many pictures you have taken and your knowledge of the software. A rule of thumb for photogrammetry is to take more images than you think you need, so you won't waste time going back to the location. If there are gaps in the mesh because there isn't enough data to fill them, sometimes it's easier to delete the extra polys and leave a cleaner edge. Alternatively, in Zbrush or Mudbox you can fill the gaps with similar noise patterns; it won't matter if they don't match exactly, since the camera won't be focusing on that area. How far you optimize the asset also depends on which package you want to render it in. Even though most engines can handle millions of polygons, it's still worth producing a game-ready asset, if only to show that you have that skill.
Rendering in Marmoset Toolbag 3
Once I'm happy with the textures, it's straight into Marmoset Toolbag 3, since it's one of my favourite renderers besides UE4. One reason I prefer Toolbag over UE4 is that I don't always have time to create a complex material and lighting rig, whereas in Toolbag I can drag my textures into the correct slots and get a great result very quickly.
Once I apply the material to the asset, I search for a sky preset that best fits the visuals I'm aiming for. Whether it's a clear blue sky or a green forest preset, I then create a 3-point light rig: a main orange light for the sun, a blue light for the skylight, and a white light behind the model to brighten the dark parts. The latest version of Toolbag adds Global Illumination, which works fantastically for organic environment assets because it looks very realistic.
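A 3-point rig like this can be written down as data. A hypothetical sketch – the 45-degree offsets, colours and intensities are illustrative choices, not Toolbag presets:

```python
import math

def three_point_rig(camera_azimuth_deg=0.0, distance=5.0):
    """Place key, fill and rim lights around a subject at the origin.

    Key (warm 'sun') sits 45 degrees off the camera axis, fill (cool
    'skylight') mirrors it on the other side, and the rim (white) sits
    directly behind the subject to lift its dark edges.
    """
    def light(offset_deg, color, intensity):
        a = math.radians(camera_azimuth_deg + offset_deg)
        return {"pos": (distance * math.sin(a), distance * math.cos(a)),
                "color": color, "intensity": intensity}

    return {
        "key":  light(45,  "orange", 3.0),   # main sun light
        "fill": light(-45, "blue",   1.0),   # skylight, softens shadows
        "rim":  light(180, "white",  2.0),   # behind model
    }

rig = three_point_rig()
print(rig["rim"]["pos"])  # directly behind the subject
```

Keeping the fill noticeably dimmer than the key preserves some shadow contrast, which suits organic assets like bark and rock.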
A trick I like to use when I'm ready to render is depth of field, focusing on specific areas of the mesh. Blurring out the bottom of the ground and the top of the asset makes it more captivating, drawing the eye to the detail. I also like to increase the sharpness in the camera settings to give the illusion of stronger textures – though too much will create very obvious noise.
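The depth-of-field trick maps onto the standard thin-lens formulas. A sketch, assuming a circle of confusion of roughly 0.02 mm (about right for an APS-C sensor like the D3300's); the focal length and f-numbers are just example values:

```python
def depth_of_field(focal_mm, f_number, focus_m, coc_mm=0.02):
    """Near/far limits of acceptable sharpness, in metres.

    Standard thin-lens depth-of-field formulas via the hyperfocal
    distance. Anything outside [near, far] blurs out, which is what
    steers the viewer's eye to the focused detail.
    """
    f = focal_mm / 1000.0
    c = coc_mm / 1000.0
    hyperfocal = f * f / (f_number * c) + f
    near = focus_m * (hyperfocal - f) / (hyperfocal + focus_m - 2 * f)
    far = (focus_m * (hyperfocal - f) / (hyperfocal - focus_m)
           if focus_m < hyperfocal else float("inf"))
    return near, far

# Wider aperture (lower f-number) -> shallower depth of field:
print(depth_of_field(35, 1.8, focus_m=2.0))
print(depth_of_field(35, 8.0, focus_m=2.0))
```

The same reasoning applies to a virtual camera in Toolbag: a wide virtual aperture narrows the in-focus band around the asset's detail.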
Finally, if I'm really happy with how the asset looks, I make a turntable of it and use the shadow catcher feature in Toolbag 3, which gives it more depth and makes the asset feel grounded. I believe the new Toolbag, with its Global Illumination feature, is a step in the right direction for making assets look believable. Unreal Engine 4 demands extra effort for materials and lighting, but done correctly the results can look incredible.