Hard-Surface Artist David Belov has shared the workflow behind the Soviet DP-3B Roentgenometer project, discussed modeling in Fusion 360, and explained how the obsolete Soviet font was extracted from reference images.
Greetings, my name is David Belov, and I'm a Junior Hard-Surface Artist in my spare time. Unfortunately, I have no production experience, and I haven't attended any courses so far; all my skills come from Gumroad, YouTube, and of course 80 Level articles. The people who had the biggest influence on my workflow and on the level of quality I aim for in my art are Alexey Ryabtsev and Eugene Petrov.
In the summer of 2020, during the pandemic, I decided to acquire new skills and learn 3D. I previously worked as a CAD engineer, and that background influenced my choice of modeling software.
Every great artwork starts with good references. Since the DP-3B was created in the Soviet Union, you'll find far more references and information by searching in Russian on Google and Yandex. First, I try to find as much information as possible about the dimensions of the object:
I found relatively realistic dimensions in a Russian online shop and decided to use those to calibrate my reference image in Fusion 360. After that, I collected photos of the object from every possible angle, in as high a resolution as possible, from all available sources.
Modeling in Fusion 360
First of all, I used Fusion 360’s Image Calibrate tool on the sketch I found on Google and used the dimensions that I found previously to scale the reference image according to real-life sizes.
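The calibration step boils down to a simple proportion: one known real-world dimension fixes the scale for every other measurement in the image. Here's a minimal sketch of that idea (the function, numbers, and units are hypothetical illustrations, not Fusion 360's API):

```python
# Reference-image calibration: once one dimension is known, every other
# measurement taken off the image converts with the same scale factor.

def calibrate(known_real_mm: float, measured_units: float):
    """Return a function converting image units to millimetres."""
    scale = known_real_mm / measured_units
    return lambda units: units * scale

# Suppose the case is known to be 300 mm wide and measures 120 units
# on the uncalibrated image:
to_mm = calibrate(known_real_mm=300.0, measured_units=120.0)
print(to_mm(48.0))  # a dial measured at 48 image units, now in mm
```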
After that, I modeled everything except the bolts and cables, which I was planning to make later in Blender using Curves and the Bolt Factory add-on. It's important to avoid adding bevels that you aren't planning to keep on the low poly; otherwise, you are going to spend a lot of time removing them during topology clean-up.
CAD to Poly Conversion
When I was done with the modeling stage, I exported my model as a .step file and imported it into MoI 3D. From MoI, I exported two variants of my model in .fbx format: a low-poly variant with the N-gons output option and a dense high-poly variant with the Quads & Triangles output option.
High Poly in ZBrush
I imported the high-poly version I had made previously into ZBrush and started DynaMeshing the SubTools one by one. Usually, I aim for a density of around 1 million polygons and go higher if needed. It's important to point out that DynaMesh's resolution is size-dependent, so you need to scale up smaller objects to achieve the desired density. When I'm done DynaMeshing all the SubTools, I start applying bevels using Polish in the Deformation sub-palette.
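To illustrate why that size dependence matters, here is a purely conceptual sketch (not ZBrush's actual formula): if the polygon count roughly scales with surface area times the square of voxel density per unit, a part ten times smaller ends up with about a hundred times fewer polygons at the same resolution setting, which is why scaling it up restores the density.

```python
# Conceptual illustration of size-dependent remesh density: polycount
# grows with (object size x resolution) squared over the surface.

def approx_polycount(size: float, resolution: float) -> float:
    # Treat the object as roughly cube-like: six faces, each covered
    # by a (size * resolution)^2 grid of polygons.
    return 6 * (size * resolution) ** 2

big = approx_polycount(size=10.0, resolution=128)
small = approx_polycount(size=1.0, resolution=128)
print(big / small)  # the 10x smaller part gets ~100x fewer polygons
```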
About 95% of the time, I use Polish with the dot enabled, since it doesn't deform bevels as much as the "dot-less" version and doesn't create pinched corners like Polish Crisp Edges.
Here is a little comparison that I made:
If you want to create a variable bevel on one SubTool, you can mask out the needed parts using mask brushes and blur the mask by Ctrl-clicking on the mesh.
When I'm done with beveling, I decimate all my objects at 15% quality using Decimation Master and export everything back to Blender.
For UVs, I prefer to use a combination of Blender and RizomUV. In Blender, I use Hard Ops' Sharpen feature to automatically set hard edges on my low poly, then select them using Select Similar (Shift + G) and make UV cuts along them.
After that, in RizomUV, I add additional touches like overlapping symmetrical islands and scaling down islands in parts that won't be shown in the final render, for optimization. When I'm done with that, I import my UVs back into Blender and use the Packmaster add-on to pack them since, in my opinion, it has a better packing algorithm than RizomUV. In Packmaster's settings, I enable the heuristic option (analogous to mutations in RizomUV) and the lock overlapping option, then leave it packing for a couple of minutes.
Then I import packed UV back to RizomUV and use the free auto-shift shells script by Vadim Rychkov to automatically shift overlapping islands to the second UDIM to prevent baking issues.
Extracting Alphas From Photos
First, before starting the texturing process, I searched for fonts, extracted decals from references, and studied materials. Usually, to identify fonts, I use websites like Font Squirrel and MyFonts, but since this is Soviet-era tech, the chances of finding an exact match are low, so I decided to extract a BW alpha from a photo instead.
I found a photo with clear, contrasting text and used Topaz Gigapixel AI to upscale its resolution. Then I imported the upscaled photo into Photoshop and used Adobe Camera Raw to straighten the image with guidelines. Back in Photoshop, I applied a BW filter and used Levels to cut out unwanted details.
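The Levels step here is essentially a per-pixel remap with black and white points: anything darker than the black point is crushed to black, anything brighter than the white point is pushed to white, and the range in between is stretched linearly. A generic sketch of that math (the sample pixel values and thresholds are made up for illustration):

```python
# Levels-style remap used to cut a clean BW alpha out of a photo.

def apply_levels(value: int, black: int, white: int) -> int:
    """Remap an 8-bit value: clip below black, above white, stretch between."""
    if value <= black:
        return 0
    if value >= white:
        return 255
    return round((value - black) / (white - black) * 255)

# Crushing the range isolates contrasting text from the background:
pixels = [12, 60, 128, 200, 245]
print([apply_levels(p, black=90, white=180) for p in pixels])
# → [0, 0, 108, 255, 255]
```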
Using the same method, I extracted some alphas for wear and hand-written numbers to use later in the texturing process.
Usually, I start the texturing process by searching for photo-scanned textures on Quixel and Textures.com, but since I couldn't find any information about the materials used in the production of the detector, and nothing I found resembled my references, I was going to have to eyeball it.
I used Adobe Color to extract RGB values from the photo and color-picked them into Substance 3D Painter. After that, I created two fill layers with only the Color channel enabled and dragged and dropped the AO and Curvature maps into them, with Overlay blend mode on Curvature and Multiply blend mode on AO at ~30-40% opacity, to create a nice outline and darkening on the albedo.
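For reference, here is the standard math behind those blend modes (values normalized to 0..1; Painter's exact compositing pipeline may differ, and the sample values below are hypothetical): Multiply darkens the base by the map, Overlay darkens shadows and brightens highlights, and opacity lerps the result back toward the untouched base.

```python
# Standard Multiply/Overlay blend formulas with layer opacity.

def multiply(base: float, blend: float) -> float:
    return base * blend

def overlay(base: float, blend: float) -> float:
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

def with_opacity(base: float, blended: float, opacity: float) -> float:
    # Linear interpolation between the base and the blended result.
    return base + (blended - base) * opacity

albedo = 0.40                      # picked base color, one channel
ao = 0.30                          # dark cavity in the AO map
step1 = with_opacity(albedo, multiply(albedo, ao), opacity=0.35)

curvature = 0.85                   # bright edge in the curvature map
step2 = with_opacity(step1, overlay(step1, curvature), opacity=0.35)
print(round(step1, 3), round(step2, 3))  # darkened, then edge-lifted
```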
During texturing, I layer details one by one, just as they would be layered in reality:
- From factory.
- Roughness variation.
- Light discoloration.
- Light dirt/dust.
- Wear and scratches.
- Smudges and oil.
- Heavy dirt and rust.
Since I forgot to create the metal outline on the black "Instruction" text plate, I decided to create it procedurally in Substance using the Curvature generator with some blur and sharpen on top, and it came out looking quite good.
For rendering, I usually use Marmoset Toolbag since it renders fast and looks good. The settings I usually enable are:
- Ray tracing with 3 bounces and 7000 samples.
- ACES tone mapping (there's an interesting article about ACES by Chris Brejon).
- Chromatic aberration.
For my fill light, I created a Spot Light behind my main camera, set its shape to square with max scale values, and dropped it under the main camera in the hierarchy to lock its transforms; that way, the fill light always follows my render camera.
In conclusion, I want to thank the 80 Level team for giving me the opportunity to publish this article and the artists who helped me by providing useful feedback during my writing process, mainly Javier Benitez from the Game Art Blog, Leon Johnson, Adrien Roose, Bohdan Stetsiura, and Sylvain Bouland.