Making a Mechanical Sloth in Blender & Substance 3D Painter

Oleg Senakh has returned to tell us about the Mechanical Sloth project and share an interesting workflow that skips high poly modeling and retopology processes entirely.

Hello everyone! It’s been a while since I last took part in an interview, mostly because of the hardships of moving to another city and preparing for my first university year. But now I’m ready to share some information on a hybrid pipeline that I figured out!

I’m Oleg Senakh, a freelance 3D artist from Russia. I picked up 3D modeling in school and have been tinkering with it for 4 years now.

Every summer I choose a really challenging concept to recreate and do my best to make a decent portfolio piece. This time it was a stunning piece of concept art by Longque Chen from ArtStation. I think anyone interested in CG has seen it at some point, and throughout the year reposts of this concept kept appearing in my feed. It looked like it was just asking to be put into a 3D medium. And so I started thinking about how to approach such an object with my current skills.

Mecha Sloth Project

In short: I had only a low poly/mid poly model without bevels, but with geometry dense enough that cylinders and other curved objects wouldn’t look too edgy. Then I baked a filter-based Normal Map in Blender, creating bevels and Boolean unions. From this Normal Map, I derived the World Normal and Curvature Maps. The Ambient Occlusion Map was baked from the low poly mesh, so its edges looked sharp, but that was easy to fix by blurring it in Substance 3D Painter. Using all these maps, I was able to work on textures as if it were a high poly to low poly bake.

And now the details!

As I said, I didn’t use any mesh baking in this model. That brought some difficulties: the surface shading had to be at least somewhat decent, as there would be no chance to fix it easily later. Most of the time it was pretty simple low poly modeling and curve elements, but here are some tips for the tricky parts:

  • Boolean + Data Transfer. In some cases, a Boolean really messes up the shading, but with a Normals transfer from the original meshes, it’s quite a simple fix. Combining a slightly bent surface with a straight cut looks pleasing to the eye, and you can see an easy approach to such a shape here.
  • SubDiv + Boolean. It’s really easy to make some interesting shapes with just a SubDiv cube with a couple of Loopcuts and a single iteration Boolean. Here’s an example.
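The Boolean + Data Transfer combo above can also be set up from Blender's Python API. Here's a minimal sketch (it only runs inside Blender; the object names are placeholders, and the settings mirror the Data Transfer modifier's UI options):

```python
import bpy

obj = bpy.data.objects["BentPanel"]          # the cut surface (placeholder name)
cutter = bpy.data.objects["Cutter"]          # the Boolean cutter (placeholder)
source = bpy.data.objects["BentPanel_orig"]  # uncut copy that keeps clean normals

# Straight Boolean cut -- this is what breaks the shading
boolean = obj.modifiers.new("Cut", 'BOOLEAN')
boolean.object = cutter
boolean.operation = 'DIFFERENCE'

# Transfer custom normals back from the original, uncut surface
dt = obj.modifiers.new("FixShading", 'DATA_TRANSFER')
dt.object = source
dt.use_loop_data = True
dt.data_types_loops = {'CUSTOM_NORMAL'}
dt.loop_mapping = 'POLYINTERP_NEAREST'
```

Keeping an uncut duplicate of the surface around is the key: the Data Transfer modifier projects its smooth normals onto the cut result.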

For the cables on the side of the engine assembly, I used the Cablerator add-on to easily add cable caps and instance them, adding some fuzz to otherwise strict and plain shapes.

I built the robot from the body out to the limbs, adding and growing a simple rig along the way. This design doesn’t look very practical and has limited freedom of movement, yet I needed at least some posing abilities for the main and additional shots.

While modeling, I tried to separate mesh parts with the Normal Map bake in mind. Geometry within one object gets “fused” together in the bake, so all the moving joints and pistons had to be separated.

UV unwrapping was done in Blender with the UVPackmaster and UV Squares add-ons. The UVs were packed into several UDIMs to maintain good enough texel density for a background object.


So, at this point, I had a (somewhat) finished low poly model. Adding bevels and smooth Boolean unions is a nice feature of the TexTools add-on’s baker. But if those details were the same size throughout the whole model, it might come out dull-looking. To solve this, I baked 3 Normal Map variants – with 0.01, 0.02, and 0.03 relative bevel sizes.
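The author drives this bake from TexTools' UI, but the same bevel-in-the-normal-map trick can also be reproduced with Cycles' Bevel shader node feeding the material's Normal input before a normal bake. A hedged bpy sketch (runs only inside Blender; object and node names are placeholders):

```python
import bpy

obj = bpy.data.objects["MechaPart"]   # placeholder object with a Principled material
tree = obj.active_material.node_tree

# Shader-level bevel: rounds the edges in shading only, without extra geometry
bevel = tree.nodes.new("ShaderNodeBevel")
bevel.samples = 8
bevel.inputs["Radius"].default_value = 0.02   # one of the 0.01/0.02/0.03 variants

tree.links.new(bevel.outputs["Normal"],
               tree.nodes["Principled BSDF"].inputs["Normal"])

# Bake the resulting tangent-space normals (Cycles only; an image node
# must be selected in the material as the bake target)
bpy.context.scene.render.engine = 'CYCLES'
bpy.ops.object.bake(type='NORMAL', normal_space='TANGENT')
```

Running this three times with different Radius values would give the same kind of 0.01/0.02/0.03 variant set described above.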

Then all the maps were imported to Substance 3D Painter and mixed with masks as fill layers. This is how the surface looks with a smooth metal matcap.

At the same time, I processed the low poly-based AO Map with a Blur filter, which came out surprisingly nice with very few artifacts. We all know how Substance’s blur works. Note: the AO was baked in double-sided mode so that the blur wouldn’t reveal blank white areas behind mesh faces.

Before working on textures, I baked additional maps that don’t require a high poly bake, like Curvature, World Normals, and Position.


The texturing process was one I kept revising right up until the final renders: adding little details, reworking the dirt, and tweaking Roughness on the painted parts.

Here are some tips on how I textured a large industrial-purpose object:

  • All wear and scratches are multi-layered, first revealing the primer, and only then the raw metal. Using the Rust MatFX filter in Painter, I mapped the metallic parts as a source of rust in a few clicks.
  • Dirt also has two components – cavity dirt buildup and water-stained, smudgy dirt.
  • Moss clumps are added with a world normal orientation mask multiplied by a cavity mask.
  • Surface details like indents, panel seams, plug sockets, vents, and everything else are done with Alpha Maps from the ArtStation Marketplace. These details are applied through an invisible height fill layer on top with a “passthrough” blending mode, and with an anchor point on this layer, the fake hard-surface details also affect the generators for scratch and dirt effects.
  • Decals like warning signs, lettering, and patterns are taken from a huge decal atlas from the same marketplace.
  • Slight dust fading is added by World Normals, mostly affecting Roughness.
  • The insides of the cabin are kitbashed with height alphas and stamps, adding some “noise” to break up the plain surfaces inside. I didn’t go for the elaborate process of hand-modeling everything behind the glass, because interior or close-up renders weren’t planned.
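The moss rule in that list is just per-pixel math: an up-facing orientation mask multiplied by a cavity mask. A toy Python sketch of the idea (not Painter's actual generator code; `cavity` is assumed to be in [0, 1]):

```python
def moss_mask(world_normal, cavity, up=(0.0, 0.0, 1.0)):
    """Up-facing orientation (dot product with 'up'), gated by cavity occlusion."""
    dot = sum(n * u for n, u in zip(world_normal, up))
    orientation = max(0.0, dot)       # only upward-facing surfaces grow moss
    return orientation * cavity       # cavity in [0, 1], 1 = deep crevice

print(moss_mask((0.0, 0.0, 1.0), cavity=0.8))  # horizontal crevice -> 0.8
print(moss_mask((1.0, 0.0, 0.0), cavity=0.8))  # vertical face -> 0.0
```

Multiplying the two masks means moss only appears where both conditions hold, which is exactly what stacking them as mask layers does in Painter.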

When the base materials were more or less defined, I started working on the main render scene. I wanted to follow the concept’s framing, but alter it a bit in favor of better model presentation. For vegetation, I used the Megascans library, as well as the Graswald and Botaniq add-ons. Graswald surely came in handy with its scatter features for multiple particle systems, allowing for a seamless blend between moss, small leaves, and little bushes on the big branch.

The somewhat decently rigged model was very easy to position on the branch exactly the way I wanted.

Lighting and Rendering

The lighting is made with two HDRI maps blended together and a sun lamp. The HDRI maps were mixed to create a mist or waterfall effect under the robot, because my initial idea was that it’s hanging over a valley far above the ground. So, the bottom of the environment map is some sort of over-the-clouds HDRI, and the top part is a forest one. I also placed some branches in front of the sun to cast shadows on the robot.
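The two-HDRI mix can be pictured as a blend factor driven by how far above or below the horizon a given environment direction points. A toy sketch of that factor (the real setup lives in Blender's World shader nodes, not code, and `softness` is a made-up knob):

```python
def hdri_mix_factor(direction_z, horizon=0.0, softness=0.2):
    """0 -> clouds/mist HDRI below the horizon, 1 -> forest HDRI above it."""
    t = (direction_z - (horizon - softness)) / (2.0 * softness)
    t = min(1.0, max(0.0, t))
    return t * t * (3.0 - 2.0 * t)    # smoothstep for a soft horizon band

print(hdri_mix_factor(-1.0))  # straight down -> 0.0 (over-the-clouds HDRI)
print(hdri_mix_factor(1.0))   # straight up   -> 1.0 (forest HDRI)
```

In node terms this would be a Separate XYZ on the environment vector feeding a Map Range or ColorRamp into the mix factor of two Environment Textures.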

To make the shot a bit more lively, I took some scans and random 3D models from my library and placed them across the whole scene. Birds flying and sitting. Robot operators taking a rest. My old boombox model saw its use as a background prop after a couple of years of sitting on the hard drive. I like to think that it’s happy about it. And I bet most viewers didn’t notice a colibri (hummingbird) near the top branch part.

The background is set up with a jungle panorama and some baobab clip art. In between the layers, I inserted some nearly invisible blue fills to simulate sky haze. A huge deal in such renders is post-processing – just take a look at the impact it has on the mood of the scene:

To easily adjust effects like shadows, reflections, and indirect light, I used separate render passes: Raw Composite, AO, Diffuse Indirect, Glossy Direct, and Glossy Indirect.

In short, I bumped up all the passes to increase contrast. It’s really noticeable with the direct reflections. The indirect passes were used as brightening layers with masks to bring up very dark areas on the robot, like vents, the cable mess, and such. All post-process work was done in Adobe Photoshop; this is how the layer stack looks:

As I said at the beginning, I also wanted to figure out a way to showcase the whole model. A nice way to do it was a series of shots in a huge hangar. I found a mid-res photo of a hangar with a perfectly reflective floor surface and upscaled it. Using fSpy, I created a camera setup for Blender with the correct perspective and camera parameters.

Combining a 3D model and a photo, especially with a shiny floor, is quite a fun task. To achieve a more or less believable look I used multiple render layers in Blender:

  • Object layer – objects are visible directly, floor plane is visible indirectly, shadow catcher disabled. Raw Composite and post-process passes are used.
  • Reflection layer – objects are visible indirectly, reflection plane is visible directly, shadow catcher disabled. Only Glossy Indirect pass was used.
  • Shadow layer – objects and reflection plane are visible indirectly, shadow catcher visible directly with transparency.
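The way those three layers stack over the backplate photo can be sketched as simple per-pixel arithmetic. This is only an illustration of the order of operations, not the actual Photoshop math, and `reflection_strength` is a made-up knob:

```python
def combine_over_photo(photo, obj_rgba, reflection, shadow_alpha,
                       reflection_strength=0.6):
    """Combine the three render layers over the backplate photo, per channel.
    shadow_alpha darkens the photo, the reflection pass is added on top,
    then the object layer is composited 'over' the result. Values in [0, 1]."""
    shadowed = tuple(p * (1.0 - shadow_alpha) for p in photo)
    mixed = tuple(min(1.0, s + r * reflection_strength)
                  for s, r in zip(shadowed, reflection))
    a = obj_rgba[3]
    return tuple(o * a + m * (1.0 - a) for o, m in zip(obj_rgba[:3], mixed))

# Where the object layer is fully opaque, it covers the photo entirely:
print(combine_over_photo((0.5, 0.5, 0.5), (0.2, 0.3, 0.4, 1.0),
                         (0.0, 0.0, 0.0), shadow_alpha=0.0))
```

The ordering matters: the shadow must darken the photo before the reflection is added, and the object render always sits on top.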

Here is the layer setup for all of the shots:

With a little cleanup and denoising, these shots came out exactly the way I wanted:

As an afterthought, I decided to complete the artwork with a poster. The idea of a retro-futuristic weathered industrial poster came to me while I was browsing Pinterest for inspiration. All in all, the entire poster is a combination of some alphas, a grunge overlay, and a folded paper overlay. But there’s one tricky thing – the “blueprint” orthographic drawings from different angles. It’s actually very easy with the Blender 2.93 Grease Pencil modifier called Line Art. I marked all the edges that should appear as Grease Pencil lines and enabled only them in the Line Art generator. Then I rendered the Grease Pencil layer alone with a transparent background and added it to the poster. These are the settings for the Line Art modifier:
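That Line Art pass can also be set up from a script. A hedged bpy sketch for Blender 2.93+ (runs only inside Blender; the property names mirror the Line Art modifier's UI toggles, and I'm assuming the auto-created modifier is the last one on the new object):

```python
import bpy

# Add a Grease Pencil object that traces the whole scene (Blender 2.93+)
bpy.ops.object.gpencil_add(type='LRT_SCENE')
gp = bpy.context.object
lineart = gp.grease_pencil_modifiers[-1]     # the auto-created Line Art modifier

# Draw only edges explicitly marked for line art, as described in the article
lineart.use_edge_mark = True
lineart.use_contour = False
lineart.use_crease = False
lineart.use_intersection = False
lineart.use_material = False

# Transparent film, so only the linework ends up in the render
bpy.context.scene.render.film_transparent = True
```

Rendering with a transparent film gives a linework-only image that drops straight onto the poster layer stack.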


That’s all, folks. But before clocking off, I’d like to share some advice for anyone preparing to execute a big project like this one. First of all, you have to know exactly what is required of you from start to finish. I thought about the plugins I’d use, the pipeline, the shots, and the techniques for this project for a week before even creating the project folder. My second piece of advice: organize everything. Name folders, project file iterations, objects and collections, baked images, exports, and scene descriptions. These projects can stretch over weeks and even months, and you can’t expect your already overloaded brain to remember “where did I put that one part export from the beginning that I wanted to reuse as a kitbash?”

I also highly recommend searching for inspiration in any related field – for example, I used sci-fi and industrial references at the same time to finalize the look of all the shots. And remember – planning is more important than any existing knowledge. With a good, in-depth plan, you can thoroughly learn new things on the go. Without one, there’s a chance of falling into the procrastination pit, losing track of what you’ve already done and what awaits you.

And regarding the workflow I used: I think its main advantage is the result-to-time ratio, since the high poly modeling and retopology processes are skipped entirely. It can be adapted to both gamedev and indie film requirements, or be used as a prototyping stage for a higher-quality model. I’d love to know if you have any other ideas for applying this pipeline in real cases.

This interview was a great opportunity to share my latest pipeline studies on 80 Level, big thanks to the site staff and Arti Sergeev especially, for interesting questions!

You can check out this artwork and many others on my ArtStation profile.

Oleg Senakh, Freelance Artist

Interview conducted by Arti Sergeev
