Creating Realistic Portrait of Anton Chigurh with ZBrush, Substance 3D, & UE5

Vladimir Fedchenko talked to us about creating a 3D portrait of Anton Chigurh from No Country for Old Men, discussing modeling and texturing a realistic character model using ZBrush, Substance 3D Painter, and Unreal Engine.

Introduction 

Hey, I’m Vladimir Fedchenko, a 3D Character Artist with over 6 years of professional experience creating realistic 3D characters. I chose this niche because I felt drawn to the challenge. When I started my 3D modeling journey, nothing felt more challenging and exciting than creating characters, and I found myself genuinely enjoying the process. I still do!

My journey into 3D art began during my studies, where part of my university education covered major aspects of 3D design. I took all that I learned as a starting point before going down the self-taught path and acquiring more advanced skills online through courses and other available content.

I’ve spent the majority of my career so far doing freelance work, and I have contributed to a wide range of major projects that span games, trailers, and more, delivering production-ready character models for studios, indie teams, and private creators worldwide.

Inspiration & References

I was deeply impressed by Anton Chigurh’s character in the Coen brothers’ film, No Country for Old Men. It’s a film that stands apart from typical Hollywood fare. Even though the central figure is a villain, you can't help but admire Javier Bardem’s incredible talent and how convincingly he portrays a sociopath.

With this likeness project, my primary goal was to capture Chigurh's psychological demeanor and overall mood so perfectly that even a viewer who had never seen the movie would instantly recognize him as a psychopath – a presence that gives you chills just by looking at him.

To achieve this depth, I typically immerse myself in the character. I find a key scene or short clip on YouTube and listen to it on repeat, sometimes for hours, while I sculpt. It functions almost like white noise, helping me concentrate and stay deeply immersed in the character's psyche.

I am very self-critical about my work, and a main goal for me has always been capturing the essence of a character so that the models feel alive. With Anton Chigurh, it was his psychological demeanor; with the Severus Snape likeness I did recently, it was the very subtle, almost imperceptible sadness on his face.

Setup & Scale Calibration

Before starting the likeness sculpt, I perform a critical initial setup to ensure seamless transfers and global symmetry calibration between Blender and ZBrush using the GoZ add-on.

I open a new document in ZBrush and load a basic primitive, like a Cube, then convert it to an editable polymesh using Make PolyMesh3D. This establishes a known object at the center of ZBrush's coordinate system. I then use the GoZ add-on to send this new polymesh cube to an empty Blender scene. This initial calibration guarantees that all future model transfers using the GoZ/GoB add-ons will correctly place your models at the center of the world coordinates, preventing frustrating alignment issues.
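As a quick illustration, here is a minimal Blender Python sketch – my own assumption of how you might verify the result, not part of the original workflow – that checks whether the GoZ-transferred object really landed at the world origin with neutral transforms:

```python
import bpy

# Assumes the GoZ-imported calibration cube is the active object.
obj = bpy.context.active_object

off_center = obj.location.length > 1e-6
scaled = any(abs(s - 1.0) > 1e-6 for s in obj.scale)
rotated = any(abs(r) > 1e-6 for r in obj.rotation_euler)

if off_center or scaled or rotated:
    print(f"Calibration off: loc={tuple(obj.location)}, "
          f"rot={tuple(obj.rotation_euler)}, scale={tuple(obj.scale)}")
else:
    print("GoZ calibration OK: the cube sits at the world origin.")
```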

Likeness Sculpting

I start the face with a base mesh of a human head that already has UVs and basic eyes set up.

Camera Matching: Before beginning the sculpt in ZBrush, I set up my main camera in Blender, adjusting the focal length to 250mm. This extreme focal length closely matches ZBrush's orthographic view, which minimizes distortion while sculpting. Exceptions exist depending on the reference image's focal length, but this method works in most cases. After you finish your model at 250mm, you can easily go down to 150mm or 100mm without major distortion. By contrast, you can run into serious trouble if you model at a 50mm focal length and then try to increase your camera's focal length afterwards.
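For reference, a minimal Blender Python sketch of this camera setup might look like the following (the object name is illustrative):

```python
import bpy

# Create the review camera with the long 250mm focal length,
# which approximates ZBrush's near-orthographic view.
cam_data = bpy.data.cameras.new("LikenessCam")
cam_data.lens = 250.0  # focal length in millimeters

cam_obj = bpy.data.objects.new("LikenessCam", cam_data)
bpy.context.scene.collection.objects.link(cam_obj)
bpy.context.scene.camera = cam_obj  # make it the active render camera

# Later, when framing the final shot, step down gradually:
# cam_data.lens = 150.0  # still close to the proportions you sculpted at
```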

Real-Time Review: It is critically important to check the proportions of your head sculpt in Blender while sculpting in ZBrush, because Blender hosts your final render camera, which is far more accurate than the ZBrush viewport (whose focal length options are not physically accurate). For a faster workflow, I use the Blender GoB and ZBrush GoZ add-ons. Having two screens set up helps immensely, allowing me to check both ZBrush and Blender views simultaneously, making it easier to maintain proportions and see the sculpt from the final render camera in real time.

Best Practice: The best practice remains modeling the core likeness in ZBrush using orthographic projection and validating the result by setting up the correct focal length in the rendering software.

Reference Images: It is important to compare your model to reference images and get as close as possible in the major proportions. For that, you can use Blender's reference image option, or simply overlay a PureRef image on top of your model with its opacity lowered and mouse input in PureRef disabled.

Full Likeness vs. Likeness for a Single Picture Shot

In the case of my Anton Chigurh project, I aimed for complete likeness, meaning he looks like the character from all angles. To achieve this, it's crucial to first work on the face in its neutral position and bring it to a decent level of likeness using various angles and different reference images. It doesn't need to be perfect, just good enough.

At this initial stage, your model will not look exactly like the final shot you are aiming for, as it is a neutral likeness. However, this base is extremely important later in the process. If you achieve high likeness on a symmetrical base model, when you start setting up your specific final shot, you will be able to focus on the desired angle while still maintaining correct proportions from other viewpoints (with only minor tweaks required). Additionally, you can easily reuse this basic neutral face for many more interesting poses.

If you choose to aim for a likeness based only on one specific shot, the situation changes. Your likeness might look great in that particular shot, but from other angles, it will likely not look as convincing. This is a different technique, and its main benefit is speed. However, you sacrifice the overall quality and versatility of the model. If it is only a portfolio piece and you do not require production-level quality, you can leave it as a single-shot model and still be fine. For this particular model of Anton Chigurh, I went for full likeness from different angles.

Clothes Modeling

There are three main approaches to creating clothes:

1. Clothes simulation using Marvelous Designer or CLO 3D.

2. Sculpting in ZBrush, then assigning different polygroups to the different parts of your clothes and running ZRemesher with the Keep Groups option enabled to ensure a clean result. Unwrapping is also easy afterwards, based on the existing polygroups.

3. Creating clothes with polygonal modeling. For likeness projects, I prefer building clothes manually, while for original characters I like working with simulations. The main reason is that for a likeness, the clothes need to match the reference image as closely as possible, and in that case it is easier to create a low-poly mesh for the clothes using Blender's polygonal modeling. While building the clothing base in Blender, I also get very clean topology, which makes creating UVs for the clothes simple afterwards.

Jacket Mesh & UV Process

My process for creating the clothing mesh starts simply: I first establish a low-polygon base mesh for the jacket. After that, I add one or two layers of subdivisions.

When applying these subdivisions, it's generally best practice to use the Simple subdivision algorithm for the very first level, because it helps the mesh retain the sharpness of its corners and gives a much better result, and then switch to the Catmull-Clark algorithm for subsequent levels. It is also recommended to crease the outer edges so they are not deformed too much.
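As a hedged sketch of this setup in Blender Python (assuming Blender 4.x, where edge creases are stored in the "crease_edge" attribute), stacking the two modifier types and creasing the open border edges might look like this:

```python
import bpy
from collections import Counter

obj = bpy.context.active_object  # the low-poly jacket mesh
mesh = obj.data

# First level: 'SIMPLE' subdivision keeps the corners sharp.
first = obj.modifiers.new("Subdiv_Simple", 'SUBSURF')
first.subdivision_type = 'SIMPLE'
first.levels = 1

# Subsequent level: Catmull-Clark for the final smooth surface.
second = obj.modifiers.new("Subdiv_CatmullClark", 'SUBSURF')
second.subdivision_type = 'CATMULL_CLARK'
second.levels = 1

# Crease the open border ("outer") edges so smoothing doesn't pull them in.
face_count = Counter(key for poly in mesh.polygons for key in poly.edge_keys)
crease = (mesh.attributes.get("crease_edge")
          or mesh.attributes.new("crease_edge", 'FLOAT', 'EDGE'))
for edge in mesh.edges:
    if face_count[edge.key] == 1:  # edge bordering only one face
        crease.data[edge.index].value = 1.0
```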

This is the result I got using polygonal modeling and simple ZBrush sculpting.

UV Mapping & Texel Density

When I worked on the UV unwrapping for this jacket, I didn't prioritize an extremely high texel density. The key considerations were simpler and more practical: I focused on ensuring there was absolutely no stretching on the UV islands and that all islands were oriented correctly. This consistency in orientation is crucial for texturing later on, as it guarantees that any procedural fabric patterns will have a consistent size and direction across the entire garment.

Baking & Cage Mesh Creation

For this specific project, I also had to create a separate cage mesh for baking. This was necessary because some elements of the jacket were slightly overlapping, which was creating noticeable artifacts in both the ambient occlusion and normal maps.

Creating the cage is a straightforward process: I copy the low-resolution mesh and then slightly inflate it using Blender or ZBrush. It's essential to inflate the mesh enough to enclose the high-poly detail without introducing new overlaps. The goal is to create a very compact cage to minimize the distance of the baking rays. For any areas where the automated inflation doesn't fully cover the mesh, I manually tweak the cage with sculpting in ZBrush or proportional editing in Blender.

The most critical step here is to never alter the geometry or the UVs of the cage mesh itself, as doing so would invalidate the baking process.
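To make the idea concrete, here is a minimal Blender Python sketch of one way to do the inflation – an assumption on my part, not the author's exact steps – duplicating the low-poly mesh and pushing it out with a Displace modifier along the normals, which moves vertices without ever touching the topology or UVs:

```python
import bpy

# Assumes the low-poly bake mesh is the active object.
src = bpy.context.active_object

# Duplicate the object and its mesh data; UVs and topology stay identical.
cage = src.copy()
cage.data = src.data.copy()
cage.name = src.name + "_cage"
bpy.context.scene.collection.objects.link(cage)

# Inflate along vertex normals; the Displace modifier never changes
# vertex count or UVs, so the cage stays valid for baking.
push = cage.modifiers.new("Inflate", 'DISPLACE')
push.direction = 'NORMAL'
push.mid_level = 0.0    # so 'strength' is the raw offset distance
push.strength = 0.005   # keep the cage tight; tune to your asset's scale
```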

Here is an example of the distance-based Substance 3D Painter cage vs. the custom cage that I created.

In most cases, the automatic Substance 3D Painter cage can solve most issues, but it is a newer feature and not 100% reliable, so it is still important to know how to work with a custom cage for baking.

This is an example of ambient occlusion baked in Substance 3D Painter using the custom cage.

Hair Grooming

For the hair, I used Blender’s node-based hair system. This is a big advantage compared to making hair in XGen, as it keeps the entire workflow within the same rendering software. The process of creating hair in Blender is almost identical to XGen – if you can groom there, you can do it in Blender – but it requires learning Blender’s specific hair nodes to achieve high-quality results.

Density Strategy: I created a hair density map using my existing face mesh to leverage its proper UVs. Creating separate geometry is unnecessary and counterproductive, as any adjustments to the face mesh won’t automatically update the hair.

Realistic Count: To look realistic, you generally need 50k-70k strands for the head; 150k is often overkill. In 3D, hair strands are typically made slightly thicker than real-life hair, as using the real-world thinness would make them look like hay.

Manual refinement is key: hair creation tutorials often overemphasize the use of modifiers. Modifiers are extremely important, but almost nobody mentions the need for a manual pass after the modifier work is done. Procedural hair alone will rarely look fully convincing for a likeness based on a specific image, though it can be fine for a character where you don't need to tweak every single strand. When creating a likeness, it is best to build a very solid base first using hair guides with modifiers, and then add all the fine details and realism with Blender's manual grooming tools.

Creating realistic hair in Blender is definitely a tricky and complex subject. It’s difficult to explain the entire process here, as it really deserves a dedicated post of its own.

To summarize:

1. Creation of a Density Map.

2. Placing Hair guides.

3. Generating procedural hair and applying modifiers.

4. Applying procedural hair and fine-tweaking hair with manual grooming tools.

I've included a very basic Blender hair node setup. While this setup is sufficient for simple hair needs, you need to know a lot more for something more complex.

Compared to systems like XGen, there isn’t as much comprehensive information or as many tutorials available online for Blender’s hair system. However, I believe that if you put in the time to master it, the payoff is absolutely worth it.

To be completely honest, most of what I know about it I learned through trial and error. I prefer working in Blender, and I find its hair system to be much more convenient than using XGen in combination with Blender for my workflow.

Basic Hair Curve Node System Explained

Interpolate Hair Curves generates new hair curves on the mesh, usually filling the space between existing guide curves to achieve greater density.

The Image Texture node holds your density map, which is a painted texture that controls the concentration of hair strands. This texture’s output is connected to the Density input of the Interpolate Hair Curves node and references the UV Map of your underlying mesh to ensure proper placement.

Clump Hair Curves adds clumping to the hair. Clumping guides can be generated by tweaking the Guide Distance parameter.

Set Hair Curve Profile determines the final thickness (radius) and shape of your hair strands as they appear in the render.

Important: After you are done with the procedural hair generation, if you need detailed, final adjustments to fully match the likeness, you must apply the Geometry Nodes modifiers to convert the procedural result into static curve data. If you do not apply the modifier, you can only groom the initial, low-count guide curves; applying it allows you to manually groom and refine every resulting hair strand.

Retopology & Unwrapping

My base head mesh already had clean topology and UVs. As long as I preserve them, I don't need to redo anything, which is a major speed advantage in production. But for this work, I used a hybrid workflow.

Sculpting Techniques: Working from a clean base mesh requires strong subdivision sculpting skills, since non-subdivision tools like DynaMesh, Sculptris Pro, and Decimation would destroy the existing topology and UVs. However, when I start from a sphere or ZBrush face-planes, I can use those free-form tools to gain flexibility in the early stages.

Hybrid Workflow: I often combine both methods: I first sculpt freely on a DynaMeshed version of my base mesh, which lets me use DynaMesh or Sculptris Pro, then use ZWrap to accurately project my clean base topology and UVs onto the high-resolution sculpt, and continue refining with subdivisions. This delivers both creative freedom and a production-ready model. When you make a base model with the intention of creating different poses from it, the base doesn't need to be perfect – it is enough to capture the main shapes correctly. It heavily depends on what your end goal is, though. In the case of this likeness, what I had achieved was enough.

Extreme Detail: Subdivision modeling is also essential for extreme detail. DynaMesh caps at around 8 million polygons, while regular ZBrush subdivisions can reach 100 million per subtool. In my case, 8 million polygons were more than enough to hold this amount of detail.

For ultra-high-definition details (necessary for cinematic close-ups), ZBrush has HD Geometry, which pushes the resolution up to about one billion polygons. For average work, though, I would not recommend it: you cannot use layers with it, and sculpting in HD Geometry is a bit harder because you cannot see your whole model at once and have to work on it part by part. I personally use it very rarely, but I find it useful when I need to achieve very high-definition quality.

Texturing: Context & Calibration

Texturing was primarily done using Blender and Substance Painter.

Software Choice: For this project, I didn’t plan to show extremely high-definition pores or very tight close-up shots, so a UDIM workflow wasn’t necessary. Because of that, Substance 3D Painter was the most efficient choice, using a single 4K texture per map.

As I mentioned before, my head mesh was around 8 million polygons, so anything above 4K would have been unnecessary. A 4K texture can theoretically store up to around 16 million pixels (4096 × 4096 = 16,777,216), which represents the maximum amount of detail it can hold. In a simplified way, you can imagine one pixel roughly corresponding to one polygon, which means that a 4K map is more than enough to capture the level of detail present in the model of Anton Chigurh.

However, it’s important to remember that this is only an ideal case. Actual captured detail depends heavily on texel density, UV layout, padding, and how the texture space is distributed. In production, you almost never get the full theoretical resolution, so the real amount of captured detail will always be somewhat lower.

If this were a high-end cinematic model requiring many UDIMs, I would have chosen Foundry Mari, as it is better optimized for heavier meshes and large sets of texture tiles. That said, Substance 3D Painter is also well optimized for a moderate UDIM workflow – just not for very high-end cinematic models.

Technique over Resolution: It is absolutely possible to create very realistic, convincing textures using lower-resolution maps as long as you fully utilize the textures and focus on technique rather than resolution.

Contextual Texturing: To achieve realism, proper lighting and camera setup are just as important as the textures themselves. You can’t assume your textures are finished only because they look good in Substance 3D Painter. It’s crucial to constantly check your textures in the render with the final lighting and camera throughout the workflow. That’s why I don’t bother checking my textures in Substance 3D Painter’s material view – I focus on flat albedo colors while constantly exporting and evaluating them in the rendering software. The final result is determined in context, not inside the texturing program.

Color Accuracy: For cinematic shots, colors must match your reference very closely. In Blender, you can use the Hue/Saturation/Value node to adjust colors. Once the colors are right, you must bake the changes directly onto your albedo texture using Blender's baking system. You then bring this corrected albedo into Substance 3D Painter to start layering details on top, ensuring your base colors are accurate and consistent. Also, don't forget to remove the correction node from your node stack in Blender afterwards – you no longer need it, since you'll be using the texture you already baked.
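A hedged sketch of that bake step in Blender Python might look like the following (object, material, and file names are illustrative, and the material is assumed to already contain the HSV correction):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.samples = 16  # a color-only bake needs very few samples

obj = bpy.data.objects["Head"]  # hypothetical head object
mat = obj.data.materials[0]
nodes = mat.node_tree.nodes

# Target image for the bake; Cycles writes into the *active* image node.
bake_img = bpy.data.images.new("albedo_corrected", 4096, 4096)
tex_node = nodes.new("ShaderNodeTexImage")
tex_node.image = bake_img
nodes.active = tex_node

bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Bake only the diffuse color, skipping lighting contributions.
bpy.ops.object.bake(type='DIFFUSE', pass_filter={'COLOR'}, margin=16)

bake_img.filepath_raw = "//albedo_corrected.png"
bake_img.file_format = 'PNG'
bake_img.save()
```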

Mesh Maps: While other maps are still very important, there is nothing extraordinary about them, and they all fall under the same rule as the albedo map. From the mesh maps, I usually use the ambient occlusion map to darken some areas of my albedo. The curvature map can also be used for albedo and roughness adjustments. If you want subsurface scattering, a thickness map can serve as the base for a scattering map. You can bake all of these maps directly in Substance 3D Painter.

In the PBR workflow, I mostly use the albedo, roughness, and normal maps. You barely ever need a dedicated metallic map for skin – in PBR terms, a surface is either metallic or it isn't, and a face isn't. From an artistic point of view, you can still tweak the metallic value in Blender on the main shader, but there is no need for a separate map, and most likely you won't touch it at all.

Displacement Map: If you want high-quality close-up shots, you definitely need a displacement map. It is not possible to use Blender with a 100+ million polygon mesh in Cycles without killing your GPU – I can guarantee that if you try something like that, in 99% of cases Blender will just shut down immediately. When working with a displacement map, my recommendation is not to go beyond 4 million polygons in Blender.

Even if you baked your displacement map from a mesh with more than 100 million polygons, a 4-million-polygon mesh plus a displacement map in Blender can carry all of that detail – just choose Displacement and Bump as the displacement method. Displacement physically moves your geometry, while Bump adds the illusion of fine micro detail. Sometimes 1 million polygons is enough, and there is no need to go even to 4 million. If you want to use real geometry instead, you need CPU rendering and a lot of RAM: GPU rendering draws on your GPU's VRAM, while CPU rendering uses system RAM. CPU rendering can handle much heavier scenes, but it is also much slower, though far more stable.
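As a minimal sketch of the shader-side setup (material and texture names are illustrative; note that on Blender 4.1+ the property lives on the material itself as displacement_method rather than under cycles):

```python
import bpy

mat = bpy.data.materials["Skin"]  # hypothetical skin material
mat.cycles.displacement_method = 'BOTH'  # "Displacement and Bump" in the UI

nodes = mat.node_tree.nodes
links = mat.node_tree.links

# Load the baked displacement map as non-color data.
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//disp.exr")  # hypothetical file
tex.image.colorspace_settings.name = 'Non-Color'

# Feed the height through a Displacement node into the material output.
disp = nodes.new("ShaderNodeDisplacement")
disp.inputs["Scale"].default_value = 0.01  # tune to your sculpt's depth

output = next(n for n in nodes if n.type == 'OUTPUT_MATERIAL')
links.new(tex.outputs["Color"], disp.inputs["Height"])
links.new(disp.outputs["Displacement"], output.inputs["Displacement"])
```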

Lighting, Rendering, & Post-Production

Lighting Philosophy: I didn’t rely on a fixed three-point lighting setup; I used a simple HDRI light. Lighting should never be fixed; it must always depend heavily on the final shot you want to achieve. For cinematic likeness shots, you need lighting specifically designed to enhance the likeness and make the final render stronger and more convincing. The most important thing is always thinking about the final result rather than relying on presets. In the case of Chigurh, he was standing outside under the sun and did not have any strong shadows, which is why I used only one HDRI and maybe one area light, and that is all.
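For illustration, a minimal version of that setup in Blender Python (the HDRI path, light placement, and energy values are my own placeholders) could be:

```python
import bpy

# One HDRI driving the world lighting.
world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree

env = nt.nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//outdoor_sun.hdr")  # hypothetical HDRI
bg = nt.nodes["Background"]
nt.links.new(env.outputs["Color"], bg.inputs["Color"])
bg.inputs["Strength"].default_value = 1.0

# Plus at most one soft area light as a subtle fill.
fill_data = bpy.data.lights.new("Fill", type='AREA')
fill_data.energy = 50.0  # watts; balance against the HDRI by eye
fill = bpy.data.objects.new("Fill", fill_data)
fill.location = (1.0, -1.0, 1.5)
bpy.context.scene.collection.objects.link(fill)
```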

Rendering Tools: I rendered the shot using Blender’s Cycles renderer. I made the render background transparent and saved the images as PNGs for easier work in Photoshop.
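Those output settings correspond to a couple of scene properties, roughly like this (a sketch, with an illustrative output path):

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.render.film_transparent = True             # transparent background
scene.render.image_settings.file_format = 'PNG'
scene.render.image_settings.color_mode = 'RGBA'  # keep the alpha channel
scene.render.filepath = "//renders/chigurh_"     # hypothetical path
```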

Post-Production: I used minimal post-production: Photoshop color correction, specifically the Camera Raw filter and some color filters to better match my renders with the background image. Post-processing only affects a small percentage of the result – it cannot fix a bad render, it can only make a good render look even better. Your likeness should look realistic in the rendering software before moving to post-processing.

Color Management: Blender has a color management section in the render settings that lets you tweak values and adjust the look of your render directly, which helps achieve a better final result without relying solely on post-processing.
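These options are also exposed on the scene; a minimal sketch (the exact view transform names depend on your Blender version) might be:

```python
import bpy

vs = bpy.context.scene.view_settings
vs.view_transform = 'Filmic'  # 'AgX' on Blender 4.x
vs.exposure = 0.0             # nudge if the render reads too dark or bright
vs.gamma = 1.0
# Contrast "looks" (e.g. a medium-high-contrast look) are available from
# the same panel for punchier tonality without touching the textures.
```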

Checking Values: Colors can look very different if the lighting is too strong or too weak, so keep that in mind when setting the brightness of your light sources. An overexposed or underexposed scene can significantly ruin your textures.

Advice to Beginners

People often say that if you do what you love, you’ll be happy and it won’t feel like work. But in reality, if you want to become truly exceptional at anything, you have to put in extra hours and give it everything you have. You can’t expect to compete with professionals by treating it only as a hobby.

You might enjoy your work most of the time, but there will definitely be moments when you feel tired, frustrated, or completely burned out. In those moments, you still have to keep going, because stopping isn't an option if you want real growth. You should expect difficulties – they are inevitable, even when your work is also your passion. Everything has its price.

Getting good at 3D, for example, often means sacrificing your free time, working late at night, or even damaging your health, because it's hard to maintain an active lifestyle when you spend long hours sitting and focusing intensely. These sacrifices come with the territory if you're aiming for mastery.

My advice is simple: keep going, stay consistent, and don’t expect high-level results to appear quickly or easily. The path is hard, and you need the mental strength to push through it if you want to become a true professional. Everyone goes through this phase, and everyone struggles on the way.

Vladimir Fedchenko, Character Artist

Interview conducted by Emma Collins
