
How to Sculpt a Man with Exaggerated Facial Expressions in ZBrush

Daniel Merticariu shared the workflow behind The Man Who Laughs, a ZBrush project based on a 2D concept, explaining how the character’s exaggerated facial expression and pose were sculpted.

Introduction

I started with 3D art, as most of us do, I think, because I was a gamer as a kid and teen. One of the biggest sources of inspiration in choosing to do 3D was the cinematic trailer for Wrath of the Lich King, which came out just as I was starting high school. Back then, there weren’t many resources online or even sources of information about what doing 3D actually involves. So, it took me about four or five years to find out that you can do that sort of thing on a professional daily basis. And by that time, I had started college. I studied interior design, and that introduced me to 3ds Max. That quickly led me to ZBrush because I didn’t have the patience to wait and see how fast my 3D class would teach me everything I needed to know, so I started working on my own. Doing on a computer, with a pen and tablet, what you’d normally do with clay and chisels or with pencil and paper had a certain appeal to it. It brought all the perks of traditional art, which can require quite a lot of space and consumables and can be quite dirty, too, into the comfort of your own home.

Next to the department in which I was studying design, there was the sculpting department at school, and the guys studying there were spending so much time on faculty grounds that they had bought a fridge for themselves so that they wouldn’t need to go home to eat. They were sleeping in their faculty studio, eating there, and basically rarely leaving in order to be able to do their work. Whereas, with ZBrush, you were able to do all that from home. I think it’s the sort of thing most people discovered with work-from-home during the pandemic. But, besides doing all that from home, in all the comfort you’d like, there’s also the fact that you can bring characters to life, with all their intricacies, expressiveness, and quirks. Those qualities can be found, I suppose, in all other forms of 3D modeling; but, from my point of view at least, no matter how complex an asset you model, it can’t be as complex and rewarding as sculpting the human face.

As for projects I contributed to, there are many, and many of them I still can’t name. Many also got canceled because I worked with a lot of indies, which can be quite risky due to sudden budget loss and other things. This is starting to happen more and more at triple-A companies as well. But the appeal of indies was a big thing for me because, with them, work-from-home didn’t start with the pandemic; it started much earlier, which gave me, early on, the liberties that most people have only discovered in more recent times.

But, to name a few names, two of the first projects I worked on that managed to get completed and released with quite some success are Eastshade and its prequel, Leaving Lyndow. In those games, I was behind all the characters. I’ve grown a lot since then, but it’s still a project I’m proud of, even though the characters look very outdated now. I’ve also worked on projects like Scavengers, Echo VR, and others that I’ll be able to talk about in the future.

I’m currently working on another indie project that I’ve come to love. Its name is Once Upon a Puppet, and it’s soon to be released.

About The Man Who Laughs Project

The Man Who Laughs started as practice, as most personal projects usually do. It was meant to be a smaller one to prepare me for more personal and experimental projects that I have in mind. But, as it also happens, I ended up spending more time on it than I had intended initially. One of the reasons for this is that I wanted to experiment with it a bit, testing different rendering and shading techniques. But I ended up not doing any of that and going for a more realistic lookdev, although working with stylized shapes. The reason for this was the upcoming release of Marmoset Toolbag 5 and the announcement that it now supports UDIMs, grooms, and Displacement Maps. I wanted to try that for myself and see how well it works. Frankly, apart from a few crashes when overloading my VRAM, I was surprised at how smoothly it all went.

Another reason for doing this as practice before the bigger things I have in mind was the fact that I spent a year and a half working on characters whose anatomy is inspired more by the way wooden puppets were traditionally created rather than the anatomy of humans. When working on different types of things, taking some time to do a portrait once in a while is mandatory. It’s the bread and butter of a character artist. And currently, in the gaming industry, we end up doing more clothes and other such character assets than portraits.

When it comes to inspiration, besides choosing the concept made by Nitro, the Joker was an obvious one. But I wanted to steer away from it a little and go to its sources. That’s why, while working on it, my mind constantly went to the reasons behind Gwynplaine’s sad smile and struggles, and not to the mischievous character of the Joker. I wanted him to be caught in that straitjacket without any complaint or struggle. Just as he was caught behind the message of his disfigurement, with no say in it and with the penance of his sadness being constantly misread.

My reference board for it was rather small, though — something I wouldn’t recommend. I only had the concept picture, some expression references, some references for the jacket, and a few characters whose styles I found compelling and that I wanted to try before deciding that I wanted to take a realistic approach, as I mentioned above.

One helpful trick that I used, though, was keeping a mirror on my desk while working and making faces in it whenever I felt I needed to see something. From my point of view, it’s more valuable than a picture. But Googling on the spot when I felt something was needed was another constant.

References are tremendously important. That will be re-emphasized in every interview like this one. But, as you gather experience as an artist, you also gather an inner library of references to which you can resort intuitively. It’s one of the biggest strengths that artists have. The moment I find that my self-contained library is not enough and I end up doing repetitive things is the moment I start looking for references again. And that happens constantly. But I also trust my process enough to work from intuition up to the critical moment when I simply need more.

Head & Body

I’ll forcibly merge these two parts into one because my processes for the head and the rest of the assets aren’t that different, and they flow into each other.

When it comes to the techniques I used, everything is pretty straightforward and “traditional.” I started with a base mesh from the 3D Scan Store because I wanted to keep its topology and skip the retopology step. Since this is a personal project that will never be animated or run on any other PC, topology is less important. What is important is not spending too much of my limited spare time on technicalities. And there are workarounds that don’t make you sacrifice cleanliness. One of those is starting from a good base mesh.

From there on, my process starts by trying to achieve a good base likeness on the lowest subdivision level. This ensures that when I start subdividing, the vertices are where I want them to be. It can be harder with exaggerated expressions like the one we have here, but it’s still achievable. Below, you’ll see what I did after my first save on this project.

You’ll notice that it’s already in the pose, which isn’t exactly helpful for adding the rest of the items needed in a blockout. To avoid any such nuisance, I created the pose on a layer. Toggling that layer on and off allowed me to continue working on the portrait both in and out of the pose while adding the rest of the items.

This pose will not, of course, be my final pose. It’s something that’s just enough to establish early on the feeling I want for the character. It serves as a blockout, just as the meshes do.

And here’s the blockout I ended up with on my second save of this project after adding the rest of the items — except for the hair, for some reason that I don’t remember.

The inner mouth is from an asset I bought off of ArtStation, which I later modified and retextured. When adding the rest of the items, my purpose was the same as with the head: get them into a close enough shape while keeping the topology as clean as possible. I achieved that by extracting the pieces I needed from the base mesh, which already had good topology and polygroups. I then used those polygroups to refine my meshes further, sculpting them as needed and ZRemeshing them with KeepGroups turned on. It usually takes more than one try and some playing with the settings you see in the image, but eventually, you get results that are good enough.

The fact that the base mesh had polygroups already is helpful because I can start from a base that groups the clothing into the main parts of the body. This enables me to easily separate them and work on them as pieces, as they would be sewn together on a real garment.

That’s about it for the blockout. When it comes to moving on to the high poly, everything is pretty straightforward as well. I added in some sculpted hair to help me with the likeness, and everything was made by observation, trying to keep everything, even the folds, as close as possible to the shapes suggested in the concept. I used the basic six or seven brushes that we all use in ZBrush; there’s nothing fancy about it. The only custom brush I used was SK_Cloth for sculpting the folds. Here’s what I ended up with on the 10th save of this project. You’ll notice that I had started adding details like the holes in the forearm belt but then stopped.

This is the point at which, before starting to add details, I ask myself what I could do to exaggerate the pose and make it more believable. At this point, the head was still symmetrical in the pose, which helped with the sculpting. But that never looks great in a pose. Even if the final shot has the character looking straight at the camera, it’s good to slightly alter the position of the head. In this case, I changed quite a lot.

In the next stage, the pose was further modified, and I did that before starting to add finer details. I did it so I could think about how the details would look when deformed by the actual pose and avoid leaving them generic and symmetric. It’s a way of letting natural accidents happen and giving the model the sort of look that doesn’t seem controlled. You can achieve the same thing by using scanned maps, but sometimes you have other purposes than just making it look good, and that was the case here: I wanted to make it look believable by hand. And here’s the result I ended up with:

Taken from the exact same angle as the previous image, I think the pose change is immediately visible. Going a bit more into what I did at this stage: I had a base layer of pores and bumps that I added using the base alphas found in ZBrush. I did that using the Spray stroke of the Standard brush, with the Zsub option for pores and Zadd for bumps. Both were made on layers so I could control their intensity later on. That gives the skin a rugged base look and helps me decide how and where to add the wrinkles. I also added wrinkles manually using DamStandard, constantly changing the size and intensity based on what I thought was needed. Once this was done, I started adding a layer of final detail using some custom alphas I had gathered over time.

The finer cloth detail was also made with SK_Cloth, layered on top of some custom alphas I have. As for the damage to the cloth, it’s a combination of scan alphas and custom Orb brushes. I added them where I thought the clothes would get the most wear.

The hair was done using XGen in the final stages of the process. There’s nothing fancy about it, either: just manually placing guides and using some basic expressions to control the variation. I used as many descriptions as needed to have control over each part of the hairstyle.

One thing to note, though, is that it took some back and forth between Maya and Marmoset to establish a workflow for this and achieve the look I wanted. One reason is that the strands read differently in Maya and Marmoset. So you have to establish a density and thickness in Maya that still works in Marmoset, which reads the strands as much thinner and less dense. Through the new groom system in Marmoset, you can control some of these settings, but not all of them. You can set the thickness there, with finer controls for tip and root thickness, regardless of how the hair looked when you imported it from Maya. However, the density controls in Marmoset have some limitations: you can make the density lower than what you brought in, but not higher. So, a workaround is creating the look you want in Maya and then upping the density of each description just enough to give you the wiggle room you need in Marmoset. Surprisingly, doing this is much more forgiving on your computer than I would have expected, although it took some crashes to figure out the right way to do it. The results I had in Maya are below.
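
The density workaround described above boils down to exporting more density from Maya than you actually need, so that Marmoset’s reduce-only density control has room to work. A minimal sketch of that reasoning in Python — the function name and the headroom factor are my own illustrations, not anything from Maya or Marmoset:

```python
def xgen_export_density(target_density: float, headroom: float = 1.5) -> float:
    """Density to set per XGen description before export, given that the
    importing tool can lower density but never raise it above what came in.

    `headroom` (an illustrative tuning factor, not a real XGen setting)
    leaves room to dial the density back up on the Marmoset side if the
    strands read too sparse there.
    """
    if headroom < 1.0:
        raise ValueError("headroom below 1.0 leaves no room to raise density")
    return target_density * headroom

# If the final look needs roughly 2,000 strands' worth of density,
# export with extra and thin it down after import:
export_density = xgen_export_density(2000)  # 3000.0
```

The exact headroom value is taste: too little and you hit the import ceiling, too much and you pay for strands you will immediately cull.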

Another thing to note is that I needed to buy a script for Maya to be able to export the groom with the proper extension for Marmoset. I’ll link that here.

Retopology

Well, I didn’t. No manual retopology tool was used on this project. As I was saying, I just worked in such a way as to preserve the topology of my base mesh and propagate it to all other meshes extracted from it. In the end, I had something looking quite good, with no weird deformations, so I ended up exporting the third subdivision for the head and the second for the rest of the meshes. I also kept the geometry for the stitches and the ripped threads. The idea was to test Marmoset’s capabilities, so there was no reason to go easy on it from the start. I would have reverted and made it lower if it ended up not handling the load. But it ended up working quite smoothly. So, the final polycount for this one was 385,140 tris. This is not that high, considering that AAA projects now have hairstyles that end up costing more than 100,000 tris.

The head is 200,000 tris at the third subdivision level, and part of the body doesn’t even show underneath the jacket. All the clothing and accessories are around 100,000, and the stitches and threads take up the final 70,000. And the final result looks like this.
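
Tallying the rounded per-part figures from the breakdown above takes only a couple of lines; they land a bit under the exact 385,140 export count, since each figure is an approximation:

```python
# Rounded tri counts per part, as quoted in the breakdown above.
parts = {
    "head (3rd subdivision level)": 200_000,
    "clothing and accessories": 100_000,
    "stitches and ripped threads": 70_000,
}

total = sum(parts.values())
# total == 370_000 — slightly under the exact 385,140 of the full export,
# because the per-part figures are rounded.
```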

Another thing to mention is that to use the Displacement Map in Marmoset, I needed to take the topology I had and subdivide it to support the details of the displacement. I’m not sure of the numbers I ended up with, but it was well into the millions, and it still ran quite smoothly. I also tested it with a normal map and no subdivisions, and it looked good as well, but not as refined or close to the sculpt as it did with the Displacement Map. With the normal map, you also lack fine control of the intensity, whereas with the displacement, you can do some fine-tuning.
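
The extra control a Displacement Map gives over a normal map can be sketched as the usual offset along the vertex normal, scaled by an intensity dial. This is purely an illustrative Python sketch of the technique, not Marmoset’s actual implementation; the function and parameter names are my own:

```python
def displace(position, normal, height, intensity=1.0, midlevel=0.5):
    """Offset a vertex along its (unit) normal by a displacement-map sample.

    `midlevel` is the height value that means "no offset" (0.5 for a
    typical mid-gray ZBrush export); `intensity` is the kind of
    fine-tuning dial a displacement setup exposes and a baked normal
    map lacks.
    """
    offset = (height - midlevel) * intensity
    return tuple(p + n * offset for p, n in zip(position, normal))

# A sample exactly at midlevel leaves the vertex where it is:
displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.5)  # → (0.0, 0.0, 0.0)
# A brighter sample pushes it outward, scaled by intensity:
displace((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.75, intensity=2.0)  # → (0.0, 0.5, 0.0)
```

Because the offset actually moves geometry, it needs the dense subdivided mesh mentioned above; a normal map only bends shading, which is why it reads flatter up close.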

Again, the UVs were pretty straightforward. I took the meshes from ZBrush into Maya, unwrapped them there using the subdivs at which I meant to use them, and then reimported them in ZBrush over the same mesh to have those UVs in there. After that, I exported the displacement and normal maps from ZBrush and baked the rest of the maps I needed for texturing off of the normal map in Substance.

I also used UDIMs to test Marmoset’s new functionality. Nothing to report there. It’s easy to set up and works without any struggle. They’re far from being the cleanest UVs I made, but they served their purpose. And here’s how they look:

Experience with Marmoset Toolbag 5

Well, I’ve already scattered some remarks across the questions so far. But the thing to mention is that having a real-time workflow close to what you’d use for cinematics is nothing short of godlike. It enables you to skip all those tedious processes that, if you also have a day job as an artist in the games industry, can be quite painful to do in the conditions and time you have left: after work, in the evening, with tired eyes and a fuzzy head. Doing UVs and topology optimized to their most minute detail, down to the last polygon, can be quite difficult after eight hours of the same thing.

So, with real-time software that permits a workflow like this, we get to enjoy making art without all the technical pains of optimizing for the varied range of computer specs out there, as we do for games. And, as we artists all know, the joy of making artwork comes with a pain of its own anyway, unrelated to the technical difficulties of the digital.

Texturing

The skin textures are mainly hand-painted using base materials. I used no scanned maps or details; just basic fill layers with solid colors, over which I added a mask and painted the variations I wanted. The only procedural technique I used on the skin was the AO and Curvature Maps. I layered them on top of everything I painted to bring out the finer shapes and details of the sculpture. With all those color details overlaid on top of each other, the sculpt details can end up being covered or flattened out by the richness of the colors. And what we want is for them to be complementary to each other. Here are some screenshots with and without those layers turned on.
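
Layering an AO map over painted color, as described above, is essentially a multiply blend with a layer-opacity dial. Here is a minimal per-channel sketch of the idea in Python — the function name and the exact blend are my own illustration, not Substance’s internals:

```python
def layer_ao(base: float, ao: float, opacity: float = 1.0) -> float:
    """Multiply an ambient-occlusion sample over a painted channel value.

    All values are in 0..1. At opacity 0 the painted color is untouched;
    at opacity 1 crevices darken fully, pulling the sculpt's recesses
    back out of the painted color detail.
    """
    return base * (1.0 - opacity * (1.0 - ao))

layer_ao(0.8, 0.5, opacity=1.0)  # 0.4 — an occluded crevice darkens
layer_ao(0.8, 0.5, opacity=0.0)  # 0.8 — the layer switched off
```

Keeping the AO on its own layer with an opacity slider, rather than baking it into the paint, is what makes the "with and without" comparison possible.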

To decide the intensity of what was needed and what the general look of it was, I wasn’t solely relying on the viewport we have in Substance. I had already set up a Marmoset render scene, and I went back and forth with the textures until I was pleased with the final result. That’s something that I always do, whether I’m setting up the character in a test scene in the engine for a game I’m working on or using Marmoset or Arnold.

One thing that proved most difficult in painting the skin for this guy was deciding on the undertones. That’s because there are no real green people (that I know of), and green can be quite overpowering to other tones when it comes to skin. Red and green are also complementary colors, and they tend to intensify each other, so using one underneath the other can produce weird or overpowering results. But I started with red as the main underlying tone and then, through trial and error, slowly established the nuance I’d work with. That ended up being a very warm and rich tone of yellow, with some orange tones layered on top. Here are all the layers I used to create the skin texture. I gathered quite a lot of them while painting this guy; usually, I try to use the smallest number of layers possible.

By way of contrast, below are all the layers of the linen material I created for the jacket. It’s a basic material with some paint and some procedural damage layered and painted on top; nothing fancy. The jacket was alone on its UV tile, so I didn’t need to worry about grouping the materials with masks to separate pieces with different materials.

What I hope you’ll notice is the Base copy layer. That’s something I had to add to cover for a mistake I made in the UVs. Usually, we want all the textile patterns flowing in the same direction. Well, there was one piece that I missed, and to avoid going back and redoing the UVs, the bakes, and all that, I chose to isolate that piece on a different material and rotate that material to fit the orientation of the others. It’s good to avoid this sort of thing, but mistakes happen, and there are workarounds for them, too. If this had ended up in a game, I would have needed to go back and fix that UV, because you never know when some other effect will use the orientation of the UVs to achieve a certain result. Say it’s snow falling on the character, taking the UV orientation into account, and all the pieces are aligned straight, top to bottom, but one is not. What fun would that be?
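
Rotating a material to realign one mis-oriented island, as described above, is equivalent to rotating that island’s UV coordinates 90 degrees around the tile center. A small illustrative sketch of that transform — my own helper, not a tool from the pipeline:

```python
def rotate_uv_90(u: float, v: float) -> tuple:
    """Rotate a UV coordinate 90° counter-clockwise around the center of
    the 0..1 tile — the same visual effect as rotating a material's
    tiling to realign one mis-oriented UV island with the rest."""
    # Translate to the tile center, rotate (u, v) -> (-v, u), translate back.
    cu, cv = u - 0.5, v - 0.5
    return (0.5 - cv, 0.5 + cu)

rotate_uv_90(1.0, 0.5)  # → (0.5, 1.0)
rotate_uv_90(0.5, 0.5)  # → (0.5, 0.5): the center stays put
```

Rotating the material instead of the UVs sidesteps rebaking, but, as noted above, any effect that reads UV orientation directly will still see the island pointing the wrong way.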

The rest of the texturing followed the same general lines: establishing a color base and a roughness base, then adding all the detail on top, either painting it manually or applying procedural masks.

Lighting & Rendering

I used a three-point light setup for the darker renderings of the character, with two lights acting as rims, one as main, and an underlying fill on very low intensity from the skylight.

For the lighter renderings, the ones with a white background, I used a simple studio skylight from Cave Academy (which is now included in Marmoset’s base library).

With the darker setup, I wanted to achieve a dramatic look and make it as similar as possible to the lighting in the concept. With the lighter setup, I wanted to bring out more of the details of the sculpture and give it the feeling of an isolation room.

For the post-processing, I just used some curves and the shadow, highlight, and mid values to bring up the contrast of the image. I didn’t use any other post-processing on this one besides what I had in Marmoset. What you see in the final images are the results straight out of the renderer.

I also had a third light setup that I used in testing, but I didn’t use it for the final renderings either. It had too little drama.

Summary

It’s hard to say how many actual hours I spent on this project, but based on the dates in the calendar, the first save was from the 9th of May, the high poly was finished on the 22nd of August, and the whole thing was done in the first week of October. It’s always hard to know how much time you spend on personal work like this because the work is never constant. I sometimes worked three evenings in a row, a couple of hours each, but then didn’t work again for another week. That’s how it usually goes when working after work hours. What I sometimes wish for is a timer inside each piece of software. It would be cool if Maya or ZBrush had such a thing: a /played for the time you spend working on a project. That would be nice.

I enjoyed the whole thing. Beginning to end. But seeing the thing finished, catching the first glimpses of my character in a proper light setup, with the textures on and all that, is always the thing that gets me the most excited. Those last moments when you’re closing in on the end of it.

That part with the underlying tone of green skin was a difficult decision to make. I think I learned, in increments, a bit more of everything I do. That’s what happens with all new projects, I guess.

For me, the most complex aspect was keeping the sides that are hidden in the 2D concept hidden in 3D as well, while still keeping the character’s anatomy in place and functional. In the case of this character, that meant a profile that keeps most of the front of the character visible while one eye stays completely hidden in the main shot.

It’s hard to give advice. I don’t know; keep at it, experiment a lot, and don’t settle on just one style. That’s what I do, anyway.

Daniel Merticariu, 3D Character Artist

Interview conducted by Amber Rutherford
