Frostpunk Explorer: Character Design for Cinematics

Claudiu Tanasie shared an extensive breakdown of the Explorer, his cinematic character created for the Frostpunk trailer: body and face modeling, texturing, detailing for closeups, clothes, and presentation.

Introduction

My name is Claudiu Tanasie, I'm a Senior 3D Character Artist from Bucharest, Romania.

As a child, I liked to draw a lot, but little by little, I started doing it less and less, and by the time I got to high school I had stopped entirely. When I was around 14, my brother introduced me to science fiction books. While I was drawing less and less, I was reading more and more sci-fi, and my interest in science grew over time. At the end of high school, I had to choose what career to pursue: should I become an architect and follow my passion for drawing or become an engineer?

I chose to become an engineer and studied electronics and programming for five years. In my second year at university, with the money from my first-year scholarship and some help from my girlfriend, I bought a computer. At first, I discovered video games, a medium where stories similar to the books I enjoyed reading could become interactive experiences, and my actions would affect the outcome. I found out there were applications that would allow you to create characters and worlds like the ones in the video games I was playing. Through some colleagues, I got my hands on a copy of 3D Studio Max. I didn't have internet back then, so I learned by reading the tutorials that came with it. After a while, I got access to the internet and with it to a lot more resources. With some luck, I found some freelance work at a Hungarian studio. The money wasn't that good and the job didn't last, but it helped me realize I had a lot to learn to reach a professional level.

Career

Midway through my last year of university, thanks to a good friend, I found out about a Romanian outsourcing studio called AMC Studio. At first, I was skeptical, but after some research, I realized they were offering a better salary than what I could get as a junior engineer in a market overcrowded with engineers. When they posted an opening, I applied, did a test, and got accepted. I started working right after getting my diploma. It was the first time I was surrounded by people who shared my passion, and they could teach me so much. The first few years were very challenging, but that's when I learned the most. Working at an outsourcing studio, you had to work efficiently, and because we were contributing to some big titles, the quality bar was high. I worked there for eight and a half years, on games like Mercenaries 2, Saboteur, Call of Duty: Ghosts, The Witcher 3, and many more.

By 2013, I felt I wanted more control over the projects I was working on, so I decided to become a freelancer. Since then, I've had the chance to work on games like Warmachine: Tactics, Doom, Dishonored 2, Lawbreakers, and the upcoming Deathloop, some prerendered game cinematics like For Honor, Frostpunk, and The Walking Dead: Saints and Sinners, and an animated documentary feature film, Another Day of Life.

My years of working at the outsourcing studio helped me develop a solid workflow, and the need to always stay up to date with the latest software, techniques, and technology was evident. I enjoy reading and watching a lot of tutorials, not only about 3D character art, but also concept art, illustration, photography, cinematography and many other fields. Most of the time I find a lot of information that I can apply in my own work in some shape or form. I also like to stay up to date with what happens in the game and movie industries.

As a freelancer, I rarely get to work more than a year on the same project, and while someone might consider that a disadvantage, I see it as an opportunity to learn about different workflows and fine-tune my own with the new knowledge. While you're not in the same building, there's always communication with the clients. Each studio has its own way of doing things, and since you usually have to adhere to that at least to some degree, they teach you how to do it.

Character Design

I started doing character designs slowly, organically. As a 3D character artist, you often have to work from a concept. Sometimes the one you are given is not fleshed out, or changes need to be made for various reasons, and it's your job to come up with something that works well with the elements already present. Over time, you get to see a lot of concepts and start discerning an interesting piece from a generic one, as well as understanding what makes an artwork good. But that doesn't automatically make you a concept artist; you still have to develop a workflow that can consistently produce good results in a decent amount of time. Unfortunately, I still struggle with that part. There's plenty of information on the internet on how to do it, but you still need to put in the effort and get the mileage in.


An important milestone for me was the Lawbreakers project. From the start, we were tasked with coming up with our own designs. At first, it was about doing variations based on the existing characters, but by the end, we were doing complete designs on our own. It was very challenging, but also fun, and it certainly made me push my skills to the next level.

As for how I approach character design, it's mostly gathering as much info as possible about what the character is supposed to be, researching, collecting a lot of reference images, and deciding which elements help convey the intended purpose. Most of the time, you have to change what you see in the images to fit your needs, and that's where your sensibilities and knowledge come into play. My aim is always to create something believable, where the elements feel purposeful and evoke an emotional response from the viewer.

Frostpunk Explorer

Frostpunk Explorer: Start of Work

In April 2016, Platige Image reached out to me to work on a hero character for the teaser trailer of an upcoming Polish game. Prior to the Frostpunk cinematic, I had worked with them on another announcement trailer, but it focused on secondary characters, so I was very excited to get this chance. The Explorer presented an additional challenge: it needed to hold up in an extreme closeup.

After getting the brief, I started researching mountaineers from the 40s, steampunk elements I could use, and what purpose each of the elements from the concept had. I gathered as many good reference images as I could find in a reasonable amount of time, and they proved invaluable in understanding how all the elements worked and how I could adapt them to fit the concept while still maintaining their function.

Organic Modeling Workflow & Tips

One rule I abide by religiously when it comes to modeling is to always work from large to small, from general to specific. If you have a complex concept, do a blocking first to get the right relationship between the big masses. This will give you a high-level understanding of the character and help speed up the process by identifying possible issues with the concept, which parts you should model first because they will affect others, which pieces you can reuse and how you should approach that, etc.

I use ZBrush mainly for organic modeling. A good piece of advice I received from someone is not to add another subdivision level until you've gotten the most out of the current one. I make a habit of zooming out every few minutes and evaluating how my recent changes affect the overall character.

Another principle I follow is to use the best software for the task at hand. For example, if I have a hard-surface piece of gear, I can start in Maya with very basic geometry, then switch to ZBrush to quickly add some secondary shapes, then to TopoGun to create a cleaner topology, and finally go back to Maya for the final cleanup, detailing, and UVs. On another part of the character, I might start in ZBrush, then switch to some other application, but the bottom line is that I always switch to the software that allows me to get the best result in the least amount of time.

Some artists prefer to stay as long as possible in one application. My advice would be to research different ways of doing things, try them, and decide what works best for you since each one of us has different strengths and weaknesses. Also, don't be afraid to change your workflow if you discover a better approach, and optimize your interface and keyboard shortcuts to your current way of doing things.

Clothes

The character has a few layers of clothes, on top of which there's a significant number of accessories. I started the blocking from the inside out. Platige Image provided the male base mesh they use for their characters. Even though the skin is barely visible, that mesh still had to be adapted to the physical build of this character and kept in the scene. On my side, I could use it as a reference mesh to keep the thickness of the arms, legs, and torso in check, and also utilize it as an avatar in Marvelous Designer. On their side, a technical artist would use that base mesh to speed up the skinning process by transferring the weights from it to the clothes and accessories. A mandatory rule when you adapt the base mesh to your character is not to change the topology, vertex order, or UVs.
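That kind of weight transfer is also easy to script. Below is a minimal Maya Python sketch of the idea, assuming the clothing piece is already bound to the same skeleton; the mesh names are placeholders, and this is not Platige Image's actual pipeline code.

```python
# Hypothetical weight-transfer sketch (not the studio's actual tools):
# copy skin weights from the shared, already-skinned base body onto a
# clothing mesh that has been bound to the same skeleton beforehand.
import maya.cmds as cmds

source = 'baseBody_geo'   # placeholder name for the studio base mesh
target = 'coat_geo'       # placeholder name for one clothing piece

cmds.select(source, target)
cmds.copySkinWeights(noMirror=True,
                     surfaceAssociation='closestPoint',
                     influenceAssociation=('closestJoint', 'oneToOne'))
```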


The blocking of the clothes was done in Marvelous Designer, then brought into Maya to help with the placement of other elements. The pieces that were very likely to change a lot stayed in the form of very basic boxes or cylinders, while the ones that were not affected by anything around them were a bit more detailed.

After changing the proportions and placement of various elements according to the client's feedback, I started the detailing phase. In some cases, it was very straightforward, like the pants, where I did the precise simulation in Marvelous Designer, then brought the result into TopoGun for retopology, and finally added the fabric pattern and some wear in ZBrush.

There were some pieces of cloth that presented an increased challenge, specifically the scarf and the balaclava, or ski mask. On both of these, the work was split into two big stages:

  1. A pre-high stage in Maya: make a block-in in either Marvelous Designer or Maya; retopo where necessary and create UVs to decide on the placement and size of the weave pattern; cut the borders to give it the torn look, without destroying the UVs; create thickness by extruding.
  2. A detailing phase in ZBrush: add subdivisions; apply the weave pattern as a displacement map; with a Tubes brush, add strays on the borders and keep them consistent in thickness with the displaced details; subdivide the tubes and add details to them, always using the displaced main body of the piece of cloth as reference. Throughout this process, reference images were used heavily and consistently.

A small breakdown of how I created the weave pattern can be seen in the accompanying images.

For the backpack, I modeled a rough approximation of what its content would be like, then used that as an avatar in Marvelous Designer to build the exterior around it. As usual, the topology was done in TopoGun, and additional detailing in Maya, then ZBrush.

The rope was another special case because mountaineers tie it around their bodies in a very specific manner, and I wanted it to be believable. I did some research and found a few great YouTube videos explaining it, but the final result is not 100% accurate because I couldn't diverge too much from the provided concept. In Maya, I used a curve with a Paint Effects stroke attached to it to recreate the intricate design and get a good representation of the thickness while avoiding unwanted intersections. When I got to a satisfactory result, I converted the stroke to polygons and created the UVs.

Here's an efficient way to create UVs for a tube-like surface made entirely out of quads: select all faces, go to the UV Editor menu bar, and choose Unitize in the Modify menu; select all edges except for an edge loop along the tube, go to the Cut/Sew menu, and choose “Move and Sew”; optionally, you might need to scale the UV shell on one axis to compensate if your quads are not perfectly square.
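If you prefer to script the same trick, here's a rough Maya Python sketch; the object name and the seam edge index are placeholders you would replace with your own.

```python
# A scripted version of the tube-UV trick above (a rough sketch; the object
# name and seam edge index are placeholders). Assumes an all-quads tube.
import maya.cmds as cmds

tube = 'ropeTube_geo'
seam_edge_id = 12  # hypothetical: any edge on a loop running along the tube

# 1. Unitize: lay every quad over the full 0-1 UV square.
cmds.polyForceUV(tube + '.f[*]', unitize=True)

# 2. Select every edge except the lengthwise loop, then Move and Sew,
#    so the quads unfold into one long, straight UV strip.
loop_ids = cmds.polySelect(tube, edgeLoop=seam_edge_id, noSelection=True)
cmds.select(tube + '.e[*]')
cmds.select(['{0}.e[{1}]'.format(tube, i) for i in loop_ids], deselect=True)
cmds.polyMapSewMove()

# 3. If the quads are not perfectly square, scale the resulting UV shell
#    on one axis (e.g. with polyEditUV) to restore the aspect ratio.
```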

The final steps were to subdivide and to use a tiled rope pattern as a displacement map in ZBrush.


Working on Face

When it comes to faces, and human characters in general, there's nothing more important than having a good grasp of anatomy. Even in this day and age, when 3D scanning is widely used, I find great value in knowing the underlying structure, the proportions, and how the human body functions. If I start from scratch, I need to know how to build that head, and if I have a 3D scan as a starting point, I need to know what I can change to add more personality without losing the natural look.

There are so many resources today to improve your anatomy knowledge. The ones I found particularly useful are the excellent Proko YouTube channel, the very affordable New Masters Academy content, and some traditional books by Eliot Goldfinger, Paul Richer, and Frederic Delavier that contain a huge amount of information worth their price. With the democratization of 3D scanning and photogrammetry, it's becoming easier and easier to get your hands on good models. Some learning institutions offer them for free, and most 3D scanning providers offer samples to evaluate the quality of their services. Whenever I find something of good quality, I store it in my collection to study it and possibly use it as a reference.

Apart from studying anatomy, which helps me understand how humans are built in general, I study the people around me to understand what makes each individual interesting. Whenever I run into an interesting-looking person, I try to understand what makes him or her stand out from the crowd. A good design is defined by a clear read: you instantly know what to expect from that character. If someone looks interesting, it's probably because their look gives away some aspect of their personality or story, and I try to identify what exactly that is. In the vast majority of cases, it's the imperfections that tell the most about us.

From a technical point of view, just like I said previously (and the same applies to most cases), I work from large to small. In production, if I work on a regular human or a bipedal creature, I am provided with a base mesh that the studio uses for most of its characters to speed up the UV, rigging, and skinning process. At the start, I use only the two lowest subdivision levels in ZBrush. When I get the desired feeling from them, I move to higher levels. To make sure the model has a good read, from time to time I zoom out until the character appears on the screen the way the audience will see it, and re-evaluate.

The higher you go in subdivision levels, the less impact on the overall look the changes will have. Pores are important to sell the realism of a piece, but they won't make up for sloppy primary and secondary shapes. When I do get to the small details, I make sure not to make them too strong, so I don't overpower the work I did on the lower subdivision levels.

Another trick I use to test the strength of the overall facial features and details is to switch to the SkinShade4 material. Immediately, the fine details lose their strength, and I can better assess how good the primary volumes are.

The main challenge of the face was the amount of detail needed for extreme closeups. For something that will be a focal point, a texel on its texture should never get bigger than a pixel in the final render, and when the nose of the character makes up half of the frame, you will need a very big texture or have to split the texture across several UDIMs. I used an 8K texture, which is not uncommon in prerendered cinematics. The tough part was filling it with details that looked good both from far away and from very close.
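As a back-of-the-envelope check on what that rule implies, here is a small calculation with made-up numbers (they are illustrative, not the actual production figures):

```python
# Back-of-the-envelope texel density check; every number here is made up
# purely for illustration, not taken from the production asset.
frame_width_px = 1920
nose_on_screen_px = frame_width_px * 0.5   # the nose spans about half the frame
nose_uv_fraction = 0.12                    # assumed share of the UV width the nose covers

# For one texel to stay no bigger than one screen pixel across the nose:
required_texture_width = nose_on_screen_px / nose_uv_fraction
print(round(required_texture_width))       # -> 8000, so an 8K map is roughly the
                                           # minimum; tighter framing or a smaller
                                           # UV share pushes you toward UDIMs
```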

I started with the primary shapes, by recreating the proper bony structure and big features, like the nose, ears, etc. No matter how much our faces change during an expression, these landmarks will keep their proportions. To avoid getting a generic look, I always choose an actual person as a starting point or a combination of a few people. In this case, the client thought the facial structure of Max Martini would be a good fit. The role of this reference is to help get interesting, natural-looking features and proportions, not to create a perfect likeness.

For the secondary shapes, great reference images are a must. On Google, in the Advanced Search section, you can set the minimum size for the images so you don't have to scroll through thousands of low-resolution results. I recommend avoiding magazine pictures or glamorous photoshoots and looking for images from social events, since the post-processing on those is minimal. The sculpting itself was a simple process of looking at pictures from multiple angles and recreating each volume as closely as possible.

For the pores, I used Surface Mimic displacement textures, but nowadays, you can get better results with Texturing XYZ maps, which offer higher resolution and quality. You can project them using Mari, Mudbox, or just the Stencil feature inside ZBrush. Because the preview you get in the viewport is not a great representation of the final render, it's a good idea to add these fine details on a separate layer so you can adjust the intensity later.

For the ice, I used a small percentage of the facial hair curves as a starting point. After attaching Maya Paint Effects strokes to the curves and converting them into geometry, I imported the result into ZBrush and assigned each hair to a different group with the help of the Auto Groups feature. Using reference images, I identified the key areas where the ice forms, which are mainly around the mouth and eyes, and with the Inflate brush, I increased the volume of some of the hairs. I then Dynameshed them with the following options: Groups on, Blur 4, Project off, SubProjection 0. Through trial and error, I found a value for the Resolution setting that would produce quads with an edge slightly larger than the diameter of the hairs. Since the hairs were too thin for the resolution of the resulting mesh, they almost disappeared. Using the mesh visibility tools, I hid all the hairs I didn't need anymore and deleted them using Del Hidden. From this point onward, I used the Move, Inflate, and Standard brushes to get some variation and finer details.


Retopology & UVs

For retopology, I used TopoGun, and I still use it to this day since I haven't found a more efficient workflow. While the application was released some time ago, it's still the best in my opinion because it allows creating polygons with the least amount of input, i.e. number of clicks. As with any other modeling task, I start from the big shapes. With the SimpleCreate tool and the Make Faces option checked, I lay down some big quads and triangles to cover the surface, while keeping the edges roughly the same length. Next, I subdivide and adjust where needed. If you already have a finished area of the model, you can select one vertex from the shell you are currently working on and use Subdivide Shell. TopoGun tries to conform the additional vertices to the surface, but in complex areas, it still needs some help. With this approach, my input is limited to fixing the aforementioned issues and changing the topology in key areas where I might need additional geometry or specific cuts for UVs. I repeat these steps until I get the desired amount of polygons. While triangles are best avoided since they might create artifacts upon subdivision, you can still use them if you can hide them in less visible places.

The UVs were done in Maya, and the process is very standard: I select a piece of geometry, apply a planar mapping, define the UV shells by cutting the edges on their borders, and finally unfold everything. Some key points to keep in mind: keep the texel density consistent across all the UV shells that share the same texture/UDIM, with the help of the Set Texel Density feature in UV Toolkit/Transform; and make your life easier for the look development phase by keeping each texture restricted to one type of material, i.e. put only leather pieces with other leather pieces on the same texture, metals with other metals, textiles with textiles, and don't mix them.
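The same pass is easy to repeat on many small pieces with a few lines of Maya Python. This is only a rough sketch: the object name and edge indices are placeholders, and texel density matching is left to the UV Toolkit as described above.

```python
# Rough sketch of the standard UV pass on one piece (names and edge indices
# are placeholders; texel density matching is done in the UV Toolkit).
import maya.cmds as cmds

piece = 'buckle_geo'

# 1. Start from a planar projection so every face has UVs to work with.
cmds.polyPlanarProjection(piece + '.f[*]', mapDirection='y')

# 2. Cut the edges that should become the borders of the UV shells.
cmds.polyMapCut(piece + '.e[4]', piece + '.e[17]')

# 3. Unfold everything (the Unfold3D solver available in recent Maya versions).
cmds.u3dUnfold(piece)
```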

Texturing

At the start, I create basic versions of all the materials and tweak them until I get a harmonious color scheme. This is the moment when you can identify whether some elements grab too much attention, becoming unwanted focal points, or too little, getting lost among the other details. I adjust their saturation, reflectivity, or local value to rectify any problem.

For the fine details, I make sure they reinforce what's already there in the modeling and in the starting materials. With the rise of applications like Substance Painter, it's very simple to drag and drop a Smart Material and call it a day, but if the details are too strong and placed randomly without judgment, they can easily overpower the secondary and tertiary volumes you created in the high-poly phase.

I gather a lot of reference images for the types of materials I need to recreate, both in a pristine state and damaged. Studying how the passage of time affects their properties is key to getting believable results. I try to identify the different causes of the changes, like dirt, mechanical damage, prolonged exposure to the sun or different substances and recreate them one by one. Proper naming and organization in different folders help if later I need to make changes or if someone else has to take over, so don't skip this step.

Some surfaces, like the clothes, were almost entirely done in Photoshop, others, like the metals, were done in Substance Painter, and the rest were an equal mix of both.

For the clothes, when I created the weave height pattern used as displacement, I also created a corresponding albedo mask pattern, which I used to add color variation in the base material. For the base leather, different tiled patterns were used on the various pieces to give some variety. The base metals were simple flat colors that best produced the light interaction I was looking for.

On top of all this, I added weathering to give the impression that the character has been through a lot. The main rule while doing this is to add it only where it makes sense (dust in the crevices, scratches on the exposed areas, chips on the exposed edges, etc.) and to make it just strong enough to be visible, but not so strong that it overpowers the primary and secondary details.

Usually, I try a few smart masks from the Substance Painter shelf, and when I find something close to what I see in the reference images or what I have in mind, I start tweaking its parameters and layering other procedurals, bitmap masks, and paint until I'm happy. This was the case with the snow, which is a combination of three layers, all derived from a Dirt procedural: some small spots scattered all over the surface, some bigger ones used sparingly, and some denser ones for where the snow would accumulate on the top-facing surfaces and in some crevices.

The Mask Editor is one of the generators I use when I need something more specific. If you don't get exactly what you want from it by tweaking its many parameters and input baked textures, you can always layer other generators or paint layers on top.

Hair & Fibers

For the fur and cloth fibers, I used xGen in Maya. For hair and longer fur, the process almost always consists of these steps: create a description, define the guides, apply clumping/cut/noise modifiers and finally create the color maps. The power of xGen is in how you can combine values, math functions and painted maps to use them as parameters for each of these steps.

For example, to get some length variation, you can use an expression like “rand(0.0,0.4)” in the Cut modifier, which means that a random value between 0 and 0.4 units will be removed from the length of each hair.

On the clumping modifiers, the results from using the default parameters are too regular and unnatural, but you can paint a mask to break that up.

Speaking of the color map, let's say you want to paint a striped pattern but you want to experiment a bit because you're not sure what colors to use. You can create a grayscale mask which you can later use to combine different colors in Hypershade without the need to access the xGen interface every time you want to change one color.
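In practice, that can be as simple as a blendColors node whose blender input is driven by the painted mask. A minimal Maya Python sketch of the idea follows; the node names, file path, and colors are placeholders.

```python
# Minimal Hypershade sketch: reuse one painted grayscale stripe mask to blend
# two experimental hair colors without opening the xGen UI again.
# The file path, node names, and colors are all placeholders.
import maya.cmds as cmds

mask = cmds.shadingNode('file', asTexture=True, name='hairStripeMask')
cmds.setAttr(mask + '.fileTextureName', '/textures/hair_stripes_mask.tif', type='string')

mix = cmds.shadingNode('blendColors', asUtility=True, name='hairColorMix')
cmds.setAttr(mix + '.color1', 0.35, 0.22, 0.10, type='double3')  # first test color
cmds.setAttr(mix + '.color2', 0.08, 0.07, 0.06, type='double3')  # second test color
cmds.connectAttr(mask + '.outColorR', mix + '.blender')

# mix.output then feeds the hair shader's color input, so trying a new color
# scheme is just a matter of editing color1/color2 on this one node.
```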

One great resource that helped me better understand the way xGen works and the power of expressions, in particular, is Jesus Fernandez's website.

Presentation

A good presentation can make a mediocre asset look great, while a bad one can make some of the best characters look bland and uninteresting. One aspect I see ignored a lot is the pose. We spend weeks or months working on an asset, and when it comes to showing it to the world, we are too tired, lazy or ignorant to change the lifeless T-pose in which it was created. Our brains are wired to emotionally respond to a good-looking picture, so we should create a compelling final image using all the tools at our disposal: great lighting, great mood and a pose that will bring the character to life. Even lowering the arms to a more natural hanging position and a slight twist of the head can dramatically improve the presentation.

Before I start the lighting phase, I search for some reference photos with a light setup that would fit the subject matter and the desired look, if I already have something specific in mind. When it comes to lighting, I think less is more. If I get a good definition of the volumes and the materials with only two or three lights, I'm happy. To properly showcase the asset, you need to understand how lighting affects the way we perceive things; you need to think like a photographer. The basic artistic principles from photography translate really well into the 3D world, so watching YouTube videos with different light setups is a great way to learn.

I use Arnold for Maya as my rendering engine because I can get very good results with very few settings. I start with the camera position, then I add the key light, an Area Light placed slightly above and to the side of the camera. I try different positions, aiming to get almost equal amounts of light and shadow on the character. If a volume has too much light or too much shadow, it won't read properly, so I look for the spot where as many volumes as possible have a good definition. You can control the softness of the transition between light and shadow through the size of the light: the larger the light, the softer the transition.

To get some lighting information in the shadows, I use either a simple Skydome Light with low intensity, between 0.1 and 0.5, or an HDRI image connected to a Skydome Light with similarly low intensity.

I use another light on the opposite side from the Key Light, usually a Rim Light, to differentiate the character from the background. If I want a more mysterious look, I use a low intensity for it, or I might ditch it entirely.
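For reference, here is a minimal Maya/Arnold sketch of that three-light starting point. The positions, rotations, and intensities are placeholder values, not the ones used for the Explorer renders, and the skydome creation assumes the MtoA helper module that ships with Arnold.

```python
# Sketch of a key / fill / rim starting rig for Arnold in Maya. All positions,
# rotations, and intensities below are placeholder values.
import maya.cmds as cmds
import mtoa.utils as mutils   # assumes the MtoA (Arnold) plug-in is loaded

def transform_and_shape(node):
    """Normalize to (transform, shape), whichever one a creation command returned."""
    if cmds.objectType(node, isAType='shape'):
        return cmds.listRelatives(node, parent=True)[0], node
    return node, cmds.listRelatives(node, shapes=True)[0]

# Key: an area light slightly above and to the side of the camera.
# Scaling the light up makes the light-to-shadow transition softer.
key_xf, key_sh = transform_and_shape(cmds.shadingNode('areaLight', asLight=True, name='keyLight'))
cmds.xform(key_xf, translation=(3, 4, 5), rotation=(-20, 30, 0), scale=(2, 2, 2))
cmds.setAttr(key_sh + '.intensity', 3.0)

# Fill: a low-intensity Skydome Light (optionally with an HDRI file texture
# connected to its color) just to get some information into the shadows.
fill_nodes = mutils.createLocator('aiSkyDomeLight', asLight=True)
fill_shape = cmds.ls(fill_nodes, type='aiSkyDomeLight')[0]
cmds.setAttr(fill_shape + '.intensity', 0.3)

# Rim: another area light roughly opposite the key, to separate the character
# from the background; lower it (or delete it) for a more mysterious look.
rim_xf, rim_sh = transform_and_shape(cmds.shadingNode('areaLight', asLight=True, name='rimLight'))
cmds.xform(rim_xf, translation=(-4, 3, -6), rotation=(200, -35, 0))
cmds.setAttr(rim_sh + '.intensity', 1.5)
```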

I try to get what I want from the actual render, so I don't need a lot of post-processing. I save the renders as 32-bit .exr files and try to get a good value range when I convert to 8-bit in Photoshop. I use HDRI images only as fill light or not at all, so most of the time there's no need to change the white balance since their contribution is minimal. Finally, I use a Curves adjustment layer with a slight S curve for RGB to get a bit more value contrast, and, depending on the mood I want for the final image, an S curve or a reverse S curve for the R, G, and B channels. My aim is to get a subtle play of cool versus warm between the shadows and the highlights, without abusing it.
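Just to illustrate what a gentle S curve does to the values (this is not part of my Photoshop workflow, only a numeric illustration):

```python
# Illustration only (not part of the Photoshop step): a gentle S curve applied
# to normalized values, the scripted equivalent of a slight Curves adjustment.
import numpy as np

def s_curve(values, strength=0.15):
    """Blend linear values toward a smoothstep curve; strength 0 means no change."""
    smooth = values * values * (3.0 - 2.0 * values)   # classic smoothstep on [0, 1]
    return (1.0 - strength) * values + strength * smooth

pixels = np.linspace(0.0, 1.0, 5)        # stand-in for normalized RGB values
print(s_curve(pixels))                   # shadows dip slightly, highlights lift slightly,
                                         # so midtone contrast goes up a touch
```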

Afterword

I hope this article has shed some light on the character creation process. As a final piece of advice: keep learning, keep working, take care of your health, both mental and physical, and most of all, don't give up.

Claudiu Tanasie, Senior 3D Character Artist

Interview conducted by Kirill Tokarev
