Game Character Production: Industry Standards

Pedram Karimfazli talked about character production for games, how characters are tested, and his approach to clothing and leather materials.


Introduction

Hi, my name is Pedram Karimfazli and I am a character artist from Sweden. I have been in the industry for roughly 18 years and am currently working as a lead at EA Criterion in the UK. Some of the games I have worked on most recently include Star Wars Battlefront 1 & 2, Battlefield 1 and Battlefield V.

As far back as I can remember, I have always been drawn to creating art. I think it started with me doodling and drawing my own comic books as a kid. As soon as I got my first PC, which was around the time when Babylon 5 was on TV (really showing my age), I started playing around with 3D, trying (and failing miserably) to create my own space battles!

I started with Bryce 3D, had a short stint in Lightwave, and then moved to 3ds Max, and gradually I started getting a grasp of modeling! When I was 18, I took a course called Digital Animation at university. The course wasn’t great, but it gave me time to learn Softimage and an early version of ZBrush while pushing to create a demo reel! I managed to land my first job at RealtimeUK and then Jellyfish Pictures in London, and it took off from there. Unreal Engine 3 was released around that time as well, and we were en route to the next-generation visuals we have in games today!

Some shots of my recent game character work for Battlefield V:

Battlefield V Firestorm Ranger Realtime showcase video:

Current Approach to Game Characters in the Industry

Look development/lighting test. Pedram Karimfazli / Wiek Luijken

The high-level principles of character creation for games haven’t changed much since I joined the industry. You still have a high-res mesh, a low-res mesh and a number of shaders and textures.

What has changed is the hardware: computers and consoles can render more, which allows us to create more as artists. Today we create important characters with higher triangle counts, higher-resolution textures, more supporting textures and more complex shaders than ever before. Rendering techniques for shading and lighting have vastly improved as well. This allows us to add more detail to our characters than ever before and ensure they render at their best.

The artistic approach is also maturing: the way we as artists see, think about and approach characters. Lastly, the processes and workflows for creating these elements have also improved, allowing us to work in a faster, more efficient and iterative way while giving us more control as artists. With the improvements to GPUs and CPUs, console rendering capabilities and the jump in output resolution, we are not only able, but also required, to create characters much closer to the way, and to the quality, that the CGI industry creates them.

I can’t speak for everyone’s approach to current and future characters, but I can share a bit of insight into how we have gone about creating some of the characters I have been involved with. It’s the combination of these advancements across multiple disciplines that contributes to the jump in quality we are seeing in games and get to enjoy as players.

Look development/lighting test. Pedram Karimfazli / Wiek Luijken

In the past, a short brief from an art director would be enough for a concept artist to produce concept art for character artists to work from. Today we spend a lot more time researching and understanding our characters from every possible angle: from the basic narrative background (who they are, what drives them, how their look has evolved and what life events have influenced their current appearance) to something as specific as how their bedroom would look, or even whether the designs are cosplay-friendly. This exhaustive effort helps us get to a design that is grounded, relatable and consumable by our fans and players.

Example of early speed-sculpting and design exploration. Pedram Karimfazli

We also dive into 3D early, doing speed sculpts and prototypes. In the past, when we didn’t have access to tools like ZBrush, doing this would have been very time-consuming and therefore impractical. Today a talented artist can kit-bash and speed-sculpt a highly detailed character in a single day, which helps to communicate the pose, attitude, proportions and silhouette of the character and lets us check whether the design elements work in 3D. We can also get this in-engine early to see how it sits with the world, lighting and other characters, pre-viz cinematic sequences more accurately, and so on.

Work in progress asset and shot look development. Pedram Karimfazli / Wiek Luijken

For human hero/cinematic characters, once we have a solid design and composition, we usually do body-type casting and wardrobe fitting and then 3D scan the body and clothing.

One of Criterion’s scanning sessions for Battlefield V Firestorm

If we are after a specific likeness, we will also do hair and makeup, then photograph and 3D scan the face, including a range of facial expressions that we can then break down into FACS-based blendshapes for the face rigs. This likeness capture syncs the actor’s face shape (the 3D model) to the actor’s unique facial movements (FACS), so when we do the final performance capture, everything is 1:1. This process really brings up the believability and sells the likeness. What makes a big difference today is the number of blendshapes we are creating and running with: where in the past you may have had a handful of hand-made blendshapes for the face, today we are running several hundred, all captured from a specific actor and driven by that same actor’s performance.
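
As a rough illustration of how those FACS shapes are used at runtime (a minimal Python sketch, not our actual rig code; all names here are hypothetical), a blendshape face is simply the neutral head plus a weighted sum of per-shape vertex deltas, with the weights driven each frame by the actor’s captured performance:

```python
import numpy as np

def blend_face(neutral, shape_deltas, weights):
    """Evaluate a blendshape face for one frame.

    neutral      : (V, 3) neutral-pose vertex positions
    shape_deltas : {shape_name: (V, 3) offset from neutral}, e.g. one
                   entry per FACS-derived expression shape
    weights      : {shape_name: float in [0, 1]} for this frame,
                   driven by the captured performance
    """
    result = neutral.copy()
    for name, delta in shape_deltas.items():
        result += weights.get(name, 0.0) * delta
    return result

# Toy usage: 3 vertices, two hypothetical shapes, one frame of weights.
neutral = np.zeros((3, 3))
deltas = {"browRaiser": np.full((3, 3), 0.01),
          "lipCornerPuller": np.full((3, 3), 0.02)}
posed = blend_face(neutral, deltas, {"browRaiser": 0.6, "lipCornerPuller": 0.2})
```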

If you aren’t after a specific likeness, or if it’s a non-human character, we traditionally sculpt the head or creature in ZBrush to the highest quality, according to concept art and art direction.

With the introduction of PBR, consistency across all visual elements of the game has become a lot easier to achieve, which contributes to the overall increase in visual quality. This applies to characters as well: the contrasts between different materials are much better represented, look far more accurate and believable, and react correctly to any lighting scenario. However, the introduction of PBR also makes it really important that all your values are as accurate as possible.

There are some good PBR resources out there, but there is a lot of inconsistency and deviation from accurate PBR values. As well as scanning people, we also scan materials to ensure we have PBR-compliant, accurate, high-quality textures for our characters.

Skin and eyes are extremely important for characters; technical improvements and a better artist understanding of them have allowed us to achieve much higher visual quality. As I mentioned earlier, we use a lot of support textures. For characters, these come in the form of animation-driven wrinkle maps and blood-flow maps, as well as zone-specific pore detail maps. For eyes, we have things like animated refraction of light and shadow maps that really help to bring them to life in an animated scene.
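
To make the wrinkle-map idea concrete (a generic sketch, not our shader; the zone names and the NumPy framing are illustrative), the rig outputs per-zone activation values and the material blends a sculpted “wrinkled” normal map over the neutral one using zone masks scaled by those activations. A blood-flow map can be blended into the albedo in the same way:

```python
import numpy as np

def apply_wrinkles(base_normal, wrinkle_normal, zone_masks, activations):
    """Blend an animation-driven wrinkle normal map over the neutral normals.

    base_normal    : (H, W, 3) tangent-space normals, components in [-1, 1]
    wrinkle_normal : (H, W, 3) normals sculpted in the wrinkled expression
    zone_masks     : {zone: (H, W) mask in [0, 1]}, e.g. "brow", "crowsFeet"
    activations    : {zone: float in [0, 1]} output by the face rig this frame
    """
    weight = np.zeros(base_normal.shape[:2])
    for zone, mask in zone_masks.items():
        weight = np.maximum(weight, mask * activations.get(zone, 0.0))

    w = weight[..., None]
    blended = base_normal * (1.0 - w) + wrinkle_normal * w
    # Renormalise so the blended result is still a unit-length normal.
    length = np.linalg.norm(blended, axis=-1, keepdims=True)
    return blended / np.maximum(length, 1e-8)
```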

Example of the different styles of hair that are possible with the new realtime strand-based hair tech. Pedram Karimfazli

An area that is still quite under-developed is hair. Although there have been some improvements on the creation side, with better artist-facing tools for placement and so on, we are still rendering hair cards with alphas, along with the rendering problems that brings. Little has been done when it comes to hair motion: the hair might look good in a still shot, but once it moves, the illusion is lost. Fortunately, we are working on some cool tech at EA/Criterion where we can do very detailed, strand-based hair with solid, physics-based hair motion. These advancements bring hair closer to CGI quality and should help push our characters to even better visual quality while making the process a lot more artist-friendly.

Example of block-out to the first pass of strand-based hair look development. Pedram Karimfazli

Character Production

At the beginning of a project, we usually break down all the different types of characters we need by importance (hero characters, story NPCs, special enemies, grunt enemies, less important NPCs, living world, etc.), gender and proportions (height, body type, etc.). Next, we try to find a good balance between enough variety and what we can actually create in time or what is technically possible.

All these decisions directly impact how many unique skeleton types, unique animations and how much unique code support are needed. Hero characters and important NPCs, for example, would usually all be unique, while less important characters would share the same base body asset, proportions and skeleton as a starting point.

One of Criterion’s scanning sessions for Battlefield V Firestorm

At this point, we start our scanning sessions as I described earlier. Depending on how much of the character’s body you will be able to see, or how many different outfits they will have, there might be multiple scanning sessions. Once we have the 3D scan, we bring it into ZBrush for an initial cleanup and then begin to sculpt in more detail using a combination of good old hand sculpting, custom alphas, and things like TexturingXYZ and projected displacement maps. We also create a symmetrical pose for the rig, create a unified neck and remove the head. The head is usually a separate, detached asset that gets added in-game due to its rigging and deformation complexities. Heads also usually get scanned separately, so we tend to keep them separate. We also need to separate any clothing and accessories and properly sculpt the insides and the underlying clothing to support cloth dynamics and secondary motion.

Example of 3D scan cleanup, additional detail sculpting and shape refinement. Pedram Karimfazli

We try and keep things as unified and shared as possible, so usually where possible the heads and body parts all share the same topology and UVs. This opens up the possibility for us to reuse and share a lot of existing data as a base between meshes and is sometimes necessary for optimization and performance reasons when in-game.

When we initially create the first base low-res meshes, we use traditional retopology workflows with tools like Topogun or its equivalents in Maya, and the same goes for the UVs. Once we have a base topology that supports good, detailed deformation, we use tools like Wrap to fit that topology to new high-res meshes.

Things like customization will add another layer of complexity to things as well, but I’ll leave that out of this for the sake of simplicity.

Clothes Production

For clothing, it is important to understand how it is going to deform in the game. Before you do anything, it is vital to think about the poses your characters are mostly going to be in: for example, are we making an FPS where the character’s arms are holding up a rifle at all times, or a game where the character walks around with their arms down? The next thing to think about is whether you are going to use cloth dynamics. These two things will tell you where to put your clothing fold details and how much to add, and also how to create the actual asset.

If the clothes exist in the real world, we will scan them to get the base high-res mesh. Nothing really beats the quality of weight and general drapery you can get from scanning. In-house scanning capabilities let you use this technique not only for your most important characters but for any and all character clothing, including accessories. And we don’t just put the clothing on and press a button: a lot of effort goes into ageing the items and making sure they have the right history, but also into “directing” the way they drape and fold once on the model or mannequin during the scan.

Example of further iteration and tweaking of folds until you find something interesting. Pedram Karimfazli

Sometimes the clothing design doesn’t exist in real life and needs to be created from scratch. In this case, we tend to use Marvelous Designer to get the base shape, weight and drapery done, using our scanned base body for the relevant character type as the Marvelous Designer avatar. For Marvelous Designer, we have a database of basic patterns for common clothing types that we usually start from as a base. This allows us to save time on repetitive tasks and spend that time adding unique details where it matters. It is also important to really understand the piece of clothing you are making: how it drapes on the body; how it would have been produced in real life; the patterns it would be made from; how the physical materials it is made from behave; seam placement; and even using the right type of stitching.

Marvelous Designer patterns for a character. Pedram Karimfazli

Another thing that you get for free with scanning, but should pay close attention to when manually creating high-res clothing, is fabric thickness and cloth layering and the final appearance they contribute to. For example, the fabric thickness of a cotton t-shirt and a leather jacket can be very different. Creating a jacket directly on the base body and then adding just the visible parts of the clothing underneath won’t give you the same appearance as adding each layer of clothing fully, with accurate thickness, and allowing these layers to interact with each other to contribute to the final appearance. It’s a compound effect that communicates the volume and weight of the layers and hints at the detail you can’t see.

Once we have the base, either scanned or from Marvelous Designer, we bring this into ZBrush where we clean it up and add more detail using classic sculpting techniques, custom brushes, and alphas. Marvelous Designer clothes have a distinct recognisable look to them that we try and remove to make sure the clothing looks as convincing as possible.

Static folds baked into the normal map will look wrong on an area that has dynamic cloth. So, as I mentioned earlier, depending on how your character moves and how you intend to do cloth dynamics, you will have to clean out folds from areas that will have cloth dynamics, and you will also need to clean out any folds that don’t fit how your model will move. Some behavioural folds are fine, as repetitive movements put a bit of memory into the fabric, but major folds need to be removed. Finally, we don’t tend to add fabric detail to the high-res; it’s much easier and gives you far more control to do this at the texturing stage.

Example of finished high-res mesh. All the things mentioned above, coming together. Pedram Karimfazli

For the low-res mesh, we again retopologize in Topogun or its equivalent in Maya, modelling in all the major details and silhouette-contributing elements and ensuring good, even topology for consistent deformation from cloth dynamics, good topology around shoulders and elbows, and so on. The important thing is aligned topology on the outside and on the inside, so that when your cloth deforms, the two sides don’t intersect.

The next important thing is to ensure the unwrap stays as close as possible to the undeformed surface area of the clothing. This allows the fabric detail to flow correctly across the folds of the fabric. You can get this relatively easily out of Marvelous Designer, but with scanned clothes you have to pay extra attention at this stage. After this point, you would start baking support maps and begin the texturing process.
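
One simple way to check that an unwrap stays close to the undeformed surface area (an illustrative helper under my own assumptions, not a tool from our pipeline) is to compare each triangle’s UV area against its 3D area; a uniform ratio across the garment means even texel density, while outliers flag areas where the fabric detail will stretch or compress:

```python
import numpy as np

def tri_area_3d(p0, p1, p2):
    """Area of a triangle from three 3D points."""
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def tri_area_uv(t0, t1, t2):
    """Area of a triangle from three 2D UV points."""
    return 0.5 * abs((t1[0] - t0[0]) * (t2[1] - t0[1]) -
                     (t2[0] - t0[0]) * (t1[1] - t0[1]))

def uv_stretch_ratios(positions, uvs, triangles):
    """Per-triangle ratio of UV area to 3D surface area.

    positions : (V, 3) vertex positions
    uvs       : (V, 2) UV coordinates
    triangles : (T, 3) vertex indices
    """
    ratios = []
    for a, b, c in triangles:
        area_3d = tri_area_3d(positions[a], positions[b], positions[c])
        area_uv = tri_area_uv(uvs[a], uvs[b], uvs[c])
        ratios.append(area_uv / max(area_3d, 1e-12))
    return np.asarray(ratios)
```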

Example of work-in-progress texturing and shader setup. Pedram Karimfazli

Working on Leather Material

As I mentioned earlier, we are moving towards scanned materials. What a scan gives you is a great base to start from, but it is by no means ready to be used straight away: there is a lot of artistic love that still needs to be added for the final appearance on the character.

Firstly, we need a good understanding of the material we are creating: how leather, in this case, is made; the structure of its grain and flesh sides; what happens as it ages and wears down over time and from damage; and behaviour-specific details, for example cracking and creasing around joints and high-friction areas. A lot of this varies based on the item: leather gloves, a jacket or the leather grip on a tool or accessory will all wear and age differently. You have to be good at observation and use a lot of reference throughout the creation process.

Knowing all this will let you recreate it digitally. In our case, we are now using Substance Painter for most of our texturing, except for a few cases where we use Mari for very specific features.

Example of WIP multi-channel displacement map projection in Mari. Pedram Karimfazli

With PBR materials, smoothness is very important and contributes a lot to the final appearance of the material. This is really the map that drives how the material reacts to light. Where we tend to keep reflectance quite simple, our smoothness maps can be quite detailed and show a lot of variation that captures the history of the leather, from a fresh piece of hide onwards, whether it’s a modern leather bike jacket or a leather map case used in WW1. Whereas in most other maps a splotch of dirt might mean only minor value changes, in smoothness the change can be very significant, and knowing how to balance these values is important. Tools like Substance Painter, which let you tweak values and see the changes live, are extremely helpful. The important thing is to find the right balance: it’s easy to be heavy-handed, but that won’t always give you the best end results.
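
For anyone wondering how smoothness feeds into shading, a common convention (engines differ in the exact mapping, so treat this as a typical example rather than our specific setup) is:

$$ \text{roughness} = 1 - \text{smoothness}, \qquad \alpha = \text{roughness}^{2} $$

The width of the specular lobe grows with $\alpha$, so a numerically small edit to smoothness can swing a highlight from tight and glossy to broad and dull, which is why these values need more care than the same edit would in albedo.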

Example of leather work in Substance Painter, with the layer stack visible to the right. Pedram Karimfazli

Smoothness is very important, but we don’t neglect the other channels; it’s still important to ensure you pass on, for example, a scratch in the leather to the albedo, normal, AO and even the reflectance. Generally, moving reflectance away from the 4% dielectric base is considered breaking PBR, but in a lot of cases it’s OK to take a bit of artistic freedom to fill in the gap where technology falls short. In this case, we are putting very small details in the reflectance to imitate micro-occlusion and to help some smaller details pop. Putting this detail solely in the AO, for example, won’t achieve the same visual result, and you usually don’t want to make your AO too noisy. Again, balance is key.
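
The 4% base itself comes from Fresnel reflectance at normal incidence for a typical dielectric with an index of refraction of roughly 1.5:

$$ F_{0} = \left(\frac{n - 1}{n + 1}\right)^{2} = \left(\frac{1.5 - 1}{1.5 + 1}\right)^{2} = 0.04 $$

so the micro-occlusion tweaks described above are small, deliberate deviations from that physically derived value rather than arbitrary changes.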

Testing Characters

As I described earlier, we tend to spend a lot of time really solidifying our designs. The basics of interesting contemporary design, including a solid silhouette and complementary colors, are key considerations. We also do a lot of early 3D concept design, speed-sculpting and kit-bashing in ZBrush to translate design ideas and 2D concepts to 3D. This is an important step, as sometimes an idea might look great in 2D but won’t hold up as well in 3D.

Besides this, life is all about movement, and ensuring your design is friendly to cloth dynamics and secondary motion will really bring your characters to life and make them stand out. Usually, when we have a rough speed-sculpt of a design, we will set up all the cloth and secondary motion and play around with it in-game to see how it moves and reads during gameplay. This is usually quick and exposes any problems or “blind spots” that were missed in the design process. It’s important to design your character for how your game plays: if you have a third-person view in-game, you should ensure the back of the character is interesting and conveys as much as the front.

A skeptical and critical approach to any design choice is a good start. It also helps to invite feedback from the whole team, not just from your peers, who might appreciate the complexity of the craft rather than the raw design appeal.

We have also done early user testing to see how a cross-section of our audience perceives the designs. This is usually when we really want to communicate and associate very specific tones and emotions with our characters. I appreciate this might not be possible for smaller studios or the individual artist, but for the individual artist, sharing work early with the community and inviting feedback is a good alternative.

The point is, we really try to make sure our designs actually carry through what we intend. A lot of it is still based on assumptions: although we do our research, you often won’t know how the world outside the dev team (who are often too close to be objective) will perceive your work until launch, and early feedback gives you a small insight and the opportunity to course-correct.

Look development/lighting test. Pedram Karimfazli / Wiek Luijken

Afterword

Thank you for reading. If this is interesting and something that you already do, but you want to join an amazing team and push yourself, don’t hesitate to visit Criterion’s careers website, as we are actively hiring for exciting new projects.

The characters shared in this article were created either for Battlefield V or during Criterion’s “Off-The-Grid” time: free time everyone gets between project milestones every few weeks to spend on personal learning, experimentation and exploration.

Links:

Pedram Karimfazli, Lead Character Artist at EA Criterion

Interview conducted by Kirill Tokarev

 
