Tapio Terävä gave a short overview of his thesis, detailing his 3D character production techniques.
Hello 80 Level! My name is Tapio Terävä, and I’m an aspiring 3D Game Artist from Finland. I recently graduated from my game development studies at the Kajaani University of Applied Sciences, and as the final demonstration of all the knowledge I have acquired during my studies, I finished my thesis on game character creation, which I’m here to talk about. The thesis is titled “Workflows for Creating 3D Game Characters”.
Besides my studies at KUAS, I have done an internship at the Finnish game company Critical Force, best known for their mobile first-person-shooter Critical Ops, which I worked on as an environment artist. Currently I’m performing my non-military service at the Oulu Game LAB, and building up my portfolio so I can hopefully get a job in a game company after my service ends in December.
About the thesis
When writing my thesis, I had a couple of things I wanted to achieve. First of all, I wanted to compile a comprehensive description of all the different stages of creating a 3D game character. However, since there is no single definitive workflow for this, I decided to describe different methods and workflows used for different purposes and situations, and compare them to each other. I also wanted to introduce some of the more modern tools and workflows, and compare them to the more traditional methods, and describe the advantages the modern methods have over the traditional ones.
Initially I also wanted to do a case study of creating a game character using current-generation tools and techniques, like digital sculpting and PBR texturing, but in the end I ran out of time and was not able to finish it, and certainly would not have had enough time to document the process properly. The work-in-progress sculpt of a minotaur on the cover of the thesis is the character I was working on for the case study.
I also wanted to write the thesis in such a way that it would be easily understandable even for people who are not familiar with game development. Many of the technical terms and concepts have been explained using simple examples, and the thesis progresses from more general concepts like character design to specifics like rigging, digital sculpting, and physically based rendering. I’m hoping that the thesis could be a useful resource for any aspiring game artists who are new to the industry and want to learn about the creation process of game characters.
Creating 3D game characters
Creating a 3D game character from an idea to a finished asset is an incredibly complex process that requires tons of specialized knowledge and skill, not to mention time. The process consists of multiple different stages, like creating concept art, modeling, rigging, and animation. In big game companies, these stages are often divided among different specialized artists to speed up the production, and to ensure the best possible quality – however, in smaller game studios, these can also be performed by a single skilled artist. All of these stages are covered in my thesis, but here I’ve picked a few topics I think are especially important or interesting.
Game character design
Even though game character design is a broad subject in itself, and there are various approaches to it, most game characters can be roughly divided into two types: art-driven character designs and story-driven character designs. When creating characters for a game, it is important to understand which types of characters the game needs, and which of their features need to be emphasized or toned down.
The primary focus of art-driven character designs is the visual appearance of the character, while the backstory and personality of the character are often less important. The features of the character’s appearance are emphasized by using art principles like shape, form, and color saturation and value. Art-driven characters can be thought of as puppets or avatars which the player controls, and they are usually used in simple games that focus more on gameplay instead of storytelling.
Good examples of art-driven characters are the various birds and pigs in Rovio’s Angry Birds franchise – while the characters nowadays do have proper backstories and personalities, these are more of an extension to the original game, rather than the backstory defining the actual gameplay. The character designs make use of the different art principles and exaggeration – each character has their own easily identifiable shape and color palette, and their silhouettes and big eyes clearly convey which direction they are facing at any given time.
In comparison to the art-driven character designs, the visual features of story-driven character designs are often far more subtle, since the characters’ backstories, personalities, and abilities can be conveyed through the progression of the game’s storyline and their behaviour and actions. For example, while an art-driven villain character might look like your stereotypical bad guy, with big muscles, dark clothes, sharp angles, and the mandatory angry expression constantly on his face, a story-driven villain may sometimes look like any other character in the game, with only subtle hints of his inner darkness – perhaps a small scar somewhere on his skin, a reminder of his violent past. Good examples of story-driven characters are the characters in Naughty Dog’s The Last of Us and the Uncharted series.
Deciding whether your character design is art-driven or story-driven is of course just the starting point for your design process. Different studios and artists have their own specific workflows for designing characters, but there is a “general” design workflow that can be used as a guide. This general design workflow is discussed in more detail in the actual thesis, but here is a compact description of it from my thesis presentation slides:
Platform and performance considerations
Another important aspect that needs to be taken into consideration when creating game characters is their impact on the performance of the game. The performance of a game is of course dependent on which device it is played on – a mobile device has significantly less computational power than a modern game console or a high-end gaming PC. Therefore, when creating characters for mobile devices, you will often end up using different methods and workflows than what you would use when making a character for a console or a PC game.
The most common metrics used to measure the performance impact of a 3D character are its poly count and the number and size of the textures it uses. The more triangles a 3D model has (or more precisely, vertices), the more performance-intensive it is to display the model in real time. Similarly, the more pixels there are in the model’s textures, the more memory they require. As a result, characters in mobile games often have significantly lower poly counts than those in console and PC games, and they use smaller and fewer textures. For example, while a character in a modern PC game might have tens or hundreds of thousands of polygons and several 4k texture maps (LOD0 on Ultra settings), a character in a mobile game might only have a couple thousand polygons or less, and a single 256×256 diffuse texture.
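To put those texture sizes in perspective, the memory cost of an uncompressed texture is simple arithmetic. This is only a rough sketch – real engines use compressed formats (DXT/BC, ETC, ASTC) and mipmap chains, which change the numbers considerably:

```python
def texture_bytes(width, height, bytes_per_pixel=4):
    """Approximate GPU memory for one uncompressed RGBA8 texture."""
    return width * height * bytes_per_pixel

# A single 4k map vs. a mobile-sized 256x256 diffuse:
print(texture_bytes(4096, 4096) / 1024**2)  # 64.0 (MiB)
print(texture_bytes(256, 256) / 1024**2)    # 0.25 (MiB)
```

A single uncompressed 4k map costs as much memory as a thousand 256×256 diffuse textures, which is why mobile characters are so often limited to one small map.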
Therefore, when creating mobile characters it is important to think thoroughly about which details need to be modeled, and which can be described in the textures, or whether they are needed at all. Additionally, on mobile it is common to make the most of your diffuse map by adding ambient occlusion and shadows, as well as specular highlights and directional lighting into the diffuse texture, since you might not have the luxury of using normal and specular maps, or realistic lighting and complex shaders.
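Baking ambient occlusion into the diffuse, as described above, is conceptually just a per-pixel multiply of the albedo by a grayscale AO map. A minimal NumPy sketch (the array names and values are purely illustrative):

```python
import numpy as np

def bake_ao(albedo, ao):
    """Multiply an albedo texture (H, W, 3) by a grayscale AO map (H, W).
    Both are in the 0..1 range: occluded pixels darken, unoccluded
    pixels keep their original color."""
    return albedo * ao[..., None]  # broadcast AO over the RGB channels

albedo = np.ones((2, 2, 3)) * 0.8          # flat light-gray texture
ao = np.array([[1.0, 0.5], [0.25, 1.0]])   # fake occlusion map
baked = bake_ao(albedo, ao)
print(baked[0, 1])  # half-occluded pixel darkened to [0.4 0.4 0.4]
```

This is exactly what happens (per texel) when an artist multiplies a baked AO layer over the diffuse in Photoshop, except the shader never has to pay for it at runtime.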
One thing to note about poly counts is that even though the terms “low poly” and “high poly” are used a lot, they don’t actually have proper definitions, and there is no specific number of polygons that is considered low or high poly. This is because the terms are always relative to the context – while a mobile game character may seem low poly when compared to a PC game character, that PC game character’s in-game model is still considered low poly when compared to the high poly sculpt that was used to bake the character’s normal maps. And when compared to a character from a 90s PC game, all of the previous examples could be considered high poly.
However, poly counts are not as simple as just looking at the number in the top corner of your 3D modeling software and calling it a day. Thing is, a polygon can be anything from a triangle to an n-gon with a hundred (or more) sides. And even if you model everything in quads, the computer still sees every quad as two triangles. That’s why you should always look at the triangle count instead of the poly count. But wait, there’s more! What actually matters in terms of performance is the number of vertices, and most 3D software don’t count them the way the GPU does.
What most 3D software count is the number of vertices in the geometry, but these vertices can actually be multiplied several times if there are UV seams, hard edges, or material boundaries in the model. If you have a UV seam in the model, the vertices on that seam are duplicated, since they need to be represented twice in the UV map, and the same applies to any hard edges (smoothing groups in 3ds Max), or to boundaries between different materials applied to different parts of the model. So in a worst-case scenario, the number of vertices in your model may actually be several times larger than what the 3D software tells you, which means its performance impact is correspondingly higher. Luckily, you can limit the amount of duplicated vertices by placing your UV seams, hard edges, and material boundaries on the same edges – this way each affected vertex is only duplicated once.
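The vertex splitting described above can be illustrated by counting unique attribute tuples, which is essentially what the GPU does: a vertex must be duplicated whenever the same position needs a different UV, normal, or material. A toy sketch (the coordinates and the "n" normal placeholder are made up for the example):

```python
def render_vertex_count(corners):
    """corners: one (position, uv, normal) tuple per triangle corner.
    The GPU needs a separate vertex for each unique combination."""
    return len(set(corners))

# Two triangles sharing the edge (0,0)-(1,1), smooth-shaded,
# continuous UVs: the two shared corners are reused -> 4 vertices.
smooth = [
    ((0, 0), (0.0, 0.0), "n"), ((1, 0), (0.5, 0.0), "n"), ((1, 1), (0.5, 1.0), "n"),
    ((0, 0), (0.0, 0.0), "n"), ((1, 1), (0.5, 1.0), "n"), ((0, 1), (0.0, 1.0), "n"),
]
# Same geometry with a UV seam on the shared edge: the two shared
# positions now each appear with two different UVs -> 6 vertices.
seam = [
    ((0, 0), (0.0, 0.0), "n"), ((1, 0), (0.5, 0.0), "n"), ((1, 1), (0.5, 1.0), "n"),
    ((0, 0), (0.6, 0.0), "n"), ((1, 1), (1.0, 1.0), "n"), ((0, 1), (0.6, 1.0), "n"),
]
print(render_vertex_count(smooth))  # 4
print(render_vertex_count(seam))    # 6
```

The same logic explains the classic example of a flat-shaded cube: 8 positions, but 24 render vertices, because every corner carries three different face normals.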
In addition to these tips, there are many other ways to optimize the performance of a 3D game character, such as using tiling or mirrored textures, or not using traditional textures at all by switching to vertex colors or palette textures. Again, I’ve covered all of these in the actual thesis.
Choosing the right tools and workflows
Finally, you need to choose which tools and workflows you are going to use when creating your character. Of course, you may be limited to the specific tools that are available to you, but if possible, you should consider which tools and workflows best suit the needs of the game character you are creating.
Traditional tools, like 3D modeling packages such as Autodesk Maya and 3ds Max, or the open source Blender, as well as photo editing tools like Adobe Photoshop are still very useful nowadays, and can be used to create characters for just about any type of game, from stylized mobile games to realistic AAA PC games. With these tools, it is possible to create both low poly characters with hand painted textures, as well as “current generation” AAA characters with PBR textures. Of course, creating characters for either art style requires very different workflows.
For example, the textures for a stylized low poly character are often hand painted in Photoshop using a graphics tablet, and it’s common to add shadows and lighting into the diffuse texture, as well as hand drawn specular highlights to glossy parts like metallic armor to give the textures better material definition. If a traditional specular map is used, the colors in it need to be inverted for any non-metallic materials, so that their specular highlights remain neutral white, instead of colored. All in all, the whole texturing process relies heavily on trickery and eyeballing everything, and the quality of the results depends on how skilled the artist is. This means that the consistency of art may vary significantly between different characters and lighting situations, not to mention between different artists.
In comparison, when creating PBR textures for a character in Photoshop, everything is very precise and based on real world material values (or at least should be, in theory). This means that the results are consistent and predictable, even between different artists, and look good in all environments. However, it also means that the texturing process is often very tedious. It is common to use photographs as a base when creating PBR textures from scratch, but this requires that all lighting and shadow information is removed from the photos. This can be done by using a combination of tools like Shadows/Highlights, Select Color Range, and by overlaying the photo on itself with inverted colors using Soft Light blend mode. Additionally, the color values of the textures need to be matched with the values measured from real world materials by using the Histogram and Curves or Levels. Another downside of creating PBR textures in Photoshop is that there is no way to preview the complete PBR material with all the necessary maps applied until the material is used in the game engine.
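The invert-and-overlay trick mentioned above can be sketched in NumPy using one common formulation of Photoshop’s Soft Light blend. Treat this as an approximation – Soft Light formulas vary between implementations, and real de-lighting also involves the other tools listed:

```python
import numpy as np

def soft_light(base, blend):
    """One common Soft Light formula; both inputs in the 0..1 range."""
    return np.where(
        blend < 0.5,
        2 * base * blend + base**2 * (1 - 2 * blend),
        2 * base * (1 - blend) + np.sqrt(base) * (2 * blend - 1),
    )

def flatten_lighting(photo):
    """Overlay the photo on itself with inverted colors: shadows are
    lifted and highlights pulled down, both toward mid-gray."""
    return soft_light(photo, 1.0 - photo)

photo = np.array([0.2, 0.5, 0.8])  # shadow, midtone, highlight
flattened = flatten_lighting(photo)
# The midtone stays at 0.5; the shadow and highlight move toward it.
```

This is why the technique only partially removes lighting: it compresses the value range toward the middle rather than truly reconstructing the unlit albedo, so manual cleanup with Curves and the measured material values is still needed afterwards.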
While the traditional tools and workflows often require a lot of manual work and eyeballing, modern tools and workflows are in many cases much easier to use, more efficient, and produce more consistent results. Tools like Quixel DDO and Substance Painter allow the texture artist to use and preview pre-made and editable PBR materials, making it possible to paint and layer them on top of each other directly on the surface of the 3D model. Creating these PBR materials from scratch is possible with tools like Substance Designer and Bitmap2Material. Bitmap2Material can create all the necessary PBR maps from a photograph, extracting normal maps, producing seamlessly tiling textures, and partially automating the removal of lighting information, while Substance Designer creates procedural textures, allowing the texture artist to make endless variations of the material they are creating.
Similarly, digital sculpting workflows have also evolved over the years. Previously it was common to create your base mesh manually by modeling it in a traditional 3D modeling package, then sculpt the high poly from that base mesh, and finally either re-adjust the original base mesh to fit the shape of the sculpt, or manually retopologize a low poly model using a program like TopoGun. Nowadays it is possible to quickly create the base mesh directly in a sculpting program like ZBrush, because of tools like ZSphere, ZSketch, Dynamesh, and Shadowbox. The retopology process can also be almost completely automated, thanks to tools like ZRemesher.
Final words & Where can you read the thesis?
This article ended up becoming way longer than what I originally intended, just like the thesis it is based on. So if you did indeed read the whole thing and are still interested, you can be assured that this was just a small glimpse of all the stuff in the full thesis – it is almost a hundred pages long, not including the list of references etc.
The thesis is available online at the Finnish thesis repository Theseus.fi, but I have also created a more compact Google Slides presentation of the thesis, which will give you a quick overview of the contents of the thesis – some of the images in this article are from that presentation.