Jamie Lozada talked about the principles used to make readable game animations, their structure, and the difference between them and CGI animation.
Hello! I’m Jamie Lozada, currently a Senior in Game Art at the Ringling College of Art and Design. I’ve been working on my craft as an animator for about five years now. The first 2-3 years were a bit more casual, during which I worked on my own little Minecraft animations (you can still find them on YouTube) before I started diving into my studies. With the Game Art major evolving, I had a chance to personally animate for my Senior Thesis, OASIS, which I worked on with Ruben Puccinelli and Alex Charnes.
I grew up as a kid from a military family, moving around about every 3 years or so. Throughout that time, I managed to convince my parents to get me the consoles that Monster Hunter would come out on – the game that would end up being a huge inspiration for the kind of animations and games I want to create. I was born in Oceanside, California, but have lived in Virginia, North Carolina, Japan, and other places. In high school, I discovered Blender, which was great because there was already so much content on how to use it. 3D was daunting and frustrating at first, but after a few attempts, I was finally able to pick it up and make a Garen vs. Darius Minecraft animation inspired by League of Legends (which was the greatest thing for teenage-me). Little projects like this continued up to my Senior year in high school, which actually had a small “Game Development” class that used Blender. Once I got to Ringling, I started to learn Unreal and Maya, and by Sophomore year I was putting animations into Unreal Engine and making very small games. I learned that I could make pretty much anything I could think of. My first game project was a small Cube Ninja adventure-style game that had 3 simple rooms, a couple of enemies, and a boss room. From this project on, I knew for sure that I wanted to do animation.
Getting into Ringling pretty much started on the internet. As a Senior in high school, I knew I wanted to do something that involved animation and games. I eventually boiled down what I was looking for to “Game Art” and, after contacting animator Tim Sormin through a comment on YouTube, I came across Ringling. After a little bit of research, Ringling seemed to be the right fit, so I put all of my eggs in that basket and was lucky enough to get accepted into the Game Art Program.
When I first arrived at college, the Game Art program was much more focused on creating environments, like cities and various biomes, which is what I initially thought I would be doing throughout my years here. As the years went on, the major’s area of focus began to broaden, giving me room to animate here and there for various projects. Each year, the Game Art theses started to include more gameplay mechanics and open up more room for stylization. From 2015 up to 2019, there was an enormous shift in the definition of what a Game Art Thesis could be, so I managed to get here at the right time – a time when I could start introducing animations into Thesis. One thing I like about the Game Art program is that it’s constantly changing and getting better at preparing artists for many different roles: Tech Art, Environment Art, Animation, Storytelling, etc.
A lot of these things are taught by instructors who come from various areas of the industry. I’ve learned material work and environment art from Ryland Loncharich and Morgan Woolverton, Coding and Blueprint in Unreal Engine 4 from Scott Carroll and Eric Gingrich, Animation and Lighting from Andrew Welihozkiy and Jamie Deruyter, and Concept Art and Digital Painting from Michael Phillippi. The feedback that they can give on anything is invaluable and it helped me get to where I am now.
CGI vs Game Animations
Among the many things I learned during the course, I found out what it means to be a gameplay animator – creating animations like idles, attacks, etc. – versus a cinematic/CGI animator, who animates to precomposed shots where all of the composition work is planned beforehand.
The principles of animation are present in both of these but their application is a bit different. With CGI animation, you have the power to control exactly what your audience sees because your audience is simply an observer of your work. Your primary focus as a CGI animator is to make the performance believable and fun to watch – the characters are breathing and can think for themselves.
With game animation, you lose camera control. Instead of focusing on making a specific shot work, you must make every possible shot work, since the player can move the camera whenever they wish. Animating for games also means that the animations themselves need to start as soon as the player pushes a button – so the anticipation in a lot of actions gets reduced in order to make the controls feel snappy and responsive.
Game Animation Structure
Game animation also means that, as opposed to CGI – where many complex animations can be placed one after another in a timeline – many separate animations are made for various different actions and then stitched together through code in the game’s engine. For example, even a very simple jump would generally have four animations that are coded to play one after another once the ‘JUMP’ button is pushed: Idle >> Jump Start Up [the character jumps off the floor] >> Falling [this loops until the character touches the ground again] >> Jump Landing >> Idle.
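The flow described above can be sketched as a tiny state machine. This is a hedged illustration in Python, not Unreal code – the state names and the `on_ground` flag are illustrative assumptions, and a real engine would drive this from an animation graph:

```python
# Minimal sketch of the jump flow:
# Idle >> Jump Start Up >> Falling (loops) >> Jump Landing >> Idle

class JumpStateMachine:
    def __init__(self):
        self.state = "Idle"

    def press_jump(self):
        # The start-up animation begins the moment the button is pushed
        if self.state == "Idle":
            self.state = "JumpStartUp"

    def update(self, on_ground: bool) -> str:
        if self.state == "JumpStartUp":
            self.state = "Falling"        # character has left the floor
        elif self.state == "Falling" and on_ground:
            self.state = "JumpLanding"    # touched the ground again
        elif self.state == "JumpLanding":
            self.state = "Idle"           # landing finished, back to idle
        return self.state                 # "Falling" keeps looping otherwise
```

Calling `press_jump()` and then `update()` each frame walks the character through Start Up, a looping Fall, Landing, and back to Idle.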
The same concept is used for everything in a video game. Many animations have some form of a start-up animation, such as the Jump Start Up previously mentioned, but some, if they need to be fast and zippy, may not need one. For example, a Great Sword would need to be lifted up before swinging because it is heavy. With a knife, however, you might be able to get away with starting the attack immediately because it is light and easy to move quickly.
Before touching your preferred 3D software solution for animating, it’s a good idea to break down exactly what you need for an action to take place in a game. In OASIS, for example, I needed my character, Enn, to walk. However, I wanted his movement to feel nice and weighty as he walked around. I broke down what I would need and that included:
- [Looping] Idle Animation
- [Transitional Animation] Step Forward [Transitions into Walk (T2W)]
- [Transitional Animation] Step to Right by 90 degrees [T2W]
- [Transitional Animation] Step to Left by 90 degrees [T2W]
- [Transitional Animation] Step to Right by 180 degrees [T2W]
- [Transitional Animation] Step to Left by 180 degrees [T2W]
- [Looping] Walk Animation LOOP
- [Transitional Animation] Ease out of Walking
These were all needed to create a locomotion (movement) system that moves the character depending on where the player tells him to go and where the camera is facing.
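A rough sketch of how a locomotion system might choose between the transitional clips listed above: the angle thresholds below are invented for illustration, and a real system would blend between clips rather than hard-switch.

```python
def pick_start_animation(turn_angle_degrees: float) -> str:
    """Select a transition-to-walk clip from the signed angle between the
    character's current facing and the desired movement direction
    (positive = turn right, negative = turn left)."""
    a = turn_angle_degrees
    if abs(a) < 45:
        return "Step Forward"
    if 45 <= a < 135:
        return "Step to Right by 90"
    if -135 < a <= -45:
        return "Step to Left by 90"
    # Anything beyond ~135 degrees is a full turn-around
    return "Step to Right by 180" if a >= 0 else "Step to Left by 180"
```

Each clip then transitions into the looping Walk animation, just as the list describes.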
I also wanted to make him jump and land in different ways depending on two variables: how long he’s been in the air (AirTime) and whether the player is telling the character to move or not. After listing out what I’d need for this I came up with:
- [Looping] Walk and Run (For him to Jump from)
- [Transitional Animation] Jump Start Up from Walking
- [Transitional Animation] Jump Start Up from Running
- [Looping] Falling
- [Transitional Animation] Stumble Landing
- [Transitional Animation] Rolling Landing
- [Transitional Animation] Super-Hero Landing
From here, I made a BlendSpace, a graph that blends between Enn’s animations depending on the values of certain variables. This type of graph is a specific feature of Unreal Engine, but the concept applies to various game engines.
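In spirit, the logic picks an animation from the two variables mentioned earlier – AirTime and whether the player is holding a movement input. This is a hedged sketch with invented thresholds, not Enn’s actual Blueprint logic:

```python
def pick_landing(air_time: float, moving: bool) -> str:
    """Choose a landing animation from how long the character
    was airborne and whether the player is still steering him."""
    if air_time < 0.4:
        # Short hop: a quick stumble is enough
        return "Stumble Landing"
    if moving:
        # Long fall while moving: roll to carry the momentum
        return "Rolling Landing"
    # Long fall, standing still: the dramatic option
    return "Super-Hero Landing"
```

A BlendSpace does this continuously, blending between clips as the variables change, rather than switching at hard thresholds as this sketch does.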
The rest is configuring the animations according to certain inputs and events that occur while playing the game. A lot of the resources I used for doing this can be found on the internet and are accessible to everyone!
Making the animations themselves is where the principles of animation come in. The first thing I focus on is staging – AKA posing, AKA silhouette and appeal. I pose the character at the start and apex of a particular movement and evaluate them. Do these key poses convey the impact I’m looking for? Are they readable? Are they fun to look at? If not, I keep working on them until they are. I also like to place the poses on the timeline to get a rough sense of timing so I can test the animation in the engine.
At this point, the animations are very rough but they give me an idea of what the movement will feel like. I bring these into the Unreal Engine and playtest them to see if they read from the player’s point of view. When they do, I move to the next phase: “Inbetweens”. Here, I establish the movements between the base poses that were made previously – they start to form the arcs and help me refine the timing of the movement. Again, I do a test in the engine and if I’m satisfied with the result, I move on to breakdowns.
With breakdowns, I start to establish the snappiness and weight and really try to push the poses. Here, I put in any squashing and stretching that I think the animation should have, making sure the arcs are correct and the punch is punchy. Note that between each phase, the timing tends to change a bit as the number of keys for the character increases.
For something like a Superhero landing, it’s important for the character to fall fast and quickly land in a stable pose. Timing is incredibly crucial in delivering a satisfying performance, especially in a game like Monster Hunter where you want the weapons to feel heavy and have a lot of OOMPH when the players land their attacks.
Timing & Posing
For games, timing is generally very fast. If you slow down your timing too much for gameplay animations, the result will be clunky and unresponsive. As soon as a button is pushed, something needs to happen. For a jump, the character tends to do a squat very quickly and then starts to jump shortly after. For the aforementioned Great Sword, the character will quickly lift the sword over the head in anticipation of the attack, and then attack.
Posing is especially important in animation, whether for CGI or for video games. In games, with much less time to animate, you have to find ways to convey your movements as quickly and clearly as possible. With posing, the line of action is invaluable. The way the character stands says everything about what will happen, what is happening, and what has happened. If an archer is charging a magical arrow in his bow with all his might, then his pose needs to absolutely convey that idea. Is the bow heavy? Is the arrow heavy? How much energy is being built up? All of these questions and more need to be answered by a single pose so that the player knows exactly what is happening. How can you convey the most of your idea in the shortest amount of time possible? It’s all in the pose – and if it’s readable, then you have your answers.
How to Make Poses Readable
For a pose to be readable, a few things need to be considered. I will use an Archer character as an example. I imagine the character as nimble, acrobatic, and quite flashy. This, along with the fact that he’s an archer, means that many of his movements need to curve smoothly, like a performance that shows he’s been practicing. For the first few moments of this particular animation, I wanted him to feel like he is winding himself up to knock the wooden log into the air with his bow.
For that, I already had a general idea of what I wanted the pose to be. If you don’t have one, find some reference or make your own! Even when I do have an idea of the pose, I’ll still use references to capture any extra details that can make it stronger. A strong pose has a very defined line of action. The archer is really trying to kill this wooden log, so I need his pose to tell the viewer that he is building up a lot of energy to do it.
The large curves that run through the body suggest that he is moving backward and support the silhouette of the bow which has a pretty similar shape. The straight left leg is used to give this curve just a bit more push. Imagine the classic slingshot: your hand provides a relatively straight direction of energy while the rubber band of the slingshot curves pretty intensely – those two lines are in relation to one another.
For each of these key poses, I use the same concept. Very often, I’ll find that I need to rework a pose a few times until it feels just right. These poses set the foundation for the rest of the animation, so you absolutely have to get them right; otherwise, the rest of the piece will suffer.
After I’m satisfied with my poses, I start to make the in-between poses. These determine how the poses get linked together and define how the character gets from one pose to another. The posing concepts above apply here as well; however, we need to make sure there is good contrast between the key moments of the pose.
At this moment of the animation, the archer needs to feel like he is swinging with all of his energy before winding up to do another attack. Wind-ups are more squashed while the apex of the swing is stretched out. Depending on the type of swing, the body is squashed in different ways to anticipate the type of attack that is coming next.
Along the way, I’m also always trying to consider what needs to be visible for the action to read well, and that goes back to the silhouette. If someone is throwing a punch, we absolutely need to see the fist before it is thrown, no exceptions. Even if the character is getting something from his pocket, it needs to be incredibly clear – if it isn’t, people might miss where the new object came from. The remedy is to always clearly show what will happen next before it happens. For reaching into the pocket, that means we need to see the character’s hand moving upwards in anticipation before it moves down into the pocket, even if we don’t necessarily move that way in real life. That’s an example from a video that breaks down the 12 principles of animation very clearly.
Going back to the archer and posing, I apply these concepts by making sure that the silhouette of the bow constantly reads as a bow while it’s twisted. For this particular animation, that generally means that the side of the bow is almost always facing the viewer.
Once I establish the in-betweens to my liking, I move forward to breakdowns and splines which further define the movement. The same concepts from before are applied here as well but now I begin to really iron out the arcs of the movement.
This process continues until the animation is finished.
Mocap vs Hand-Keys Animation
With the constant evolution of technology, a lot of animation in AAA games is done with motion capture. God of War and The Last of Us are pretty good examples of mocap use. Until Dawn is also notable, especially for its facial motion capture and animation. Even with mocap, though, studios need animators who are acquainted with hand-keyed animation in order to fix any bugs in the motion capture and push poses further to give more impact to a particular scene or movement. Naughty Dog made a video about their process of making The Last of Us.
I haven’t personally used motion capture but it is a lot more accessible now than it has ever been before. There are people who create homemade rigs while chillin’ in their living room.
However, even with motion capture, not all of the animation work is done for you. It gets you pretty far, but poses can always be pushed further and weight can always be shifted to add more OOMPH to a performance. Those changes bring you back to understanding the fundamentals of animation and how to apply – and sometimes break – them to make them work for you.
Jamie Lozada, 3D Animator
Interview conducted by Kirill Tokarev