Richard Lico talked about his life and the way to produce and use animation in video games. Richard is currently working in a small indie studio, but for over 8 years he worked with Bungie, animating the incredible world of Destiny. We thank our partners from Gamedev Unchained for this opportunity.
I’ve been an animator for 17 years now, and my focus has been almost entirely within the games industry. I’ve done primarily gameplay animation/design, but have branched out to cinematic animation, lighting, and modeling on occasion. Some of my early work includes Duke Nukem: Manhattan Project, Deer/Bird Hunter 2003, and more titles with Sunstorm Interactive. I spent some quality time with Raven Software, working on Jedi Academy, X-Men Legends, and Quake 4. That was followed by a few fantastic years with Monolith/WB, where I was the animation lead on Condemned 1 & 2, as well as assisting with F.E.A.R. 2.
Next, I spent 8+ amazing years with Bungie as the animation lead & principal animator. While there, I led the animation charge for Bungie’s final Halo game, Halo: Reach. I headed up the purchase of Bungie’s internal mocap studio and introduced it into the animation pipeline. I pitched and helped design Bungie’s Runtime Rig animation technology. I was the first animator on Destiny, helping to define the look of animation for the franchise. I personally did most of the 3rd person player animation for Destiny including most of the supers, navigation, emotes, and actions. I directed 1st person animation, and even helped create some 1st person animation, such as the sniper rifle. I also had the pleasure of creating some combatant animation, such as the Spider Tank animation set, and navigation sets for various enemies. Before saying farewell to my Bungie family last year, I had the pleasure of working on Destiny 2, again with a focus on player animation.
If you’re looking into more info about Richard’s work, please make sure to listen to this amazing talk, recorded by the wonderful guys at Gamedev Unchained.
Getting into Animation
Animation wasn’t my original goal. Growing up, I knew I wanted to make games for a living, but I wasn’t sure how to go about breaking in. Luckily, I discovered I had a proficiency for art, and figured game development needed artists, so I chose the Savannah College of Art & Design (SCAD) for college. Through more than half of my college education, I was actually an illustration major, illustrating books to help support myself. But as I became familiar with the illustration industry, I realized it likely wouldn’t be my ticket into game development. This was during the mid-to-late ’90s, and around that time, Toy Story hit theaters. I was blown away. It inspired me to switch my focus from illustration to “computer art”, which was a generic major for anything relating to art on a computer. Part of that major included a generalist education in Softimage, which is where I focused. Even though games had just started to transition from 2D to 3D, I knew that the need for 3D animated characters was inevitable. I wanted to be a part of that first wave of hires for this new field.
So animation to me was initially just an emerging opportunity to break into game development. A tool, really. But over the years, it’s become much more to me. I’ve always been a huge Street Fighter player due to the marriage of combat design, animation, and technology. Being a game animator allows me access to that intersection. I love the art of crafting character performances that resonate with people on an emotional level. The flow of a well-designed combat system speaks to me like music. And I find the technical aspects required of a solid gameplay animator scratch my puzzle-solving itch. Working on an animation that fits into this complex puzzle is my version of meditation. It resets my mind, and helps me focus on the rest of my life. Always trying to improve gives me a goal that I can potentially reach every day, yet never complete. It’s a trade that has a unique sense of family to it. Meeting other animators creates an instant bond. Like a mutually understood kinship.
Functions of Animation
Video games are essentially a complex form of communication. The player has expectations, and an input device used to express those expectations. A game reacts to the inputs, communicating a response, leading the player to the next expectation. The clearer the communication loop, the more immersive and satisfying the experience is. For example, most would assume that dying in a game would be a frustrating experience. Yet, death in Dark Souls is still fun. This is because Dark Souls does an amazing job of showing the player how to play, and providing them with tools for every situation. A player understands what they can do better next time, and they’re excited to try it.
At a base level, animation for games is one of those avenues of communication. It’s the art of providing feedback/responses to player intent. That intent may be the desire for a good story or appealing characters. The moment to moment act/react within a fluid combat system. Or the desire to influence other players in positive/negative ways. Animation has such an immense influence over this communication. Think about something simple like a punch in a fighting game. How satisfying that punch feels as the player presses attack is primarily driven by the quality of the animation content. It can inform the player of gameplay design information such as how open they are to retaliation after pressing punch, and how powerful the punch is. It helps to tell the player who the character is, how strong they are, and how much emotional conviction is behind the punch. And it helps with on-screen flow, directing how the player’s eye is drawn around the screen. That simple punch can communicate arcs and a flow which can be pleasing or uncomfortable. Yet, it’s merely a punch animation.
Gameplay animation is active, back-and-forth communication, while film animation is a passive experience. With games, the player has that input to tell the game what they expect, and how the game responds is a measure of the experience. With film, the viewer has expectations, yet they have no means of interacting. They expect it to be a one-way street where the film feeds the viewer. Games need to communicate everything that a film needs to communicate. An experience that resonates with the viewer/player is paramount to both. But gameplay animators have the added complication of the player’s interactions, and how those interactions influence the rest of the puzzle. Often in games, proper feedback conflicts with itself. For example, creating a robust, weighty punch benefits from a large anticipation and lengthy settle like you see in film. Yet, for it to feel responsive in a combat scenario, an extremely short anticipation and settle which the player can interrupt is ideal. Finding creative solutions to these conflicts is unique to gameplay animation, which I personally find extremely challenging and interesting.
Accurate Body Mechanics
Adding weight and accurate body mechanics really is a tremendously difficult thing to get right. And I believe it’s difficult for two main reasons. The first challenge is finding who your character is and crafting animations to match. This is often derived from intuition, not data you can learn from a book. The second challenge is the animation rig itself, and its supporting workflow. It’s way too easy to craft poses/actions that defy physics and anatomy.
When finding who my character is, instead of having a pre-conceived style or set of animation goals, I’ll let the character’s design and project tone inspire me. It’s not a quick process. Usually my first animations with a new character rarely feel right, but I adjust as I spend more time with my character. It’s like getting to know a partner. Finding their likes and dislikes. What poses fit them, and which ones always feel out of place. Getting to know how fast they want to move, letting them set the pace they’re comfortable with. Keeping an open mind and getting lots of feedback helps with this, but good intuition makes all the difference. It’s something I’ve had to learn and refine over many years, and I’m sure I’ll be refining this approach over my entire career.
As for my animation rig, I’ve been told I have a very odd workflow. I’ve built myself a ton of tools to help make accurate locomotion much easier to achieve. I’m constantly baking and switching what space my data is relative to. For example, I switch my spine from FK, to World Space, to spline IK without data loss to have the optimal space for posing, tweening, layering and polishing. I can data-mine my character’s translation through space to offset secondary object motion. I’ve written tools that can simulate physics on any body part, making it seem like flesh and bone. I’ve created a tool that will provide a visual center of mass, and make the character animation relative to that object for easy weight adjustments. A tool that can derive accurate hip rotation based on the position of the spine and feet. Essentially, I’m slowly finding ways to eliminate many of the manual workflow steps which make inaccurate animation such an easy pitfall.
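Richard doesn’t describe how his center-of-mass tool is implemented, but the core idea is simple: a mass-weighted average of the body’s joint positions. The sketch below illustrates that math in plain Python; the joint names and segment mass fractions are made-up illustrative values, not real rig data or DCC API calls.

```python
# Hypothetical center-of-mass helper in the spirit of the tool described above.
# Segment names and mass fractions are illustrative guesses, not real rig data.
SEGMENT_MASS = {
    "hips": 0.30,
    "chest": 0.25,
    "head": 0.08,
    "left_leg": 0.12,
    "right_leg": 0.12,
    "left_arm": 0.065,
    "right_arm": 0.065,
}

def center_of_mass(joint_positions):
    """Mass-weighted average of joint world positions.

    joint_positions maps a segment name to an (x, y, z) tuple. Only the
    segments present are weighted, so a partial pose still yields a COM.
    """
    total = sum(SEGMENT_MASS[name] for name in joint_positions)
    com = [0.0, 0.0, 0.0]
    for name, pos in joint_positions.items():
        weight = SEGMENT_MASS[name] / total
        for axis in range(3):
            com[axis] += pos[axis] * weight
    return tuple(com)
```

In a real rig tool, a locator would be driven by this value every frame, and the character's controls would then be re-parented relative to it so weight shifts can be adjusted directly, as Richard describes.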
Readability is very important, as it’s one of the most common and expected ways to communicate with the player in every game. I can’t think of a game with characters where it’s not important. Again, games are all about communication, and a character that’s hard to read is bad communication.
The way I approach readability is to balance the amount of motion with how extreme my posing needs to be. If I need responsive, overt posing like the supers in Destiny, I’ll make sure the actions are quick and smooth, while 2 or 3 strong key poses leverage a moving hold with a jitter for emphasis. It’ll have obvious peaks and valleys to the timing as it takes the average person 5 to 8 frames at 30fps to really “see” a pose. And even though people can’t see a pose shorter than that, they can easily feel a subtle motion consisting of as little as 2 frames. For subtle actions like a greeting emote, I like to play with the details supporting a single, strong thematic pose. I let the pose communicate the state and tone, while the texture adds personality and draws the eye.
I’ll also try and be mindful of the context, making sure the tone of the clip captures the feelings we’re hoping the player will be having. Combat animations should feel like they’re taking place while under fire. Animations that happen in a group should offset from each other so the focus can float around all characters comfortably.
Lastly, a real soldier will try to minimize their silhouette when in combat holding their weapon. It’s a life-saving tactic, but it doesn’t make for clear gameplay. I’ll often balance the needs of realism with good gameplay feedback by trying to make the motion itself feel relatable and real, while making poses and silhouettes more obvious in subtle ways based on real soldiers. This approach has drawn criticism in the past, but it makes the combat gameplay experience more enjoyable.
Contrast is key! Knowing the tone and pace of all the animations you’re crafting makes adding emphasis through contrast easy. For example, a punch in Destiny starts with a quick action, holds after the hit, and resumes to quick action for the return. That contrast tells the story of the action, as the pause after the hit explains what happened in the preceding punch. But the contrast in that punch isn’t as great as what you’ll find when doing a super in Destiny. I would make that contrast more extreme to help sell the grandeur of the action. Making it feel bigger than a punch with greater risk/reward.
Even within a clip, I’ll decide where to add contrast and emphasis based on where I want to draw the player’s eye. We’re making a game about a mouse at Polyarc. She’ll often point at objects we want the player to notice. So to draw the eye, I start the point animation with an abrupt pop, settling into the point itself. That contrast makes it easier for the player to notice the mouse, then follow her pointing to the object we need you to see.
You can also apply this to everything a character can do. A run, which is probably the most-used animation in every game, requires a large amount of noisy, repetitive motion. So making the idle calm and subtle helps to balance the rhythm. And in turn, adding idle variants that have a clear, discernible action will add a comfortable contrast to the subtle idle.
When I was with Bungie, we would split the 1st person content into two separate types. There were performance animations, such as reloads, readies, fires, and melees. Then there were gameplay design-centric animations such as additive overlays for moving, looking, jumping, iron sights, etc. Performances were meant to showcase a personality along with the design beats. Not just the personality of the character, but also the personality of the gun. In Destiny, the loot you earned and spent resources upgrading created a bond between the weapon and player. People would have their favorite gun, spending lots of time with it. And even though we had to share a lot of content across similar weapons, we’d often create unique performance animation content for specific guns. But even the shared performances tried to define personality differences between the different gun archetypes.
What bound all the performances together was a sense of presence. We strived to make it feel like, behind that 1st person camera was you, the player. In most motions, we’d animate the camera slightly to imply a physical person driving that camera. We’d play with the lead and follow dynamic between the camera and the motion on the arms and gun. During a melee, the camera leads like a person’s head would. When firing your gun, it lags. We even considered what the eyes would be doing, not just the rotation of the head. A person’s eyes will compensate for much of their head’s motion, and it was important for us to represent this to minimize nausea.
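The lag Richard describes is often achieved by having the camera ease toward the weapon’s motion rather than matching it instantly. The sketch below shows that idea with simple per-frame damping; the smoothing factor and recoil values are invented for illustration and are not Bungie’s actual implementation.

```python
# Illustrative sketch of a camera lagging behind the gun's motion.
# The smoothing factor and recoil numbers are made up for this example.

def damped_follow(current, target, smoothing):
    """Move `current` a fraction of the way toward `target` each frame."""
    return current + (target - current) * smoothing

# Simulate the camera trailing a sudden recoil kick on the gun.
gun_pitch = 5.0      # degrees: the gun kicks up instantly when fired
camera_pitch = 0.0   # the camera starts level and catches up over frames
trail = []
for _ in range(5):
    camera_pitch = damped_follow(camera_pitch, gun_pitch, 0.4)
    trail.append(round(camera_pitch, 3))
# camera_pitch approaches 5.0 but always stays behind it, producing the lag
```

For a leading camera, as in the melee case, the same function can be driven by a target that anticipates the arm motion by a few frames instead of trailing it.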
We’d also track the focal points of a weapon as it moves through camera space. We’d make sure that motion would be lyrical and appealing, not jittery and avant-garde. Next time you’re playing Destiny, reload a sniper rifle to see what I mean. Pay attention to the camera, to the motion on the tip of the barrel, the motion of the center of mass of the gun. The camera will subtly make you feel like you’re there, while the parts of the gun paint interesting arcs on your screen without impeding gameplay space for design clarity.
I’m certain that the process of creating animation as we know it now will be obsolete within 25 years. It’s easy to look at the current state of automation and poke holes in it. Any new or evolving technology will struggle in its march towards proficiency. Learning systems currently look goofy, and completely lack “Acting”. Seeing what a character is thinking is certainly not what current learning systems are focused on. They just want a walk to look competent. But this is how the process of evolution works. Start with the basics like locomotion. Once that’s solid, continue to refine, but also start to research how to capture human intent. The human spirit. At which point, animators will shift to become directors or actors. The process of animation will no longer be about moving a hand, or rotating a spine. It will be directing when and where to glance, or what mood fits the tone. The end goal of animation can’t be found in your curves inside your graph editor. Animation exists to create a living performance that touches the human spirit. The current process is merely the best way we’re aware of right now for achieving that goal, and by far not the most efficient or intuitive method we will ever know.
It’s easy to be distracted by the here and now. Early in my career, so many people dismissed 3D characters in games, believing the initial quality was representative of the future quality. At that time, 3D characters were rudimentary, while 2D games had decades of refinement and could thrive despite the limited technology of the time. Knowing that 3D characters would allow us much more freedom as the technology evolved, and not getting hung-up on the deficiencies of the day, allowed me to be on the cusp of a new generation of game developers, helping to define it. VR/AR is in the same situation today. So many people are getting hung up on the wire tethering them to an expensive PC. Or that the headsets are face-bricks. But this is a temporary state. If you look at what VR/AR enables us to do, then look at the rate of technological advancement, isn’t it obvious where we’re headed?
Animation is such an odd profession. Given the scope of human history, it’s a trade that has only existed for a very short while, and will fundamentally change maybe as early as within my lifetime. Having had the privilege to be an animator within this very small window of opportunity is amazing luck!
Destiny was an odd situation. It has customizable player characters that mostly share one common set of animations due to technical limitations, with the only non-shared content being abilities and supers. So giving the player character a personality that fits everyone is near impossible. When we created emotes, I saw it as an opportunity for the player, the user. By “player” I’m not referring to the Destiny character here, rather the person buying and playing the game. I wanted to create actions that would represent the mood and personality of the person holding the controller, not the player character. This approach initially divided the studio, and a very spirited debate lasted for months. Some people wanted guardians to only ever be the serious defender of humanity, while others welcomed the user expression. When I pressed people for examples of what a serious guardian emote would look like, the results were not very fun or inspiring, so user expression won out!
Dance emotes such as the Thriller homage, or the Hotline Bling homage, garnered the most interest. Much like South Park’s success, this was due to the relevance of pop culture at the time. And much like Snapchat, it was a way to express yourself socially. It opened a new line of communication. Some emotes were based on mocap, while others I’d choose to keyframe entirely. If the concept for the emote was strong and a highly accurate performance was needed, I’d keyframe it. If a loose interpretation by an actor was enough, I’d use mocap. Take the Thriller homage for example. I keyframed that because it needed to look and feel like Michael Jackson, and not an edited version of some dancer imitating MJ. People aren’t dumb, and can feel that difference. The little extra work this created for me was worth it, and I believe it helped to establish the humor of it.
Blizzard has such a rich canvas. Their Overwatch characters are 100% unique, nothing shared. Their animators can really find and represent who those characters are. By making sure there’s a wide array of styles, you’re bound to capture a personality that speaks to any given user. Take Hanzo for example. Every animation, design decision, color choice, model, etc. Everything is unified, and accurately represents Hanzo. That consistency, and the variation between the different characters, is what’s so appealing. So when they make an emote, I’m sure the animator has no shortage of inspiration. They can draw from pop culture, or draw from who their character is. Or even mix both! If they ever did a Thriller homage, I’m sure each character would perform it in a way that would represent their unique personalities.