Ubisoft Toronto technical art director Alexander Bereznyak talked about the innovative animation technology IK Rig. This solution lets developers build incredibly intricate animations for various characters automatically. Check out the videos. It’s pure magic.
My passion for making games started with 3D Studio Max – a gift from a friend of mine. I was born in Ukraine, and learning the English version of a professional 3D software package was pretty much like learning astrophysics in Aramaic. Solving this puzzle became an obsession; it was better than any book or video game. I wasn’t just learning 3D Studio Max; I was discovering a new method of making art. And I still have my wonderful renders of a chrome sphere over a checkered field to remind me of my obsession.
So I made the decision to pursue a career in games. I joined an outsourcing company based in Ukraine. I moved around between several different studios, taking on new roles as a modeller, texture artist, animator, rigger, lead, and finally, Art Director. I worked on a number of titles throughout that time, including Overlord, NBA Live 2006, Need For Speed Underground 2, Everquest 2, Pirates of the Caribbean: The Legend of Jack Sparrow, and Showtime Championship Boxing. This experience was invaluable, because I was able to learn many different art styles as well as many different management styles. My last role in Ukraine was shipping Metro: Last Light, before I decided to take my skills to the Ubisoft Toronto studio, where I accepted a role as Technical Art Director.
At Ubisoft Toronto, I’m working on an unannounced project. But since joining Ubisoft, I’ve also invested time in developing IK Rig, a systemic animation system created for use by any project across Ubisoft. One of the greatest things about Ubisoft is its investment in tech innovation to support projects around the world, even while shipping blockbuster AAA games.
Problems with Animation
I remember when Motion Capture started to gain ground in game development. Back in those days, many of my animator friends feared this technology would make their roles obsolete. Of course, what actually happened was that each game shipped with more and more animations, increasing the demand for animators tenfold.
Motion capture is brilliant, and it led to an increased demand for a variety of character motions that is difficult to satisfy. More and more animations are being put into games. While the numbers of animations seem immense, it’s never enough: the set of motions you can create grows linearly, while the set of motions you need to create grows exponentially. One of the examples I use is: imagine you want your character to crouch, while limping, and holding a gun, while walking up the stairs. You can capture this specific motion; but you can’t possibly capture every possible combination of different guns, different stairs, and different characters.
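The combinatorial explosion above is easy to see with a rough back-of-the-envelope sketch. The variation axes and counts here are hypothetical, chosen only to illustrate why captured clips grow multiplicatively while authored rules grow additively:

```python
from itertools import product

# Hypothetical variation axes for a single base motion ("walk").
stances = ["stand", "crouch", "limp"]
props = ["none", "pistol", "rifle", "pizza_box"]
terrain = ["flat", "stairs_up", "stairs_down", "slope"]
characters = ["soldier", "civilian", "giant"]

# Capturing every combination as its own mocap clip:
combinations = list(product(stances, props, terrain, characters))
print(len(combinations))  # 3 * 4 * 4 * 3 = 144 clips, for ONE base motion

# A systemic approach authors one rule per axis value instead, and
# layers them on a single base clip; the authored set grows with the
# sum of the axes, not their product.
print(len(stances) + len(props) + len(terrain) + len(characters))  # 14 rules
```

Add a fifth axis (say, injury states) and the clip count multiplies again, while the rule count only grows by a handful.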
The solution I am working on, IK Rig, focuses on generating animations in realtime; basically having a solid set of motions that you can procedurally change to potentially create a limitless set of combinations for your characters. We’ve developed a way to turn a walking animation into crouching; a way to teach characters to carry props; a way to change a basic walk into climbing ladders, limping, crawling and so on. The application of this solution means you get your crouch/limp/gun/stairs created instantly from a base walking animation. We can now also change the scale of characters and their proportions to give them specific attributes like strength and health, and reflect these attributes in their motion style.
Hence the word “systemic”: with IK Rig, we don’t try to predict every possible combination, but instead generate motions on the fly.
I always have mixed feelings explaining how things work; much like an illusionist dislikes explaining how simple his tricks actually are.
There is really no sorcery; the idea for IK Rig evolved from my personal experience as an animator. Every new “rule” I create starts with a mental experiment, asking the question “how would I do this by hand?”
For example, to carry a pizza box, I would place an object in front of the chest, constrain the hands to the box, and add some slight bouncing to show the weight of the box.
The next step is to realize: “but I can do this with code!”. And the fun part is that in code, we can add variables, such as shifting the weight of the box, or changing the character’s strength, and link the effects to the end motion. Now the character can carry a pizza, or a concrete slab; this is how we can scale characters, reflect their health, and so on, adding variety of movement to our games.
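The carry example above can be sketched in a few lines. This is a minimal illustration of the idea, not Ubisoft’s actual code: the function name, offsets, and the weight-to-bounce relationship are all assumptions. It produces hand IK targets on a prop placed in front of the chest, with sag and bounce scaled by the prop’s weight relative to the character’s strength:

```python
import math

def carry_rule(chest_pos, weight, strength, time_s,
               box_offset=0.35, bounce_hz=1.8):
    """Hypothetical IK-Rig-style 'carry' rule.

    Places a prop in front of the chest and returns hand IK targets on
    its sides; sag and vertical bounce grow with the prop's weight
    relative to the character's strength.
    """
    load = weight / max(strength, 1e-6)  # heavier loads sag and bounce more
    sag = 0.05 * load
    bounce = 0.02 * load * math.sin(2.0 * math.pi * bounce_hz * time_s)

    x, y, z = chest_pos
    box_center = (x, y - sag + bounce, z + box_offset)

    half_width = 0.2  # hands grip the sides of the box
    left_hand = (x - half_width, box_center[1], box_center[2])
    right_hand = (x + half_width, box_center[1], box_center[2])
    return box_center, left_hand, right_hand

# Same rule, different variables: a pizza vs. a concrete slab.
print(carry_rule((0.0, 1.4, 0.0), weight=1.0, strength=50.0, time_s=0.0))
print(carry_rule((0.0, 1.4, 0.0), weight=40.0, strength=50.0, time_s=0.0))
```

The point is that one rule plus a couple of exposed variables replaces an open-ended family of captured clips.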
This is one of the oldest tricks in the book; for years it’s been done this way, manually or automatically. I keep referencing a number of great developments in animation such as Endorphin, HumanIK, FinalIK, RunTime and more. There are some brilliant people at work in game development and animation; and new developments keep coming. The most promising ones focus on runtime implementation; and of those, the best ones are about driving the full body, not just hands (to carry guns) or feet (to climb stairs).
One of the first steps we took was finding the minimum set of control points. You see, across dozens of projects there are hundreds of characters and hundreds of rigs, each with its own hierarchy, scaling, naming conventions, etc. If we represent a character class (like “biped”) with a set of IK chains, or full body IK, we only deal with a small number of controllers – like target position/rotation for the hands instead of a full hierarchy.
I call this shared format the “IK Rig definition”; having such a shared format means several things:
1. if you map any rig to this format, all the animations of this rig can be transferred to this format;
2. applying modifications to a short list of IK nodes is much easier than processing a full list of bones; hence you have a lot of freedom to modify the motion;
3. these animations can then be transferred back to any other rig.
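To make the shared format concrete, here is an illustrative sketch of what such an “IK Rig definition” and a per-rig mapping might look like. The control-point names and mapping scheme are my assumptions for illustration, not the actual Ubisoft format:

```python
from dataclasses import dataclass

# A hypothetical shared control set for the "biped" character class.
BIPED_CONTROLS = ["hips", "chest", "head",
                  "left_hand", "right_hand",
                  "left_foot", "right_foot"]

@dataclass
class IKRigDefinition:
    character_class: str
    controls: list

@dataclass
class RigMapping:
    """Maps one project's skeleton joints onto the shared definition."""
    definition: IKRigDefinition
    joint_map: dict  # control name -> joint name in this particular rig

    def to_shared(self, pose):
        """Project a full-skeleton pose (joint -> transform) down to
        the small set of shared control points."""
        return {c: pose[self.joint_map[c]] for c in self.definition.controls}

biped = IKRigDefinition("biped", BIPED_CONTROLS)

# A rig with its own naming convention maps onto the same definition,
# so its animations can be transferred through the shared format.
rig_a = RigMapping(biped, {
    "hips": "pelvis", "chest": "spine_03", "head": "head",
    "left_hand": "hand_l", "right_hand": "hand_r",
    "left_foot": "foot_l", "right_foot": "foot_r",
})
```

A second rig with different joint names would get its own `joint_map`, and any motion expressed in the seven shared controls could then be retargeted between the two.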
All of these steps are done directly in the engine. The mapping (1) is done the first time you import your character rig. Most of the work is creating the conditional transformations (2) – I call them “IK Rig rules” – previewing their results, mixing them, modifying, applying to different characters. The IK Rig rules are applied per frame at runtime; the resulting desired positions are fed to the IK system, which in turn drives the character bones.
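The per-frame rule pipeline described above can be sketched as a chain of small functions that edit the IK control targets before the solver runs. The rule names and magnitudes below are hypothetical; the point is only the structure of “rules in, targets out”:

```python
# Assumed structure: each "IK Rig rule" is a function that edits the
# small dict of IK control targets for the current frame.

def crouch_rule(targets, amount=0.3):
    x, y, z = targets["hips"]
    targets["hips"] = (x, y - amount, z)  # lower the hips to crouch
    return targets

def limp_rule(targets, severity=0.15):
    x, y, z = targets["left_foot"]
    targets["left_foot"] = (x, y + severity * 0.2, z)  # drag the injured leg
    return targets

def solve_frame(targets, rules):
    for rule in rules:
        targets = rule(targets)
    # In the engine, the resulting targets would be fed to a full-body
    # IK solver, which in turn drives the actual character bones.
    return targets

targets = {"hips": (0.0, 1.0, 0.0), "left_foot": (0.1, 0.0, 0.0)}
out = solve_frame(dict(targets), [crouch_rule, limp_rule])
print(out["hips"])
```

Because rules compose, crouch and limp can be layered on the same base walk without any new capture session.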
In the case of quadrupeds, there is a different IK Rig definition. Once I map characters to it, I can start sharing a dog’s movement, for instance, with horses, elephants, centaurs and more.
Our industry is always evolving, and animation in modern games is improving thanks in part to technology breakthroughs. This is exciting, because these innovations have the potential to remove limitations on creative freedom that have long restricted what’s possible for animators. One of the easiest examples is the lack of variety in character height and proportions in games: characters share a rig, because the million animations captured only work on that rig. Another great example is the lineup of players before a match of football or any other team sport: versatile and varied in a real match, but all completely identical in the video game.
Another example is our difficulties with navigation, especially on uneven terrain, and particularly if that terrain also has stairs: this is extremely expensive to solve with motion capture alone. So we see conventional foot IK systems implemented to solve this problem, but the results are merely tolerable, not stunning visually or highly accurate.
The conventional solution of “a million mocap animations plus some leg IK plus some hand IK” limits the way our characters look, it limits the way our levels are built, and – paradoxically – it limits the freedom of motion the player has to move a character through the game.
An example of how this approach applies to systemic game design: the fire starts, so the fire truck arrives. You could manually design this event (and some years ago, that’s how we did it); but if you create a system that knows to send fire trucks to fires, you have designed that series of events for your whole game. Much the same way, with systemic animation, if the player says “jump”, the character will know how to jump, regardless of whether or not there is a mocap clip for the current situation.
Already we’re looking at a whole set of limitations that can be removed, many of them not even directly connected with animation: limitations in level design, game design, even character design. Removing them opens up new possibilities for game developers.
The Future of IK Rig
The fun part about the next steps for IK Rig is that many of them will not be taken by me. Systemic animation is not a set of lines of code, but a philosophy; and as such, it can be built into and adapted to any game. While I can’t share exact timing, we will continue to incorporate and evolve the tech here at Ubisoft for our upcoming game releases.
I’ve done two talks on the IK Rig tech: the tech reveal at Nucl.AI 2015, and the “Moving Forward” talk at GDC 2016. I hope I’ve shared enough on the principles at work for others to tackle the technology and its solution. I am always available to answer any questions and to clarify my position. I am always happy to share and learn with colleagues from across the industry.
It’s my opinion that if something can be automated, perhaps humans weren’t meant to do it in the first place. IK Rig is developed to help animators create more content by taking care of the low-level, repetitive work. Our hope is that this will free up our animators to focus on pushing the industry forward and taking creative risks – imagining new animation prototypes, creating more beautiful, emotional cinematics, designing special animation cases, and more that we haven’t even thought of.
And we will always need great actors to deliver amazing performances for our games, but the way we capture these performances will evolve beyond recognition. We tend to rely less on hardware and more on software.
The big thing to happen will be a full-blown procedural animation solution. There is a lot to be resolved in the field of physics: teaching characters to keep their balance, advancing plausibility and realism, and conveying characters’ awareness of their actions. And there is a lot that can be done in the area of character psychology. Virtual characters today are not just about locomotion, but about how they act and how they embody a personality. This demands a next level of AI; and this is a whole new challenge in itself.
It’s not a secret there are people playing with neural networks. I hope we’ll have time to enjoy the results in our games before the neural networks start playing with people.