Cascadeur: The Future of Game Animation

Evgeniy Dyabin talked about the advanced features of the new animation tool.

Evgeniy Dyabin, founder of Cascadeur, a physics-based animation tool, talked about how it works and how it upgrades the animation pipeline in game production.

Introduction

80lv: Hi, Evgeniy! Could you please introduce yourself to our audience? Where do you work, what do you do, what are your main tasks?

I am the owner and chief producer of Banzai Games, a game development studio behind Nekki projects. You might know one of our mobile fighting games, Shadow Fight 3. Our office is located in Moscow. I work on game design, participate in developing game mechanics, and take part in making all the key decisions. Animation has interested me the most for a long time, and I happen to have some experience in it, so I also pay a lot of attention to this stage of production, which plays a big role in our games. Over the years, our animators and I have developed a special approach to creating animations in our in-house software. This is how our animation software solution, Cascadeur, came into being.

Current State of Game Animation Technologies

80lv: Let’s talk about the current state of animation tech. It seems like most current games have VERY advanced animation, but the majority of it is done either in Maya with simple keyframes or recorded and captured from actors. What are the limitations of these approaches? Why do even the simplest animation tasks turn out to be complicated?

Animations in games are very advanced, and the animators creating them are very skilled. But no matter how skilled you are, it is extremely hard to create something as simple as a believable animation of a falling dice with keyframes. Anyone would easily tell a handmade dice animation from a real-life one. The main limitation of keyframe animation is that you can’t make it physically believable for the viewer.


Motion capture sure looks physically believable, but there are a lot of limitations coming from the fact that it has to be done in real life. Mocap isn’t always satisfactory, especially in games. You need more control over animation: precise timings and motion parameters such as height and distance of jumps – all this is really hard to do with an actor, and some motions are just impossible for a human. For example, you can find a martial artist and capture a good-looking jump kick. But when you have to adapt this animation for the gameplay or decide to make the kick more powerful, you’ll just break the laws of physics, and the movement won’t look so convincing anymore.

Also, in games, you can often encounter creatures whose movements aren’t that easy to capture from a human actor.

Technology Behind Cascadeur

80lv: Let’s talk about the way your tool changes the workflow. First of all, what lies at the core of this technology? Do you incorporate the popular Disney principles in your tool?

The first thing we add during animation production is the physical skeleton. It is connected to the character’s rig, so by creating your animation you also animate the movement of the rigid bodies. With this information, our tools can analyze the animation and tell you what physical characteristics your pose or motion has.
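To illustrate the basic idea (this is my own simplified sketch, not Cascadeur’s actual code): if each body part is approximated by a rigid body with a known mass and position, the character’s overall center of mass is simply the mass-weighted average of the parts, which a tool can then display and analyze for any pose.

```python
import numpy as np

# Hypothetical per-segment data: name -> (mass in kg, world-space position of the
# segment's own center of mass). In a real tool these would come from the rig.
segments = {
    "pelvis":    (11.0, np.array([0.00, 1.00, 0.00])),
    "torso":     (25.0, np.array([0.00, 1.35, 0.02])),
    "head":      ( 5.0, np.array([0.00, 1.65, 0.03])),
    "left_leg":  ( 9.5, np.array([-0.10, 0.55, 0.00])),
    "right_leg": ( 9.5, np.array([ 0.10, 0.55, 0.00])),
    "left_arm":  ( 3.5, np.array([-0.25, 1.30, 0.05])),
    "right_arm": ( 3.5, np.array([ 0.25, 1.30, 0.05])),
}

def center_of_mass(segments):
    """Mass-weighted average of the segment positions."""
    total_mass = sum(m for m, _ in segments.values())
    weighted = sum(m * p for m, p in segments.values())
    return weighted / total_mass

print(center_of_mass(segments))  # overall center of mass for this pose
```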

We also have tools that “fix” physical properties in animation. For example, your character is performing a somersault. During the jump, the angular momentum should stay constant. If you do not comply with this constraint, the somersault will not look believable. Our tool tries to find a solution with constant angular momentum. To find these solutions, a big part of the physics simulation is carried out internally.
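As a rough illustration of what “constant angular momentum” means in practice (a simplified sketch that treats each body part as a point mass, not the actual Cascadeur algorithm), one could check the airborne frames of an animation like this:

```python
import numpy as np

def angular_momentum(masses, positions, velocities):
    """Angular momentum about the center of mass, with body parts
    approximated as point masses (a deliberate simplification)."""
    total_mass = masses.sum()
    com = (masses[:, None] * positions).sum(axis=0) / total_mass
    com_vel = (masses[:, None] * velocities).sum(axis=0) / total_mass
    rel_pos = positions - com
    rel_vel = velocities - com_vel
    return (masses[:, None] * np.cross(rel_pos, rel_vel)).sum(axis=0)

def check_flight_phase(masses, frames, fps=30.0, tolerance=0.05):
    """frames: (num_frames, num_parts, 3) world positions while airborne.
    Flags frames whose angular momentum drifts from the first frame's value."""
    dt = 1.0 / fps
    velocities = np.gradient(frames, dt, axis=0)  # finite-difference velocities
    reference = angular_momentum(masses, frames[0], velocities[0])
    for i in range(len(frames)):
        L = angular_momentum(masses, frames[i], velocities[i])
        drift = np.linalg.norm(L - reference) / (np.linalg.norm(reference) + 1e-9)
        if drift > tolerance:
            print(f"frame {i}: angular momentum drifts by {drift:.1%}")
```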

This kind of approach lets the animators do the whole animation in the software and make it physically realistic.

As for Disney’s twelve principles of animation – if we look closely, we’ll see that many of them are actually about physics. When animators create physical behavior without access to precise simulation, they have to hyperbolize and exaggerate physical laws, and this results in expressive, yet cartoony motions. Amazingly enough, if you abide by the laws of physics, many of Disney’s principles turn out to be simple consequences of inertia. But there is an important difference – with accurate physics simulation, we can avoid cartoon-style movements.

Simulation vs. Keyframe Animation

80lv: In your talk, you’ve mentioned that a lot of realistic animations for hair, cloth, and muscles are simulated. Then why do we still use keyframes, not only in cartoon production but in AAA games as well? Even after everything is captured, artists still set everything up by hand.

Usually, simulation is only done for “secondary” motion: you have some animation of the character and you want to know how their hair and clothes would behave throughout it. This secondary animation does not affect the main character animation.

The animation of the character’s body, however, is affected by all of the body parts. In real life, if you swing your hand, it will affect the rotation of the whole body. That’s why it is so tricky to change a motion-captured animation without ruining its believability.

Yet you are still doomed to make changes to the mocap animation, because there is always some tweaking you want to apply during production. Also, there are a lot of constraints in the game that you might not be able to cover via mocap. For example, you might want the character to jump higher than is possible for a human. Or maybe you have a fighting game (like our Shadow Fight 3) and you need all your animations, including the jumps, to be fast. Due to gravity, real jumps will always be pretty slow, so in Shadow Fight 3 all the jump animations are done with twice the Earth’s gravity.
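A quick back-of-the-envelope check shows why (illustrative numbers, not from the interview): a ballistic jump that peaks at height h stays in the air for t = 2·√(2h/g), so doubling gravity shortens the airtime by a factor of √2 ≈ 1.4 while the arc remains physically consistent.

```python
import math

def jump_airtime(height_m, gravity=9.81):
    """Total airtime of a ballistic jump that peaks at height_m meters."""
    return 2.0 * math.sqrt(2.0 * height_m / gravity)

h = 1.5  # a 1.5 m jump, an illustrative value
print(f"Earth gravity:  {jump_airtime(h):.2f} s")            # ~1.11 s
print(f"Double gravity: {jump_airtime(h, 2 * 9.81):.2f} s")  # ~0.78 s
```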

The main problem in character animation production is the animator’s control over the outcome. If someday we build a neural network capable of creating and simulating character movements, we’ll be left with a question: how do we make it produce a particular motion or stunt for our needs, and how do we explain to this network what we want? We need a hybrid approach: an animator should have tools for editing animations, while an intelligent system should help make the animation correct and convincing. At the moment, our system only works with physics, but we don’t plan to stop there.

Importance of Physics

80lv: What are the troubles with simulating bipedal human figures? What makes them so complex and how can we deal with the uncanny valley? How does your software help here?

Bipedal postures have, on average, fewer fulcrum points at any given moment, which means you have to pay more attention to keeping balance. And to do that realistically, you have to take physics into account.
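For a static pose, the simplest balance check (again, a generic sketch of the idea rather than Cascadeur’s implementation) is whether the center of mass, projected onto the ground, falls inside the support polygon spanned by the contact points:

```python
import numpy as np

def in_convex_polygon_2d(point, polygon):
    """True if a 2D point lies inside (or on the edge of) a convex polygon.
    The vertices must be listed in order around the polygon; either winding works."""
    signs = []
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        cross = (b[0] - a[0]) * (point[1] - a[1]) - (b[1] - a[1]) * (point[0] - a[0])
        signs.append(cross)
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

def is_statically_balanced(com_xyz, contact_points_xyz):
    """Project the center of mass onto the ground plane (x, z) and test it
    against the support polygon spanned by the contact (fulcrum) points."""
    com_2d = np.array([com_xyz[0], com_xyz[2]])
    polygon = [np.array([p[0], p[2]]) for p in contact_points_xyz]
    return in_convex_polygon_2d(com_2d, polygon)

# Example: a character standing on both feet (corner points of the foot area)
contacts = [(-0.12, 0.0, 0.10), (0.12, 0.0, 0.10), (0.12, 0.0, -0.15), (-0.12, 0.0, -0.15)]
print(is_statically_balanced((0.0, 1.0, 0.0), contacts))  # True: CoM over the feet
print(is_statically_balanced((0.5, 1.0, 0.0), contacts))  # False: leaning too far
```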

There are some cool tricks a human can’t do. Take, for example, a cat’s ability to always land on its feet. A human can’t do that. And again, to understand why it happens that way, you have to know the laws of physics behind it.

Of course, a cat doesn’t know the physics, but it does know how to use its muscles to land on its feet. Just like the cat, we all know what behavior to expect from objects when they are falling, colliding, and so on. In our toddler years, all of us saw how objects behave and how they move, and this knowledge helps us predict how to catch a falling bottle or how to throw a Frisbee.

This is why it is so hard to trick a viewer when creating animation. If you break the laws of physics even for the simplest objects, the person will know: this is not real. Fortunately, carrying out a physics simulation is relatively easy for simple objects, but it is much more complicated for a character. And that’s why mocap is so important: the motion-captured person moves in real life and thus behaves according to the laws of physics. But it works well only for bipedal figures.

If we want to overcome the uncanny valley and make the viewer believe in our animation, not only does it have to look aesthetically pleasing, it also must have correct physics. Animators are really good at poses and they handle the overall movement well, but keeping the physics correct is close to impossible. Our software is aimed at helping the user with exactly that.

Pipeline with Cascadeur

80lv: Your product does look like magic. Could you tell us a bit about the way it works and how you use it in your pipeline?

What we have right now in terms of technology isn’t magic at all. I’d even say I am surprised that tools like those in Cascadeur are not included by default in most 3D packages intended for animation work.

Our pipeline looks like this:

1. The animator creates a draft of the animation: keyframes and interpolations. At this point, you can already see the center of mass and its trajectory.
2. The animator marks the fulcrum points for the character and, using our tools, creates a physically accurate trajectory for the center of mass and the correct rotation for the character.
3. Our algorithms try to preserve the poses set in the draft; however, to achieve the desired results, you’ll have to manually make some changes and see how they affect the resulting animation.

Most importantly, no matter how much we edit and improve the animation, physical accuracy can always be preserved.

Cascadeur might look like magic someday, but only if we are able to automate all these tools and achieve physically accurate solutions without any effort. For now, these are just instruments, and you have to learn how to use them.

Simulation of the Movements

80lv: How is the simulation built? Do you have a base of movements? Do you simulate the entire physics of the body’s movement? For example, do you assign every part of the body certain physical properties, give it an impulse and a vector, and see what happens?

As I’ve already mentioned, the animator starts by creating a draft. With this draft, we have character poses for every keyframe, plus the interpolation between them. If we simply changed these poses further, the animation would most likely get distorted. So we have to solve a reverse problem.

For example, if the character jumps, we know from what position the jump starts and where it ends. The animator has the option to point out important in-between frames. After this, our algorithm moves and rotates the character in every frame so that the center of mass moves along a ballistic trajectory and angular momentum is preserved – all while keeping the end result as close to the draft as possible. The algorithm tries to preserve every pose. When there are fulcrum points to take into account, the problem is a bit more difficult, but the idea is the same.
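To make the “reverse problem” more concrete, here is a toy version of the flight phase (my own illustration, not the actual algorithm): fit the closest ballistic curve to the draft’s center-of-mass positions, then compute how far each frame has to shift so the center of mass lands on that curve.

```python
import numpy as np

def fit_ballistic_com(draft_com, fps=30.0, gravity=9.81):
    """draft_com: (num_frames, 3) center-of-mass positions from the draft
    while the character is airborne. Returns physically correct positions
    (constant-velocity motion in x/z, a parabola under gravity in y), each
    fitted to the draft in the least-squares sense, plus per-frame offsets."""
    t = np.arange(len(draft_com)) / fps
    corrected = np.empty_like(draft_com)

    # Horizontal axes: straight lines, since no forces act horizontally in flight.
    for axis in (0, 2):
        coeffs = np.polyfit(t, draft_com[:, axis], deg=1)
        corrected[:, axis] = np.polyval(coeffs, t)

    # Vertical axis: y(t) = y0 + v0*t - 0.5*g*t^2; fit only y0 and v0.
    target = draft_com[:, 1] + 0.5 * gravity * t ** 2  # move the known term over
    A = np.stack([np.ones_like(t), t], axis=1)
    (y0, v0), *_ = np.linalg.lstsq(A, target, rcond=None)
    corrected[:, 1] = y0 + v0 * t - 0.5 * gravity * t ** 2

    # Offsets that would shift the character onto the ballistic curve per frame.
    return corrected, corrected - draft_com
```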

The animator can change the animation and see how it affects the character’s flight. As we get closer to the desired result, there are fewer and fewer nontrivial fluctuations in physics, and we can improve and refine the animation as much as we want.

Intelligent Assistance

80lv: Let’s talk about the further development of your intelligent system. Do you have a neural network that learns all your movements? How would that system be different from the deep learning stuff that Deepmind makes for traversal AI? Do you think they could complement each other?

There is a traditional approach where you have a physical model that includes muscles controlled by a neural network. You set a goal for the neural network to achieve – for example, keeping balance, covering the maximum distance or, if you want to build a tool, getting as close to the given keyframe data as possible. This approach is theoretically very promising, and in some distant future it might be widely used. But currently, the time a neural network needs to produce something we would call animation is just too long, and the quality is nowhere near good enough for real animation production, although one must admit some interesting technologies already exist.

The main difference in our approach is that we want animators to retain full control over the end result of their work. Whatever our intelligent system might do, it only suggests improvements to an already existing animation. For now, it’s only physics. But we are researching how we can help animators with poses and body part trajectories, for example by training a neural network to distinguish a convincing, lifelike pose from a stiff, unnatural one.

We call this concept the Green Ghost. Here is how it might look: as the animator starts creating key poses, the intelligent assistant figures out where the fulcrum points should be placed, where jumps start and end, their heights, and so on. This system calculates not only correct physics but also natural poses and trajectories for every body part. The animator sees this as a green ghost existing in parallel with the manually created animation. Perhaps, right from the beginning, you’ll see a good result – then you can just copy it. But if it is not what you want, simply continue working, adding keys, changing poses – and the ghost will take all the changes into account. You can always copy poses and fragments you like from the ghost. This is how the human mind and AI can work together, moving closer to each other.

This approach can save a lot of time. The general idea of an animation is usually clear from only a handful of frames, and 90% of an animator’s time is spent on routine polishing rather than on the creative process.

Distribution

We have just launched our closed beta test. You can sign up on cascadeur.com, and we will send you the package within two weeks so you can try out the software. Rigging is not included yet, as our tools are not ready for public use, but if you do want to use Cascadeur in your pipeline, we are open to discussion. Right now we mostly offer to rig your skeleton for you so that you can try Cascadeur on your own model. As the program is not yet released, it is not possible to purchase it.

Currently, we support FBX export. Unity and Unreal both support it too, so using Cascadeur with them is certainly possible. In the future, we hope to add integration with more game engines.

Right now we are focused mostly on action animation, but our goal is to cover the full animation toolset in Cascadeur so that you can create animations of any complexity from scratch.

Evgeniy Dyabin, chief producer and founder of Cascadeur and Banzai Games.

Interview conducted by Kirill Tokarev
