
DeepMotion: AI Driven Motion for Games

The devs from DeepMotion show how their new technology may revolutionize animation production for video games with AI “Motion Intelligence”.

Our editor Kirill Tokarev met with the DeepMotion team at GDC 2018 to talk about some of the interesting products they are making.

DeepMotion was founded in 2014. Our CEO, Kevin He, had over a decade of experience in gaming—having worked at companies like Blizzard, Roblox, and Disney—before starting this company to build the next generation of physics-driven gaming and animation software. The rest of the team comes from gaming, animation, and visual effects as well — companies like Ubisoft, Microsoft, Pixar, and Blizzard. We’ve used decades of experience in the field alongside recent advancements, particularly in artificial intelligence, to build software that addresses major pain points in both 2D and 3D character animation. With this technology, we’ve focused on a mission to build a platform for “Motion Intelligence”: a digital character’s ability to learn how to drive its body to perform complex motor skills in a flexible, natural way.

“Motion Intelligence” is observable in the real world. For example, human motion is an intricate coordination between the brain and over two hundred bones, joints, and muscles; as babies learn to crawl, stand, and walk, these muscles are activated and deactivated by the cerebellum to create movement, following months of imitation, trial, and error. Motion skills begin to take shape when the child tries to achieve a simple goal, like getting to her mother across the room, and her brain learns locomotion to achieve that goal.

Our technology draws inspiration from the process of babies learning to coordinate and regulate their musculature in accordance with goals. Just like the baby, a digital character whose musculature is comprehensively physically modeled can learn motor skills through mimicry, in concert with the process of learning how to optimally achieve goals like staying upright or moving between two points. In short, we’ve created a digital cerebellum for virtual characters.

What is a Motion Intelligence platform? In a nutshell, it’s a cloud-based machine learning pipeline that trains digital actors to perform complex motor skills like parkour, dancing, athletics, and martial arts. Our platform will store, index, and give users access to a massive repertoire of digital motor skills. We have SDKs supporting both Unity and Unreal to ensure the pipeline is compatible with typical game development workflows.

The foundation of our technology stack is a physics engine optimized for real-time articulated rigid-body physics. Users who build mechanical or robotics simulations rely heavily on this “articulated physics engine”, since it can handle collision and multi-joint simulation.
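To give a rough sense of what “articulated” means here, the sketch below integrates a toy chain of hinge joints in reduced coordinates under gravity and damping. It is a deliberately simplified illustration (independent joints, no contact or inter-link coupling), not DeepMotion’s engine.

```python
import math

# Toy reduced-coordinate articulated chain: each hinge joint has one angle.
# This ignores inter-link coupling and contact, so it only illustrates
# integrating joint state, not a faithful articulated-body solver.
class HingeJoint:
    def __init__(self, angle=0.0, damping=0.5):
        self.angle = angle        # radians
        self.velocity = 0.0       # radians per second
        self.damping = damping    # simple viscous damping

    def step(self, torque, dt):
        # Semi-implicit Euler: update velocity first, then position.
        self.velocity += (torque - self.damping * self.velocity) * dt
        self.angle += self.velocity * dt

def simulate(chain, steps=240, dt=1.0 / 60.0, gravity=9.81):
    for _ in range(steps):
        for joint in chain:
            # Gravity torque pulls each link toward hanging straight down.
            torque = -gravity * math.sin(joint.angle)
            joint.step(torque, dt)
    return [j.angle for j in chain]

if __name__ == "__main__":
    chain = [HingeJoint(angle=0.6), HingeJoint(angle=-0.3), HingeJoint(angle=0.1)]
    print(simulate(chain))
```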

Once we built a physics engine powerful enough to handle joint articulation, we created a comprehensive biomechanical model for characters (humanoid, quadruped, hexapod, etc.). When a typical FBX file is configured to our simulation rig, we find characters can balance and walk on their own, without any keyframe animation or motion capture data. Users can adjust joint parameters, muscle strength, bone rotation, and much more to adjust a character’s style of movement. For example, we can simulate a diverse group of zombies by adjusting the joints and bones in various ways to reflect decomposition.

(Learn how to simulate 10 zombies in Unreal without keyframe animation or motion capture here.)

This form of generating basic locomotion, like walk cycles, provides a tremendous reduction in time and cost for technical animators. Difficult problems like diverse crowd simulation and non-repetitive character locomotion are easily addressed with our solution. Users can even build their own creations out of Unity colliders, or modify the body of a physically simulated character and see the effects in real time.
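As a rough illustration of how per-character joint tuning could produce a varied crowd (the zombie example above), the sketch below randomizes hypothetical stiffness, strength, and range-of-motion values per character. The parameter names are invented for illustration and do not reflect DeepMotion’s actual API.

```python
import random
from dataclasses import dataclass

# Hypothetical per-joint tuning values; the fields are illustrative,
# not DeepMotion's actual simulation parameters.
@dataclass
class JointProfile:
    stiffness: float        # how strongly the joint tracks its target pose
    muscle_strength: float  # maximum torque the "muscle" can apply
    range_of_motion: float  # fraction of the normal anatomical range

def make_zombie_profile(rng: random.Random) -> JointProfile:
    # "Decomposed" characters get weaker, stiffer, more limited joints.
    return JointProfile(
        stiffness=rng.uniform(0.3, 0.8),
        muscle_strength=rng.uniform(0.4, 0.7),
        range_of_motion=rng.uniform(0.5, 0.9),
    )

def build_crowd(size: int, seed: int = 42):
    rng = random.Random(seed)
    return [make_zombie_profile(rng) for _ in range(size)]

if __name__ == "__main__":
    for i, profile in enumerate(build_crowd(10)):
        print(f"zombie {i}: {profile}")
```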

Most importantly, these physically simulated characters are interactive. Because these characters simulate biomechanical bodies, they respond naturally to environmental stimuli like force and terrain changes. This has huge implications for interactive content.

We can also control physically simulated characters with trackers on standard VR rigs to create Avatars that exhibit natural full-body locomotion. Unlike typical IK solutions, our physics-based solution infers lifelike lower-body movement and realistic limb rotation. Inverse kinematic algorithms approximate limb placement using geometry, leaving room for strange rotations in the legs and elbows; our robotic model constrains joints to solve this issue altogether, even with the limited data points provided by VR headsets and hand controllers. Again, this has huge implications for increasing immersion in XR and social games. Our articulated physics engine, biomechanically simulated characters, and the VR Avatar rig are available to early adopters as part of our closed alpha for DeepMotion Avatar.
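To illustrate the difference described above, the snippet below clamps an elbow angle proposed by a purely geometric IK solve to an anatomically plausible range. This is a much simpler stand-in for the constrained joint model the interview describes, and the limits are made up.

```python
# A purely geometric two-bone IK solve can return any elbow angle that
# satisfies the hand position, including poses a human arm cannot reach.
# Constraining the joint, as a physics-based rig does, rules those out.
ELBOW_MIN_DEG = 0.0     # fully extended (illustrative limit)
ELBOW_MAX_DEG = 150.0   # typical flexion limit (illustrative)

def constrain_elbow(raw_ik_angle_deg: float) -> float:
    """Clamp an IK-proposed elbow angle to a plausible anatomical range."""
    return max(ELBOW_MIN_DEG, min(ELBOW_MAX_DEG, raw_ik_angle_deg))

if __name__ == "__main__":
    for proposed in (-30.0, 45.0, 170.0):
        print(proposed, "->", constrain_elbow(proposed))
```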

True Motion Intelligence can’t be realized with a physical body alone. As in the baby example, a digital character needs an intelligent motor controller that learns and adapts. To achieve this, we built a proprietary algorithm using recent advancements in machine learning and deep reinforcement learning. These techniques allow us to train the physically simulated character to optimize a variety of goals: to maintain balance, to hit various targets, and to maintain body positioning that resembles training data. The training data is either a second of motion capture footage or keyframe animation.
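The usual shape of such an objective in deep reinforcement learning is a per-step reward that mixes task terms (balance, reaching targets) with an imitation term comparing the character’s pose to the reference clip. The sketch below shows that general shape with invented weights and terms; the actual objective DeepMotion trains against is not public.

```python
import math

# Schematic reward for training a physically simulated character.
# Weights and terms are illustrative only.
W_BALANCE, W_TARGET, W_IMITATE = 0.3, 0.3, 0.4

def reward(state, reference_pose):
    # Balance term: penalize the torso tilting away from upright.
    balance = math.exp(-abs(state["torso_tilt"]))

    # Task term: penalize distance to the current navigation target.
    target = math.exp(-state["distance_to_target"])

    # Imitation term: penalize deviation from the reference pose
    # (e.g., a short motion capture or keyframed clip).
    pose_error = sum(
        (a - b) ** 2 for a, b in zip(state["joint_angles"], reference_pose)
    )
    imitation = math.exp(-pose_error)

    return W_BALANCE * balance + W_TARGET * target + W_IMITATE * imitation

if __name__ == "__main__":
    example_state = {
        "torso_tilt": 0.1,
        "distance_to_target": 2.0,
        "joint_angles": [0.2, -0.1, 0.4],
    }
    print(reward(example_state, reference_pose=[0.25, -0.05, 0.35]))
```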

Over the course of training, the character develops the ability to perform the desired skill. What is truly exceptional is that these skills are still constrained by the physical model, meaning characters can still be interacted with and interrupted, and the behaviors can be parameterized within a physically reasonable scope. For example, a running character will exhibit emergent behaviors, like stumbling and stumble recovery, in order to maintain balance when faced with obstacles in the environment. We’ve also used machine learning to integrate learned behaviors into a motion map, which allows for fluid transitions between behaviors.
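One way to picture such a motion map is as a graph of learned skills with allowed transitions between them. The toy structure below is an assumption about how a map like that could be organized, not DeepMotion’s data format.

```python
# Toy "motion map": learned skills as nodes, allowed transitions as edges.
# Skill names and transitions are illustrative only.
MOTION_MAP = {
    "idle":     {"walk", "run"},
    "walk":     {"idle", "run", "vault"},
    "run":      {"walk", "vault", "backflip"},
    "vault":    {"run", "walk"},
    "backflip": {"idle"},
}

def can_transition(current: str, requested: str) -> bool:
    """Return True if the requested skill is reachable from the current one."""
    return requested in MOTION_MAP.get(current, set())

if __name__ == "__main__":
    print(can_transition("run", "backflip"))  # True
    print(can_transition("idle", "vault"))    # False
```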

All of these skills are compatible with gameplay code. Characters can achieve high-level objectives using learned skills, like “get from point A to point B”, “never collide with another character”, and other types of motion planning; or individual skills can be triggered by commands like “run”, “vault”, or “backflip”. Our AI-driven Motion Intelligence product, DeepMotion Neuron, is the next generation of Avatar and will be released this summer. You can sign up for updates on the Neuron release here.
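At the gameplay layer, the two usage styles described above (high-level goals versus directly triggered skills) could look something like the hypothetical controller below. The class and method names are assumptions for illustration and are not the DeepMotion SDK.

```python
# Hypothetical gameplay-facing wrapper around a trained character.
# Neither the class nor its methods correspond to the actual DeepMotion SDK.
class MotionController:
    def __init__(self, character_id: str):
        self.character_id = character_id
        self.active_skill = "idle"

    def set_goal(self, goal: str, **params):
        # High-level objective, e.g. "move_to" with a destination;
        # a motion planner would decide which learned skills to chain.
        print(f"{self.character_id}: planning for goal {goal!r} with {params}")

    def trigger(self, skill: str):
        # Direct skill command, e.g. "run", "vault", "backflip".
        self.active_skill = skill
        print(f"{self.character_id}: performing {skill!r}")

if __name__ == "__main__":
    npc = MotionController("guard_01")
    npc.set_goal("move_to", target=(12.0, 0.0, -3.5), avoid_collisions=True)
    npc.trigger("vault")
```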

We believe physics-based procedural character animation, i.e. Motion Intelligence, is the future of rapid prototyping and cost-effective, high-fidelity character animation, and that it opens the door to interactive content and more open-ended forms of narrative. Our tools are being used by game industry veterans, visual effects artists, roboticists, AR/VR developers, and industrial educators. These tools will also democratize AAA game design for small studios and indie developers who could never afford, say, a crowd simulation team or more than $10,000 a day at a mocap studio. There is a lot of speculation about who wins and who loses as labor is automated by intelligent systems; we see our form of automation as an enabler of more efficient creative design for all developers.

Physics-based skeletal animation and AI-driven walk cycles are also transformative for 2D character animation. Our 2D software, Creature, provides user-friendly physics and force motors to automate complex secondary motion, like hair or cloth movement.
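Secondary motion like hair or cloth follow-through is commonly driven by spring-damper “motors” that lag behind the primary motion. The snippet below is a generic one-dimensional version of that idea, not Creature’s implementation.

```python
# Generic spring-damper follower: a strand point lags behind its parent,
# which is the usual basis for automated secondary motion. Illustrative only.
def follow(parent_pos, pos, vel, stiffness=40.0, damping=6.0, dt=1.0 / 60.0):
    accel = stiffness * (parent_pos - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel

if __name__ == "__main__":
    parent, pos, vel = 0.0, 0.0, 0.0
    for frame in range(10):
        parent = 1.0  # the character's head moves; the hair point follows late
        pos, vel = follow(parent, pos, vel)
        print(f"frame {frame}: hair point at {pos:.3f}")
```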

We also use AI to auto-rig characters and create walk-cycle templates. Creature was developed by our Technical Director, Jiayi Chong, after nearly a decade at Pixar, to simplify 2D animation while also elevating quality.

Coming from a long history in 3D visual effects, Jiayi has included features in the tool that will appeal to animators working in both 2D and 3D. Users can create 2D characters out of 3D characters, as well as transfer motion capture data to a 2D character rig.

80 Level users can use the code LVL80 to get a 30-day free trial of Creature, here!

The team of DeepMotion

Interview conducted by Kirill Tokarev
