A new paper discusses how to use deep learning for character animation and control.
The study is part of Sebastian Starke's Ph.D. research at the University of Edinburgh's School of Informatics, supervised by Taku Komura. The team built a modular and stable framework for data-driven character animation, covering data processing, network training, and runtime control, implemented with Unity, TensorFlow, and PyTorch. The approach can animate biped locomotion, quadruped locomotion, and character-scene interactions with objects and environments, as well as motions in sports games.
"We present a deep learning framework to interactively synthesize such animations in high quality, both from unstructured motion data and without any manual labeling," states the abstract. "We introduce the concept of local motion phases, and show our system being able to produce various motion skills, such as ball dribbling and professional maneuvers in basketball plays, shooting, catching, avoidance, multiple locomotion modes as well as different character and object interactions, all generated under a unified framework."
You can find the project page here.