Check out a SIGGRAPH 2020 paper that presents a novel approach to character animation.
Simulating motion in dynamic environments is one of the main challenges of character animation. Motion capture data gives realistic results, but such approaches are difficult to scale to large, complex environments. Physics-based controllers handle those cases well, but offer limited control over the resulting motion.
A new paper introduces CARL, a quadruped agent that can follow high-level directives and react naturally to dynamic environments. "Starting with an agent that can imitate individual animation clips, we use Generative Adversarial Networks to adapt high-level controls, such as speed and heading, to action distributions that correspond to the original animations," the abstract states.
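To make the general idea concrete, here is a minimal, hypothetical sketch of that kind of adversarial adaptation: a generator maps the agent's state plus a high-level control signal (speed and heading) to an action, while a discriminator judges whether the action looks like one drawn from the original imitation policy. All network sizes, dimensions, and training details below are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only (not the paper's code): adapting high-level controls
# to action distributions with a GAN. Dimensions and training details are assumed.
import torch
import torch.nn as nn

STATE_DIM, CONTROL_DIM, ACTION_DIM = 64, 2, 12  # control = (speed, heading); sizes are hypothetical

# Generator: maps the agent state plus a high-level control signal to an action.
generator = nn.Sequential(
    nn.Linear(STATE_DIM + CONTROL_DIM, 256), nn.ReLU(),
    nn.Linear(256, ACTION_DIM),
)

# Discriminator: judges whether an action looks like one produced by the
# original imitation policy for that state.
discriminator = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(state, control, expert_action):
    """One adversarial update on a batch of (state, control, expert_action)."""
    fake_action = generator(torch.cat([state, control], dim=-1))

    # Discriminator update: imitation-policy actions -> 1, generated actions -> 0.
    d_real = discriminator(torch.cat([state, expert_action], dim=-1))
    d_fake = discriminator(torch.cat([state, fake_action.detach()], dim=-1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce actions the discriminator accepts as "in distribution".
    g_score = discriminator(torch.cat([state, fake_action], dim=-1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Dummy batch just to show the call; real data would come from imitation-policy rollouts.
s, c, a = torch.randn(32, STATE_DIM), torch.randn(32, CONTROL_DIM), torch.randn(32, ACTION_DIM)
print(train_step(s, c, a))
```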
Fine-tuning through deep reinforcement learning then lets the agent learn to recover from unexpected external forces while producing smooth transitions. The team states that, paired with a navigation module, this approach makes it possible to create autonomous agents for dynamic environments.
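For intuition, the fine-tuning stage can be pictured as a policy-gradient loop in which random pushes stand in for unexpected external forces and the reward encourages recovering toward a nominal state. The toy environment, reward, and REINFORCE update below are stand-ins chosen for brevity, not the authors' training setup.

```python
# Toy sketch of DRL fine-tuning under perturbations (not the paper's method).
# The dynamics, reward, and update rule here are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 4  # hypothetical sizes

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, ACTION_DIM))
log_std = nn.Parameter(torch.zeros(ACTION_DIM))
opt = torch.optim.Adam(list(policy.parameters()) + [log_std], lr=3e-4)

def rollout(steps=50):
    """Collect one episode; random 'pushes' stand in for unexpected external forces."""
    state = torch.zeros(STATE_DIM)
    log_probs, rewards = [], []
    for _ in range(steps):
        dist = torch.distributions.Normal(policy(state), log_std.exp())
        action = dist.sample()
        log_probs.append(dist.log_prob(action).sum())
        push = torch.randn(STATE_DIM) * 0.1            # unexpected perturbation
        state = 0.9 * state + 0.1 * action.mean() + push
        rewards.append(-state.pow(2).mean())           # reward recovery toward the nominal state
    return torch.stack(log_probs), torch.stack(rewards).detach()

for _ in range(200):                                   # fine-tuning loop
    log_probs, rewards = rollout()
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])   # reward-to-go
    loss = -(log_probs * (returns - returns.mean())).mean()                # REINFORCE with baseline
    opt.zero_grad(); loss.backward(); opt.step()
```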
You can find the paper here.