Phase-Functioned Neural Networks for Character Control

Daniel Holden studies a new kind of neural network that can be used to create a character controller suitable for games.

Daniel Holden, a researcher at Ubisoft Montreal, will present Phase-Functioned Neural Networks for Character Control at this year’s SIGGRAPH. The paper introduces a new kind of neural network, called a “Phase-Functioned Neural Network,” that can be used to build a character controller suitable for games. The controller requires very little memory, is fast to compute at runtime, and generates high-quality motion in many complex situations.

This work could push the boundaries of character control in games. You can watch the accompanying video below:

Abstract

We present a real-time character control mechanism using a novel neural network architecture called a Phase-Functioned Neural Network. In this network structure, the weights are computed via a cyclic function which uses the phase as an input. Along with the phase, our system takes as input user controls, the previous state of the character, the geometry of the scene, and automatically produces high quality motions that achieve the desired user control. The entire network is trained in an end-to-end fashion on a large dataset composed of locomotion such as walking, running, jumping, and climbing movements fitted into virtual environments. Our system can therefore automatically produce motions where the character adapts to different geometric environments such as walking and running over rough terrain, climbing over large rocks, jumping over obstacles, and crouching under low ceilings. Our network architecture produces higher quality results than time-series autoregressive models such as LSTMs as it deals explicitly with the latent variable of motion relating to the phase. Once trained, our system is also extremely fast and compact, requiring only milliseconds of execution time and a few megabytes of memory, even when trained on gigabytes of motion data. Our work is most appropriate for controlling characters in interactive scenes such as computer games and virtual reality systems.
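To make the core idea concrete, below is a minimal sketch (not the authors’ code) of how a phase-functioned forward pass might look: several sets of network weights are stored, and a cyclic function of the phase blends them into the weights actually used for the current frame. The use of a cyclic Catmull-Rom spline over four control weight sets, the ELU activations, and the layer sizes are assumptions made for illustration; the real system is trained on large motion datasets and takes user controls, character state, and scene geometry as input.

```python
import numpy as np

def elu(x):
    # Exponential linear unit activation (assumed here for illustration).
    return np.where(x > 0.0, x, np.expm1(np.minimum(x, 0.0)))

def catmull_rom(y0, y1, y2, y3, mu):
    # Cubic Catmull-Rom interpolation between y1 and y2, with mu in [0, 1].
    return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3
            + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2
            + (-0.5 * y0 + 0.5 * y2) * mu
            + y1)

class PhaseFunctionedNetwork:
    """Toy phase-functioned network: a 3-layer MLP whose weights are produced
    each frame by a cyclic phase function over four control weight sets."""

    def __init__(self, input_dim, hidden_dim, output_dim, seed=0):
        rng = np.random.default_rng(seed)
        init = lambda a, b: rng.standard_normal((4, a, b)) * 0.1
        # Four control points per parameter; the phase function blends them.
        self.W0, self.b0 = init(hidden_dim, input_dim), np.zeros((4, hidden_dim))
        self.W1, self.b1 = init(hidden_dim, hidden_dim), np.zeros((4, hidden_dim))
        self.W2, self.b2 = init(output_dim, hidden_dim), np.zeros((4, output_dim))

    def _blend(self, params, phase):
        # Map phase in [0, 2*pi) onto the four control points and
        # interpolate cyclically between the neighbouring ones.
        p = 4.0 * phase / (2.0 * np.pi)
        k, mu = int(p) % 4, p - int(p)
        y0, y1, y2, y3 = (params[(k - 1) % 4], params[k],
                          params[(k + 1) % 4], params[(k + 2) % 4])
        return catmull_rom(y0, y1, y2, y3, mu)

    def forward(self, x, phase):
        # Generate this frame's weights from the phase, then evaluate the MLP.
        W0, b0 = self._blend(self.W0, phase), self._blend(self.b0, phase)
        W1, b1 = self._blend(self.W1, phase), self._blend(self.b1, phase)
        W2, b2 = self._blend(self.W2, phase), self._blend(self.b2, phase)
        h = elu(W0 @ x + b0)
        h = elu(W1 @ h + b1)
        return W2 @ h + b2

# Illustrative usage: one frame of input (user controls, character state,
# terrain samples flattened into a single vector) plus the current phase.
net = PhaseFunctionedNetwork(input_dim=64, hidden_dim=128, output_dim=32)
y = net.forward(np.zeros(64), phase=1.3)
```

Because the weights depend only on the phase, they can be precomputed or interpolated cheaply at runtime, which is consistent with the abstract’s claim of millisecond execution times and a memory footprint of a few megabytes.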

Read the full paper
