ZeroEGGS: Ubisoft's Speech-to-Gesture Neural Network

The framework requires only a short example motion clip and a speech prompt to generate a believable gesture animation.

It seems that with the ongoing AI boom, more and more AAA game developers are starting to use neural networks in their workflows, looking for ways to push the believability of their games to a whole new level. Just recently, a team of researchers from Ubisoft La Forge presented a new neural network capable of generating remarkably lifelike gesture animations.

Titled ZeroEGGS, short for Zero-shot Example-based Gesture Generation from Speech, the framework requires only a speech prompt and a short example motion clip to generate lifelike full-body character gestures in the style of the example. According to the team, the network outperforms previous state-of-the-art techniques in the "naturalness of motion, appropriateness for speech, and style portrayal."
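
At a high level, a system like this pairs a speech encoder with a style encoder and a gesture decoder. The sketch below is not ZeroEGGS's actual API; it is a minimal, self-contained Python mock in which every network is stubbed with a random linear map, and every name and dimension is a placeholder. It only illustrates the speech-plus-example-clip interface described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def stub_network(in_dim, out_dim):
    # A fixed random linear map standing in for a trained network.
    w = rng.standard_normal((in_dim, out_dim)) * 0.1
    return lambda x: x @ w

# Placeholder components and dimensions (not from the paper).
speech_encoder  = stub_network(80, 64)       # e.g. 80-dim mel frames -> speech features
style_encoder   = stub_network(69, 16)       # pose frames -> 16-dim style embedding
gesture_decoder = stub_network(64 + 16, 69)  # speech features + style -> pose frames

def generate_gestures(speech_frames, example_clip):
    """Speech prompt + short example motion clip -> full-body pose sequence."""
    speech_feat = speech_encoder(speech_frames)        # (T, 64)
    style = style_encoder(example_clip).mean(axis=0)   # (16,) clip-level style code
    style_tiled = np.tile(style, (speech_feat.shape[0], 1))
    return gesture_decoder(np.concatenate([speech_feat, style_tiled], axis=1))

speech = rng.standard_normal((120, 80))  # 120 frames of speech features
clip   = rng.standard_normal((60, 69))   # a short example motion clip
poses  = generate_gestures(speech, clip)
print(poses.shape)  # (120, 69): one pose per speech frame
```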

"Our model uses a Variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or blending and scaling of style embeddings," comments the team. "The probabilistic nature of our framework further enables the generation of a variety of outputs given the same input, addressing the stochastic nature of gesture motion."

You can learn more and access ZeroEGGS' code here.
