Generating Animations From Audio With NVIDIA’s Deep Learning Tech

Check out Omniverse Audio2Face, a tool in beta that lets you quickly generate facial animation from an audio source.

In case you missed the news, NVIDIA has a tool in beta that lets you quickly and easily generate expressive facial animation from just an audio source using the team's deep learning-based technology. The Audio2Face tool simplifies the animation of 3D characters for games, films, real-time digital assistants, and other projects. The toolkit lets you run the results live or bake them out.

NVIDIA's latest tech comes with “Digital Mark” – a 3D character model that can be animated with your audio track. The process is straightforward: you just select your audio and upload it – the tool deals with the rest. A pre-trained Deep Neural Network drives the 3D vertices of your character mesh to create the facial animation in real-time. The toolkit also allows you to adjust various post-processing parameters to fine-tune the performance of your character.
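For readers curious about what that kind of pipeline looks like in code, here is a minimal, illustrative sketch in Python (NumPy only). It is not the Audio2Face API: the feature extraction, the single linear layer standing in for the pre-trained network, and the post-processing parameters below are hypothetical placeholders, meant only to show the general shape of an audio-to-vertex-offset workflow.

```python
# Illustrative sketch only -- NOT the Audio2Face API. All names and shapes
# here are hypothetical stand-ins for the real deep learning pipeline.
import numpy as np

def extract_audio_features(samples: np.ndarray, window: int = 1024) -> np.ndarray:
    """Chop the waveform into windows and take a magnitude spectrum per window
    (a stand-in for whatever features the real pre-trained network consumes)."""
    n_windows = len(samples) // window
    frames = samples[: n_windows * window].reshape(n_windows, window)
    return np.abs(np.fft.rfft(frames, axis=1))

def predict_vertex_offsets(features: np.ndarray, n_vertices: int,
                           weights: np.ndarray) -> np.ndarray:
    """Map per-frame audio features to per-vertex XYZ offsets with a single
    linear layer -- a toy placeholder for the deep neural network."""
    return (features @ weights).reshape(len(features), n_vertices, 3)

def apply_post_processing(offsets: np.ndarray, strength: float = 1.0,
                          smoothing: int = 3) -> np.ndarray:
    """Scale and temporally smooth the performance, mimicking the kind of
    post-processing parameters the article mentions."""
    kernel = np.ones(smoothing) / smoothing
    flat = offsets.reshape(len(offsets), -1)
    smoothed = np.apply_along_axis(
        lambda t: np.convolve(t, kernel, mode="same"), axis=0, arr=flat)
    return strength * smoothed.reshape(offsets.shape)

# Usage: drive a 5,000-vertex mesh from one second of 16 kHz audio.
audio = np.random.randn(16_000)           # placeholder waveform
feats = extract_audio_features(audio)     # (frames, feature_dim)
rng = np.random.default_rng(0)
w = rng.standard_normal((feats.shape[1], 5_000 * 3)) * 0.01  # untrained toy weights
anim = apply_post_processing(predict_vertex_offsets(feats, 5_000, w), strength=0.8)
print(anim.shape)                         # (frames, 5000, 3) vertex offsets per frame
```

In the real tool the network is pre-trained and the mesh belongs to a character such as “Digital Mark”; the point of the sketch is simply that audio frames go in, per-vertex motion comes out, and post-processing parameters shape the final performance.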

Audio2Face is said to handle any language with ease, and the team is continually adding support for more. The tool can also be used to animate stylized characters or humanoid aliens, for example.

Learn more and get started here. Also, don't forget to join our new Reddit page and our new Telegram channel, and follow us on Instagram and Twitter, where we are sharing breakdowns, the latest news, awesome artworks, and more.
