NVIDIA's Omniverse will get new Unreal Engine and Unity Connectors, while Omniverse Audio2Face will feature facial animation and lip-sync quality improvements and upgraded real-time RTX rendering.
During his GTC 2023 keynote, NVIDIA CEO Jensen Huang revealed upcoming enhancements for NVIDIA Omniverse, a platform for creating and operating metaverse applications, and its Audio2Face application, which uses generative AI to create lifelike facial animations from just an audio source.
According to the announcement, the upgraded NVIDIA Omniverse features an open-beta version of the Unity Connector, which lets you add your Unity scenes directly onto Omniverse Nucleus servers with access to platform features such as the DeepSearch tool, thumbnails, and bookmarks. It also includes an upgraded version of the Unreal Engine Connector, which now uses UE's USD import utilities to support skeletal mesh blend shape importing, exposes Python USD bindings for accessing stages on Omniverse Nucleus, and brings improvements to import, export, and live workflows.
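For context, accessing a stage on a Nucleus server through the Python USD bindings might look like the minimal sketch below; the server URL and scene path are hypothetical, and resolving omniverse:// paths assumes the Omniverse client library and its USD resolver plugin are installed and configured.

from pxr import Usd, UsdGeom

# Open a stage hosted on an Omniverse Nucleus server (hypothetical URL and path).
stage = Usd.Stage.Open("omniverse://localhost/Projects/Demo/scene.usd")

# Traverse the stage and list the mesh prims it contains.
for prim in stage.Traverse():
    if prim.IsA(UsdGeom.Mesh):
        print(prim.GetPath())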
Furthermore, the update adds more than 1,000 new SimReady assets, built to real-world scale with accurate mass, physical materials, and center of gravity for use within Omniverse PhysX.
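To illustrate what such physical properties look like in USD terms, here is a minimal sketch of authoring mass, center of gravity, and a physical material on a prim using the standard UsdPhysics schema; the prim paths and values are illustrative and are not taken from NVIDIA's SimReady specification.

from pxr import Usd, UsdGeom, UsdShade, UsdPhysics, Gf

stage = Usd.Stage.CreateNew("crate.usd")
prim = UsdGeom.Cube.Define(stage, "/Crate").GetPrim()

# Mark the prim as a rigid body with collision so PhysX can simulate it.
UsdPhysics.RigidBodyAPI.Apply(prim)
UsdPhysics.CollisionAPI.Apply(prim)

# Author mass and center of gravity (illustrative values).
mass_api = UsdPhysics.MassAPI.Apply(prim)
mass_api.CreateMassAttr(12.0)                              # kilograms
mass_api.CreateCenterOfMassAttr(Gf.Vec3f(0.0, 0.0, -0.1))  # local-space offset

# Define a simple physical material with friction and restitution.
material = UsdShade.Material.Define(stage, "/PhysicsMaterials/Wood")
mat_api = UsdPhysics.MaterialAPI.Apply(material.GetPrim())
mat_api.CreateStaticFrictionAttr(0.5)
mat_api.CreateDynamicFrictionAttr(0.4)
mat_api.CreateRestitutionAttr(0.2)

stage.GetRootLayer().Save()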
Additionally, the company shed some light on the upcoming generative AI update for Omniverse Audio2Face, an AI-powered app that allows 3D artists to efficiently animate secondary characters by generating realistic facial animations from just an audio file. The new version comes with Mandarin support, overall facial animation and lip-sync quality improvements across multiple languages, revamped real-time RTX rendering, and the first real-time ray-traced Subsurface Scattering shading.
You can learn more about the Omniverse updates here. Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.