
SIGGRAPH 2023: NVIDIA on Its Latest Advancements in Generative AI

The company's CEO, Jensen Huang, has made several groundbreaking announcements related to NVIDIA's AI tech.

NVIDIA Founder and CEO Jensen Huang recently took the stage at SIGGRAPH 2023, the renowned global conference and exhibition for computer graphics, to make several announcements about NVIDIA's advancements in generative artificial intelligence.

Addressing thousands of developers and digital creators during an in-person keynote, Huang presented the company's next-generation GH200 Grace Hopper Superchip platform and unveiled NVIDIA AI Workbench, a toolkit that simplifies model tuning and deployment on NVIDIA AI platforms. He also discussed how NVIDIA Omniverse, a computing platform for building OpenUSD-based 3D workflows and applications, will be boosted with generative AI.

"The generative AI era is upon us, the iPhone moment if you will," Huang said during his address. "Graphics and artificial intelligence are inseparable, graphics needs AI, and AI needs graphics."

During his speech, the CEO announced that the NVIDIA GH200 Grace Hopper Superchip, which combines a 72-core Grace CPU with a Hopper GPU, has been in full production since May 2023.

Expanding on this, he announced that the GH200 Grace Hopper Superchip platform will be able to connect multiple GPUs, and that a new version of the platform with HBM3e memory will also be introduced.

What's more, the new platform will be available in a wide range of configurations. The dual configuration, for instance, delivers up to 3.5 times the memory capacity and 3 times the bandwidth of the current-generation offering: a single server with 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of HBM3e memory. Leading system manufacturers are expected to deliver systems based on the platform in the second quarter of 2024.

Moreover, Huang revealed NVIDIA AI Workbench, a new unified, easy-to-use toolkit that lets developers quickly create, test, and fine-tune generative AI models on a PC or workstation and then scale them to virtually any data center, public cloud, or NVIDIA DGX Cloud.

As explained by the team, AI Workbench streamlines the process of starting an enterprise AI project. Using a simple interface running on a local system, developers can fine-tune models from popular repositories such as Hugging Face, GitHub, and NGC with their own custom data, then share them across a range of platforms.

With AI Workbench, developers can tailor and run generative AI workflows with just a few clicks, consolidating all the essential components, including enterprise-grade models, frameworks, software development kits, and libraries, into a unified developer workspace.

In addition, the CEO introduced a significant new update to NVIDIA Omniverse, an OpenUSD-native development platform designed for constructing, simulating, and fostering collaboration within tools and virtual environments.

The enhancements made to the Omniverse platform encompass progress in Omniverse Kit, the engine used for crafting native OpenUSD applications and extensions. Notably, improvements were also made to the NVIDIA Omniverse Audio2Face foundational application and its spatial-computing capabilities.

Huang revealed that Omniverse users can now create content, experiences, and applications that seamlessly integrate with other OpenUSD-based spatial computing platforms such as ARKit and RealityKit.

He also disclosed the introduction of a wide array of frameworks, resources, and services aimed at expediting the adoption of Universal Scene Description, commonly known as OpenUSD. Additionally, the CEO shared the news that Adobe and NVIDIA are collaborating on plans to integrate Adobe Firefly, Adobe's suite of creative generative AI models, into Omniverse as APIs.

On top of that, Huang introduced four new Omniverse Cloud APIs developed by NVIDIA. These APIs are tailored to assist developers in efficiently integrating and rolling out OpenUSD pipelines and applications:

  • ChatUSD: Assisting developers and artists working with OpenUSD data and scenes, ChatUSD is a large language model (LLM) agent for generating Python-USD code scripts from text and answering USD knowledge questions.
  • RunUSD: a cloud API that translates OpenUSD files into fully path-traced rendered images by checking compatibility of the uploaded files against versions of OpenUSD releases, and generating renders with Omniverse Cloud.
  • DeepSearch: an LLM agent enabling fast semantic search through massive databases of untagged assets.
  • USD-GDN Publisher: a one-click service that enables enterprises and software makers to publish high-fidelity, OpenUSD-based experiences to the Omniverse Cloud Graphics Delivery Network (GDN) from an Omniverse-based application such as USD Composer, as well as stream in real time to web browsers and mobile devices.
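All four APIs operate on OpenUSD scene data. As a rough illustration of what that data looks like, here is a minimal, hypothetical `.usda` scene of the kind a ChatUSD-generated Python-USD script might author — the prim names and values are invented for this example:

```usda
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        double3 xformOp:translate = (0, 1, 0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

OpenUSD's human-readable `.usda` format describes a hierarchy of "prims" (here an `Xform` containing a `Sphere`) with typed attributes, which is what makes it a common interchange layer for the tools and platforms mentioned above.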

Besides that, the CEO also announced the latest version of the company's enterprise software suite, NVIDIA AI Enterprise 4.0, revealed that Cesium, Convai, Move AI, SideFX Houdini, and Wonder Dynamics are now connected to Omniverse via OpenUSD, disclosed NVIDIA's three new desktop workstation Ada Generation GPUs, and much, much more. You can read the full list of Huang's SIGGRAPH 2023 announcements by clicking this link.

Also, don't forget to join our 80 Level Talent platform and our Telegram channel, follow us on Threads, Instagram, Twitter, and LinkedIn, where we share breakdowns, the latest news, awesome artworks, and more.
