NVIDIA Omniverse Supports A100 and H100 Systems

The ecosystem expands into HPC with connections to NVIDIA Modulus, NeuralVDB, and IndeX, as well as Kitware’s ParaView, to accelerate Million-X scale discovery.

NVIDIA announced that Omniverse – its open computing platform for building and operating metaverse applications – now connects to popular scientific computing visualization software and supports new batch-rendering workloads on systems powered by NVIDIA A100 and H100 Tensor Core GPUs.

The company also introduced real-time digital twins enabled by NVIDIA OVX – a computing system designed to power large-scale Omniverse digital twins – and by Omniverse Cloud.

AI and HPC researchers, scientists, and engineers can run Omniverse-supported batch workloads on A100 or H100 systems, such as rendering videos and images or generating synthetic 3D data.

NVIDIA also revealed connections to scientific computing tools: Kitware’s ParaView, an application for data analysis and visualization; NVIDIA IndeX for volumetric rendering; NVIDIA Modulus for developing physics-informed machine-learning models; and NeuralVDB for representing large-scale sparse volumetric data.

“Today’s scientific computing workflows are extremely complex, involving enormous datasets that are impractical to move and large, global teams that use their own specialized tools,” said Dion Harris, lead product manager of accelerated computing at NVIDIA. “With new support for Omniverse on A100 and H100 systems, HPC customers can finally start to unlock legacy data silos, achieve interoperability in their complex simulation and visualization pipelines, and generate compelling visuals for their batch-rendering workflows.”
