NVIDIA Announces DGX GH200, A 144-Terabyte Supercomputer for AI Workloads

NVIDIA has unveiled the DGX GH200, its new AI supercomputing platform designed to handle massive generative AI workloads.

At the Computex 2023 event in Taipei, NVIDIA CEO Jensen Huang shared new details about the company's upcoming supercomputer, the DGX GH200.

The DGX GH200 supercomputer utilizes the NVLink Switch System to integrate 256 GH200 Grace Hopper superchips, each equipped with an Arm-based Grace CPU and an H100 Tensor Core GPU, functioning collectively as a single GPU.

NVIDIA claims that this configuration enables the DGX GH200 to deliver 1 exaflop of performance with 144 terabytes of shared memory, a memory capacity nearly 500 times larger than that of a single DGX A100 system.
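As a rough sanity check, the 144-terabyte figure lines up with the per-superchip memory NVIDIA quotes for the GH200: assuming each Grace Hopper superchip contributes 480 GB of LPDDR5X CPU memory plus 96 GB of HBM3 GPU memory, and treating a "terabyte" as 1,024 GB, 256 superchips add up to exactly 144 TB. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check of the 144 TB shared-memory figure.
# Assumptions: each GH200 superchip pairs 480 GB of LPDDR5X (Grace CPU)
# with 96 GB of HBM3 (H100 GPU), per NVIDIA's published GH200 specs.
cpu_mem_gb = 480   # Grace CPU LPDDR5X per superchip
gpu_mem_gb = 96    # H100 HBM3 per superchip
superchips = 256   # superchips linked via the NVLink Switch System

total_gb = superchips * (cpu_mem_gb + gpu_mem_gb)
total_tb = total_gb / 1024  # binary terabytes (1 TB = 1,024 GB)

print(total_gb)  # 147456
print(total_tb)  # 144.0
```

Note that this only reproduces the headline number under the stated assumptions; NVIDIA's own spec sheet is the authoritative source for the exact memory configuration.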

NVIDIA claims that the architecture of the DGX GH200 offers 10 times more bandwidth than the previous generation, allowing it to deliver "the power of a massive AI supercomputer with the simplicity of programming a single GPU."

The DGX GH200 has garnered attention from major players in the industry, including Google Cloud, Meta, and Microsoft, which are expected to be among the first companies granted access to the supercomputer to evaluate its performance on generative AI workloads.

According to NVIDIA, the DGX GH200 supercomputers will be available by the end of 2023.

You can learn more about the DGX GH200 here.
