Google Launches A3 AI Supercomputers with NVIDIA H100 GPUs

The A3 supercomputer is built to train and serve demanding AI models.

Google has launched its next-generation A3 supercomputers built for AI and powered by NVIDIA H100 GPUs. The devices are "purpose-built to train and serve the most demanding AI models that power today's generative AI and large language model innovation."

The A3 machines use custom-designed 200 Gbps IPUs and offer up to 10 times more network bandwidth than Google's previous-generation A2 VMs. They provide up to 26 exaFlops of AI performance, which "considerably improves the time and costs for training large ML models."

Key features:

  • 8 H100 GPUs utilizing NVIDIA’s Hopper architecture, delivering 3 times the compute throughput
  • 3.6 TB/s bisection bandwidth between A3’s 8 GPUs via NVIDIA NVSwitch and NVLink 4.0
  • Next-generation 4th Gen Intel Xeon Scalable processors
  • 2TB of host memory via 4800 MHz DDR5 DIMMs
  • 10 times greater networking bandwidth powered by Google's hardware-enabled IPUs, a specialized inter-server GPU communication stack, and NCCL optimizations (see the sketch after this list)
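
The inter-server stack and IPU offload are Google's own, but the collective operations that benefit from them are standard NCCL calls. As a rough illustration only (this is not Google's code, and the script name and tensor size are made up for the example), the sketch below uses PyTorch's NCCL backend to all-reduce a tensor across the 8 GPUs of a single A3-style host.

```python
# Minimal sketch: a standard NCCL all-reduce across the 8 GPUs of one host.
# Assumes PyTorch with CUDA is installed and the node exposes 8 GPUs.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank: int, world_size: int) -> None:
    # One process per GPU; NCCL routes the transfers over NVLink/NVSwitch.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # A dummy gradient tensor; the all-reduce sums it across all GPUs.
    grad = torch.ones(1024 * 1024, device=f"cuda:{rank}")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    if rank == 0:
        print("all-reduce result (first element):", grad[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # 8 on an 8-GPU A3-style host
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```

On an 8-GPU host the printed value should be 8.0, since each GPU contributes a tensor of ones; the same pattern scales to multi-node training, where the inter-server bandwidth quoted above matters.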

To learn more about the A3, read Google's blog post.
