NVIDIA Goes All-in with AI in its Upcoming Processor

NVIDIA recently announced that it is developing its next processor, which the company refers to as the GH200 Grace Hopper platform. The company touts the chip as the world’s first HBM3e processor, designed to handle AI workloads, large language models, vector databases, and more.

The GH200 will be available in multiple hardware configurations, including a dual configuration that delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation. That dual configuration comprises a single server with 144 Arm Neoverse cores, eight petaflops of AI performance, and 282GB of HBM3e memory. Jensen Huang, founder and CEO of NVIDIA, describes the new processor:

“To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs… The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center.”

The new platform uses the Grace Hopper Superchip, which can be connected to additional Superchips via NVIDIA’s NVLink technology, allowing them to work together on generative AI workloads. NVIDIA adds that HBM3e memory is 50% faster than current HBM3 and can deliver a total of 10TB/sec of combined bandwidth, allowing the new platform to run models 3.5x larger than the previous version.

In terms of availability, the company says that the GH200 will be available in systems sometime in Q2 2024.

Source: NVIDIA