
Nvidia has officially announced its latest AI superchips, the Blackwell Ultra GB300 and Vera Rubin. Both are designed to significantly raise AI performance in data centers and extend Nvidia's lead in AI computing.
The Blackwell Ultra GB300 delivers 1.5 times the performance of its predecessor, the B200, featuring 288GB of HBM3e memory per GPU and up to 15 petaflops (PFLOPS) of dense FP4 compute. That headroom matters for serving increasingly large AI models and reasoning-heavy inference workloads.
Nvidia has designed the GB300 for rack-scale deployment rather than as a standalone part. A single rack houses up to 72 Blackwell Ultra GPUs, pooling roughly 20TB of HBM memory and pushing aggregate low-precision compute into exascale territory.
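The rack-level figures follow directly from the per-GPU numbers quoted above. As a back-of-the-envelope check (the per-GPU values come from the article; the aggregation is simple multiplication, not an official Nvidia specification):

```python
# Sanity-check the rack totals implied by the per-GPU figures.
GPUS_PER_RACK = 72          # Blackwell Ultra GPUs in a single rack
HBM_PER_GPU_GB = 288        # HBM3e per GPU, in gigabytes
FP4_PER_GPU_PFLOPS = 15     # dense FP4 compute per GPU, in petaflops

rack_hbm_tb = GPUS_PER_RACK * HBM_PER_GPU_GB / 1000        # GB -> TB
rack_fp4_eflops = GPUS_PER_RACK * FP4_PER_GPU_PFLOPS / 1000  # PFLOPS -> EFLOPS

print(f"Rack HBM:  {rack_hbm_tb:.1f} TB")        # ≈ 20.7 TB, matching the ~20TB figure
print(f"Rack FP4:  {rack_fp4_eflops:.2f} EFLOPS")  # ≈ 1.08 EFLOPS, i.e. past the exascale mark
```

The ~20.7TB result lines up with the "20TB of HBM" claim, and 72 GPUs at 15 PFLOPS each crosses 1 exaflop of dense FP4, which is what makes the "exascale" label defensible for a single rack.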
The Blackwell Ultra platform is positioned as an "AI factory": racks can be linked into larger supercomputing deployments, such as the Blackwell Ultra DGX SuperPOD, which combines multiple racks for massive computational throughput.
The GB300 will be paired with Nvidia's Arm-based Grace CPUs, enhancing performance for AI workloads that require both high processing power and efficient memory management.
Alongside the Blackwell Ultra, Nvidia also previewed the Vera Rubin superchip, its next-generation platform pairing the new Arm-based Vera CPU with the Rubin GPU. Detailed specifications were less emphasized in the initial announcement, but Vera Rubin is positioned as the successor to the Blackwell Ultra line, with availability expected to follow it.
Nvidia plans to begin shipping the Blackwell Ultra GB300 products in the second half of 2025, aiming to meet the increasing demand for advanced AI computing solutions.