At its GTC 2024 conference, Nvidia announced its latest GPU, the B200. Built on the cutting-edge Blackwell architecture, it offers dramatic improvements over its predecessor, the Hopper H100:
- 4x faster training
- 30x faster inference
- 25x better energy efficiency
These gains enable a new generation of DGX SuperPODs capable of an astounding 11.5 billion billion floating-point operations per second (11.5 exaflops) using a new 4-bit low-precision format. The architecture, and hence the B200, is named for mathematician and statistician David Harold Blackwell, honoring his groundbreaking contributions to probability, statistics, and game theory.
Key Features
- Smaller Numbers, Better Performance: The B200 can compute with floating-point numbers as small as 4 bits (FP4), improving speed and memory efficiency over Hopper, whose smallest supported format is 8 bits.
- Enhanced Memory and Connectivity: The B200 boasts advanced HBM3e memory and the latest NVLink interconnect for faster communication between GPUs.
- AI Reliability & Security: Blackwell adds a dedicated RAS (reliability, availability, serviceability) engine to improve uptime, confidential-computing support to protect AI models and data, and a decompression engine to speed up data analytics.
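To make the 4-bit point concrete, here is a minimal sketch of quantizing values onto a hypothetical FP4 grid (1 sign, 2 exponent, 1 mantissa bit, in the style of the E2M1 layout). Nvidia's actual Blackwell format and scaling machinery are not detailed here; this only illustrates the precision-for-efficiency trade: a 4-bit float can take just 16 distinct values.

```python
# Illustrative FP4 (E2M1-style) quantization sketch -- NOT Nvidia's
# actual implementation, just the general idea of 4-bit floats.

# All representable magnitudes of a 1-sign/2-exponent/1-mantissa format
FP4_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_fp4(x: float) -> float:
    """Round x to the nearest representable FP4 value, keeping the sign."""
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 6.0)  # clamp to the FP4 maximum magnitude
    nearest = min(FP4_GRID, key=lambda g: abs(g - mag))
    return sign * nearest

# A full-precision weight vector collapses onto the 16-value grid:
weights = [0.27, -1.8, 3.4, 5.9, -0.3]
print([quantize_fp4(w) for w in weights])  # → [0.5, -2.0, 3.0, 6.0, -0.5]
```

Because every value fits in 4 bits, tensors shrink and arithmetic units can process more numbers per cycle, which is where the speed and efficiency gains come from.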
Combining B200 GPUs with Nvidia's Grace CPU yields the GB200 superchip, the building block of powerful new supercomputers. A DGX SuperPOD consisting of eight DGX GB200 systems can deliver 11.5 exaflops of AI computing.
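The 11.5-exaflop figure can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes 72 Blackwell GPUs per DGX GB200 system and roughly 20 petaflops of 4-bit compute per GPU, figures drawn from Nvidia's launch materials rather than from this article:

```python
# Back-of-the-envelope check of the 11.5-exaflop SuperPOD figure.
# Assumes 72 Blackwell GPUs per DGX GB200 system and ~20 PFLOPS of
# FP4 compute per GPU (assumed from Nvidia launch materials).

systems = 8              # DGX GB200 systems in one SuperPOD
gpus_per_system = 72     # Blackwell GPUs per DGX GB200
fp4_pflops_per_gpu = 20  # petaflops of 4-bit compute per GPU

total_exaflops = systems * gpus_per_system * fp4_pflops_per_gpu / 1000
print(total_exaflops)  # → 11.52
```

The product lands at about 11.5 exaflops, matching the headline number, and shows why the total is quoted at the new low-precision format: at higher precisions, per-GPU throughput (and thus the total) is correspondingly lower.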
Nvidia also announced that its cuLitho computational lithography platform is moving into production, and launched Project GR00T, a foundation model for humanoid robots.
Nvidia expects Blackwell-based SuperPODs and other computers to be available later this year.