CoreWeave, NVIDIA and IBM Submit Largest-Ever MLPerf Results on NVIDIA GB200 Grace Blackwell Superchips
- Record-breaking MLPerf submission with largest-ever NVIDIA GB200 NVL72 cluster (2,496 GPUs)
- 2x faster training performance compared to similar cluster sizes
- 34x larger cluster than other cloud provider submissions
- Completed Llama 3.1 405B model training in just 27.3 minutes
- Platinum tier ranking in SemiAnalysis' ClusterMAX
Insights
CoreWeave's record-breaking AI benchmark demonstrates significant competitive advantage in the lucrative AI infrastructure market.
CoreWeave has made a significant statement in the AI infrastructure space with this benchmark achievement. The company, alongside NVIDIA and IBM, has demonstrated unprecedented scale by deploying 2,496 NVIDIA GB200 GPUs in a single cluster, 34 times larger than any other cloud provider's submission to MLPerf.
What's particularly notable is the 2x performance advantage over competitors with similar cluster sizes, specifically when training Llama 3.1 405B, the most complex model in the benchmarking suite. Completing this training in just 27.3 minutes represents a dramatic acceleration of AI development cycles.
This benchmark carries substantial commercial implications. AI development companies face intense time-to-market pressure, and CoreWeave's infrastructure could cut development time in half. For enterprises training large language models, this translates to significant cost savings and competitive advantages.
CoreWeave's position as the only cloud provider ranked in the Platinum tier of SemiAnalysis' ClusterMAX further solidifies its leadership in AI infrastructure. This benchmark serves as powerful validation of its cloud architecture strategy and suggests the company has successfully optimized its platform specifically for the most demanding AI workloads.
The timing is also strategic: by demonstrating these capabilities months before widespread availability, CoreWeave is positioning itself to capture high-value customers in the increasingly competitive AI infrastructure market, potentially securing long-term contracts with AI labs and enterprises requiring cutting-edge computing resources.
Submission with nearly 2,500 NVIDIA GB200 GPUs achieved breakthrough results on most complex benchmarking model
The submission achieved a breakthrough result on the largest and most complex foundational model in the benchmarking suite, Llama 3.1 405B, completing the run in just 27.3 minutes. When compared against submissions from other participants across similar cluster sizes, CoreWeave's GB200 cluster achieved more than 2x faster training performance. This result highlights the significant performance leap enabled by the GB200 NVL72 architecture and the strength of CoreWeave's infrastructure in delivering consistent, best-in-class AI workload performance.
"AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand," said Peter Salanki, Chief Technology Officer and Co-founder at CoreWeave. "These MLPerf results reinforce our leadership in supporting today's most demanding AI workloads."
These results matter because they translate directly to faster model development cycles and an optimized Total Cost of Ownership. For CoreWeave customers, that means cutting training time in half, scaling workloads efficiently, and training or deploying their models more cost-effectively by leveraging the latest cloud technologies months before their competitors. With leading submissions for both the MLPerf Inference v5.0 and Training v5.0 benchmarks, and as the sole cloud provider ranked in the Platinum tier of SemiAnalysis' ClusterMAX, CoreWeave sets the standard for AI infrastructure performance across the entire cloud stack.
About CoreWeave
CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. The company's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and
The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in
View original content:https://www.prnewswire.com/news-releases/coreweave-nvidia-and-ibm-submit-largest-ever-mlperf-results-on-nvidia-gb200-grace-blackwell-superchips-302473361.html
SOURCE CoreWeave