CoreWeave, NVIDIA and IBM Submit Largest-Ever MLPerf Results on NVIDIA GB200 Grace Blackwell Superchips

CoreWeave (CRWV), NVIDIA, and IBM achieved record-breaking MLPerf Training v5.0 results using 2,496 NVIDIA GB200 Blackwell GPUs on CoreWeave's AI cloud platform. The submission represents the largest NVIDIA GB200 NVL72 cluster ever benchmarked, 34x larger than other cloud provider submissions. The cluster completed training the Llama 3.1 405B model in just 27.3 minutes, demonstrating 2x faster training performance compared to similar-sized clusters. CoreWeave's achievement showcases their platform's capability to deliver superior AI workload performance, offering customers significant advantages in model development speed and cost efficiency. The company's leadership in AI infrastructure is further validated by its Platinum tier ranking in SemiAnalysis' ClusterMAX and strong performance in both MLPerf Inference v5.0 and Training v5.0 benchmarks.
Positive
  • Record-breaking MLPerf submission with largest-ever NVIDIA GB200 NVL72 cluster (2,496 GPUs)
  • 2x faster training performance compared to similar cluster sizes
  • 34x larger cluster than other cloud provider submissions
  • Completed Llama 3.1 405B model training in just 27.3 minutes
  • Platinum tier ranking in SemiAnalysis' ClusterMAX
Negative
  • None.

Insights

CoreWeave's record-breaking AI benchmark demonstrates significant competitive advantage in the lucrative AI infrastructure market.

CoreWeave has made a significant statement in the AI infrastructure space with this benchmark achievement. The company, alongside NVIDIA and IBM, has demonstrated unprecedented scale by deploying 2,496 NVIDIA GB200 GPUs in a single cluster—34 times larger than any other cloud provider's submission to MLPerf.

What's particularly notable is the 2x performance advantage over competitors with similar cluster sizes, specifically when training Llama 3.1 405B, the most complex model in the benchmarking suite. Completing this training in just 27.3 minutes represents a dramatic acceleration of AI development cycles.

This benchmark carries substantial commercial implications. AI development companies face intense time-to-market pressure, and CoreWeave's infrastructure could potentially cut development time in half. For enterprises training large language models, this translates to significant cost savings and competitive advantages.

CoreWeave's position as the only cloud provider ranked in the Platinum tier of SemiAnalysis' ClusterMAX further solidifies their leadership in AI infrastructure. This benchmark serves as powerful validation of their cloud architecture strategy and suggests they've successfully optimized their platform specifically for the most demanding AI workloads.

The timing is also strategic—by demonstrating these capabilities months before widespread availability, CoreWeave is positioning itself to capture high-value customers in the increasingly competitive AI infrastructure market, potentially securing long-term contracts with AI labs and enterprises requiring cutting-edge computing resources.

Submission with nearly 2,500 NVIDIA GB200 GPUs achieved breakthrough results on most complex benchmarking model

LIVINGSTON, N.J., June 4, 2025 /PRNewswire/ -- CoreWeave (Nasdaq: CRWV), in collaboration with NVIDIA and IBM, delivered the largest-ever MLPerf® Training v5.0 submission on NVIDIA Blackwell, using 2,496 NVIDIA Blackwell GPUs running on CoreWeave's AI-optimized cloud platform. This submission is the largest NVIDIA GB200 NVL72 cluster ever benchmarked under MLPerf, 34x larger than the only other submission from a cloud provider, highlighting the scale and readiness of CoreWeave's cloud platform for today's demanding AI workloads.

The submission achieved a breakthrough result on the largest and most complex foundational model in the benchmarking suite, Llama 3.1 405B, completing the run in just 27.3 minutes. When compared against submissions from other participants across similar cluster sizes, CoreWeave's GB200 cluster achieved more than 2x faster training performance. This result highlights the significant performance leap enabled by the GB200 NVL72 architecture and the strength of CoreWeave's infrastructure in delivering consistent, best-in-class AI workload performance.

"AI labs and enterprises choose CoreWeave because we deliver a purpose-built cloud platform with the scale, performance, and reliability that their workloads demand," said Peter Salanki, Chief Technology Officer and Co-founder at CoreWeave. "These MLPerf results reinforce our leadership in supporting today's most demanding AI workloads."

These results matter because they translate directly to faster model development cycles and an optimized Total Cost of Ownership. For CoreWeave customers, that means cutting training time in half, scaling workloads efficiently, and training or deploying their models more cost-effectively by leveraging the latest cloud technologies, months before their competitors. With leading submissions for both the MLPerf Inference v5.0 and Training v5.0 benchmarks, and as the sole cloud provider ranked in the Platinum tier of SemiAnalysis' ClusterMAX, CoreWeave sets the standard for AI infrastructure performance across the entire cloud stack.

About CoreWeave

CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. The company's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe. CoreWeave was ranked as one of the TIME100 most influential companies and featured on Forbes Cloud 100 ranking in 2024. Learn more at www.coreweave.com.

The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited. See www.mlcommons.org for more information.


View original content: https://www.prnewswire.com/news-releases/coreweave-nvidia-and-ibm-submit-largest-ever-mlperf-results-on-nvidia-gb200-grace-blackwell-superchips-302473361.html

SOURCE CoreWeave

FAQ

What did CoreWeave achieve in the MLPerf Training v5.0 benchmarks?

CoreWeave achieved the largest-ever MLPerf Training v5.0 submission using 2,496 NVIDIA GB200 Blackwell GPUs, completing Llama 3.1 405B model training in 27.3 minutes with 2x faster performance than similar clusters.

How does CoreWeave's NVIDIA GB200 cluster compare to other cloud providers?

CoreWeave's cluster is 34x larger than the only other cloud provider submission, demonstrating superior scale and AI workload capabilities.

What are the benefits for CoreWeave (CRWV) customers from these MLPerf results?

Customers can cut training time in half, scale workloads efficiently, and train or deploy models more cost-effectively using the latest cloud technologies months ahead of competitors.

What certifications or rankings does CoreWeave hold in AI infrastructure?

CoreWeave is ranked in the Platinum tier of SemiAnalysis' ClusterMAX and has leading submissions in both MLPerf Inference v5.0 and Training v5.0 benchmarks.

How long did it take CoreWeave to train the Llama 3.1 405B model?

CoreWeave completed the training of Llama 3.1 405B model in just 27.3 minutes using their NVIDIA GB200 GPU cluster.