CoreWeave Becomes the First AI Cloud Provider to Offer NVIDIA RTX PRO 6000 Blackwell GPU at Scale
CoreWeave (Nasdaq: CRWV) has become the first cloud platform to offer NVIDIA RTX PRO 6000 Blackwell Server Edition instances in general availability. The new GPU architecture delivers up to 5.6x faster LLM inference and 3.5x faster text-to-video generation than its predecessor and is optimized for models of up to 70B parameters.
The RTX PRO 6000-based instances feature 8x RTX PRO 6000 GPUs, 128 Intel Emerald Rapids vCPUs, 1TB system RAM, 100 Gbps networking throughput, and 7.68TB of local NVMe storage. CoreWeave now offers one of the broadest ranges of NVIDIA Blackwell infrastructure, including the NVIDIA GB200 NVL72 system and the NVIDIA HGX B200 platform.
CoreWeave recently achieved a milestone by submitting the largest-ever MLPerf® Training v5.0 benchmark, training the Llama 3.1 405B model in just 27.3 minutes using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips.
- First-to-market advantage with NVIDIA RTX PRO 6000 Blackwell Server Edition deployment
- Significant performance improvements: 5.6x faster LLM inference and 3.5x faster text-to-video generation
- Achieved Platinum rating in SemiAnalysis's GPU Cloud ClusterMAX Rating System
- Demonstrated superior performance with record-breaking MLPerf Training benchmark
Insights
CoreWeave's early access to Blackwell GPUs strengthens its competitive position in the AI cloud infrastructure market.
CoreWeave has positioned itself as the first-mover in the AI cloud provider space by making NVIDIA's RTX PRO 6000 Blackwell Server Edition GPUs generally available before competitors. This strategic advantage allows CoreWeave to capture demand from enterprises requiring cutting-edge AI capabilities.
The performance improvements are substantial: the new GPUs deliver up to 5.6x faster LLM inference and 3.5x faster text-to-video generation than previous generation hardware. These enhancements are particularly valuable for companies running inference on models up to 70B parameters, a sweet spot for many commercial AI applications that don't require the largest models but still demand significant computational resources.
CoreWeave's expanded portfolio now includes a comprehensive range of NVIDIA Blackwell infrastructure, including the GB200 NVL72 system and HGX B200 platform. This diversification allows them to serve customers across different AI workload profiles, from training trillion-parameter models to serving multimodal inference applications.
The company's track record of early hardware adoption (being first with H200 GPUs and GB200 NVL72 systems) has established a pattern that builds confidence with AI-focused customers who require access to the latest compute technology. Their recent MLPerf benchmark collaboration with NVIDIA and IBM, achieving the training of Llama 3.1 405B in just 27.3 minutes, further demonstrates their technical capabilities at the bleeding edge of AI infrastructure.
The specifications of the RTX PRO 6000-based instances (8x GPUs, 128 Intel Emerald Rapids vCPUs, 1TB system RAM, 100 Gbps networking, 7.68TB NVMe storage) create a balanced system architecture optimized for AI workloads, helping CoreWeave maintain its Platinum rating in SemiAnalysis's GPU Cloud ClusterMAX™ Rating System.
Groundbreaking GPU architecture, powered by CoreWeave's AI Cloud platform, will enable enterprises and startups to push the boundaries of AI innovation
"CoreWeave is built to move at the speed of innovation, and with the new RTX PRO 6000-based instances, we're once again first to bring advanced AI and graphics technology to the cloud," said Peter Salanki, Co-Founder and Chief Technology Officer of CoreWeave. "This is a major step forward for customers building the future of AI, as it gives them the ability to optimize and scale on GPU instances that are ideal for their applications, and a testament to the speed and reliability of our AI cloud platform."
The new RTX PRO 6000 Blackwell Server Edition achieves up to 5.6x faster LLM inference and 3.5x faster text-to-video generation than the previous generation, ideal for inference of models up to 70B parameters. By combining NVIDIA's cutting-edge compute with CoreWeave's purpose-built AI Cloud Platform, customers are able to access a more cost-efficient alternative to larger GPU clusters while maintaining strong performance for teams building and scaling AI applications.
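As a rough illustration of why ~70B-parameter models are a practical fit for an 8-GPU node, weight memory can be estimated from parameter count and numeric precision. This sketch is not from the release: the 96 GB per-GPU memory figure is an assumption used only for illustration, and it ignores KV cache, activations, and framework overhead, which add to the real footprint.

```python
def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: parameters x bytes per parameter."""
    return params_billions * 1e9 * bytes_per_param / 1e9

GPUS_PER_NODE = 8
GPU_MEM_GB = 96  # assumed per-GPU memory for illustration only
node_mem = GPUS_PER_NODE * GPU_MEM_GB

# Weight footprint of a 70B-parameter model at common inference precisions.
for precision, bpp in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    need = weights_gb(70, bpp)
    print(f"70B @ {precision}: ~{need:.0f} GB weights "
          f"(~{need / node_mem:.0%} of node GPU memory)")
```

Even at FP16, the weights alone would occupy well under half of such a node's aggregate GPU memory, leaving headroom for KV cache and batching.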
With the launch of RTX PRO 6000, CoreWeave now offers one of the widest ranges of NVIDIA Blackwell infrastructure on the market, including the NVIDIA GB200 NVL72 system and NVIDIA HGX B200 platform. Whether a customer is looking to train their next trillion-parameter large language model or serve multimodal inference, CoreWeave's flexible platform and AI-optimized software stack allow customers to select the optimal Blackwell architecture for their unique needs.
"The NVIDIA RTX PRO 6000 GPU represents a breakthrough in AI and graphics performance, empowering a variety of industries with advanced, cost-effective solutions," said Dave Salvator, director of accelerated computing products at NVIDIA. "As the first to offer the RTX PRO 6000, CoreWeave demonstrates how rapidly our partners can bring the power of Blackwell-based architecture to market, enabling businesses to accelerate innovation and achieve transformative results."
CoreWeave continues to demonstrate its ability to be first to market with the world's latest and most advanced hardware solutions, giving customers unparalleled access to the next generation of compute infrastructure at unprecedented speed. Last year, the company was among the first to offer NVIDIA H200 GPUs and was the first AI cloud provider to make NVIDIA GB200 NVL72 systems generally available. In June 2025, CoreWeave, in collaboration with NVIDIA and IBM, submitted the largest-ever MLPerf® Training v5.0 benchmark using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips, achieving a breakthrough result on the most complex model, Llama 3.1 405B, in just 27.3 minutes.
CoreWeave's RTX PRO 6000-based instances feature 8x RTX PRO 6000 GPUs, 128 Intel Emerald Rapids vCPUs, 1TB system RAM, 100 Gbps networking throughput, and 7.68TB of local NVMe storage. These instances are integrated with CoreWeave's AI cloud platform, where every layer is fine-tuned to maximize efficiency for AI workloads, with deep optimizations across hardware, software, and operations. CoreWeave is the only hyperscaler to achieve the highest Platinum rating in SemiAnalysis's GPU Cloud ClusterMAX™ Rating System, an independent AI cloud industry benchmark.
About CoreWeave
CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. CoreWeave's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and
View original content to download multimedia: https://www.prnewswire.com/news-releases/coreweave-becomes-the-first-ai-cloud-provider-to-offer-nvidia-rtx-pro-6000-blackwell-gpu-at-scale-302500917.html
SOURCE CoreWeave