CoreWeave Becomes the First AI Cloud Provider to Offer NVIDIA RTX PRO 6000 Blackwell GPU at Scale
Rhea-AI Summary
CoreWeave (Nasdaq: CRWV) has become the first cloud platform to offer NVIDIA RTX PRO 6000 Blackwell Server Edition instances for general availability. The new GPU architecture delivers up to 5.6x faster LLM inference and 3.5x faster text-to-video generation compared to its predecessor, optimized for models up to 70B parameters.
The RTX PRO 6000-based instances feature 8x RTX PRO 6000 GPUs, 128 Intel Emerald Rapids vCPUs, 1 TB of system RAM, 100 Gbps networking throughput, and 7.68 TB of local NVMe storage. CoreWeave now offers one of the broadest ranges of NVIDIA Blackwell infrastructure, including the NVIDIA GB200 NVL72 system and NVIDIA HGX B200 platform.
CoreWeave recently achieved a milestone by submitting the largest-ever MLPerf® Training v5.0 benchmark, training the Llama 3.1 405B model in just 27.3 minutes using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips.
Positive
- First-to-market advantage with NVIDIA RTX PRO 6000 Blackwell Server Edition deployment
- Significant performance improvements: 5.6x faster LLM inference and 3.5x faster text-to-video generation
- Achieved Platinum rating in SemiAnalysis's GPU Cloud ClusterMAX Rating System
- Demonstrated superior performance with record-breaking MLPerf Training benchmark
Negative
- None.
News Market Reaction
On the day this news was published, CRWV gained 1.06%, reflecting a mild positive market reaction.
Data tracked by StockTitan Argus on the day of publication.
Groundbreaking GPU architecture, powered by CoreWeave's AI Cloud platform, will enable enterprises and startups to push the boundaries of AI innovation
"CoreWeave is built to move at the speed of innovation, and with the new RTX PRO 6000-based instances, we're once again first to bring advanced AI and graphics technology to the cloud," said Peter Salanki, Co-Founder and Chief Technology Officer of CoreWeave. "This is a major step forward for customers building the future of AI, as it gives them the ability to optimize and scale on GPU instances that are ideal for their applications, and a testament to the speed and reliability of our AI cloud platform."
The new RTX PRO 6000 Blackwell Server Edition achieves up to 5.6x faster LLM inference and 3.5x faster text-to-video generation than the previous generation, making it ideal for inference with models up to 70B parameters. By combining NVIDIA's cutting-edge compute with CoreWeave's purpose-built AI Cloud Platform, customers gain a more cost-efficient alternative to larger GPU clusters while maintaining strong performance for teams building and scaling AI applications.
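A back-of-envelope calculation illustrates why an 8-GPU node of this class suits models up to roughly 70B parameters. This sketch assumes the RTX PRO 6000 Blackwell's published 96 GB of GPU memory and standard bytes-per-parameter figures for common inference precisions; it is an illustration, not a sizing guarantee.

```python
# Rough memory-footprint check for ~70B-parameter inference on an 8x node.
# Assumptions (not from the press release): 96 GB per RTX PRO 6000 GPU,
# and 2 bytes/param (FP16) or 1 byte/param (FP8) for the model weights.
params = 70e9                # 70B-parameter model
gpus, mem_per_gpu_gb = 8, 96
total_gb = gpus * mem_per_gpu_gb

for precision, bytes_per_param in [("FP16", 2), ("FP8", 1)]:
    weights_gb = params * bytes_per_param / 1e9
    headroom_gb = total_gb - weights_gb  # left for KV cache, activations, etc.
    print(f"{precision}: weights {weights_gb:.0f} GB of {total_gb} GB, "
          f"~{headroom_gb:.0f} GB headroom")
```

Even at FP16, the 70B weights occupy well under a quarter of the node's aggregate GPU memory, leaving ample room for KV cache and batching.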
With the launch of RTX PRO 6000, CoreWeave now offers one of the widest ranges of NVIDIA Blackwell infrastructure on the market, including the NVIDIA GB200 NVL72 system and NVIDIA HGX B200 platform. Whether a customer is looking to train their next trillion-parameter large language model or serve multimodal inference, CoreWeave's flexible platform and AI-optimized software stack allow them to select the optimal Blackwell architecture for their unique needs.
"The NVIDIA RTX PRO 6000 GPU represents a breakthrough in AI and graphics performance, empowering a variety of industries with advanced, cost-effective solutions," said Dave Salvator, director of accelerated computing products at NVIDIA. "As the first to offer the RTX PRO 6000, CoreWeave demonstrates how rapidly our partners can bring the power of Blackwell-based architecture to market, enabling businesses to accelerate innovation and achieve transformative results."
CoreWeave continues to demonstrate its ability to be first to market with the world's latest and most advanced hardware solutions, giving customers unparalleled access to the next generation of compute infrastructure at unprecedented speed. Last year, the company was among the first to offer NVIDIA H200 GPUs and was the first AI cloud provider to make NVIDIA GB200 NVL72 systems generally available. In June 2025, CoreWeave, in collaboration with NVIDIA and IBM, submitted the largest-ever MLPerf® Training v5.0 benchmark using nearly 2,500 NVIDIA GB200 Grace Blackwell Superchips, achieving a breakthrough result on the most complex model, Llama 3.1 405B, in just 27.3 minutes.
CoreWeave's RTX PRO 6000-based instances feature 8x RTX PRO 6000 GPUs, 128 Intel Emerald Rapids vCPUs, 1 TB of system RAM, 100 Gbps networking throughput, and 7.68 TB of local NVMe storage. These instances are integrated with CoreWeave's AI cloud platform, where every layer is fine-tuned to maximize efficiency for AI workloads, with deep optimizations across hardware, software, and operations. CoreWeave is the only hyperscaler to achieve the highest Platinum rating in SemiAnalysis's GPU Cloud ClusterMAX™ Rating System, an independent AI cloud industry benchmark.
About CoreWeave
CoreWeave, the AI Hyperscaler™, delivers a cloud platform of cutting-edge software powering the next wave of AI. CoreWeave's technology provides enterprises and leading AI labs with cloud solutions for accelerated computing. Since 2017, CoreWeave has operated a growing footprint of data centers across the US and Europe.
View original content to download multimedia: https://www.prnewswire.com/news-releases/coreweave-becomes-the-first-ai-cloud-provider-to-offer-nvidia-rtx-pro-6000-blackwell-gpu-at-scale-302500917.html
SOURCE CoreWeave