
CoreWeave Extends Its Cloud Platform with NVIDIA Rubin Platform

Key Terms

Agentic AI (technical)
Agentic AI refers to computer systems that can make their own decisions and take actions without needing someone to tell them what to do each time. It's like giving a robot a degree of independence to solve problems or achieve goals on its own, which matters because it could change how we work and interact with technology in everyday life.
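
As a rough illustration only (none of this comes from CoreWeave's announcement), a toy perceive-decide-act loop in Python shows the core idea: the program picks its own next step toward a goal instead of following a fixed script. The goal, tools, and stopping rule below are invented for the example.

```python
# Toy agent loop: the program chooses its own next action toward a goal
# instead of following a fixed script. Goal and tools are hypothetical.

def search_docs(query):
    return f"docs about {query}"

def summarize(text):
    return text.upper()[:40]

TOOLS = {"search": search_docs, "summarize": summarize}

def run_agent(goal, max_steps=5):
    memory = []
    for _ in range(max_steps):
        # Decide: pick the next tool based on what has been done so far.
        if not memory:
            action, arg = "search", goal
        else:
            action, arg = "summarize", memory[-1]
        result = TOOLS[action](arg)   # Act autonomously, no human prompt per step.
        memory.append(result)
        if action == "summarize":     # Simple stopping rule: goal reached.
            return result
    return memory[-1]

print(run_agent("GPU cluster scheduling"))
```
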
Mixture-of-experts models (technical)
Mixture-of-experts models are a type of artificial intelligence that uses many small specialist “experts,” each trained to handle different kinds of tasks or data, and a gatekeeper that routes each input to the most suitable experts. For investors, they matter because this approach can deliver higher accuracy and faster results while using less computing power than one-size-fits-all models, affecting a company’s product performance, operating costs, and competitive position in AI-driven markets.
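
A minimal NumPy sketch of the gating idea, with arbitrary dimensions and an invented router: each input is scored against all experts, only the top-scoring experts run, and their outputs are mixed. This is a conceptual illustration, not the architecture of any model mentioned in the release.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 16, 4, 2

# Each "expert" is a small independent weight matrix (a stand-in for a sub-network).
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))   # gating network weights

def moe_layer(x):
    """Route one input to its top-k experts and mix their outputs."""
    scores = x @ router_w                                # router score per expert
    top = np.argsort(scores)[-top_k:]                    # keep only the best experts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen experts
    # Only the selected experts compute, which is where the efficiency gain comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d_model)
print(moe_layer(x).shape)   # (16,)
```

Because only a few experts execute per input, total parameter count can grow without a proportional rise in compute per query, which is the cost advantage the definition describes.
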
Kubernetes-native (technical)
Software or services described as "Kubernetes-native" are built specifically to run on Kubernetes, a widely used platform that manages containerized applications. For investors, this signals modern, cloud-friendly design that can make a product easier to scale, update and move between providers—similar to furniture made to fit a common modular shelving system—potentially lowering operating costs and accelerating growth opportunities, while also shaping vendor and deployment risks.
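
One way to picture "Kubernetes-native" is that the workload is described declaratively and handed to the Kubernetes API, which then keeps the declared state running. The sketch below uses the official kubernetes Python client; the image, names, and namespace are placeholders, and a reachable cluster plus kubeconfig are assumed.

```python
# Sketch: declare a containerized service to Kubernetes with the official
# Python client. Image and names are placeholders; a cluster is assumed.
from kubernetes import client, config

config.load_kube_config()  # local kubeconfig; in-cluster apps use load_incluster_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="inference-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies running and replaces failed ones
        selector=client.V1LabelSelector(match_labels={"app": "inference-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "inference-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="server", image="registry.example.com/inference-api:1.0")
            ]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same declarative pattern is what lets Kubernetes-native software be scaled, updated, or moved between clusters without bespoke tooling.
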
Reinforcement learning (technical)
A type of artificial intelligence that learns by trial and error, receiving feedback from its actions to favor choices that lead to better outcomes. Think of it like a salesperson learning which pitches close deals by trying different approaches and keeping the ones that work. For investors, reinforcement learning matters because it can power smarter trading systems, optimize business operations, or improve products—potentially boosting efficiency and profits while also introducing model and execution risks.
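
A minimal epsilon-greedy sketch of trial-and-error learning, using the salesperson analogy: the agent tries "pitches" with hidden success rates, records the rewards, and gradually favors what works. The pitches and probabilities are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical "sales pitches" with hidden success rates the agent must discover.
TRUE_SUCCESS = {"pitch_a": 0.2, "pitch_b": 0.5, "pitch_c": 0.8}

values = {a: 0.0 for a in TRUE_SUCCESS}   # learned estimate of each action's payoff
counts = {a: 0 for a in TRUE_SUCCESS}
epsilon = 0.1                             # fraction of time spent exploring

for _ in range(2000):
    # Explore occasionally, otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    reward = 1.0 if random.random() < TRUE_SUCCESS[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(max(values, key=values.get))  # typically converges to "pitch_c", the best option
```
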
Reliability, availability, and serviceability (RAS) (technical)
Reliability, availability, and serviceability (RAS) is a way to describe how well a product or system stays working, can be used when needed, and how easy it is to repair or maintain. Think of it like a car that rarely breaks down (reliability), starts whenever you need it (availability), and can be fixed quickly and inexpensively (serviceability). For investors, strong RAS means lower downtime and repair costs, better customer retention, and more predictable operational performance, all of which can protect earnings and valuation.
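
Availability is often summarized as MTBF / (MTBF + MTTR): time running between failures versus time spent repairing. A quick worked example with made-up figures, not CoreWeave metrics:

```python
# Availability estimated as MTBF / (MTBF + MTTR). Figures are illustrative only.

def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

mtbf = 1000.0   # mean time between failures (reliability)
mttr = 2.0      # mean time to repair (serviceability)

print(f"{availability(mtbf, mttr):.4%}")  # 99.8004% uptime
```
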

LIVINGSTON, N.J.--(BUSINESS WIRE)-- CoreWeave, Inc. (Nasdaq: CRWV), The Essential Cloud for AI™, today announced it will add NVIDIA Rubin technology to its AI cloud platform, expanding the range of solutions available to its customers who are building and deploying agentic AI, reasoning, and large-scale inference workloads. CoreWeave is expected to be among the first cloud providers to deploy the NVIDIA Rubin platform in the second half of 2026, offering its customers greater flexibility and choice as AI systems scale.

NVIDIA Vera Rubin NVL72 racks to be deployed by CoreWeave

CoreWeave designed its cloud platform to operate large-scale AI across multiple generations of technology, enabling customers to match the right systems to the right workloads as requirements evolve. The addition of the NVIDIA Rubin platform will advance this strategy by expanding the performance, efficiency, and scale available to enterprises, AI labs, and startups running production AI workloads.

“The NVIDIA Rubin platform represents an important advancement as AI evolves toward more sophisticated reasoning and agentic use cases,” said Michael Intrator, Co-founder, Chairman, and Chief Executive Officer, CoreWeave. “Enterprises come to CoreWeave for real choice and the ability to run complex workloads reliably at production scale. With CoreWeave Mission Control™ as our operating standard, we can bring new technologies like Rubin to market quickly and enable our customers to deploy their innovations at scale with confidence.”

“CoreWeave’s speed, scale, and ingenuity make them an essential partner in this new era of computing. With Rubin, we’re pushing the boundaries of AI—from reasoning to agentic AI—and CoreWeave is helping turn that potential into production as one of the first to deploy it later this year,” said Jensen Huang, Founder and Chief Executive Officer, NVIDIA. “Together, we’re not just deploying infrastructure—we’re building the AI factories of the future.”

Designed to support demanding workloads such as agentic AI, drug discovery, genomic research, climate simulation, and fusion energy modeling, the NVIDIA Rubin platform enables large-scale mixture-of-experts models that require massive and sustained compute. On CoreWeave, Rubin will support AI builders who need to train, serve, and scale these workloads with flexibility, performance, and consistency.

CoreWeave has a proven track record of bringing advanced AI infrastructure to market quickly and at scale. The company was the first cloud provider to offer general availability of NVIDIA GB200 NVL72 instances and the NVIDIA Grace Blackwell Ultra NVL72 platform. Its custom-built software stack for AI accelerates deployment timelines while maintaining industry-leading standards for performance and reliability.

NVIDIA Rubin will be deployed using CoreWeave Mission Control, the industry’s first operating standard for training, inference, and agentic AI workloads, which unifies security, expert-led operations, and observability to enable reliability, transparency, and actionable insights. Integrated with the NVIDIA Reliability, Availability, and Serviceability (RAS) Engine, CoreWeave Mission Control provides real-time diagnostics and observability across fleet, rack, and cabinet levels, giving customers clear visibility into system health and schedulable production capacity.
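
The release does not describe Mission Control's internals, so purely as an illustration of what fleet- and rack-level observability can mean in practice, here is a hypothetical health roll-up in Python; the telemetry, structure, and names are invented.

```python
# Illustrative roll-up of node health into rack- and fleet-level views.
# Structure and data are hypothetical, not Mission Control's implementation.
from collections import defaultdict

# Hypothetical telemetry: (rack, node) -> passed its diagnostics?
node_health = {
    ("rack-01", "node-0"): True,
    ("rack-01", "node-1"): False,   # e.g. flagged by a RAS diagnostic
    ("rack-02", "node-0"): True,
    ("rack-02", "node-1"): True,
}

racks = defaultdict(list)
for (rack, node), healthy in node_health.items():
    racks[rack].append(healthy)

# In this sketch, a rack counts as schedulable only if every node passes.
rack_status = {rack: all(nodes) for rack, nodes in racks.items()}
schedulable = [rack for rack, ok in rack_status.items() if ok]

print(rack_status)                       # {'rack-01': False, 'rack-02': True}
print("schedulable capacity:", schedulable)
```
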

To manage the tightly coupled requirements of power delivery, liquid cooling, and network integration at scale, CoreWeave has developed the Rack Lifecycle Controller, a Kubernetes-native orchestrator that treats an entire NVIDIA Vera Rubin NVL72 rack as a single programmable entity. This system coordinates provisioning, power operations, and hardware validation to ensure production readiness before customer workloads are deployed.
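
CoreWeave has not published the Rack Lifecycle Controller's design beyond this description, so the following is a conceptual sketch only: a Kubernetes-controller-style reconcile loop that walks a single rack object through provisioning, power-on, and validation before marking it ready. All states, checks, and names are hypothetical.

```python
# Conceptual sketch only: a reconcile loop that treats a whole rack as one object
# and advances it toward readiness. States and checks are hypothetical.
from enum import Enum, auto

class RackPhase(Enum):
    PROVISIONING = auto()
    POWERING_ON = auto()
    VALIDATING = auto()
    READY = auto()

class Rack:
    def __init__(self, name):
        self.name = name
        self.phase = RackPhase.PROVISIONING

def reconcile(rack):
    """Advance the rack one step toward READY, controller style."""
    if rack.phase is RackPhase.PROVISIONING:
        # e.g. assign network fabric, cooling loop, and firmware baselines
        rack.phase = RackPhase.POWERING_ON
    elif rack.phase is RackPhase.POWERING_ON:
        # e.g. sequence power delivery and verify liquid-cooling flow before load
        rack.phase = RackPhase.VALIDATING
    elif rack.phase is RackPhase.VALIDATING:
        # e.g. run burn-in and interconnect tests across the rack
        rack.phase = RackPhase.READY
    return rack.phase

rack = Rack("vera-rubin-nvl72-01")
while reconcile(rack) is not RackPhase.READY:
    pass
print(rack.name, "is", rack.phase.name)  # only now would customer workloads be scheduled
```

Treating the rack as the unit of orchestration, rather than individual servers, matches the tightly coupled power, cooling, and network requirements the paragraph describes.
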

“Workloads like drug discovery, climate modeling, and advanced robotics demand both cutting-edge compute and the ability to run it reliably at scale,” said Dan O’Brien, President and COO, The Futurum Group. “The NVIDIA Rubin platform expands what is possible, and platforms like CoreWeave are what make those capabilities available in practice. That combination is what accelerates real progress.”

Once NVIDIA Rubin is integrated into the CoreWeave Cloud platform, customers will be able to focus on building advanced AI systems rather than managing infrastructure. By pairing NVIDIA Rubin’s agentic and reasoning capabilities with CoreWeave’s purpose-built software stack, CoreWeave is enabling large-scale training, high-performance inference, and low-latency agentic AI for the next generation of intelligent applications.

The addition of NVIDIA Rubin builds on CoreWeave’s broader platform strategy to unify the essential tools required to run AI at production scale on a single cloud platform, spanning high-performance compute, multi-cloud-compatible data storage, and the software layer builders rely on to develop, test, and deploy AI systems. Recent platform innovations such as CoreWeave’s Serverless RL, the first publicly available, fully managed reinforcement learning capability, further extend this foundation. CoreWeave’s focus on performance and operational excellence is reflected in industry-leading MLPerf benchmark results and in its distinction as the only AI cloud to earn top Platinum rankings in both SemiAnalysis ClusterMAX™ 1.0 and 2.0, reinforcing the company’s ability to deliver advanced AI infrastructure with reliability and efficiency at scale.

About CoreWeave

CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to move at the pace of innovation, building and scaling AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave serves as a force multiplier by combining superior infrastructure performance with deep technical expertise to accelerate breakthroughs. Established in 2017, CoreWeave completed its public listing on Nasdaq (CRWV) in March 2025. Learn more at www.coreweave.com.

Media

press@coreweave.com

Source: CoreWeave, Inc.
