Oracle Unveils Next-Generation Oracle Cloud Infrastructure Zettascale10 Cluster for AI
Oracle (ORCL) announced OCI Zettascale10, which it describes as the largest AI supercomputer in the cloud, delivering up to 16 zettaFLOPS of peak performance by connecting hundreds of thousands of NVIDIA GPUs across multi–gigawatt clusters.
OCI Zettascale10 uses Oracle Acceleron RoCE networking and NVIDIA AI infrastructure, targets initial deployments of up to 800,000 NVIDIA GPUs, is being deployed with OpenAI at the Stargate site in Abilene, Texas, and will be available in the second half of next calendar year. Oracle says the design prioritizes ultra‑low GPU‑to‑GPU latency, cluster utilization, reliability, and power efficiency.
- 16 zettaFLOPS peak performance
- Targets up to 800,000 NVIDIA GPUs per deployment
- Joint supercluster with OpenAI at Stargate in Abilene, Texas
- Designed for ultra‑low GPU‑to‑GPU latency and higher cluster utilization
- Operates in multi‑gigawatt data center campuses, implying very high power demand
- Clusters concentrated within a two‑kilometer radius, increasing site concentration risk
Insights
Oracle announces a large-scale OCI Zettascale10 supercluster with up to 16 zettaFLOPS of peak performance and deployments of up to 800,000 GPUs.
OCI Zettascale10 packages massive GPU count, custom RoCE networking, and multi–gigawatt data‑center campuses to offer extremely high aggregate AI throughput. The collaboration with OpenAI at the Stargate site and the stated support for up to 800,000 NVIDIA GPUs and up to 16 zettaFLOPS signal an intent to serve hyperscale model training at cloud scale.
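As a rough sanity check on those headline figures (an illustrative back-of-envelope calculation, not a number from the announcement): 16 zettaFLOPS is 16 × 10^21 FLOPS, and dividing by 800,000 GPUs gives about 2 × 10^16 FLOPS, or roughly 20 petaFLOPS of peak throughput per GPU. That is in the range of the low-precision peak ratings quoted for current-generation data-center GPUs, which suggests the headline number is a low-precision peak aggregate rather than sustained training throughput.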
Key dependencies include physical campus density, network latency within the stated two-kilometer radius, and successful delivery of the Oracle Acceleron RoCE fabric across planes without introducing new operational failure modes. Watch order intake and the cited availability window (the second half of the next calendar year) for early signs of execution.
Oracle presents a networking‑centric approach to GPU scaling that emphasizes low latency, resilience, and power efficiency for large model workloads.
The core mechanism replaces a deeper network tiering model by using GPU NIC switching and multiple isolated planes to increase scale and avoid single-plane congestion. This should make GPU-to-GPU latency more predictable and lower non-compute power draw via Linear Pluggable Optics and Linear Receiver Optics, while enabling plane-level maintenance without interrupting the full cluster.
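For a concrete sense of why removing a tier matters (standard Clos-fabric reasoning, not figures from the announcement): in a traditional three-tier leaf/spine/core fabric, a worst-case GPU-to-GPU path can traverse five switches (leaf, spine, core, spine, leaf), while a two-tier design caps it at three (leaf, spine, leaf). Dropping a tier removes two switch traversals, and the queuing at those switches, from the worst-case path, which is where much of the latency variance in large fabrics comes from.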
Risks include the complexity of managing many isolated network planes, verifying fabric‑wide latency at the claimed scale, and integrating customer software stacks with the custom RoCE design. Monitor technical benchmarks from the Stargate collaboration, multi‑customer deployment confirmations, and reported GPU‑to‑GPU latency numbers after early customer rollouts; expect these concrete signals in the months following initial availability.
Largest AI supercomputer in the cloud delivers 10X the zettaFLOPS of peak performance
Built on Oracle Acceleron RoCE networking architecture with NVIDIA AI infrastructure, OCI Zettascale10 will provide multi–gigawatt AI workload capacity and scale
OCI Zettascale10 is a powerful evolution of the first Zettascale cloud computing cluster, which was introduced in September 2024. OCI Zettascale10 clusters are housed in large gigawatt data center campuses that are hyper-optimized for density within a two-kilometer radius to offer the best GPU-to-GPU latency for large-scale AI training workloads. This architecture is being deployed with OpenAI at the Stargate site in Abilene, Texas.
"With OCI Zettascale10, we're fusing OCI's groundbreaking Oracle Acceleron RoCE network architecture with next-generation NVIDIA AI infrastructure to deliver multi–gigawatt AI capacity at unmatched scale," said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. "Customers can build, train, and deploy their largest AI models into production using less power per unit of performance and achieving high reliability. In addition, customers will have the freedom to operate across Oracle's distributed cloud with strong data and AI sovereignty controls."
"OCI Zettascale10 network and cluster fabric was developed and deployed first at the flagship Stargate site in
OCI plans to offer multi-gigawatt deployments of OCI Zettascale10 to customers. Initially, OCI Zettascale10 clusters will target deployments of up to 800,000 NVIDIA GPUs delivering predictable performance and strong cost efficiency, with high GPU–to–GPU bandwidth enabled by Oracle Acceleron's ultra–low–latency RoCEv2 networking.
"Oracle and NVIDIA are bringing together OCI's distributed cloud and our full–stack AI infrastructure to deliver AI at extraordinary scale," said Ian Buck, vice president of Hyperscale, NVIDIA. "Featuring NVIDIA full-stack AI infrastructure, OCI Zettascale10 provides the compute fabric needed to advance state–of–the–art AI research and help organizations everywhere move from experimentation to industrialized AI."
Oracle Acceleron RoCE networking delivers scale, reliability, and efficiency for AI on OCI Zettascale10
Oracle Acceleron RoCE networking architecture is a critical innovation that lets customers build, train, and run inference on AI workloads in the cloud while taking full advantage of OCI Zettascale10's power and capabilities. It uses the switching capability built into modern GPU NICs (network interface cards), allowing each NIC to connect to multiple switches simultaneously, with each connection on a separate and isolated network plane. This approach dramatically increases the network's overall scale and reliability by shifting traffic to other network planes when one has a problem, avoiding costly stalls and restarts. Key features of Oracle Acceleron RoCE networking that help customers with their critical AI workloads include the following (an illustrative sketch of the multi-plane behavior follows the list):
- Wide, shallow, resilient fabric: Helps customers deploy larger AI clusters faster at lower total cost by using the GPU NIC as a mini–switch and connecting to multiple physically and logically isolated planes. This boosts scale while reducing network tiers, cost, and power.
- Higher reliability: Helps customers maintain the stability of AI jobs by eliminating data sharing across planes. This shifts traffic away from unstable or congested planes, which keeps training jobs running and avoids costly checkpoint restarts.
- Consistent performance: Provides customers with more uniform GPU–to–GPU latency by removing a network tier compared with traditional three-tier designs, improving predictability for large–scale AI training and inference.
- Power–efficient optics: Supports customer workloads with Linear Pluggable Optics (LPO) and Linear Receiver Optics (LRO) to cut network and cooling costs without sacrificing 400G/800G throughput. This allows customers to devote more of their power budget to compute.
- Operational flexibility: Helps customers reduce downtime and speed up feature rollouts through plane–level maintenance and independent network operating system updates.
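The list above stays at the level of claims. The toy sketch below (Python, purely illustrative; every class and method name is hypothetical and not part of any Oracle or NVIDIA API) models the basic behavior being described: a GPU NIC attached to several isolated planes keeps flows moving when one plane degrades or is drained for maintenance.

```python
# Illustrative sketch only: models the multi-plane idea described above.
# All names are hypothetical; this is not Oracle or NVIDIA software.
from dataclasses import dataclass, field

@dataclass
class NetworkPlane:
    """One physically and logically isolated network plane."""
    plane_id: int
    healthy: bool = True          # becomes False on failure or congestion
    in_maintenance: bool = False  # drained for plane-level maintenance

    def usable(self) -> bool:
        return self.healthy and not self.in_maintenance

@dataclass
class GpuNic:
    """A GPU NIC acting as a 'mini-switch' attached to several planes."""
    planes: list = field(default_factory=list)

    def usable_planes(self) -> list:
        return [p for p in self.planes if p.usable()]

    def send(self, flow_id: int) -> int:
        """Pick a usable plane for a flow; shifting planes avoids a full job restart."""
        candidates = self.usable_planes()
        if not candidates:
            raise RuntimeError("no usable network plane: job would need a checkpoint restart")
        # Simple deterministic spreading of flows across the surviving planes.
        plane = candidates[flow_id % len(candidates)]
        return plane.plane_id

if __name__ == "__main__":
    planes = [NetworkPlane(i) for i in range(4)]
    nic = GpuNic(planes=planes)

    # Normal operation: flows spread across all four planes.
    print([nic.send(f) for f in range(8)])

    # Plane 2 degrades; traffic shifts to the remaining planes and the job keeps running.
    planes[2].healthy = False
    print([nic.send(f) for f in range(8)])

    # Plane 0 is drained for maintenance without interrupting the whole cluster.
    planes[0].in_maintenance = True
    print([nic.send(f) for f in range(8)])
```

The point the sketch mirrors is that a plane problem is handled by reassigning flows across the surviving planes rather than by stopping the training job and restarting from a checkpoint.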
OCI is now taking orders for OCI Zettascale10, which will be available in the second half of next calendar year with clusters of up to 800,000 GPUs built on NVIDIA AI infrastructure.
Additional Resources
- Watch the Oracle AI World keynote with Mahesh Thiagarajan
- Learn more about OCI AI infrastructure
About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.
About Oracle AI World
Oracle AI World is where customers and partners discover the latest product and technology innovations, see how AI is being applied across industries, and connect with experts and peers. Attendees will gain practical tips and insights to drive immediate impact within their organizations and explore how Oracle is helping unlock the full potential of cloud and AI. Join the event to see new capabilities in action and hear from thought leaders and industry movers. Register now at oracle.com/ai-world or follow the news and conversation at oracle.com/news and linkedin.com/company/oracle.
Future Product Disclaimer
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle's products may change and remains at the sole discretion of Oracle Corporation.
Forward-Looking Statements Disclaimer
Statements in this article relating to Oracle's future plans, expectations, beliefs, and intentions are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect Oracle's current expectations and actual results, and could cause actual results to differ materially. A discussion of such factors and other risks that affect Oracle's business is contained in Oracle's Securities and Exchange Commission (SEC) filings, including Oracle's most recent reports on Form 10-K and Form 10-Q under the heading "Risk Factors." These filings are available on the SEC's website or on Oracle's website at oracle.com/investor. All information in this article is current as of October 14, 2025 and Oracle undertakes no duty to update any statement in light of new information or future events.
Trademarks
Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company—ushering in the new era of cloud computing.
View original content to download multimedia: https://www.prnewswire.com/news-releases/oracle-unveils-next-generation-oracle-cloud-infrastructure-zettascale10-cluster-for-ai-302583054.html
SOURCE Oracle