
Oracle and AMD Collaborate to Help Customers Deliver Breakthrough Performance for Large-Scale AI and Agentic Workloads

Rhea-AI Impact: Low
Rhea-AI Sentiment: Very Positive
Tags: AI
Oracle and AMD announced a major collaboration to enhance AI capabilities on Oracle Cloud Infrastructure (OCI), with Oracle becoming one of the first hyperscalers to offer an AI supercomputer using AMD Instinct MI355X GPUs. The partnership will deliver a zettascale AI cluster supporting up to 131,072 MI355X GPUs, providing 2.8X higher throughput and 2X better price-performance compared to the previous generation. The new offering features 288GB of HBM3 memory, up to 8TB/s of memory bandwidth, and support for the FP4 compute standard. Notable improvements include a liquid-cooled design at 125kW per rack, AMD Turin CPU head nodes, the open-source AMD ROCm software stack, and a pioneering implementation of AMD Pollara AI NICs for advanced networking capabilities.
Positive
  • 2.8X higher throughput and 2X better price-performance compared to previous generation GPUs
  • Massive scalability with support for up to 131,072 MI355X GPUs in zettascale clusters
  • Significant memory improvements with 288GB HBM3 and 8TB/s memory bandwidth
  • Implementation of cost-effective FP4 standard for efficient AI model deployment
  • Open-source compatibility through AMD ROCm stack prevents vendor lock-in
Negative
  • High power consumption at 1,400 watts per GPU may lead to significant operational costs
  • Requires specialized liquid-cooling infrastructure which could increase complexity
  • Large-scale deployment might require substantial initial investment

Insights

Oracle's AMD partnership brings 2X better price-performance for AI workloads, strengthening OCI's competitive position against AWS and Azure.

The Oracle-AMD collaboration represents a significant strategic enhancement to Oracle's cloud infrastructure portfolio. By integrating AMD's MI355X GPUs into OCI, Oracle is positioning itself to compete more effectively in the high-stakes AI infrastructure market currently dominated by NVIDIA-powered offerings from AWS, Azure, and Google Cloud.

The technical specifications are impressive: these AMD GPUs deliver 2.8X higher throughput than the previous generation, with 288GB of HBM3 memory and 8TB/s of memory bandwidth. This configuration specifically addresses the growing demand for processing large language models (LLMs) and generative AI workloads, which require enormous computational resources.
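To put that memory capacity in context, the rough sketch below estimates weights-only footprints for a few illustrative model sizes at common precisions and checks them against a 288GB budget. The model sizes are assumptions for illustration, and the estimate ignores KV cache, activations, and framework overhead, so it is a back-of-envelope check rather than a sizing guide.

```python
# Back-of-envelope check: do a model's weights fit in a single GPU's 288 GB of HBM?
# Illustrative only -- ignores KV cache, activations, and framework overhead.

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}
HBM_CAPACITY_GB = 288  # per-GPU capacity cited in the announcement

for params_b in (70, 180, 405):  # example model sizes, in billions of parameters
    for precision, bytes_per in BYTES_PER_PARAM.items():
        weights_gb = params_b * 1e9 * bytes_per / 1e9
        verdict = "fits" if weights_gb <= HBM_CAPACITY_GB else "needs multiple GPUs"
        print(f"{params_b}B params @ {precision}: ~{weights_gb:.0f} GB -> {verdict}")
```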

What's particularly notable is Oracle's commitment to building a zettascale AI supercluster with up to 131,072 MI355X GPUs—a massive deployment that signals Oracle's ambition to capture market share in high-performance AI computing. The liquid-cooled racks operating at 125 kilowatts with 64 GPUs per rack indicate Oracle is embracing advanced thermal management techniques necessary for ultra-dense AI compute environments.
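Taking the announced figures at face value, a quick back-of-envelope calculation conveys the scale involved. The inputs below come straight from the announcement (131,072 GPUs, 64 GPUs per rack, 125 kW per rack, 1,400 W per GPU); everything derived from them is an upper-bound estimate, not a disclosed number.

```python
# Rough scale estimate from the figures quoted in the announcement.
total_gpus = 131_072
gpus_per_rack = 64
rack_power_kw = 125
gpu_power_w = 1_400

racks = total_gpus // gpus_per_rack                          # 2,048 racks at full build-out
gpu_power_per_rack_kw = gpus_per_rack * gpu_power_w / 1_000  # 89.6 kW of GPU power per 125 kW rack
cluster_power_mw = racks * rack_power_kw / 1_000             # ~256 MW if every rack draws its full budget

print(f"Racks: {racks:,}")
print(f"GPU share of each 125 kW rack: {gpu_power_per_rack_kw:.1f} kW")
print(f"Upper-bound cluster power: ~{cluster_power_mw:.0f} MW")
```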

The emphasis on an open-source stack through AMD ROCm is strategically significant, as it offers customers flexibility and avoids vendor lock-in—a direct counter to NVIDIA's more proprietary CUDA ecosystem. This could appeal to enterprise customers concerned about dependency on a single vendor's technology stack.

Finally, Oracle becoming the first to deploy AMD Pollara AI NICs demonstrates a commitment to networking innovations critical for distributed AI training. The collaboration clearly aims to provide better price-performance for AI workloads, which could help Oracle attract cost-conscious enterprises looking to scale their AI initiatives without the premium associated with NVIDIA-based solutions.

Oracle will be among the first hyperscalers to offer an AI supercomputer with AMD Instinct MI355X GPUs

OCI to deploy new zettascale AI cluster with up to 131,072 MI355X GPUs to enable customers to build, train, and inference AI at scale

AUSTIN, Texas and SANTA CLARA, Calif., June 12, 2025 /PRNewswire/ -- Oracle and AMD today announced that AMD Instinct™ MI355X GPUs will be available on Oracle Cloud Infrastructure (OCI) to give customers more choice and more than 2X better price-performance for large-scale AI training and inference workloads compared to the previous generation. Oracle will offer zettascale AI clusters accelerated by the latest AMD Instinct processors with up to 131,072 MI355X GPUs to enable customers to build, train, and inference AI at scale.

"To support customers that are running the most demanding AI workloads in the cloud, we are dedicated to providing the broadest AI infrastructure offerings," said Mahesh Thiagarajan, executive vice president, Oracle Cloud Infrastructure. "AMD Instinct GPUs, paired with OCI's performance, advanced networking, flexibility, security, and scale, will help our customers meet their inference and training needs for AI workloads and new agentic applications."

To support new AI applications that require larger and more complex datasets, customers need AI compute solutions that are specifically designed for large-scale AI training. The zettascale OCI Supercluster with AMD Instinct MI355X GPUs meets this need by providing a high-throughput, ultra-low latency RDMA cluster network architecture for up to 131,072 MI355X GPUs. The AMD Instinct MI355X delivers nearly triple the compute power of the previous generation and a 50 percent increase in high-bandwidth memory.

"AMD and Oracle have a shared history of providing customers with open solutions to accommodate high performance, efficiency, and greater system design flexibility," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Business Group, AMD. "The latest generation of AMD Instinct GPUs and Pollara NICs on OCI will help support new use cases in inference, fine-tuning, and training, offering more choice to customers as AI adoption grows."

AMD Instinct MI355X Coming to OCI
AMD Instinct MI355X-powered shapes are designed with superior value, cloud flexibility, and open-source compatibility—ideal for customers running today's largest language models and AI workloads. With AMD Instinct MI355X on OCI, customers will be able to benefit from:

  • Significant performance boost: Helps customers increase performance for AI deployments with up to 2.8X higher throughput. To enable AI innovation at scale, customers can expect faster results, lower latency, and the ability to run larger AI workloads.
  • Larger, faster memory: Allows customers to execute large models entirely in memory, enhancing inference and training speeds for models that require high memory bandwidth. The new shapes offer 288 gigabytes of high-bandwidth memory 3 (HBM3) and up to eight terabytes per second of memory bandwidth.
  • New FP4 support: Allows customers to deploy modern large language and generative AI models cost-effectively with the support of the new 4-bit floating point compute (FP4) standard. This enables ultra-efficient and high-speed inference (a minimal FP4 quantization sketch follows this list).
  • Dense, liquid-cooled design: Enables customers to maximize performance density at 125 kilowatts per rack for demanding AI workloads. With 64 GPUs per rack at 1,400 watts each, customers can expect faster training times with higher throughput and lower latency.
  • Built for production-scale training and inference: Supports customers deploying new agentic applications with a faster time-to-first token (TTFT) and high tokens-per-second throughput. Customers can expect improved price performance for both training and inference workloads.
  • Powerful head node: Assists customers in optimizing their GPU performance by enabling efficient job orchestration and data processing with an AMD Turin high-frequency CPU and up to three terabytes of system memory.
  • Open-source stack: Enables customers to leverage flexible architectures and easily migrate their existing code with no vendor lock-in through AMD ROCm. AMD ROCm is an open software stack that includes popular programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs.
  • Network innovation with AMD Pollara™: Provides customers with advanced RoCE functionality that enables innovative network fabric designs. Oracle will be the first to deploy AMD Pollara AI NICs on backend networks, offering capabilities such as programmable congestion control and support for open industry standards from the Ultra Ethernet Consortium (UEC) for high-performance, low-latency networking.
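To make the FP4 item above concrete, here is a minimal sketch of 4-bit floating-point (E2M1) quantization with per-block scaling, in the spirit of block-scaled formats such as MXFP4. It illustrates the general technique only and is not AMD's or Oracle's implementation; the block size and scaling rule are assumptions made for the example.

```python
import numpy as np

# The 16 values representable in FP4 E2M1 (1 sign, 2 exponent, 1 mantissa bit).
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([-FP4_GRID[::-1], FP4_GRID])

def quantize_fp4_blockwise(x: np.ndarray, block: int = 32) -> np.ndarray:
    """Round each block of values to the nearest FP4 code after per-block scaling."""
    x = x.reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / 6.0  # map each block's max onto FP4's largest magnitude
    scale[scale == 0] = 1.0
    idx = np.abs(x / scale - FP4_GRID[:, None, None]).argmin(axis=0)  # nearest-code lookup
    return (FP4_GRID[idx] * scale).reshape(-1)

weights = np.random.randn(1024).astype(np.float32)
approx = quantize_fp4_blockwise(weights)
print("mean abs quantization error:", np.abs(weights - approx).mean())
```

Production FP4 kernels store the 4-bit codes plus a shared scale per block and perform the matrix math in hardware; this sketch only shows the rounding step and the approximation error it introduces.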

About Oracle
Oracle offers integrated suites of applications plus secure, autonomous infrastructure in the Oracle Cloud. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.

Trademarks
Oracle, Java, MySQL and NetSuite are registered trademarks of Oracle Corporation. NetSuite was the first cloud company—ushering in the new era of cloud computing.

AMD, the AMD Arrow Logo, AMD Instinct, Pollara, ROCm, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other names are for informational purposes only and may be trademarks of their respective owners.

View original content to download multimedia: https://www.prnewswire.com/news-releases/oracle-and-amd-collaborate-to-help-customers-deliver-breakthrough-performance-for-large-scale-ai-and-agentic-workloads-302480486.html

SOURCE Oracle

FAQ

What is the performance improvement of AMD Instinct MI355X GPUs on Oracle Cloud Infrastructure?

The AMD Instinct MI355X GPUs on OCI deliver 2.8X higher throughput and 2X better price-performance compared to the previous generation.
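For readers who want to evaluate throughput claims like this against their own workloads, the usual serving metrics are time-to-first-token (TTFT) and steady-state tokens per second. Below is a generic, framework-agnostic measurement sketch; the `generate_tokens` stub is a hypothetical stand-in for whatever streaming generation call your serving stack actually exposes.

```python
import time
from typing import Iterable, Iterator

def measure_stream(tokens: Iterable[str]) -> tuple[float, float]:
    """Return (time-to-first-token in seconds, steady-state tokens/sec) for any token stream."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _ in tokens:
        count += 1
        if ttft is None:
            ttft = time.perf_counter() - start
    total = time.perf_counter() - start
    tps = (count - 1) / (total - ttft) if count > 1 and total > ttft else 0.0
    return ttft or 0.0, tps

def generate_tokens(prompt: str) -> Iterator[str]:
    """Hypothetical placeholder for a real streaming inference call."""
    for word in ("hello", "from", "a", "stubbed", "model"):
        time.sleep(0.01)  # simulate per-token latency
        yield word

ttft, tps = measure_stream(generate_tokens("benchmark prompt"))
print(f"TTFT: {ttft * 1000:.1f} ms, throughput: {tps:.1f} tokens/s")
```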

How many AMD Instinct MI355X GPUs can Oracle's new zettascale AI cluster support?

Oracle's new zettascale AI cluster can support up to 131,072 AMD Instinct MI355X GPUs.

What are the memory specifications of the AMD Instinct MI355X GPUs on OCI?

The AMD Instinct MI355X GPUs feature 288 gigabytes of HBM3 memory and up to eight terabytes per second of memory bandwidth.

What is the power consumption of AMD Instinct MI355X GPUs in Oracle's infrastructure?

Each GPU consumes 1,400 watts, and each liquid-cooled rack operates at 125 kilowatts.

How does Oracle prevent vendor lock-in with the AMD Instinct MI355X deployment?

Oracle uses AMD's open-source ROCm software stack, which includes programming models, tools, and libraries that allow flexible architectures and easy code migration.
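As a concrete, if minimal, illustration of that portability argument: PyTorch's ROCm builds reuse the familiar torch.cuda device namespace, so code written for GPUs generally runs without source changes on AMD Instinct hardware. The sketch below is a generic example, not an Oracle- or AMD-supplied one, and assumes a PyTorch build with either ROCm or CUDA support installed.

```python
import torch

def pick_device() -> torch.device:
    """Select an accelerator in a vendor-neutral way.

    ROCm builds of PyTorch expose AMD GPUs through the torch.cuda namespace,
    so the same code path covers both ROCm and CUDA installations.
    """
    if torch.cuda.is_available():
        backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
        print(f"Using GPU backend: {backend}")
        return torch.device("cuda")
    print("No GPU found, falling back to CPU")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x.t()  # identical matmul code on ROCm and CUDA builds
print(y.shape)
```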