
AMD and its Partners Share their Vision for “AI Everywhere, for Everyone” at CES 2026

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Neutral
Tags: AI

AMD (NASDAQ: AMD) unveiled products and partnerships at CES 2026 focused on scaling AI from cloud to edge. Highlights include the Helios rack-scale blueprint delivering up to 3 AI exaflops per rack using Instinct MI455X, EPYC Venice CPUs and Pensando Vulcano NICs, a full MI400 Series reveal and the new Instinct MI440X eight-GPU enterprise accelerator. AMD previewed the next-generation MI500 Series (targeted 2027) claiming up to 1,000x AI performance vs MI300X. For clients, AMD launched Ryzen AI 400 Series (60 TOPS NPU) shipping Jan 2026 and announced a $150 million commitment to AI education.


Positive

  • Helios rack delivers up to 3 AI exaflops per rack
  • New Instinct MI440X eight-GPU design for on-prem enterprise AI
  • MI500 Series preview targets up to 1,000x AI performance vs MI300X
  • Ryzen AI 400 Series offers 60 TOPS NPU; systems ship Jan 2026
  • AMD committed $150 million to expand AI education access

Negative

  • MI500 performance gains are projected for a 2027 launch, creating timing uncertainty

News Market Reaction (1 alert)

News Effect: -3.04%

On the day this news was published, AMD declined 3.04%, reflecting a moderate negative market reaction.

Data tracked by StockTitan Argus on the day of publication.

Key Figures

Current compute capacity: 100 zettaflops (global compute capacity cited as today’s baseline)
Projected compute: 10+ yottaflops (projected global compute capacity over the next five years)
Helios rack performance: 3 AI exaflops (AI performance per Helios rack for trillion-parameter training)
MI500 performance gain: 1,000x (planned AI performance increase vs Instinct MI300X GPUs)
Ryzen NPU: 60 TOPS (NPU performance in Ryzen AI 400 and PRO 400 platforms)
Model size support: 128 billion parameters (maximum model size for Ryzen AI Max+ 392/388 platforms)
Unified memory: 128GB (unified memory capacity in Ryzen AI Max+ 392/388)
AI education commitment: $150 million (pledged to bring AI into classrooms and communities)

Market Reality Check

Last Close: $203.17
Volume: 30,918,910, 14% above the 20-day average of 27,097,916 (normal)
Technical: price 221.08 is trading above the 200-day MA at 163.81, after a -1.07% day

Peers on Argus

AMD fell 1.07% amid mixed peer moves: MU -3.99%, AVGO -3.19%, ARM -3.53%, TXN -0.48%, while QCOM rose 0.48%. The moves do not indicate a synchronized sector trend.

Common Catalyst: Multiple semiconductor peers highlighted AI and autonomous/robotics product news on the same day.

Historical Context

Date | Event | Sentiment | Move | Catalyst
Dec 02 | AI rack-scale deal | Positive | -2.1% | Expanded HPE collaboration around Helios open rack-scale AI infrastructure.
Dec 02 | AI cloud expansion | Positive | -2.1% | Vultr 50 MW AI supercluster adding 24,000 Instinct GPUs for global AI workloads.
Nov 24 | AI training milestone | Positive | +5.5% | Zyphra’s ZAYA1 MoE model trained entirely on Instinct MI300X GPUs.
Nov 19 | AI JV announcement | Positive | -2.9% | Joint venture with Cisco and HUMAIN targeting up to 1 GW AI capacity.
Nov 18 | Exascale win France | Positive | -4.3% | Alice Recoque exascale supercomputer in France using next-gen EPYC and Instinct.

Pattern Detected

Recent AI infrastructure wins often coincided with negative price reactions, with only one strong positive move among the last five AI-related releases.

Recent Company History

Over the last few months, AMD has announced several large AI infrastructure and supercomputing wins, including the Helios rack-scale platform with HPE and a 50 MW Vultr AI supercluster using 24,000 Instinct GPUs. Other milestones include powering the ZAYA1 foundation model solely on Instinct MI300X and securing France’s first exascale supercomputer, Alice Recoque. Despite broadly positive AI news, four of the last five events saw negative 24-hour price reactions, suggesting a tendency for AMD shares to sell off or consolidate after strong AI headlines.

Market Pulse Summary

This announcement detailed AMD’s broader AI strategy, from the Helios rack-scale platform delivering up to 3 AI exaflops per rack to future MI500 GPUs targeting 1,000x performance over MI300X. It also expanded Ryzen AI PCs, embedded processors, and included a $150 million AI education commitment. In context with earlier AI infrastructure wins and supercomputer deals, investors may watch execution milestones, product availability timelines, and further large-scale deployment announcements.

Key Terms

yotta-scale technical
"the blueprint for yotta-scale AI infrastructure, built on AMD Instinct"
Yotta-scale describes systems, datasets or capacities that reach the yotta level (10^24 units), such as yottabytes of data or yottaflops of compute. For investors, the term signals projects or markets that require massive infrastructure, specialized technology and heavy capital, much like planning highways for the entire planet’s traffic; such scale can create big opportunities but also high costs, execution risk and long timelines.
exaflops technical
"the blueprint for yotta-scale infrastructure, delivering up to 3 AI exaflops"
An exaflop is a measure of computing speed equal to one quintillion (10^18) floating‑point operations per second, effectively describing how many tiny math calculations a computer can perform each second. For investors, exaflop capacity signals the ability to run very large simulations, advanced artificial intelligence models, and data‑heavy workloads faster and at scale—similar to comparing how many cars a highway can move per hour—which can affect demand, pricing, and competitive position in cloud, chip, and data‑center markets.
yottaflops technical
"from today’s 100 zettaflops of global compute capacity to a projected 10+ yottaflops"
A yottaflop is a measure of computing speed equal to one septillion (10^24) floating-point operations per second, where each operation is a basic math calculation computers use for simulations, machine learning or data analysis. For investors, a system rated in yottaflops signals extreme processing capacity—like comparing a single calculator to a city of calculators—which can influence competitiveness in cloud services, AI model training and data-center demand that drive hardware and service revenues.
tokens-per-second-per-dollar technical
"delivering leadership tokens-per-second-per-dollar with high-performance Ryzen AI Max+ Series"
Tokens-per-second-per-dollar is a cost-efficiency metric that measures how many pieces of text (tokens) a computing service can process every second for each dollar spent. Investors use it to compare the speed and price of AI and cloud offerings the way a driver compares miles per gallon: higher values mean faster, cheaper processing and can signal lower operating costs or better competitive positioning for firms that rely on large-scale text processing.
AI exaflops technical
"yotta-scale infrastructure, delivering up to 3 AI exaflops of performance in a single rack"
AI exaflops measures how fast a computer system can perform the massive number-crunching tasks required by artificial intelligence, with one exaflop equaling a quintillion (10^18) calculations per second. Investors watch exaflops because higher raw compute lets firms train and run larger, faster AI models, which can create competitive advantage, change capital spending on hardware, and affect revenue and costs—like choosing a higher‑power engine that increases a factory’s output and expenses.

AI-generated analysis. Not financial advice.

News Highlights

  • AMD provided an early look at its “Helios” rack-scale platform, the blueprint for yotta-scale AI infrastructure, built on AMD Instinct MI455X GPUs and AMD EPYC “Venice” CPUs designed for advanced AI workloads
  • AMD expanded its AI portfolio with the introduction of the AMD Instinct MI440X GPU for enterprise deployments and previewed the next-generation Instinct MI500 Series GPUs
  • Launched new AMD Ryzen AI platforms for AI PCs and embedded applications; unveiled the Ryzen AI Halo developer platform
  • Announced a commitment of $150 million to bring AI into more classrooms and communities

LAS VEGAS, Jan. 05, 2026 (GLOBE NEWSWIRE) -- Today at CES 2026, AMD (NASDAQ: AMD) Chair and CEO Dr. Lisa Su detailed in the show’s opening keynote how the company’s extensive portfolio of AI products and deep cross-industry collaborations are turning the promise of AI into real-world impact.

The keynote showcased major advancements from the data center to the edge, with partners including OpenAI, Luma AI, Liquid AI, World Labs, Blue Origin, Generative Bionics, AstraZeneca, Absci and Illumina detailing how they are using AMD technology to power AI breakthroughs.

“At CES, our partners joined us to show what’s possible when the industry comes together to bring AI everywhere, for everyone,” said Dr. Lisa Su, chair and CEO of AMD. “As AI adoption accelerates, we are entering the era of yotta-scale computing, driven by unprecedented growth in both training and inference. AMD is building the compute foundation for this next phase of AI through end-to-end technology leadership, open platforms, and deep co-innovation with partners across the ecosystem.”

The blueprint for yotta-scale compute
Compute infrastructure is the foundation of AI, and accelerating adoption is driving an unprecedented expansion from today’s 100 zettaflops of global compute capacity to a projected 10+ yottaflops in the next five years. Building AI infrastructure at yotta-scale will require more than raw performance; it demands an open, modular rack design that can evolve across product generations, combining leadership compute engines with high-speed networking to connect thousands of accelerators into a single, unified system.
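
As an illustration only, the arithmetic behind those prefixes is straightforward: a zettaflop is 10^21 floating-point operations per second and a yottaflop is 10^24, so the cited jump from 100 zettaflops to 10+ yottaflops implies roughly a 100x expansion. A minimal Python sketch of that calculation follows; the uniform five-year annualization is a simplifying assumption for illustration, not an AMD figure.

```python
# SI prefixes for floating-point operations per second (FLOPS).
ZETTA = 1e21
YOTTA = 1e24

current_flops = 100 * ZETTA      # "today's 100 zettaflops" (cited baseline)
projected_flops = 10 * YOTTA     # "projected 10+ yottaflops" (lower bound)

growth_factor = projected_flops / current_flops
print(f"Implied growth: {growth_factor:.0f}x over five years")  # -> 100x

# Assumption: uniform year-over-year growth, purely for illustration.
per_year = growth_factor ** (1 / 5)
print(f"About {per_year:.1f}x per year if spread evenly")        # -> ~2.5x
```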

The AMD “Helios” rack-scale platform is the blueprint for yotta-scale infrastructure, delivering up to 3 AI exaflops of performance in a single rack. It’s designed to deliver maximum bandwidth and energy efficiency for trillion-parameter training. “Helios” is powered by AMD Instinct™ MI455X accelerators, AMD EPYC™ “Venice” CPUs and AMD Pensando™ “Vulcano” NICs for scale-out networking, all unified through the open AMD ROCm™ software ecosystem.
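
For scale, a rough back-of-the-envelope sketch (our own illustration, not an AMD figure) shows how many Helios-class racks the projected yotta-scale capacity would represent if the two numbers were directly comparable; in practice "AI exaflops" and aggregate capacity figures may be quoted at different precisions, so treat this as an order-of-magnitude exercise only.

```python
EXA = 1e18
YOTTA = 1e24

rack_flops = 3 * EXA          # up to 3 AI exaflops per "Helios" rack (cited)
target_flops = 10 * YOTTA     # projected 10+ yottaflops of global capacity

# Assumes both figures use the same precision, which the release does not state.
racks = target_flops / rack_flops
print(f"~{racks:,.0f} Helios-class racks")  # roughly 3.3 million racks
```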

At CES, AMD provided an early look at “Helios” and, for the first time, unveiled the full AMD Instinct MI400 Series accelerator product portfolio while previewing the next-generation MI500 Series GPUs.

The latest addition to the MI400 Series is the AMD Instinct MI440X GPU, designed for on-premises enterprise AI deployments. The MI440X will power scalable training, fine-tuning and inference workloads in a compact, eight-GPU form factor that integrates seamlessly into existing infrastructure.

The MI440X builds on the recently announced AMD Instinct MI430X GPUs, which are designed to deliver leadership performance and hybrid computing for high-precision scientific, HPC and sovereign AI workloads. MI430X GPUs will power AI factory supercomputers around the world, including Discovery at Oak Ridge National Laboratory and the Alice Recoque system, France’s first exascale supercomputer.

AMD shared additional details at CES on the next-generation AMD Instinct MI500 GPUs, planned to launch in 2027. The MI500 Series is on track to deliver up to a 1,000x increase in AI performance compared to the AMD Instinct MI300X GPUs introduced in 2023¹. Built on next-generation AMD CDNA™ 6 architecture, advanced 2nm process technology and cutting-edge HBM4E memory, MI500 GPUs will deliver leadership at every level.
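
Purely as illustrative arithmetic (not an AMD projection), the claimed cadence can be restated as an average yearly multiplier, assuming the gain accrues evenly between the MI300X’s 2023 debut and the planned 2027 launch.

```python
claimed_gain = 1_000   # "up to 1,000x" MI500 Series vs. MI300X (cited)
years = 2027 - 2023    # four years between the cited launch dates

# Assumption: a uniform year-over-year multiplier, for illustration only.
per_year = claimed_gain ** (1 / years)
print(f"Implied average of ~{per_year:.1f}x per year")  # about 5.6x
```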
  
Enabling AI PC experiences everywhere
AI is becoming a foundational part of the PC experience, where billions of users will interact directly with AI, both locally on the device and through the cloud. At CES, AMD introduced new products that expand AMD’s AI PC portfolio and deepen developer support across the ecosystem.

The next-generation AMD Ryzen™ AI 400 Series and Ryzen AI PRO 400 Series platforms deliver a 60 TOPS NPU², cutting-edge efficiency and full AMD ROCm platform support for seamless cloud-to-client AI scaling. First systems ship in January 2026, with broader OEM availability in Q1 2026.

AMD also expanded its breakthrough on-device AI compute offerings with the Ryzen AI Max+ 392 and Ryzen AI Max+ 388, which support models of up to 128 billion parameters with 128GB of unified memory. These platforms enable advanced local inference, content creation workflows and incredible gaming experiences in premium thin-and-light notebooks and small form factor (SFF) desktops.
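
A rough sizing sketch helps show why 128GB of unified memory lines up with a 128-billion-parameter model; the byte-per-parameter figure below assumes roughly 8-bit quantized weights, which the release does not specify.

```python
params = 128e9        # 128-billion-parameter model (cited ceiling)
bytes_per_param = 1   # assumption: ~8-bit (1-byte) quantized weights
GIB = 1024 ** 3

weights_gib = params * bytes_per_param / GIB
print(f"Weights alone: ~{weights_gib:.0f} GiB")  # ~119 GiB, within 128GB
# At 16-bit precision the same weights would need roughly twice that and
# would not fit, which is why quantization matters for on-device inference.
```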

For developers, the Ryzen AI Halo Developer Platform brings powerful AI development capabilities to a compact SFF desktop PC, delivering leadership tokens-per-second-per-dollar with high-performance Ryzen AI Max+ Series processors. Ryzen AI Halo is expected to be available in Q2 2026.
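
Tokens-per-second-per-dollar is a throughput-per-cost ratio; AMD does not publish its exact formula here, so the sketch below uses one common normalization (sustained tokens per second divided by hourly operating cost) with purely hypothetical numbers.

```python
def tokens_per_second_per_dollar(tokens_per_second: float, hourly_cost_usd: float) -> float:
    """Normalize sustained inference throughput by hourly operating cost."""
    return tokens_per_second / hourly_cost_usd

# Hypothetical example: 5,000 tokens/s sustained at $2.50/hour of cost.
print(tokens_per_second_per_dollar(5_000, 2.50))  # -> 2000.0
```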

AI transforming the physical world
AMD introduced the Ryzen AI Embedded processors, a new portfolio of embedded x86 processors designed to power AI-driven applications at the edge. From automotive digital cockpits and smart healthcare to physical AI for autonomous systems, including humanoid robotics, the new P100 and X100 Series processors deliver high-performance, efficient AI compute for the most constrained embedded systems.

Advancing the Genesis Mission and the future of AI innovation
Lisa Su was joined on stage by Michael Kratsios, Director of the White House Office of Science and Technology Policy, to discuss the role AMD has in the U.S. government’s Genesis Mission. This ambitious public/private technology initiative aims to secure U.S. leadership in AI technologies and shape scientific discovery and global competitiveness for years to come. Genesis includes two recently announced AMD-powered AI supercomputers at Oak Ridge National Laboratory, Lux and Discovery.

Mr. Kratsios also highlighted the White House’s efforts to rally organizations to pledge resources toward expanding access to AI education with more hands-on opportunities for students to learn and build. In joining the pledge, AMD announced its commitment of $150 million to bring AI into more classrooms and communities.

The keynote concluded with recognition of the more than 15,000 student innovators who participated in the AMD AI Robotics Hackathon in partnership with Hack Club.

Supporting Resources

  • Watch the keynote replay here
  • Check out all the news here

About AMD
AMD (NASDAQ: AMD) drives innovation in high-performance and AI computing to solve the world’s most important challenges. Today, AMD technology powers billions of experiences across cloud and AI infrastructure, embedded systems, AI PCs and gaming. With a broad portfolio of AI-optimized CPUs, GPUs, networking and software, AMD delivers full-stack AI solutions that provide the performance and scalability needed for a new era of intelligent computing. Learn more at www.amd.com.

Cautionary Statement
This press release contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the features, functionality, performance, availability, timing and expected benefits of AMD products including the AMD “Helios” rack-scale platform, AMD Instinct™ MI400 Series, AMD Instinct™ MI500 Series, AMD Ryzen™ AI 400 Series, AMD Ryzen™ AI PRO 400 Series, AMD Ryzen™ AI Max+ 392, AMD Ryzen™ AI Max+388, AMD Ryzen™ AI Halo Developer Platform, AMD Ryzen™ AI Embedded P100 Series, and AMD Ryzen™ AI Embedded X100 Series processors; expected benefits from ecosystem partner collaboration; expected future AI demand; and AMD’s role in and the expected benefits from the Genesis Mission, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this press release are based on current beliefs, assumptions and expectations, speak only as of the date of this press release and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and are generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Material factors that could cause actual results to differ materially from current expectations include, without limitation, the following: competitive markets in which AMD’s products are sold; the cyclical nature of the semiconductor industry; market conditions of the industries in which AMD products are sold; AMD’s ability to introduce products on a timely basis with expected features and performance levels; loss of a significant customer; economic and market uncertainty; quarterly and seasonal sales patterns; AMD's ability to adequately protect its technology or other intellectual property; unfavorable currency exchange rate fluctuations; ability of third party manufacturers to manufacture AMD's products on a timely basis in sufficient quantities and using competitive technologies; availability of essential equipment, materials, substrates or manufacturing processes; ability to achieve expected manufacturing yields for AMD’s products; AMD's ability to generate revenue from its semi-custom SoC products; potential security vulnerabilities; potential security incidents including IT outages, data loss, data breaches and cyberattacks; uncertainties involving the ordering and shipment of AMD’s products; AMD’s reliance on third-party intellectual property to design and introduce new products; AMD's reliance on third-party companies for design, manufacture and supply of motherboards, software, memory and other computer platform components; AMD's reliance on Microsoft and other software vendors' support to design and develop software to run on AMD’s products; AMD’s reliance on third-party distributors and add-in-board partners; impact of modification or interruption of AMD’s internal business processes and information systems; compatibility of AMD’s products with some or all industry-standard software and hardware; costs related to defective products; efficiency of AMD's supply chain; AMD's ability 
to rely on third party supply-chain logistics functions; AMD’s ability to effectively control sales of its products on the gray market; impact of climate change on AMD’s business; impact of government actions and regulations such as export regulations, import tariffs, trade protection measures and licensing requirements; AMD’s ability to realize its deferred tax assets; potential tax liabilities; current and future claims and litigation; impact of environmental laws, conflict minerals related provisions and other laws or regulations; evolving expectations from governments, investors, customers and other stakeholders regarding corporate responsibility matters; issues related to the responsible use of AI; restrictions imposed by agreements governing AMD’s notes, the guarantees of Xilinx’s notes and the revolving credit agreement; impact of acquisitions, joint ventures and/or strategic investments on AMD’s business and AMD’s ability to integrate acquired businesses, including ZT Systems; impact of any impairment of the combined company’s assets; political, legal and economic risks and natural disasters; future impairments of technology license purchases; AMD’s ability to attract and retain key employees; and AMD’s stock price volatility. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q.


1 Based on engineering projections by AMD Performance Labs in December 2025, estimating the peak theoretical precision performance of an AMD Instinct™ MI500 Series GPU-powered AI rack vs. an AMD Instinct MI300X platform. Results subject to change when products are released in market.
2 Trillions of Operations per Second (TOPS) for an AMD Ryzen processor is the maximum number of operations per second that can be executed in an optimal scenario and may not be typical. TOPS may vary based on several factors, including the specific system configuration, AI model, and software version. GD-243.

Contact: 
Phil Hughes
AMD Communications
512-865-9697
phil.hughes@amd.com

Liz Stine
AMD Investor Relations 
+1 720-652-3965
liz.stine@amd.com 


FAQ

What is AMD announcing about the Helios rack at CES 2026 (AMD)?

AMD previewed the Helios rack-scale platform designed to deliver up to 3 AI exaflops in a single rack for yotta-scale AI.

When will AMD Ryzen AI 400 Series systems (AMD) start shipping?

AMD said first systems with the Ryzen AI 400 Series will ship in January 2026 with broader OEM availability in Q1 2026.

What are the specs of the new Ryzen AI chips announced by AMD at CES 2026?

The Ryzen AI 400 Series and Ryzen AI PRO 400 Series deliver a 60 TOPS NPU and support AMD ROCm for cloud-to-client scaling.

What is the AMD Instinct MI440X announced at CES 2026 (AMD)?

The MI440X is an eight-GPU form-factor enterprise accelerator for scalable training, fine-tuning and inference in on-prem deployments.

What did AMD announce about next-generation MI500 GPUs (AMD) at CES 2026?

AMD previewed the MI500 Series targeting launch in 2027 with claims of up to 1,000x AI performance vs the MI300X.

How much did AMD commit to AI education at CES 2026 (AMD)?

AMD announced a $150 million commitment to bring AI into more classrooms and communities.