STOCK TITAN


Supermicro Expands NVIDIA Blackwell Portfolio with New 4U and 2-OU (OCP) Liquid-Cooled NVIDIA HGX B300 Solutions Ready for High-Volume Shipment

Rhea-AI Impact: Neutral
Rhea-AI Sentiment: Very Positive

Supermicro (NASDAQ: SMCI) announced on December 9, 2025, the commercial availability of new liquid-cooled NVIDIA HGX B300 systems in two form factors: a 4U Front I/O 19-inch EIA rack design and a compact 2-OU (OCP) 21-inch Open Rack V3 design.

Key facts: the 2-OU OCP platform supports up to 144 GPUs per rack (18 nodes); the 4U EIA option enables up to 64 GPUs per rack; each system houses eight NVIDIA Blackwell Ultra GPUs at up to 1,100W TDP; and each system delivers 2.1TB of HBM3e GPU memory. DLC-2 liquid cooling captures up to 98% of system heat, and the designs target up to 40% power savings and fabric throughput of up to 800Gb/s.


Positive

  • Up to 144 GPUs per rack in 2-OU OCP ORV3
  • DLC-2 captures up to 98% of system heat
  • 2.1TB HBM3e GPU memory per system
  • Network throughput up to 800Gb/s with ConnectX-8 SuperNICs
  • Designed for up to 40% data center power savings

Negative

  • GPUs specified at up to 1,100W TDP each
  • Rack-scale deployment references 1.8MW in-row CDUs requirement

Key Figures

GPUs per standard rack: 64 (4U liquid-cooled NVIDIA HGX B300 systems in 19-inch EIA racks)
Rack GPU density: 144 (2-OU (OCP) NVIDIA HGX B300 systems in a 21-inch ORV3 rack)
GPU power rating: 1,100W TDP (each NVIDIA Blackwell Ultra GPU in the 2-OU OCP system)
SuperCluster GPUs: 1,152 (eight compute racks plus networking and CDUs as a scalable unit)
CDU capacity: 1.8MW (in-row coolant distribution units supporting ORV3 rack deployments)
GPU memory per system: 2.1TB HBM3e (total GPU memory per Supermicro NVIDIA HGX B300 system)
Network throughput: 800Gb/s (compute fabric throughput via NVIDIA ConnectX-8 SuperNICs)
Power savings: up to 40% (potential data center power savings using DLC-2 liquid-cooling technology)

Market Reality Check

Last Close: $35.37
Volume: 20,939,608, below the 20-day average of 24,979,396 ahead of this announcement (normal).
Technical: Shares at $35.37 are trading below the $43.13 200-day moving average and 46.76% below the 52-week high.

Peers on Argus

SMCI was up 1.96% while key peers were mixed: HPQ -2.6%, PSTG -0.41%, WDC -0.67%, STX +1.57%, LOGI +0.08%, indicating a more stock-specific move.

Historical Context

Date   | Event                | Sentiment | Move  | Catalyst
Nov 20 | Investor conferences | Positive  | -6.4% | Announced participation in several December technology and AI investor conferences.
Nov 19 | AI server launch     | Positive  | -3.4% | Launched new 10U air-cooled AI server featuring AMD Instinct MI355X GPUs.
Nov 18 | AI factory clusters  | Positive  | +2.4% | Announced turnkey NVIDIA-based AI factory cluster solutions for scalable deployment.
Nov 17 | HPC/AI showcase      | Positive  | +2.4% | Showcased future HPC clusters and liquid-cooled AI infrastructure at Supercomputing 2025.
Nov 04 | Earnings release     | Positive  | -6.6% | Reported Q1 FY2026 results and reiterated at least $36B full-year revenue outlook.
Pattern Detected

Recent history shows several instances where positive AI or earnings news was followed by negative price reactions, suggesting a tendency to sell into good news.

Recent Company History

In recent months, SMCI reported Q1 FY2026 results with $5.0B net sales and reiterated its at-least-$36B revenue expectation, yet shares fell after that release. The company has repeatedly expanded its AI hardware portfolio, including AMD Instinct MI355X systems and NVIDIA Blackwell-based AI factory clusters, and highlighted liquid-cooled infrastructure at Supercomputing 2025. Despite these product and AI factory announcements, price reactions have been mixed, with both gains around +2.4% and pullbacks exceeding -6% on ostensibly positive updates.

Market Pulse Summary

This announcement expands SMCI’s NVIDIA Blackwell portfolio with liquid‑cooled HGX B300 systems that reach up to 144 GPUs per rack and a SuperCluster design totaling 1,152 GPUs. It builds on prior AI factory and liquid‑cooling initiatives aimed at performance per watt and faster time‑to‑online. Investors may track adoption of these 2.1TB HBM3e systems, the impact of up to 40 percent power savings, and how this complements previously disclosed multi‑billion‑dollar Blackwell order visibility.

Key Terms

direct liquid-cooling technical
"capturing up to 98% of system heat through DLC-2 (Direct Liquid-Cooling) technology"
A cooling method where a liquid is routed directly over or through heat-generating parts (such as computer chips or batteries) to remove heat more efficiently than air alone. Like placing a cold cloth directly on a hot object, it lets devices run faster or pack more components into the same space while using less energy for cooling, which can lower operating costs, reduce failure risk, and affect capital and energy budgeting decisions for investors.
OCP Open Rack V3 technical
"designed for 21-inch OCP Open Rack V3 (ORV3) specification with up to 144 GPUs"
OCP Open Rack v3 is a standardized design for data center server racks created by the Open Compute Project, like a common blueprint for how servers, power, and cooling are arranged inside a cabinet. For investors, it matters because using a shared design can lower hardware and operating costs, speed deployment, and increase competition among suppliers, which can improve margins and scalability for companies that run large-scale computing or cloud services.
InfiniBand technical
"scaling seamlessly with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8MW"
InfiniBand is a high-speed data transport technology used inside data centers to move large amounts of information quickly and with very little delay, often used for servers, storage, and computing clusters. Investors should care because it acts like a multilane expressway for data: companies that build, use, or support such fast networks can gain competitive advantages in cloud services, high-performance computing, and AI workloads, which can affect costs and revenue potential.

AI-generated analysis. Not financial advice.

  • Introducing 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems for high-density hyperscale and AI factory deployments, supported by Supermicro Data Center Building Block Solutions® with DLC-2 and DLC technology, respectively
  • 4U liquid-cooled NVIDIA HGX B300 systems designed for standard 19-inch EIA racks with up to 64 GPUs per rack, capturing up to 98% of system heat through DLC-2 (Direct Liquid-Cooling) technology
  • Compact and power-efficient 2-OU (OCP) NVIDIA HGX B300 8-GPU system designed for 21-inch OCP Open Rack V3 (ORV3) specification with up to 144 GPUs in a single rack

SAN JOSE, Calif., Dec. 9, 2025 /PRNewswire/ -- Super Micro Computer, Inc. (SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, today announced the expansion of its NVIDIA Blackwell architecture portfolio with the introduction and shipment availability of new 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems. These latest additions are a key part of Supermicro's Data Center Building Block Solutions (DCBBS) that deliver unprecedented GPU density and power efficiency for hyperscale data centers and AI factory deployments.

"With AI infrastructure demand accelerating globally, our new liquid-cooled NVIDIA HGX B300 systems deliver the performance density and energy efficiency that hyperscalers and AI factories need today," said Charles Liang, president and CEO of Supermicro. "We're now offering the industry's most compact NVIDIA HGX B300 solutions—achieving up to 144 GPUs in a single rack—while reducing power consumption and cooling costs through our proven direct liquid-cooling technology. Through our DCBBS, this is how Supermicro enables our customers to deploy AI at scale: faster time-to-market, maximum performance per watt, and end-to-end integration from design to deployment."

For more information, please visit https://www.supermicro.com/en/accelerators/nvidia

The 2-OU (OCP) liquid-cooled NVIDIA HGX B300 system, built to the 21-inch OCP Open Rack V3 (ORV3) specification, enables up to 144 GPUs per rack to deliver maximum GPU density for hyperscale and cloud providers requiring space-efficient racks without compromising serviceability. The rack-scale design features blind-mate manifold connections, modular GPU/CPU tray architecture, and state-of-the-art component liquid cooling solutions. The system propels AI workloads with eight NVIDIA Blackwell Ultra GPUs at up to 1,100W TDP each, while dramatically reducing rack footprint and power consumption. A single ORV3 rack supports up to 18 nodes with 144 GPUs total, scaling seamlessly with NVIDIA Quantum-X800 InfiniBand switches and Supermicro's 1.8MW in-row coolant distribution units (CDUs). Combined, eight NVIDIA HGX B300 compute racks, three NVIDIA Quantum-X800 InfiniBand networking racks, and two Supermicro in-row CDUs form a SuperCluster scalable unit with 1,152 GPUs.
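As a sanity check, the rack and SuperCluster totals follow directly from the stated configuration (eight GPUs per HGX B300 node, 18 nodes per ORV3 rack, eight compute racks per scalable unit). A minimal sketch of that arithmetic; the constant names below are illustrative, not Supermicro identifiers:

```python
# Rack-level GPU math from the figures stated in the release.
# Constant names are illustrative assumptions, not official identifiers.
GPUS_PER_NODE = 8           # one NVIDIA HGX B300 system carries eight Blackwell Ultra GPUs
NODES_PER_ORV3_RACK = 18    # 2-OU nodes per 21-inch ORV3 rack
COMPUTE_RACKS_PER_UNIT = 8  # compute racks in one SuperCluster scalable unit

gpus_per_rack = GPUS_PER_NODE * NODES_PER_ORV3_RACK
gpus_per_unit = gpus_per_rack * COMPUTE_RACKS_PER_UNIT

print(gpus_per_rack)  # 144
print(gpus_per_unit)  # 1152
```

Note that the 1,152-GPU figure counts only the eight compute racks; the three networking racks and two CDUs in the scalable unit contribute no GPUs.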

Complementing the 2-OU (OCP) model, the 4U Front I/O HGX B300 Liquid-Cooled System offers the same compute performance in a traditional 19-inch EIA rack form factor for large-scale AI factory deployments. The 4U system leverages Supermicro's DLC-2 technology to capture up to 98% of heat generated1 by the system through liquid-cooling, achieving superior power efficiency with lower noise and greater serviceability for dense training and inference clusters.

Supermicro NVIDIA HGX B300 systems unlock substantial performance speedups, with 2.1TB of HBM3e GPU memory per system to handle larger model sizes at the system level. Above all, both the 2-OU (OCP) and 4U platforms deliver significant performance gains at the cluster level by doubling compute fabric network throughput up to 800Gb/s via integrated NVIDIA ConnectX®-8 SuperNICs when used with NVIDIA Quantum-X800 InfiniBand or NVIDIA Spectrum-4 Ethernet. These improvements accelerate heavy AI workloads such as agentic AI applications, foundation model training, and multimodal large scale inference in AI factories.
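Two back-of-envelope inferences can be drawn from the quoted per-system figures: dividing 2.1TB of HBM3e across eight GPUs gives an implied per-GPU capacity, and "doubling" throughput to 800Gb/s implies a 400Gb/s baseline. Both are derived from the release's numbers, not stated specifications, and actual SKU figures may differ:

```python
# Back-of-envelope checks on the quoted per-system figures.
# Derived values are inferences from the release, not official specs.
hbm_per_system_gb = 2100  # 2.1TB quoted, read as 2,100 decimal gigabytes (assumption)
gpus_per_system = 8       # eight Blackwell Ultra GPUs per HGX B300 system

implied_hbm_per_gpu_gb = hbm_per_system_gb / gpus_per_system
implied_prior_fabric_gbps = 800 / 2  # "doubling ... up to 800Gb/s" implies this baseline

print(implied_hbm_per_gpu_gb)     # 262.5
print(implied_prior_fabric_gbps)  # 400.0
```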

Supermicro developed these platforms to address key customer requirements for TCO, serviceability, and efficiency. With the DLC-2 technology stack, data centers can achieve up to 40 percent power savings1, reduce water consumption through 45°C warm-water operation, and eliminate chilled water and compressors in data centers. Supermicro DCBBS delivers the new systems as fully validated, tested racks ready as L11 and L12 solutions before shipment, accelerating time-to-online for hyperscale, enterprise, and federal customers.

These new systems expand Supermicro's broad portfolio of NVIDIA Blackwell platforms — including the NVIDIA GB300 NVL72, NVIDIA HGX B200, and NVIDIA RTX PRO 6000 Blackwell Server Edition. Each of these NVIDIA-Certified Systems from Supermicro is tested to validate optimal performance for a wide range of AI applications and use cases, together with NVIDIA networking and NVIDIA AI software, including NVIDIA AI Enterprise and NVIDIA Run:ai. This provides customers with flexibility to build AI infrastructure that scales from a single node to full-stack AI factories.

1https://www.supermicro.com/en/solutions/liquid-cooling

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions provider with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production, driving next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-cooled, free-air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names, and trademarks are the property of their respective owners.

View original content to download multimedia: https://www.prnewswire.com/news-releases/supermicro-expands-nvidia-blackwell-portfolio-with-new-4u-and-2-ou-ocp-liquid-cooled-nvidia-hgx-b300-solutions-ready-for-high-volume-shipment-302637056.html

SOURCE Super Micro Computer, Inc.

FAQ

What did Supermicro (SMCI) announce on December 9, 2025 about NVIDIA HGX B300 systems?

Supermicro announced commercial availability of 4U and 2-OU (OCP) liquid-cooled NVIDIA HGX B300 systems supporting high GPU density and liquid-cooling for hyperscale AI.

How many GPUs does the Supermicro 2-OU (OCP) HGX B300 rack support for SMCI customers?

The 2-OU OCP Open Rack V3 design supports up to 144 GPUs per rack (18 nodes with 8 GPUs each).

What cooling and efficiency claims did Supermicro make for the new SMCI HGX B300 systems?

Supermicro said DLC-2 liquid-cooling can capture up to 98% of system heat and enable up to 40% power savings with 45°C warm water operation.

What memory and networking specs does the SMCI HGX B300 offer?

Each system can provide 2.1TB HBM3e GPU memory and double fabric throughput up to 800Gb/s using NVIDIA ConnectX-8 SuperNICs.

Are the new Supermicro HGX B300 systems ready for large-scale cluster builds?

Yes; Supermicro describes validated rack-scale L11/L12 solutions and a SuperCluster unit example delivering 1,152 GPUs per scalable unit.

What power characteristics should investors note about SMCI's new HGX B300 platforms?

The platforms use NVIDIA Blackwell Ultra GPUs rated up to 1,100W TDP, and rack deployments reference 1.8MW in-row coolant distribution units.