
Supermicro Expands AI Solutions with the Upcoming NVIDIA HGX H200 and MGX Grace Hopper Platforms Featuring HBM3e Memory

Rhea-AI Impact
(Neutral)
Rhea-AI Sentiment
(Very Positive)
Tags
AI
Rhea-AI Summary
Supermicro (NASDAQ: SMCI) expands its AI reach with support for the new NVIDIA HGX H200 and Grace Hopper Superchip, offering unprecedented performance, scalability, and reliability for AI, LLM training, and HPC applications. The company introduces the industry's highest density server with liquid-cooled 8-GPU systems, reducing TCO and energy costs while delivering the highest performance AI training capacity available in a single rack.
Positive
  • Supermicro's expansion into AI with support for NVIDIA's latest GPUs positions the company at the forefront of AI technology, enabling faster deployment of generative AI and HPC applications.
  • The introduction of liquid-cooled 8-GPU systems demonstrates Supermicro's commitment to reducing energy costs and environmental impact, aligning with the growing demand for green computing in data centers.
Negative
  • None.

Supermicro Extends 8-GPU, 4-GPU, and MGX Product Lines with Support for the NVIDIA HGX H200 and Grace Hopper Superchip for LLM Applications with Faster and Larger HBM3e Memory – New Innovative Supermicro Liquid-Cooled 4U Server with NVIDIA HGX 8-GPUs Doubles Computing Density Per Rack, Supports Up to 80kW/Rack, and Reduces TCO

SAN JOSE, Calif., and DENVER, Nov. 13, 2023 /PRNewswire/ -- Supercomputing Conference (SC23) -- Supermicro, Inc. (NASDAQ: SMCI), a Total IT Solution Provider for AI, Cloud, Storage, and 5G/Edge, is expanding its AI reach with upcoming support for the new NVIDIA HGX H200 built with H200 Tensor Core GPUs. Supermicro's industry-leading AI platforms, including 8U and 4U Universal GPU Systems, are drop-in ready for the HGX H200 in 8-GPU and 4-GPU configurations, with HBM3e memory offering nearly 2x the capacity and 1.4x higher bandwidth compared to the NVIDIA H100 Tensor Core GPU. In addition, the broadest portfolio of Supermicro NVIDIA MGX™ systems supports the upcoming NVIDIA Grace Hopper Superchip with HBM3e memory. With unprecedented performance, scalability, and reliability, Supermicro's rack-scale AI solutions accelerate computationally intensive generative AI, large language model (LLM) training, and HPC applications while meeting the evolving demands of growing model sizes. Using its building block architecture, Supermicro can quickly bring new technology to market, enabling customers to become productive sooner.

Supermicro is also introducing the industry's highest-density server: an NVIDIA HGX H100 8-GPU system in a liquid-cooled 4U chassis, utilizing the latest Supermicro liquid-cooling solution. This compact, high-performance GPU server enables data center operators to reduce footprints and energy costs while offering the highest-performance AI training capacity available in a single rack. With these highest-density GPU systems, organizations can reduce their TCO by leveraging cutting-edge liquid cooling.
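The density-doubling claim can be sketched with simple arithmetic. The following is an illustration only, assuming a standard 42U rack and ignoring space reserved for switches and PDUs (details the release does not specify):

```python
# Illustrative sketch, not from the release: why halving chassis height
# from 8U (air-cooled) to 4U (liquid-cooled) roughly doubles the number
# of 8-GPU systems that fit in one rack.
RACK_U = 42                 # assumed standard rack height
systems_8u = RACK_U // 8    # 8U Universal GPU Systems per rack
systems_4u = RACK_U // 4    # liquid-cooled 4U systems per rack

print(systems_8u, systems_4u)        # 5 10
print(systems_4u / systems_8u)       # 2.0 -> density doubles
print(systems_4u * 8, "GPUs/rack")   # 80 GPUs/rack
```

Under these assumptions, a rack of 4U systems holds twice as many 8-GPU nodes, which is consistent with the "doubles the computing density per rack" language above.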

"Supermicro partners with NVIDIA to design the most advanced systems for AI training and HPC applications," said Charles Liang, president and CEO of Supermicro. "Our building block architecture enables us to be first to market with the latest technology, allowing customers to deploy generative AI faster than ever before. We can deliver these new systems to customers faster with our worldwide manufacturing facilities. The new systems, using the NVIDIA H200 GPU with NVIDIA® NVLink™ and NVSwitch™ high-speed GPU-GPU interconnects at 900GB/s, now provide up to 1.1TB of high-bandwidth HBM3e memory per node in our rack scale AI solutions to deliver the highest performance of model parallelism for today's LLMs and generative AI. We are also excited to offer the world's most compact NVIDIA HGX 8-GPU liquid cooled server, which doubles the density of our rack scale AI solutions and reduces energy costs to achieve green computing for today's accelerated data center."

Learn more about the Supermicro servers with NVIDIA GPUs

Supermicro designs and manufactures a broad portfolio of AI servers with different form factors. The popular 8U and 4U Universal GPU systems featuring four-way and eight-way NVIDIA HGX H100 GPUs are now drop-in ready for the new H200 GPUs to train even larger language models in less time. Each NVIDIA H200 GPU contains 141GB of memory with a bandwidth of 4.8TB/s.
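As a quick sanity check (an illustration, not part of the release text), the per-GPU figures above are consistent with the up-to-1.1TB-per-node total the company quotes for its rack scale AI solutions:

```python
# Back-of-the-envelope check using the per-GPU figures above
# (141 GB HBM3e and 4.8 TB/s per H200 GPU, 8 GPUs per HGX node).
GPUS_PER_NODE = 8
HBM_PER_GPU_GB = 141
HBM_BW_PER_GPU_TBS = 4.8

total_gb = GPUS_PER_NODE * HBM_PER_GPU_GB
total_bw = GPUS_PER_NODE * HBM_BW_PER_GPU_TBS

print(f"{total_gb} GB ~= {total_gb / 1000:.1f} TB per node")  # 1128 GB ~= 1.1 TB per node
print(f"{total_bw:.1f} TB/s aggregate HBM3e bandwidth")       # 38.4 TB/s
```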

"Supermicro's upcoming server designs using NVIDIA HGX H200 will help accelerate generative AI and HPC workloads, so that enterprises and organizations can get the most out of their AI infrastructure," said Dion Harris, director of data center product solutions for HPC, AI, and quantum computing at NVIDIA. "The NVIDIA H200 GPU with high-speed HBM3e memory will be able to handle massive amounts of data for a variety of workloads."

Additionally, the recently launched Supermicro MGX servers with the NVIDIA GH200 Grace Hopper Superchips are engineered to incorporate the NVIDIA H200 GPU with HBM3e memory.

The new NVIDIA GPUs accelerate today's and future large language models (LLMs) with hundreds of billions of parameters, fitting them into more compact and efficient clusters to train generative AI in less time. They also allow multiple larger models to fit in a single system for real-time LLM inference, serving generative AI to millions of users.

At SC23, Supermicro is showcasing its latest offering: a 4U Universal GPU System featuring the eight-way NVIDIA HGX H100 with the company's latest liquid-cooling innovations, which further improve density and efficiency to drive the evolution of AI. With Supermicro's industry-leading GPU and CPU cold plates, CDU (cooling distribution unit), and CDM (cooling distribution manifold) designed for green computing, the new liquid-cooled 4U Universal GPU System is also ready for the eight-way NVIDIA HGX H200. It will dramatically reduce data center footprints, power costs, and deployment hurdles through Supermicro's fully integrated liquid-cooling rack solutions and its L10, L11, and L12 validation testing.

For more information, visit the Supermicro booth at SC23

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI) is a global leader in Application-Optimized Total IT Solutions. Founded and operating in San Jose, California, Supermicro is committed to delivering first-to-market innovation for Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure. We are a Total IT Solutions manufacturer with server, AI, storage, IoT, switch systems, software, and support services. Supermicro's motherboard, power, and chassis design expertise further enables our development and production of next-generation innovation from cloud to edge for our global customers. Our products are designed and manufactured in-house (in the US, Taiwan, and the Netherlands), leveraging global operations for scale and efficiency and optimized to improve TCO and reduce environmental impact (Green Computing). The award-winning portfolio of Server Building Block Solutions® allows customers to optimize for their exact workload and application by selecting from a broad family of systems built from our flexible and reusable building blocks that support a comprehensive set of form factors, processors, memory, GPUs, storage, networking, power, and cooling solutions (air-conditioned, free air cooling, or liquid cooling).

Supermicro, Server Building Block Solutions, and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc. 

All other brands, names, and trademarks are the property of their respective owners.

View original content to download multimedia: https://www.prnewswire.com/news-releases/supermicro-expands-ai-solutions-with-the-upcoming-nvidia-hgx-h200-and-mgx-grace-hopper-platforms-featuring-hbm3e-memory-301985332.html

SOURCE Super Micro Computer, Inc.

FAQ

What is Supermicro's ticker symbol?

Supermicro's ticker symbol is SMCI.

What new products is Supermicro extending its product lines with?

Supermicro is extending its product lines with support for the NVIDIA HGX H200 and Grace Hopper Superchip for LLM applications, offering faster and larger HBM3e memory.

How does Supermicro's liquid-cooled 4U server with NVIDIA HGX 8-GPUs impact computing density per rack?

Supermicro's liquid-cooled 4U server with NVIDIA HGX 8-GPUs doubles the computing density per rack, with up to 80kW/Rack, reducing TCO.

What are the benefits of Supermicro's rack scale AI solutions?

Supermicro's rack scale AI solutions offer unprecedented performance, scalability, and reliability, accelerating the performance of computationally intensive generative AI, LLM training, and HPC applications.

What is the key feature of the new NVIDIA H200 GPU supported by Supermicro?

The new NVIDIA H200 GPU supported by Supermicro provides up to 1.1TB of high-bandwidth HBM3e memory per node in rack scale AI solutions, delivering the highest performance of model parallelism for LLMs and generative AI.


About SMCI

Supermicro (NASDAQ: SMCI) is a global leader in high-performance, high-efficiency server technology and innovation. We develop and provide end-to-end green computing solutions to the data center, cloud computing, enterprise IT, big data, high-performance computing (HPC), and embedded markets. Our solutions range from complete servers, storage, blades, and workstations to full racks, networking devices, server management software, and technology support and services. We offer our customers a high degree of flexibility and customization by providing what we believe to be the industry's broadest array of server configurations, from which they can choose the optimal solution for their computing needs. Our server systems, subsystems, and accessories are architecturally designed to provide high levels of reliability, quality, and scalability, enabling benefits for our customers in compute performance, density, thermal management, and power efficiency to lower their overall total cost of ownership (TCO).