SuperX Launches the Latest XN9160-B300 AI Server, Blackwell Ultra Delivers 50% More Compute Over Blackwell

SuperX (NASDAQ: SUPX) on October 3, 2025 launched the XN9160-B300 AI Server, an 8U flagship built around the NVIDIA Blackwell B300 (HGX B300) targeting large-scale AI training, inference and HPC.

Key specs: 8 Blackwell B300 GPUs (288GB HBM3E each; 2,304GB unified HBM3E), Dual Intel Xeon 6 CPUs, up to 32 DDR5 DIMMs, 8×800Gb/s OSFP networking, NVLink, eight Gen5 NVMe bays, and 12×3000W 80 PLUS Titanium redundant PSUs. NVIDIA cites +50% NVFP4 compute and +50% HBM per chip vs prior Blackwell. The server is positioned for hyperscalers, scientific research, finance, bioinformatics, and global systems modeling.

Positive
  • +50% NVFP4 compute per chip vs Blackwell
  • 2,304GB HBM3E unified GPU memory per node
  • 8×800Gb/s OSFP networking for low-latency scaling
  • 8 Blackwell B300 GPUs in an 8U chassis
Negative
  • 36,000W peak power capacity per chassis (12×3000W PSUs) implying high energy draw

Insights

SuperX launched the XN9160-B300 AI server on October 3, 2025, offering 8 Blackwell B300 GPUs and 2,304GB of unified HBM3E memory.

The announcement presents a flagship, data-center-ready node with 8 NVIDIA Blackwell B300 GPUs, 2,304GB unified HBM3E memory, dual Intel Xeon 6 CPUs, and up to 8×800Gb/s InfiniBand.

What it means: customers targeting very large models and high-concurrency inference now have a single 8U platform that claims to avoid memory offload and scale with high-bandwidth networking; that directly addresses compute and memory limits described in the release.

Why it matters: the combination of 288GB HBM3E per GPU, the stated 50% uplift for Blackwell Ultra, and enterprise power/integration features make this a material product for hyperscale AI, research, and HPC deployments; monitor adoption after the launch on October 3, 2025.

SINGAPORE, Oct. 3, 2025 /PRNewswire/ -- Super X AI Technology Limited (NASDAQ: SUPX) ("the Company" or "SuperX") today announced the launch of its latest flagship product, the SuperX XN9160-B300 AI Server. Powered by NVIDIA's Blackwell GPU (B300), the XN9160-B300 is designed to meet the growing demand for scalable, high-performance computing across AI training, machine learning (ML), and high-performance computing (HPC) workloads. Engineered for extreme performance, the system integrates advanced networking capabilities, scalable architecture, and energy-efficient design to support mission-critical data center environments.

The SuperX XN9160-B300 AI Server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads, providing extreme GPU performance for intensive, high-demand applications. Optimized for GPU-supported tasks, it excels in foundation model training and inference, including reinforcement learning (RL), distillation techniques, and multimodal AI models, while also delivering high performance for HPC workloads such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling.

Designed for enterprise-scale AI and HPC environments, the XN9160-B300 combines supercomputer-level performance with energy-efficient, scalable architecture, offering mission-critical capabilities in a compact, data-center-ready form factor.

The launch of the SuperX XN9160-B300 AI server marks a significant milestone in SuperX's AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.

Figure 1. SuperX XN9160-B300 AI Server, Powered by Blackwell Ultra

XN9160-B300 AI Server

The SuperX XN9160-B300 AI Server, unleashing extreme AI compute performance within an 8U chassis, features Intel Xeon 6 Processors, 8 NVIDIA Blackwell B300 GPUs, up to 32 DDR5 DIMMs, and high-speed networking with up to 8 × 800 Gb/s InfiniBand.

High GPU Power and Memory

The XN9160-B300 is built as a highly scalable AI node, featuring the NVIDIA HGX B300 module housing 8 NVIDIA Blackwell B300 GPUs. This configuration provides the peak performance of the Blackwell generation, specifically designed for next-era AI workloads.

Crucially, the server delivers a massive 2,304GB of unified HBM3E memory across its 8 GPUs (288GB per GPU). This colossal memory pool is essential for eliminating memory offloading, supporting larger model residence, and managing the expansive Key/Value caches required for high-concurrency, long-context Generative AI and Large Language Models.
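To illustrate why this memory pool matters for high-concurrency, long-context serving, here is a hedged back-of-envelope sketch of KV-cache sizing. The model dimensions (layers, KV heads, head size) are hypothetical examples, not figures from the release:

```python
# Back-of-envelope KV-cache sizing for long-context LLM serving.
# All model dimensions below are hypothetical illustrations, not from the release.

def kv_cache_gb(layers, kv_heads, head_dim, context_len, batch, bytes_per_elem=2):
    """Bytes for keys + values across all layers and requests, in GB."""
    per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem  # K and V
    return per_token * context_len * batch / 1e9

# Example: an 80-layer model with 8 KV heads of dim 128 (grouped-query
# attention), FP16 cache, 128K context, 32 concurrent requests.
cache = kv_cache_gb(layers=80, kv_heads=8, head_dim=128,
                    context_len=128_000, batch=32)
total_hbm = 8 * 288  # 2,304 GB unified HBM3E across the node
print(f"KV cache ≈ {cache:.0f} GB of {total_hbm} GB HBM3E")
```

Even at these illustrative settings the cache alone approaches 1.3TB, which is why a node-level pool of this size can keep both weights and caches resident without offloading to host memory.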

Extreme Inference and Training Throughput

The system leverages the B300 Ultra's superior FP4/NVFP4 precision and second-generation Transformer Engine to achieve monumental performance leaps. According to NVIDIA, Blackwell Ultra delivers a decisive leap over Blackwell by adding 50% more NVFP4 compute and 50% more HBM capacity per chip, enabling larger models and faster throughput without compromising efficiency.[1] Scaling is effortless, thanks to eight 800Gb/s OSFP ports for InfiniBand or dual 400Gb/s Ethernet. These ports allow for the high-speed, low-latency communication necessary to connect the servers into vast AI Factories and SuperPOD clusters. The fifth-generation NVLink interconnects further ensure that the 8 on-board GPUs communicate seamlessly, acting as a single, potent accelerator.
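The aggregate scale-out bandwidth those eight ports provide can be sketched with simple arithmetic; the 1TB gradient-transfer figure is a hypothetical illustration at full line rate, ignoring protocol overhead and reduction math:

```python
# Aggregate scale-out bandwidth per node: eight 800Gb/s OSFP ports.
ports, gbps = 8, 800
agg_tbps = ports * gbps / 1000   # 6.4 Tb/s aggregate
agg_gBps = ports * gbps / 8      # 800 GB/s (bits -> bytes)

# Hypothetical example: wire time to move 1 TB of gradients at full
# aggregate line rate (no protocol overhead, no reduction compute).
seconds = 1000 / agg_gBps        # 1 TB / 800 GB/s
print(agg_tbps, agg_gBps, seconds)
```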

Robust CPU and Power Foundation

The GPU complex is supported by a robust host platform featuring Dual Intel Xeon 6 Processors, providing the efficiency and bandwidth required to feed the accelerators with data. The memory subsystem is equally formidable, utilizing 32 DDR5 DIMMs supporting speeds up to 8000MT/s (MRDIMM), ensuring the host platform never bottlenecks the GPU processing.

For mission-critical reliability and sustained performance, the XN9160-B300 is equipped with 12 × 3000W 80 PLUS Titanium redundant power supplies, ensuring extremely high energy efficiency and stability under continuous peak load. The system also includes multiple high-speed PCIe Gen5 x16 slots and comprehensive storage options, including eight 2.5" Gen5 NVMe hot-swap bays.
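The release states the PSU count and rating but not the redundancy scheme, so the usable-power figures below are assumptions under two common arrangements:

```python
# Power budget sketch for the 12 × 3000W 80 PLUS Titanium PSU bank.
# The redundancy scheme (N+N vs N+2) is an assumption, not stated in
# the release.
psus, watts = 12, 3000
total_w = psus * watts            # 36,000 W installed capacity

# Hypothetical N+N (6+6): half the PSUs can carry the full load alone.
usable_nn = total_w // 2          # 18,000 W
# Hypothetical N+2: two PSUs held as spares.
usable_n2 = (psus - 2) * watts    # 30,000 W
print(total_w, usable_nn, usable_n2)
```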

Technical Specifications:

  • CPU: 2× Intel® Xeon 6 P-Core processors, up to 350W (SP) / 500W (AP)
  • GPU: 8× NVIDIA Blackwell B300
  • Memory: 32× 96GB DDR5-5600 RDIMM
  • System Disk: 2× 1920GB SSD
  • Storage Disk: 8× 3.84TB NVMe U.2
  • Network: 8× OSFP (800G) from CX8 on the module
  • Dimensions: 8U, 447mm (H) × 351mm (W) × 923mm (D)
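The capacity totals implied by the spec table follow from simple multiplication of the published figures (no assumptions beyond the table itself):

```python
# Derived capacity totals from the spec table.
host_ram_gb = 32 * 96    # DDR5 RDIMM total: 3,072 GB
gpu_hbm_gb = 8 * 288     # HBM3E total across the HGX B300 module: 2,304 GB
nvme_tb = 8 * 3.84       # hot-swap U.2 data storage: 30.72 TB
system_gb = 2 * 1920     # boot/system SSDs: 3,840 GB
print(host_ram_gb, gpu_hbm_gb, nvme_tb, system_gb)
```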

Market Positioning

The XN9160-B300 is built for organizations pushing the boundaries of AI, where maximum scale, next-generation models, and ultra-low latency are core requirements:

  • Hyperscale AI Factories: For cloud providers and large enterprises building and operating trillion-parameter foundation models and highly demanding, high-concurrency AI reasoning engines.
  • Scientific Simulation & Research: For exascale scientific computing, advanced molecular dynamics, and creating comprehensive industrial or biological Digital Twins.
  • Financial Services: For real-time risk modeling, high-frequency trading simulations, and deploying complex large language models for financial analysis with ultra-low latency demands.
  • Bioinformatics & Genomics: For accelerating massive genome sequencing, drug discovery pipelines, and protein structure prediction at scales requiring the B300's immense memory capacity.
  • Global Systems Modeling: For national meteorological and governmental agencies requiring extreme compute for global climate and weather modeling and highly detailed disaster prediction.

About Super X AI Technology Limited (NASDAQ: SUPX)

Super X AI Technology Limited is an AI infrastructure solutions provider, offering a comprehensive portfolio of proprietary hardware, advanced software, and end-to-end services for AI data centers. The Company's services include advanced solution design and planning, cost-effective infrastructure product integration, and end-to-end operations and maintenance. Its core products include high-performance AI servers, High-Voltage Direct Current (HVDC) solutions, high-density liquid cooling solutions, as well as AI cloud and AI agents. Headquartered in Singapore, the Company serves institutional clients globally, including enterprises, research institutions, and cloud and edge computing deployments. For more information, please visit www.superx.sg

Safe Harbor Statement

This press release may contain forward-looking statements. In addition, from time to time, we or our representatives may make forward-looking statements orally or in writing. We base these forward-looking statements on our expectations and projections about future events, which we derive from the information currently available to us. You can identify forward-looking statements as those that are not historical in nature, particularly those that use terminology such as "may," "should," "expects," "anticipates," "contemplates," "estimates," "believes," "plans," "projected," "predicts," "potential," or "hopes" or the negative of these or similar terms. In evaluating these forward-looking statements, you should consider various factors, including: our ability to change the direction of the Company; our ability to keep pace with new technology and changing market needs; and the competitive environment of our business. These and other factors may cause our actual results to differ materially from any forward-looking statement.

Forward-looking statements are only predictions. The reader is cautioned not to rely on these forward-looking statements. The forward-looking events discussed in this press release, and other statements made from time to time by us or our representatives, may not occur, and actual events and results may differ materially and are subject to risks, uncertainties, and assumptions about us. We are not obligated to publicly update or revise any forward-looking statement. Because of these uncertainties and assumptions, the forward-looking events discussed in this press release and other statements made from time to time by us or our representatives might not occur.

Follow our social media:
X.com: https://x.com/SUPERX_AI_
LinkedIn: https://www.linkedin.com/company/superx-ai
Facebook: https://www.facebook.com/people/Super-X-AI-Technology-Limited/61578918040072/

[1] https://developer.nvidia.com/blog/inside-nvidia-blackwell-ultra-the-chip-powering-the-ai-factory-era/

 

View original content to download multimedia: https://www.prnewswire.com/news-releases/superx-launches-the-latest-xn9160-b300-ai-server-blackwell-ultra-delivers-50-more-compute-over-blackwell-302574654.html

SOURCE SuperX AI Technology Ltd

FAQ

What did SuperX (SUPX) announce on October 3, 2025?

SuperX launched the XN9160-B300 AI Server featuring 8 NVIDIA Blackwell B300 GPUs and 2,304GB unified HBM3E memory.

How much memory does the SUPX XN9160-B300 provide for large models?

The system offers 2,304GB of unified HBM3E across 8 GPUs (288GB per GPU).

What compute and networking improvements does the SUPX XN9160-B300 deliver?

It uses Blackwell Ultra with +50% NVFP4 compute and 8×800Gb/s OSFP ports for high-throughput scaling.

What CPUs and power configuration does the SUPX XN9160-B300 use?

The server uses Dual Intel Xeon 6 processors and 12×3000W 80 PLUS Titanium redundant power supplies.

Which workloads does SUPX target with the XN9160-B300?

Targeted workloads include foundation model training, inference, reinforcement learning, HPC (climate, drug discovery) and finance use cases.