STOCK TITAN

SuperX Unveils the All-New SuperX XN9160-B200 AI Server, Powered by NVIDIA Blackwell GPU -- Accelerating AI Innovation by 30x as Compared to H100 Series with Supercomputer-Class Performance

SuperX (NASDAQ:SUPX) has unveiled its groundbreaking XN9160-B200 AI Server, featuring NVIDIA's latest Blackwell B200 GPUs. The server delivers exceptional AI computing capabilities with 8 NVIDIA Blackwell B200 GPUs, 1440 GB of HBM3E memory, and 6th Gen Intel Xeon processors in a 10U chassis.

The system achieves remarkable performance metrics, including up to 15x faster inference compared to the H100 platform, processing 58 tokens per second per card on the GPT-MoE 1.8T model. The server features advanced reliability measures, including redundant power supplies and comprehensive quality control processes with a three-year warranty.


Positive
  • Revolutionary performance with up to 15x faster inference compared to the previous-generation H100 platform
  • Advanced specifications including 8 NVIDIA Blackwell B200 GPUs and 1440GB HBM3E memory
  • Robust reliability features with redundant power supplies and 48-hour stress testing
  • Comprehensive three-year warranty and professional technical support
Negative
  • None.

Insights

SuperX's new B200-powered AI server delivers exceptional performance gains, positioning the company as a serious contender in the high-end AI infrastructure market.

SuperX has made a significant technological leap with its XN9160-B200 AI server. The integration of 8 NVIDIA Blackwell B200 GPUs represents a substantial competitive advantage in the rapidly growing AI infrastructure market. The fifth-generation NVLink technology providing 1.8TB/s inter-GPU bandwidth is particularly noteworthy, as this interconnect speed is critical for large-scale distributed AI training workloads.

The performance metrics are impressive: 3x speed improvement for large-scale AI model training and a remarkable 15x performance increase for inference workloads compared to H100-based systems. Specifically, the system achieves 58 tokens per second on GPT-MoE 1.8T model inference versus just 3.5 tokens on H100 platforms. This positions the XN9160-B200 as a formidable solution for enterprises working with trillion-parameter foundation models.

The system architecture shows thoughtful design beyond just raw GPU power. The inclusion of Intel's latest 6th Gen Xeon processors with 64 cores per CPU provides balanced computing resources for data preprocessing pipelines. The 3,072GB DDR5 memory configuration (32 modules at 96GB each) ensures sufficient capacity for complex AI workloads, while the redundant power supply design (1+1 redundant 12V and 4+4 redundant 54V) demonstrates engineering maturity for enterprise-grade reliability.

Most significantly, SuperX is targeting high-value vertical markets including large tech companies, research institutions, and sectors like finance and healthcare where AI computing demand is surging. This product positions SuperX to compete effectively against established players like Dell, HPE, and Lenovo in the premium AI server segment, potentially capturing market share in what has become one of technology's highest-growth hardware categories.

SINGAPORE, July 30, 2025 /PRNewswire/ -- Super X AI Technology Limited (Nasdaq: SUPX) ("Company" or "SuperX") today announced the launch of its latest flagship product, the SuperX XN9160-B200 AI Server. Powered by NVIDIA's Blackwell-architecture B200 GPU, this next-generation AI server is engineered to meet the rising demand for scalable, high-performance computing in AI training, machine learning (ML), and high-performance computing (HPC) workloads.

The XN9160-B200 AI Server is purpose-built to accelerate large-scale distributed AI training and AI inference workloads. It is optimized for GPU-intensive tasks, particularly training and inference of foundation models using reinforcement learning (RL) and distillation techniques, multimodal model training and inference, and HPC applications such as climate modeling, drug discovery, seismic analysis, and insurance risk modeling. Its performance rivals that of a traditional supercomputer, offering enterprise-grade capabilities in a compact form.

The launch of the SuperX XN9160-B200 AI server marks a significant milestone in SuperX's AI infrastructure roadmap, delivering powerful GPU instances and compute capabilities to accelerate global AI innovation.

XN9160-B200 AI Server

The all-new XN9160-B200 features 8 NVIDIA Blackwell B200 GPUs, fifth-generation NVLink technology, 1440 GB of high-bandwidth memory (HBM3E), and 6th Gen Intel® Xeon® processors, unleashing extreme AI compute performance within a 10U chassis.

Built for AI: Cutting-Edge Training Performance

The SuperX XN9160-B200 is powered by its core engine: 8 NVIDIA Blackwell B200 GPUs connected via fifth-generation NVLink technology, which provides ultra-high inter-GPU bandwidth of up to 1.8TB/s. This significantly accelerates large-scale AI model training, delivering up to a 3x speed improvement and drastically shortening the R&D cycle for tasks like pre-training and fine-tuning trillion-parameter models. Inference sees an even larger leap: with 1440GB of high-performance HBM3E memory running at FP8 precision, the system achieves a throughput of 58 tokens per second per card on the GPT-MoE 1.8T model. Compared to the 3.5 tokens per second of the previous-generation H100 platform, this represents a performance increase of up to 15x.
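As a rough arithmetic check, the throughput claims above can be combined into a system-level figure. This is a minimal sketch using only the per-card numbers quoted in this release; real-world speedups vary with model, precision, and batch size.

```python
# Arithmetic sketch using the throughput figures quoted in this release.
# Real inference performance depends on model, precision, and batch size.
B200_TOKENS_PER_SEC = 58.0   # per card, GPT-MoE 1.8T at FP8 (claimed)
H100_TOKENS_PER_SEC = 3.5    # per card, previous-generation platform (claimed)
NUM_GPUS = 8                 # B200 GPUs in one XN9160-B200 system

system_throughput = NUM_GPUS * B200_TOKENS_PER_SEC        # aggregate tokens/s
per_card_ratio = B200_TOKENS_PER_SEC / H100_TOKENS_PER_SEC

print(f"Aggregate throughput: {system_throughput:.0f} tokens/s")
print(f"Per-card ratio vs H100: {per_card_ratio:.1f}x")
```

Note that the raw per-card ratio works out to roughly 16.6x; the release quotes the gain conservatively as "up to 15x".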

The inclusion of 6th Gen Intel® Xeon® processors, in tandem with 5,600-8,000 MT/s DDR5 memory and all-flash NVMe storage, provides key support for the system. These components effectively accelerate data pre-processing, ensure smooth operation in high-load virtualization environments, and enhance the efficiency of complex parallel computing, enabling the stable and efficient completion of AI model training and inference tasks.

To ensure exceptional operational reliability, the XN9160-B200 utilizes an advanced multi-path power redundancy solution. It is equipped with 1+1 redundant 12V power supplies and 4+4 redundant 54V GPU power supplies, effectively mitigating the risk of single points of failure and ensuring the system can run continuously and stably under unexpected circumstances, providing uninterrupted power for critical AI missions.

The SuperX XN9160-B200 has a built-in AST2600 intelligent management system that supports convenient remote monitoring and management. Each server undergoes over 48 hours of full-load stress testing, cold and hot boot validation, and high/low-temperature aging screening, combined with multiple production quality control processes to ensure reliable delivery. We also provide a three-year warranty and professional technical support, offering a full-lifecycle service guarantee to help enterprises navigate the AI wave and lead the future.

Technical Specifications:

  • CPU: 2x Intel® Xeon® 6710E processor (64 cores, 2.40 GHz, 205W)
  • GPU: 8x NVIDIA B200
  • Memory: 32x 96GB DDR5-5600 RDIMM
  • System Disk: 1x 960GB SSD
  • Storage Disk: 3.84TB NVMe U.2
  • Network: 8x CX7 MCX75310 IB cards (400G OSFP); 1x BCM957608-P2200G (dual 200G QSFP56); 1x BCM957412A4120AC (dual 10G SFP+)
  • Dimensions: 440mm (H) x 448mm (W) x 900mm (D)
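A small sketch of the capacity totals implied by the specification list above. The module counts and the 1440 GB HBM3E system total are taken from this release; the per-GPU HBM3E figure is derived from that total, not stated directly.

```python
# Capacity totals implied by the spec sheet (figures from this release).
DDR5_MODULES = 32
GB_PER_MODULE = 96
NUM_GPUS = 8
HBM3E_TOTAL_GB = 1440        # stated system-wide HBM3E capacity

ddr5_total_gb = DDR5_MODULES * GB_PER_MODULE   # system DDR5 capacity
hbm3e_per_gpu_gb = HBM3E_TOTAL_GB // NUM_GPUS  # implied per-GPU HBM3E

print(f"DDR5 total: {ddr5_total_gb} GB")       # 3072 GB, matching the
                                               # 3,072GB cited in Insights
print(f"HBM3E per GPU: {hbm3e_per_gpu_gb} GB")
```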

Market Positioning

The XN9160-B200 is designed for global enterprises and research institutions with demanding compute needs, especially:

  • Large Tech Companies: For training and deploying foundation models and generative AI applications
  • Academic & Research Institutions: For complex scientific simulations and modeling
  • Finance & Insurance: For risk modeling and real-time analytics
  • Pharmaceutical & Healthcare: For drug screening and bioinformatics
  • Government & Meteorological Agencies: For climate modeling and disaster prediction

Purchase & Contact Information

For product inquiries, sales, and detailed specifications, please contact our product sales team at: Sales@superx.sg

About Super X AI Technology Limited (SUPX)

Super X AI Technology Limited is an AI infrastructure solutions provider, and through its wholly-owned subsidiaries in Singapore, SuperX Industries Pte. Ltd. and SuperX AI Pte. Ltd., offers a comprehensive portfolio of proprietary hardware, advanced software, and end-to-end services for AI data centers. The Company's services include advanced solution design and planning, cost-effective infrastructure product integration, and end-to-end operations and maintenance. Its core products include high-performance AI servers, High-Voltage Direct Current (HVDC) solutions, high-density liquid cooling solutions, as well as AI cloud and AI agents. Headquartered in Singapore, the Company serves institutional clients globally, including enterprises, research institutions, and cloud and edge computing deployments. For more information, please visit www.superx.sg

Contact Information

Product Inquiries: sales@superx.sg

Investor Relations: ir@superx.sg

Follow our social media:

X.com: https://x.com/SUPERX_AI_

LinkedIn: https://www.linkedin.com/company/superx-ai

Safe Harbor Statement

This press release contains forward-looking statements. In addition, from time to time, we or our representatives may make forward-looking statements orally or in writing. We base these forward-looking statements on our expectations and projections about future events, which we derive from the information currently available to us. You can identify forward-looking statements by those that are not historical in nature, particularly those that use terminology such as "may," "should," "expects," "anticipates," "contemplates," "estimates," "believes," "plans," "projected," "predicts," "potential," or "hopes" or the negative of these or similar terms. In evaluating these forward-looking statements, you should consider various factors, including: our ability to change the direction of the Company; our ability to keep pace with new technology and changing market needs; and the competitive environment of our business. These and other factors may cause our actual results to differ materially from any forward-looking statement.

Forward-looking statements are only predictions. The reader is cautioned not to rely on these forward-looking statements. The forward-looking events discussed in this press release, and in other statements made from time to time by us or our representatives, may not occur, and actual events and results may differ materially, as they are subject to risks, uncertainties, and assumptions about us. We are not obligated to publicly update or revise any forward-looking statement; as a result of these uncertainties and assumptions, the forward-looking events discussed in this press release and in other statements made from time to time by us or our representatives might not occur.

View original content to download multimedia: https://www.prnewswire.com/news-releases/superx-unveils-the-all-new-superx-xn9160-b200-ai-server-powered-by-nvidia-blackwell-gpu--accelerating-ai-innovation-by-30x-as-compared-to-h100-series-with-supercomputer-class-performance-302517113.html

SOURCE SuperX AI Technology Ltd

FAQ

What are the key features of SuperX's new XN9160-B200 AI Server?

The XN9160-B200 features 8 NVIDIA Blackwell B200 GPUs, 1440GB of HBM3E memory, and 6th Gen Intel Xeon processors, and achieves 58 tokens per second per card on the GPT-MoE 1.8T model.

How does the SUPX XN9160-B200's performance compare to the previous generation?

The XN9160-B200 delivers up to 15x faster inference than the previous-generation H100 platform, processing 58 tokens per second per card versus 3.5 on the GPT-MoE 1.8T model.

What industries is the SuperX XN9160-B200 AI Server targeting?

The server targets large tech companies for AI model training, academic institutions for research, financial firms for risk modeling, healthcare for drug screening, and government agencies for climate modeling.

What reliability features does the SUPX XN9160-B200 include?

It features 1+1 redundant 12V power supplies, 4+4 redundant 54V GPU power supplies, 48-hour stress testing, and includes a three-year warranty with technical support.

What are the memory and storage specifications of the SuperX XN9160-B200?

The server includes 32x 96GB DDR5 5600 RDIMM memory, a 960GB SSD system disk, and 3.84TB NVMe U.2 storage.