Penguin Solutions Introduces Industry's First Production-Ready CXL-Based KV Cache Server

Key Terms

CXL
CXL (Compute Express Link) is an open industry-standard interconnect that lets processors, accelerators, and memory-expansion devices share memory coherently over the same physical link used by PCIe, so a server can add large pools of memory beyond what fits in its DIMM slots. Investors care because CXL targets the "memory wall" in AI and data-center workloads: systems that use it can keep expensive GPUs busier and serve larger models, shaping which chip, memory, and server suppliers capture spending, much as adding annex shelves to an overflowing library lets clerks spend less time refetching books.
KV cache
A kv cache is a small, fast memory store that keeps recently used “keys” (identifiers) and their associated “values” (data) so a computer system can look them up instantly instead of re-calculating or re-fetching them. For investors, a kv cache matters because it can cut latency and computing costs and improve the responsiveness of services like algorithmic trading, market data feeds, or AI-driven analysis—similar to a clerk who remembers recent lookups so customers don’t wait.
GPU
A GPU (graphics processing unit) is a specialized computer chip designed to handle many calculations at once, originally for rendering images and video but now widely used for tasks like artificial intelligence, data analysis and high-performance computing. Investors watch GPU demand and prices because strong sales often signal growth for chip makers and their customers, affect profit margins and capital spending, and can forecast wider trends in gaming, AI adoption and cloud services.
DDR5
DDR5 is the latest generation of high-speed volatile memory used as the short-term workspace for computers and servers, like a faster, larger desk where a processor keeps information it’s actively using. It matters to investors because upgrades to DDR5 drive demand across chip makers, computer builders and data-centre operators, affecting sales, pricing power and product cycles; companies that lead in DDR5 production or adoption can gain a competitive and financial edge.
NVMe
NVMe is a fast data-transport standard that lets modern solid-state drives (SSDs) move information much more quickly and efficiently than older interfaces, acting like a wider, faster highway between storage and a computer’s processor. For investors, NVMe matters because it boosts device and server performance, can lower operating costs and power use in data centers, and influences which products and suppliers are competitive in markets where speed and efficiency drive revenue and margins.
Retrieval-augmented generation (RAG)
Retrieval-augmented generation (RAG) is a method that combines a fast search of relevant documents with an AI that writes answers, so the output is grounded in real source material rather than only the AI's memory. Think of it as a writer who looks things up in a library while drafting a report; for investors, this can mean more accurate, up-to-date analysis, faster research, and lower risk of misleading claims when companies use AI to summarize filings, earnings calls, or market data.
Service-level agreements (SLAs)
Service-level agreements (SLAs) are formal promises in contracts that spell out how well a service must perform — for example, how often it must be available, how quickly problems must be fixed, and what compensation follows if targets are missed. For investors they matter because SLAs drive customer satisfaction, revenue stability and potential penalties; they act like a company’s performance guarantees, revealing operational risk and how reliably the business can deliver what it sells.

Penguin Solutions MemoryAI KV cache server, an 11 TB memory appliance, enables efficient deployment of enterprise-scale AI inference

FREMONT, Calif.--(BUSINESS WIRE)-- Penguin Solutions, Inc. (Nasdaq: PENG), the AI factory platform company, today announced the industry's first production-ready KV cache server that utilizes CXL memory technology to address the critical "memory wall" challenge in AI inferencing: the Penguin Solutions MemoryAI™ KV cache server. This innovative solution delivers up to 11 TB of CXL-based memory engineered to optimize performance of enterprise-scale inference, including agentic AI. The result is lower latency, higher throughput, increased efficiency of GPU clusters, consistent achievement of stringent service-level agreements (SLAs), and faster time-to-first-token (TTFT).

While model training and tuning are primarily compute-bound and occur episodically, the continuous, memory-bound, and latency-sensitive workloads required for inference and agentic AI are complex and fundamentally different. Inference demands are typically 30% compute-driven (GPU) and 70% memory-driven (RAM), elevating the need for greater memory capacity and causing performance bottlenecks and GPU idle time. To accelerate these memory-dependent AI processes, Penguin’s MemoryAI KV cache server increases memory capacity by integrating 3 TB of DDR5 main memory and up to eight 1 TB CXL Add-in Cards (AICs).
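
To make the memory arithmetic concrete, the sketch below applies the standard per-request KV cache sizing formula (keys plus values, across every layer and attention head) to a hypothetical long-context model. The model dimensions are illustrative assumptions, not specifications from this announcement; only the 3 TB of DDR5 plus eight 1 TB CXL AICs comes from the paragraph above.

    # Back-of-the-envelope KV cache sizing. Model dimensions are assumed
    # examples; only the 3 TB DDR5 + 8 x 1 TB CXL capacity comes from the
    # press release.

    def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
        """Per-request KV cache size: keys + values, every layer, FP16/BF16 elements."""
        return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

    # Hypothetical 70B-class model with grouped-query attention and a
    # 128k-token context window.
    per_request = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                                 seq_len=128_000)
    print(f"KV cache per request: {per_request / 2**30:.1f} GiB")  # roughly 39 GiB

    # Capacity described above: 3 TB DDR5 + 8 x 1 TB CXL AICs = 11 TB.
    capacity_bytes = (3 + 8 * 1) * 10**12
    print(f"~{capacity_bytes // per_request} such requests fit in 11 TB")  # roughly 260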

“CXL-enabled KV cache technology delivers faster time-to-first-token, reduced time per output token, and increased overall end-to-end token throughput,” said Phil Pokorny, chief technology officer at Penguin Solutions. “These critical performance improvements enable enterprise-scale inferencing across many users who expect low latency and timely access to AI-generated insights. Penguin’s MemoryAI KV cache server is designed to help enterprises sustain these performance improvements and consistent service standards as model size, context windows, precision requirements, and concurrency demands continue to grow.”
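
For readers who want to see how the metrics in the quote are typically measured, here is a minimal sketch that times time-to-first-token, time per output token, and end-to-end throughput from any iterator that streams generated tokens. The fake_stream generator is a hypothetical stand-in for a real inference client, not part of Penguin's product.

    import time

    def measure_latency(token_stream):
        """Compute TTFT, time per output token, and tokens/sec from a streaming
        token iterator (a generic stand-in for any inference client)."""
        start = time.perf_counter()
        ttft, count = None, 0
        for _ in token_stream:
            if ttft is None:
                ttft = time.perf_counter() - start   # prefill phase: first token out
            count += 1
        total = time.perf_counter() - start
        tpot = (total - ttft) / max(count - 1, 1)    # steady-state decode pace
        return ttft, tpot, count / total

    def fake_stream(n=10, delay=0.01):
        """Hypothetical stand-in that yields tokens at a fixed pace."""
        for _ in range(n):
            time.sleep(delay)
            yield "tok"

    ttft, tpot, tps = measure_latency(fake_stream())
    print(f"TTFT {ttft*1e3:.0f} ms, per-token {tpot*1e3:.0f} ms, {tps:.0f} tok/s")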

By significantly expanding the memory available to GPUs, the server enables organizations to mitigate GPU memory bandwidth limits, reduce redundant re-compute operations, and optimize clusters for inference performance. This increased system efficiency also enables organizations to train larger models and process expansive datasets faster.

Benefits of Penguin Solutions MemoryAI KV cache server in Cluster Design

With expanded, disaggregated memory, the server offers several operational benefits:

  • Support for larger context size and concurrency: Penguin’s MemoryAI KV cache server is particularly crucial for enterprise-scale tasks requiring large context windows and minimal latency, including real-time financial news parsing, retrieval-augmented generation (RAG) over massive 10-K datasets, and regulatory compliance analysis.
  • Flexibility to tier cluster memory: CXL-based KV cache delivered by the server creates a new tier of cluster memory to supplement existing high bandwidth memory (HBM) and system DRAM, delivering speeds 10x faster than NVMe-based approaches. This provides new flexibility in offloading KV data for faster access; a minimal tiering sketch follows this list.
  • Compatibility with NVIDIA Dynamo: The solution is compatible with NVIDIA Dynamo, NVIDIA's software architecture for KV cache memory offloading.
  • Cost and power efficiency: The server enables organizations to maximize the efficient use of GPUs by adding large memory pools and optimizes clusters by right-sizing GPUs and memory. Additionally, the solution provides efficient operation, drawing less power than equivalent GPU servers.
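
The tiering idea in the list above can be illustrated with a small sketch: recently used KV blocks stay in GPU HBM, spill to a large CXL-backed tier when HBM fills, and only then to NVMe, so a hit in any tier avoids recomputing prefill. This is an illustrative policy written for this article, not Penguin's or NVIDIA Dynamo's implementation; the tier sizes and block-level API are assumptions.

    from collections import OrderedDict

    class TieredKVCache:
        """Illustrative three-tier KV block store (HBM -> CXL -> NVMe).
        Not Penguin's or NVIDIA Dynamo's implementation; tier sizes are
        arbitrary example values, counted in KV blocks rather than bytes."""

        def __init__(self, hbm_blocks=1_000, cxl_blocks=100_000):
            self.hbm = OrderedDict()    # fastest, smallest: GPU memory
            self.cxl = OrderedDict()    # large CXL tier; ~10x faster than NVMe per the release
            self.nvme = {}              # slowest tier, treated as unbounded here
            self.hbm_cap, self.cxl_cap = hbm_blocks, cxl_blocks

        def put(self, block_id, kv_block):
            self.hbm[block_id] = kv_block
            self.hbm.move_to_end(block_id)
            while len(self.hbm) > self.hbm_cap:      # spill least-recently-used blocks to CXL
                bid, blk = self.hbm.popitem(last=False)
                self.cxl[bid] = blk
            while len(self.cxl) > self.cxl_cap:      # spill CXL overflow to NVMe
                bid, blk = self.cxl.popitem(last=False)
                self.nvme[bid] = blk

        def get(self, block_id):
            for tier in (self.hbm, self.cxl, self.nvme):
                if block_id in tier:
                    blk = tier.pop(block_id)
                    self.put(block_id, blk)          # promote on reuse so hot blocks stay in HBM
                    return blk                       # hit: prefill need not be recomputed
            return None                              # miss: the caller must recompute this block

The design point the list makes is about the middle tier: a CXL layer large enough to hold working sets that would otherwise spill to NVMe or be recomputed on the GPU.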

The Penguin Solutions MemoryAI KV cache server builds on Penguin Solutions’ legacy of innovation and expertise in high-performance computing, with customers already deploying the solution to optimize cluster performance and meet demanding latency SLAs for production AI workloads.

Explore Penguin Solutions’ MemoryAI KV cache server page or visit booth #1031 at the NVIDIA GTC AI Conference and Expo March 16-19, 2026, in San Jose, Calif.

MemoryAI and Penguin Solutions are trademarks or registered trademarks of Penguin Solutions, Inc. or its affiliates. All other trademarks are the property of their respective owners.

About Penguin Solutions

The most transformative technological advancements are often the hardest to deploy and optimize. Penguin Solutions, the AI factory platform company, has the innovative technologies, skills, experience, and partnerships needed to turn your AI ambitions into reality.

In addition to our AI capabilities, Penguin Solutions offers memory and LED solutions serving a wide range of high-performance and specialized applications.

For more information, visit https://www.penguinsolutions.com.

PR Contact

Maureen O’Leary

Corporate Communications, Penguin Solutions

1-602-330-6846

pr@penguinsolutions.com

Source: Penguin Solutions, Inc.
