Marvell Expands Custom Compute Platform with UALink Scale-up Solution for AI Accelerated Infrastructure
- Introduction of open-standards-based technology enabling scale-up to hundreds or thousands of AI accelerators
- Comprehensive IP portfolio including advanced 224G SerDes and packaging options
- Strategic partnership and support from major industry player AMD
- Solution addresses growing market need for efficient AI infrastructure scaling
Insights
Marvell's UALink offering strengthens its AI acceleration portfolio, positioning it favorably in the expanding AI infrastructure market.
Marvell Technology has strategically expanded its AI infrastructure capabilities with the announcement of its custom Ultra Accelerator Link (UALink) scale-up solution. This addition to Marvell's IP portfolio addresses a critical bottleneck in AI computing – the interconnect between accelerators and switches in large-scale AI deployments.
The technical specifications of the offering are impressive, featuring 224G SerDes (serializer/deserializer) and physical layer IP, configurable controllers, scalable low-latency switch fabric, and advanced packaging options including co-packaged copper and optics. These components enable the crucial scale-up capabilities needed for deployments with hundreds or thousands of AI accelerators.
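To put the 224G SerDes figure in context, the short sketch below estimates per-port throughput from the per-lane line rate. The lane counts and encoding-overhead factor are illustrative assumptions for sizing intuition only, not published Marvell or UALink parameters.

```python
# Illustrative sketch only: estimates usable scale-up port bandwidth from
# the per-lane SerDes rate. Lane counts and overhead are assumed values,
# not figures from Marvell or the UALink specification.

LANE_RATE_GBPS = 224          # raw line rate of a 224G SerDes lane
ENCODING_OVERHEAD = 0.05      # assumed ~5% loss to FEC/encoding (illustrative)

def port_bandwidth_gbps(lanes: int) -> float:
    """Approximate usable bandwidth of a port built from `lanes` SerDes lanes."""
    return lanes * LANE_RATE_GBPS * (1 - ENCODING_OVERHEAD)

for lanes in (1, 4, 8):
    print(f"{lanes} lane(s): ~{port_bandwidth_gbps(lanes):.0f} Gb/s per port")
```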
What makes this announcement particularly significant is its adherence to open standards through Marvell's participation in the UALink Consortium. This approach contrasts with proprietary solutions and positions Marvell favorably as hyperscalers increasingly demand standardized, interoperable infrastructure components.
The endorsement from AMD's Forrest Norrod validates Marvell's approach and suggests potential for ecosystem expansion. The timing is strategic as hyperscalers face mounting challenges in scaling their AI infrastructure while maintaining performance efficiency.
For Marvell, this represents a logical expansion of its semiconductor portfolio into high-growth AI infrastructure, where its expertise in high-speed connectivity and custom silicon development provides competitive differentiation in the rapidly evolving AI acceleration market.
- Optimized custom offering enables end-to-end UALink architecture for rack-scale AI
- Delivering high compute performance with low power and latency
- Open standards-based technology supports flexible interconnect solutions that allow customers to innovate at scale
The Marvell custom UALink scale-up solution features a comprehensive set of interoperable IPs, including:
- Best-in-class 224G SerDes and UALink Physical Layer IP
- Configurable UALink Controller IP
- Scalable low-latency Switch Core and Fabric IP
- Advanced packaging options including co-packaged copper and co-packaged optics
The custom UALink solution enables customers to deliver scale-up interconnects for hundreds or thousands of AI accelerators in a single deployment. Paired with Marvell custom silicon capabilities, compute vendors can build solutions including custom accelerators with UALink controllers and custom switches. The combination of Marvell advanced packaging technology and the custom UALink architecture enables optimal performance for rack-scale AI.
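As a rough illustration of how "hundreds or thousands" of accelerators follows from switch radix and fabric depth, here is a minimal Python sketch under generic fabric-sizing assumptions; the radix values and the single- versus two-tier arithmetic are hypothetical and are not details of the Marvell offering.

```python
# Minimal fabric-sizing sketch (illustrative assumptions, not Marvell specs):
# a single-tier switch connects at most `radix` accelerators directly;
# a two-tier leaf/spine fabric with half of each leaf's ports facing
# accelerators scales to roughly radix^2 / 2 endpoints.

def max_accelerators(radix: int, tiers: int) -> int:
    """Upper bound on accelerators reachable in a scale-up pod."""
    if tiers == 1:
        return radix                  # direct attach, one hop through the switch
    if tiers == 2:
        return (radix // 2) * radix   # leaf ports split between accelerators and spines
    raise ValueError("sketch only covers 1- or 2-tier fabrics")

for radix in (64, 128):
    print(f"radix {radix}: 1 tier -> {max_accelerators(radix, 1)}, "
          f"2 tiers -> {max_accelerators(radix, 2)} accelerators")
```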
Hyperscalers are increasingly challenged by the need to scale AI infrastructure while ensuring high performance. The Marvell custom UALink offering addresses these challenges with an open-standards-based toolkit that enables direct, low-latency communication between accelerators and supports flexible, scalable switch topologies. Marvell empowers hyperscalers to build next-generation AI infrastructure with the performance, interoperability and efficiency required to support AI workloads.
"We are pleased to introduce our new custom UALink offering to enable the next generation of AI scale-up systems," said Nick Kucharewski, senior vice president and general manager, Cloud Platform Business Unit at Marvell. "This addition to our custom portfolio enables customers with flexibility to optimize their AI infrastructure with standards-based scale-up switch and interconnect technology."
"We are excited to see UALink custom solutions from Marvell, which are essential to the future of AI," said Forrest Norrod, executive vice president and general manager, Data Center Solutions Group, AMD. "At our core, we're committed to building large-scale AI and high-performance computing solutions grounded in open standards, efficiency and strong ecosystem support. We look forward to continued collaboration within the open UALink ecosystem to advance scale-up networks."
UALink Consortium
Marvell is a member of the UALink Consortium, an open industry standard group dedicated to developing UALink specifications. The consortium drives adoption of an open interconnect standard between accelerators and related devices, enabling seamless interoperability, communication, and high-performance computing – setting new standards for AI, empowering the development of next-generation applications, and driving transformative breakthroughs in the AI era.
About Marvell
To deliver the data infrastructure technology that connects the world, we're building solutions on the most powerful foundation: our partnerships with our customers. Trusted by the world's leading technology companies for over 25 years, we move, store, process and secure the world's data with semiconductor solutions designed for our customers' current needs and future ambitions. Through a process of deep collaboration and transparency, we're ultimately changing the way tomorrow's enterprise, cloud, automotive, and carrier architectures transform—for the better.
Marvell and the M logo are trademarks of Marvell or its affiliates. Please visit www.marvell.com for a complete list of Marvell trademarks. Other names and brands may be claimed as the property of others.
This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events, results or achievements. Actual events, results or achievements may differ materially from those contemplated in this press release. Forward-looking statements are only predictions and are subject to risks, uncertainties and assumptions that are difficult to predict, including those described in the "Risk Factors" section of our Annual Reports on Form 10-K, Quarterly Reports on Form 10-Q and other documents filed by us from time to time with the SEC. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.
For further information, contact:
Kim Markle
pr@marvell.com
View original content to download multimedia: https://www.prnewswire.com/news-releases/marvell-expands-custom-compute-platform-with-ualink-scale-up-solution-for-ai-accelerated-infrastructure-302478692.html
SOURCE Marvell