Who Will Win the "Scale Up" Challenge?

By: Mary Jander


A marketing war is brewing among vendors pushing “scale up” architecture for AI networking using evolving interconnection technologies. And there may be no clear winner anytime soon.

Recent news highlights the situation: The Ultra Accelerator Link (UALink) Consortium announced its first specification in April, pointing the way for AI infrastructure vendors to use an open standard to connect multiple GPUs together as a single entity. Then at the Computex show in Taiwan in May, NVIDIA unveiled NVLink Fusion, extending the vendor’s GPU interconnection technology to third-party chipmakers. And last week, Broadcom announced its Tomahawk 6 chip with Scale Up Ethernet (SUE) technology, which deploys enhanced Ethernet to compete directly with NVLink and, by implication, UALink.

Let’s take a closer look at each of these technologies and their proposed contributions to AI infrastructure.

UALink: Open Standard, Early Days

The first iteration of UALink (known as the UALink 200G 1.0 Specification) defines a low-latency interconnect for GPUs in back-end networks. It supports a 200-Gb/s bidirectional data rate per lane, with links of 1, 2, or 4 lanes, connecting up to 1,024 accelerators in a pod. Hence, maximum bidirectional bandwidth per link is 800 Gb/s.
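
As a rough sketch of that arithmetic, the snippet below works out the per-link bandwidth for each lane count. The inputs (200 Gb/s per lane, 1-, 2-, or 4-lane links, up to 1,024 accelerators per pod) are the figures cited above; the code itself and the pod-level total it prints are purely illustrative assumptions, not something defined by the UALink specification.

```python
# Back-of-envelope arithmetic for the UALink 200G 1.0 numbers cited above.
# The per-lane rate, lane options, and pod size come from the article;
# the pod-level aggregate is a hypothetical illustration only.

PER_LANE_GBPS = 200          # bidirectional data rate per lane
LANE_OPTIONS = (1, 2, 4)     # lane counts defined for a link
MAX_ACCELERATORS = 1024      # maximum accelerators per pod

def link_bandwidth_gbps(lanes: int) -> int:
    """Bandwidth of a single accelerator link for a given lane count."""
    if lanes not in LANE_OPTIONS:
        raise ValueError(f"expected one of {LANE_OPTIONS} lanes, got {lanes}")
    return lanes * PER_LANE_GBPS

if __name__ == "__main__":
    for lanes in LANE_OPTIONS:
        print(f"{lanes}-lane link: {link_bandwidth_gbps(lanes)} Gb/s")
    # Hypothetical aggregate if every accelerator in a full pod used a 4-lane link:
    pod_total_tbps = MAX_ACCELERATORS * link_bandwidth_gbps(4) / 1000
    print(f"1,024 accelerators at 4 lanes each: {pod_total_tbps:.0f} Tb/s aggregate")
```

Running it confirms the 800-Gb/s per-link maximum quoted above; the aggregate figure is simply that number multiplied across a fully populated pod.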
