Nvidia Broadens NVLink Access to Rival Chipmakers with New Fusion Technology

At the Computex hardware showcase, Nvidia made a surprising move by opening up its high-speed NVLink interconnect technology to third-party processor vendors. The newly announced NVLink Fusion marks a shift in strategy, allowing select non-Nvidia accelerators to participate in Nvidia’s tightly integrated ecosystem.
What Is NVLink Fusion?
Originally exclusive to Nvidia GPUs and CPUs, NVLink is a specialized high-bandwidth interconnect designed to enable massive data transfer between processors—essentially making multiple GPUs behave like one unified computing unit. Now in its fifth generation, it delivers a staggering 1.8 terabytes per second of bidirectional bandwidth per GPU, enabling dense, high-performance AI clusters with up to 72 GPUs per rack. In comparison, PCIe 5.0 maxes out at 128 GB/s.
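The gap between the two interconnects is easy to quantify from the figures above. A minimal back-of-the-envelope sketch (treating both numbers as bidirectional bandwidth, as quoted):

```python
# Bandwidth figures quoted in the article (both bidirectional).
NVLINK5_GBPS = 1800   # NVLink 5: 1.8 TB/s per GPU
PCIE5_GBPS = 128      # PCIe 5.0 x16: 128 GB/s

# NVLink 5 vs PCIe 5.0: roughly a 14x advantage per GPU.
ratio = NVLINK5_GBPS / PCIE5_GBPS
print(f"NVLink 5 ~= {ratio:.0f}x PCIe 5.0 x16")

# Summed across a fully populated 72-GPU rack.
rack_tbps = 72 * NVLINK5_GBPS / 1000
print(f"Aggregate across 72 GPUs: {rack_tbps:.1f} TB/s")
```

That roughly 130 TB/s aggregate is what lets a rack of 72 GPUs behave like one unified computing unit rather than a collection of PCIe-attached cards.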
With NVLink Fusion, Nvidia opens the door for two new types of connections:
Linking custom CPUs to Nvidia GPUs
Connecting Nvidia’s Grace CPU line (and future chips) with non-Nvidia accelerators
Enabling Greater Flexibility in AI Infrastructure
Dion Harris, Nvidia’s Senior Director of HPC, Cloud, and AI, noted during a press call that NVLink Fusion enables more flexibility for organizations designing their own large-scale compute systems. He emphasized that this approach expands the AI ecosystem while keeping Nvidia’s platform at the core of future innovation.
“Our growing partner ecosystem—ranging from custom silicon developers to data center hardware builders—can now integrate their solutions more seamlessly,” Harris explained. “With NVLink Fusion, partners can bring their AI factories online faster by taking advantage of Nvidia’s performance and scalability.”
Still a Controlled Ecosystem
Although Nvidia is opening the door, it’s not giving away the keys entirely. Companies like AMD will not be able to connect their CPUs and GPUs directly using NVLink Fusion. For any deployment, at least one Nvidia component must be involved in the configuration—ensuring that Nvidia remains central to the system design.
First Adopters and Industry Alternatives
Several chip and interconnect vendors have already signed on to use NVLink Fusion, including MediaTek, Marvell, Alchip, Astera Labs, Synopsys, and Cadence.
However, NVLink isn’t the only game in town. The Ultra Accelerator Link (UALink) consortium has developed an open alternative aimed at fostering interoperability between accelerators from various vendors. The recently released UALink 200G 1.0 Specification offers a low-latency, high-throughput architecture for communication within AI compute clusters—positioning itself as a multivendor answer to Nvidia’s proprietary solution.