
Development of NVLink

Nick Chen
Nvidia has built a well-rounded ecosystem around NVLink, and the technology's system-level scalability has become key to the development of AI computing.
Abstract

As AI model sizes grow, the demand for AI computing power has significantly increased, making multi-GPU collaborative computing mainstream. However, the traditional PCIe standard faces numerous limitations in terms of transmission speed and scalability, failing to meet the needs of multi-GPU systems.

In 2014, Nvidia introduced NVLink, an interconnect technology specifically designed for high-speed communication between GPUs. Compared to PCIe, NVLink offers higher bandwidth, lower latency, reduced power consumption, memory pooling capabilities, and superior system scalability. This significantly enhances the computational efficiency of multi-GPU systems, establishing NVLink as an indispensable moat for Nvidia in the AI hardware sector.
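The bandwidth advantage described above can be made concrete with a rough comparison. The figures below are commonly cited peak bidirectional specs (per GPU, in GB/s) and are assumptions for illustration, not numbers taken from this report:

```python
# Rough comparison of peak bidirectional bandwidth: PCIe vs NVLink.
# All figures are commonly cited specs (GB/s per GPU) assumed for
# illustration; actual sustained throughput varies by workload.

PCIE_GBPS = {
    "PCIe 3.0 x16": 32,
    "PCIe 4.0 x16": 64,
    "PCIe 5.0 x16": 128,
}

NVLINK_GBPS = {
    "NVLink 1.0 (P100)": 160,   # 4 links x 40 GB/s
    "NVLink 2.0 (V100)": 300,   # 6 links x 50 GB/s
    "NVLink 3.0 (A100)": 600,   # 12 links x 50 GB/s
    "NVLink 4.0 (H100)": 900,   # 18 links x 50 GB/s
}

def speedup(nvlink_gbps: float, pcie_gbps: float) -> float:
    """Ratio of NVLink to PCIe peak bandwidth."""
    return nvlink_gbps / pcie_gbps

# NVLink 2.0 against the PCIe 3.0 x16 common in the same era:
print(f"{speedup(NVLINK_GBPS['NVLink 2.0 (V100)'], PCIE_GBPS['PCIe 3.0 x16']):.1f}x")
```

Under these assumed figures, a V100-era NVLink fabric offers roughly an order of magnitude more GPU-to-GPU bandwidth than the PCIe 3.0 slots of the same period, which is the gap driving the efficiency claims above.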

NVLink technology has continued to evolve with each generation of GPU accelerators. Building on NVLink, Nvidia launched the world's first AI server, the DGX-1, and has improved the line in subsequent products. The DGX-2 was the first AI server to achieve all-to-all GPU connectivity, using NVSwitch chips to interconnect 16 GPUs and significantly boost computational efficiency.
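A quick counting argument shows why all-to-all connectivity required a switch chip rather than direct GPU-to-GPU links. The per-GPU link count below (6 NVLink 2.0 ports on a V100) is a commonly cited spec, assumed here for illustration:

```python
# Why DGX-2 needs NVSwitch: a direct all-to-all mesh of n GPUs would
# require n-1 links on every GPU, but a V100 exposes only 6 NVLink
# 2.0 ports (an assumed, commonly cited figure).

from math import comb

def direct_links_needed(n_gpus: int) -> int:
    """Links each GPU would need for a fully connected (all-to-all) mesh."""
    return n_gpus - 1

def total_pairs(n_gpus: int) -> int:
    """Number of GPU pairs that must be able to communicate."""
    return comb(n_gpus, 2)

# DGX-2: 16 GPUs -> each would need 15 direct links (vs. 6 available),
# and 120 pairwise paths must exist in total.
print(direct_links_needed(16), total_pairs(16))  # 15 120
```

Since 15 required links far exceeds the 6 ports available per GPU, routing traffic through NVSwitch chips is what lets any of the 120 GPU pairs communicate at full NVLink bandwidth.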


Published: December 26, 2024
