Microsoft announced on social media that Azure is the first cloud platform to run servers built on Nvidia's GB200 chips, and that its team is optimizing every layer of the stack to support the world's most advanced AI models. It also said further updates would be revealed at the Microsoft Ignite conference in November.
In the global cloud services market, Amazon Web Services (AWS) remains the clear leader, followed by Microsoft Azure in second place and Google Cloud in third. Since 2023, Microsoft has gained significant momentum from its successful investment in OpenAI, narrowing the gap with AWS.
On October 8, the Microsoft Azure team shared an image on social media showing a rack of AI servers powered by Nvidia's GB200 chips. The team is currently fine-tuning its technology stack, combining these industry-leading chips with InfiniBand networking and advanced closed-loop liquid cooling.
Further details are expected at the upcoming Microsoft Ignite conference in November. At the previous Ignite event, held in November 2023, Microsoft introduced its self-developed Azure Cobalt CPU for general-purpose computing and Azure Maia accelerator for AI workloads, both built for use in its own data centers.
Microsoft CEO Satya Nadella also highlighted the company's long-standing partnership with Nvidia, saying their joint innovation continues to lead the industry and to support the most complex AI workloads.
According to Statista, Microsoft and Meta were the top two buyers of Nvidia's H100 chips in 2023, followed by other major companies like Google, Amazon, Oracle, and Tencent.
Recently, cloud providers have centered their marketing on the continuous adoption of cutting-edge computing resources, and they now offer a variety of options spanning GPUs and in-house proprietary chips.
Shortly after Nvidia launched its Blackwell platform in late March, AWS announced support for Blackwell GPUs and said the platform would be incorporated into Project Ceiba, the supercomputer it is building jointly with Nvidia.
Google Cloud also revealed that its AI Hypercomputer would incorporate Blackwell GPUs, with services set to launch in 2025 via virtual machines (VMs). Data from Google Cloud shows that many customers, including startups like Character.ai, use both Google Cloud TPUs and Nvidia GPUs to meet their AI training and inference needs.
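As an aside on what delivering GPUs "via VMs" involves in practice, here is a minimal sketch using the google-cloud-compute Python client to provision a GPU-attached Compute Engine instance. It assumes a currently available configuration (an N1 machine with an Nvidia T4 attached), since the announcements do not name the Blackwell machine types; the project, zone, and instance names are placeholders.

```python
# pip install google-cloud-compute
from google.cloud import compute_v1

def create_gpu_vm(project: str, zone: str, name: str) -> None:
    """Create a Compute Engine VM with an attached GPU (illustrative sketch)."""
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=100,
        ),
    )
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n1-standard-8",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
        # Attach one Nvidia T4; newer accelerator types follow the same pattern.
        guest_accelerators=[
            compute_v1.AcceleratorConfig(
                accelerator_count=1,
                accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
            )
        ],
        # GPU VMs cannot live-migrate, so they must stop during host maintenance.
        scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # block until the create operation finishes

create_gpu_vm("my-project", "us-central1-a", "gpu-demo")  # placeholder values
```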
In June, Apple disclosed progress on its foundation models, revealing that it had used a combination of Google TPUs, cloud-based GPUs, and on-premises GPUs during the pre-training phase.
The growing demand for computing power has also prompted many Taiwanese companies to invest in building data centers and supercomputing facilities. Alongside collaborations between chipmakers and server manufacturers, software companies are repositioning themselves around new services such as GPU partitioning, computing-power leasing, and management solutions.
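Of those services, GPU partitioning most often refers to Nvidia's Multi-Instance GPU (MIG) feature, which splits one physical A100- or H100-class GPU into isolated slices that can be leased to different tenants. A minimal sketch, assuming the nvidia-ml-py (pynvml) bindings and an NVML-capable driver, that reports each GPU's MIG state:

```python
# pip install nvidia-ml-py
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        try:
            # Current and pending MIG mode; actually changing it requires
            # nvmlDeviceSetMigMode (admin privileges) and may need a GPU reset.
            current, _pending = pynvml.nvmlDeviceGetMigMode(handle)
            state = (
                "enabled"
                if current == pynvml.NVML_DEVICE_MIG_ENABLE
                else "disabled"
            )
        except pynvml.NVMLError_NotSupported:
            state = "not supported"  # pre-Ampere GPUs cannot be MIG-partitioned
        print(f"GPU {i} ({name}): MIG {state}")
finally:
    pynvml.nvmlShutdown()
```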
However, some software companies argue that the strength of cloud platforms lies in their financial resources, which let them stay on top of the latest GPU technology. Yet only a limited set of players, mainly international cloud, networking, and AI developers, actually need the newest GPUs.
For most small and medium-sized enterprises (SMEs) in Taiwan, the real challenge lies in managing hybrid computing resources (including older GPUs) and ensuring compatibility between software and hardware systems.
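In practice, managing such a mixed fleet often starts with inventorying each card's CUDA compute capability and memory, so that jobs built for newer architectures never land on older hardware. A hedged sketch along those lines, again using nvidia-ml-py; the minimum capability shown is an assumed example threshold:

```python
import pynvml

REQUIRED_CC = (8, 0)  # assumed threshold: Ampere or newer for this workload

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(handle)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        # Route older cards to a "legacy" pool instead of discarding them.
        pool = "modern" if (major, minor) >= REQUIRED_CC else "legacy"
        print(f"GPU {i}: compute capability {major}.{minor}, "
              f"{mem.total // 2**30} GiB -> {pool} pool")
finally:
    pynvml.nvmlShutdown()
```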
Companies have also noted that, due to concerns over data privacy and budget constraints, some users are moving part of their workloads from the cloud back to on-premises infrastructure.