Microsoft Unveils New AI Chips to Challenge Nvidia

By Gayane Tadevosyan

Microsoft has unveiled the second generation of its in-house artificial intelligence chip, marking a deeper push to challenge Nvidia’s dominance in both AI hardware and software. Announced on Monday, the new chip—called Maia 200—will begin operating this week at a Microsoft data center in Iowa, with a second deployment planned for Arizona. It follows the original Maia chip that Microsoft introduced in 2023 as part of its effort to build custom silicon for AI workloads.

The launch comes as major cloud providers, including Microsoft, Google, and Amazon Web Services—some of Nvidia’s largest customers—accelerate the development of their own AI chips to gain more control over performance, costs, and supply. At the same time, these companies are working to narrow Nvidia’s software advantage, which has long been anchored by its proprietary CUDA programming platform.

To that end, Microsoft said Maia 200 will be supported by a new suite of software tools, including Triton, an open-source programming framework developed with significant contributions from OpenAI. Triton is designed to perform many of the same functions as CUDA, offering developers an alternative ecosystem for building and running AI models on Microsoft’s hardware.

Maia 200 is manufactured by Taiwan Semiconductor Manufacturing Co. using advanced 3-nanometer process technology and features high-bandwidth memory. While it uses an older generation of memory than Nvidia’s upcoming flagship chips, Microsoft has offset that by including a large amount of SRAM, a type of fast on-chip memory that can improve response times for AI systems such as chatbots serving many users simultaneously.

By combining custom hardware with its own software stack, Microsoft aims to improve the efficiency of its Azure cloud platform, reduce reliance on Nvidia over time, and strengthen its position in the rapidly expanding AI infrastructure market.