Microsoft introduces newest in-house AI chip — Maia 200 is faster than other bespoke Nvidia competitors, built on TSMC 3nm with 216GB of HBM3e

Maia 200 has already been deployed in Microsoft's US Central Azure data center, with future deployments announced for US West 3 in Phoenix, AZ, and more to come as Microsoft receives additional chips. The chip will be part of Microsoft's heterogeneous deployment strategy, operating in tandem with other AI accelerators.

Maia 200, originally codenamed Braga, made waves for its heavily delayed development and release. The chip was intended for a 2025 release and deployment, perhaps even beating Nvidia's B300 out of the gate, but this was not meant to be. Microsoft's next hardware release isn't certain, but it will likely be fabricated on Intel Foundry's 18A process, per reports from October.

Microsoft's efficiency-first messaging around Maia 200 follows its recent trend of stressing the corporation's concern for communities near its data centers, going to great lengths to blunt the backlash against the AI boom. Microsoft CEO Satya Nadella recently spoke at the World Economic Forum, warning that if companies cannot help the public see the supposed perks of AI development and data center buildout, they risk losing "social permission" and inflating a dreaded AI bubble.


Sunny Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Sunny has a handle on all the latest tech news.

bit_user: The article said: "The new in-house AI chip is the next generation of Microsoft's Maia GPU line, a server chip designed for inferencing AI models with ludicrous speeds and feeds to outperform the custom offerings from hyperscaler competitors Amazon and Google."

They don't use the term "GPU" anywhere on their site. No indication that it even has a GPU-like SIMD/SMT-oriented programming model, either. A lot of purpose-built AI accelerators are much more DSP-like in their microarchitecture and programming model. I wouldn't toss around the term "GPU" to describe things that have neither any intrinsic graphics capabilities nor even (as far as we know) a GPU-like ISA. Some alternate nouns I use: "accelerator" and "processor". Use modifiers like "AI", "neural network", and "training"/"inference" as appropriate.

P.S. Thanks for including the comparison table! I'd have liked to see a few more rows, such as transistor count and maybe SRAM, but I understand that takes some digging. I do think it probably deserves an asterisk that the Blackwell GPUs implement "NVFP4", which uses additional metadata and structure to squeeze more range and precision out of ~4-bit weights. The Microsoft link doesn't say there's anything special about their FP4 representation. Furthermore, I'm pretty sure that B300 Ultra uses two chips, FWIW. That doesn't make the comparison invalid, but it does explain some of the disparities, especially when it's made on a larger node.
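For readers unfamiliar with block-scaled low-precision formats: the "additional metadata" bit_user mentions typically amounts to a shared scale factor stored per small block of weights. Below is a minimal Python sketch of that general idea; the block size (16) and the signed integer grid (-7..7) are illustrative assumptions, not Nvidia's actual NVFP4 encoding.

```python
# Sketch of per-block scaling for ~4-bit weights: the general idea behind
# formats like NVFP4, where per-block scale metadata recovers dynamic range
# that a single flat 4-bit grid would lose. Block size and integer grid here
# are illustrative assumptions, not Nvidia's actual encoding.

def quantize_blocked(weights, block=16):
    """Quantize floats to signed 4-bit codes with one scale per block."""
    out = []
    for i in range(0, len(weights), block):
        chunk = weights[i:i + block]
        # One scale per block, stored alongside the 4-bit codes as metadata.
        scale = max(abs(w) for w in chunk) / 7 or 1.0
        codes = [round(w / scale) for w in chunk]  # each code fits in 4 bits
        out.append((scale, codes))
    return out

def dequantize_blocked(blocks):
    """Reconstruct approximate floats from (scale, codes) pairs."""
    values = []
    for scale, codes in blocks:
        values.extend(code * scale for code in codes)
    return values
```

Per-block scales let the same 4-bit grid track both very large and very small weights within one tensor, which a single global scale cannot do; that is the kind of disparity an asterisk on an FP4 spec-sheet comparison would flag.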

bit_user: BTW, if I understand correctly, the B300 Ultra comprises two GB110 chips, each with 51.2 MB of L2 cache. They each have 104B transistors, compared to Maia 200's 140B. Source: https://www.techpowerup.com/gpu-specs/nvidia-gb110.g1112

Also, there seem to be rumors that this chip was co-designed with Marvell, but that Microsoft might be switching to Broadcom for future designs. No official word on any of that, as far as I could find.
