
Sources from the Financial Times (FT) say the key item in this dispute is OpenAI's Frontier multi-agent platform, targeted at large enterprises. Broadly speaking, Frontier promises to make it easy for large enterprises to put AI to work by wiring up multiple agents ("workers") with shared memory and access to business content.
Microsoft is apparently taking umbrage at the situation, despite its position in the partnership having been repeatedly revised. Redmond was originally OpenAI's sole cloud services provider; that later softened to a right of first refusal over those services, and its position was weakened further in October 2025.
The PR about that latest agreement states that "API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider." Under that logic, OpenAI is free to develop and ship new products, but if it offers them as APIs, they have to go through Azure.
Redmond believes that OpenAI offering access to Frontier via Amazon Web Services' (AWS) Bedrock platform would breach the agreement. Getting more technical, the dispute may well come down to the definition of "stateless" versus "stateful" as applied to AI models.
Even though it appears to remember your information, a standard chatbot is actually stateless: answering a new question requires the bot to re-process the entire conversation. A storage and orchestration layer that facilitates something like Frontier is arguably a "stateful" implementation, more specifically a "Stateful Runtime Environment" (SRE).
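To illustrate the distinction, here is a minimal sketch of the stateless pattern described above. The function and message shapes are hypothetical stand-ins (not OpenAI's or Amazon's actual APIs): the "model" keeps no memory between calls, so the client must resend the full conversation history on every turn.

```python
def stateless_model(messages):
    """Stands in for a chat-completion endpoint: it sees only the
    messages passed in this single call and retains nothing after
    returning. (Illustrative only; not a real API.)"""
    last = messages[-1]["content"]
    return {"role": "assistant", "content": f"echo: {last}"}

def chat_turn(history, user_text):
    """Client-side orchestration: append the new question, then send
    the ENTIRE history so the model can rebuild the context it has
    no memory of."""
    history.append({"role": "user", "content": user_text})
    reply = stateless_model(history)  # whole conversation re-processed
    history.append(reply)
    return reply["content"]

history = []
chat_turn(history, "What is Frontier?")
chat_turn(history, "And who sells it?")
# The growing `history` list lives entirely on the client side; the
# model itself remembered nothing between the two calls.
```

A "stateful" runtime, by contrast, would keep that history (plus shared memory and business content) server-side, which is what makes the orchestration layer rather than the underlying model the contested product.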
According to FT's sources, Microsoft thinks that running Frontier on AWS instead of Azure would breach either the spirit or the letter of the contract. This is illustrated by a report that Amazon is pointedly instructing its staff never to say that the SRE "enables access to" or "calls on" ChatGPT as a backend, preferring vaguer terms like "powered by," "enabled by," or "integrates with."