Google, Microsoft, and xAI agree to let US government test AI models before public release — OpenAI and Anthropic also on board after renegotiating deals with Washington

All five major frontier labs now give the Commerce Department early access to unreleased AI systems.

The agreements mean that every major U.S. frontier AI lab now participates in voluntary pre-release government evaluations. The Center for AI Standards and Innovation (CAISI) has completed more than 40 model assessments to date, including evaluations of unreleased state-of-the-art systems, according to the Commerce Department.

CAISI operates within NIST and was originally established in 2023 under Biden as the AI Safety Institute. The Trump administration renamed it last June, with Commerce Secretary Howard Lutnick framing the rebrand as a move away from regulation "used under the guise of national security." Despite the shift in rhetoric, the center's core function has remained largely the same: evaluating frontier models for cybersecurity, biosecurity, and chemical weapons risks.

"These expanded industry collaborations help us scale our work in the public interest at a critical moment," CAISI director Chris Fall said of the new agreements. Fall took over the center after Collin Burns, a former Anthropic and OpenAI researcher, was pushed out just four days into the job. The Washington Post reported last month that White House officials were concerned about Burns's Anthropic ties, given the administration's ongoing dispute with the company. Burns had relocated across the country and given up Anthropic equity to take the position.

The center still lacks permanent legal standing, and some lawmakers have introduced draft legislation to codify it, but nothing has passed. Trump's AI Action Plan, announced in July last year, directs CAISI to serve as part of an "AI evaluations ecosystem" and lead national security-related model assessments. It also instructs regulators to explore using evaluations when applying existing law to AI systems.

Anthropic's renegotiated deal with CAISI sits alongside a separate and hostile set of interactions with the federal government. The Pentagon designated Anthropic a supply chain risk in March after it refused to lower guardrails on autonomous weapons, though a federal judge later called that move "Orwellian." Both Defense Secretary Pete Hegseth and Trump have outlined a six-month phaseout period for government use of Anthropic's tools, and two active lawsuits remain unresolved.

The new CAISI agreements also come one day after reports that the Trump administration was considering a mandatory pre-release review process for AI models via executive order, with Anthropic's Mythos model cited as the catalyst. The voluntary agreements announced Tuesday would run in parallel with any mandatory review framework, though it remains unclear how the two might interact.
