
The proposed order would create an “AI working group” of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week. If the reports are accurate, these discussions would represent a sharp departure from the administration’s current stance as something of a deregulatory champion — immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks.
The sudden reversal coincides with a leadership vacuum in White House AI policy. David Sacks, who led the administration's deregulation push as AI czar, left the role in March; White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent have since taken a more active role in shaping AI policy, according to The New York Times.
The new approach sounds a lot like the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment. Officials told The New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review. Critically, the system would grant the government early access to models without blocking their release.
Perhaps unsurprisingly, the catalyst for all this appears to have been Anthropic’s Mythos model, which the company’s marketing described as capable of finding thousands of critical software vulnerabilities and too dangerous for public release.
That naturally attracted a lot of unwanted government attention at a time when the Trump administration is already locking horns with Anthropic over the collapsed $200 million Pentagon contract. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance, though a federal judge later called that designation "Orwellian."
The NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments, even as other agencies remain cut off from Anthropic's tools. Some analysts have questioned whether Mythos's capabilities justify Anthropic's dramatic framing, citing studies that found cheaper models can achieve comparable results in vulnerability discovery.