The Pentagon announces AI deals with OpenAI, Google, Microsoft, Amazon, Nvidia, and more — LLMs to be deployed on classified Department of War networks ‘for law

It seems that the AI tools these companies offer will, for now, be limited to data analysis and to making decision-making faster and easier as the U.S. faces complex situations. These tools are accessible via GenAi.mil, the Pentagon’s official AI platform, on the Department of War’s network, and are widely available to its personnel.

“Over 1.3 million Department personnel have used the platform, generating tens of millions of prompts and deploying hundreds of thousands of agents in only five months,” the Pentagon said. “Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days.”

Nevertheless, there have been concerns about the use of AI in military applications. Anthropic has famously refused to budge on the Department of War’s demand to lower its safeguards, saying that doing so could mean its AI products being used for mass surveillance or to create autonomous weapons. This move resulted in President Donald Trump banning the company from federal agencies, even going as far as designating it a supply chain risk for refusing to bow to the federal government’s demands.

While AI is certainly useful for distilling massive amounts of information and spotting patterns that humans can miss, it’s still not a 100% reliable tool for making decisions that could have a global impact. A researcher discovered this when they pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 against each other in a wargame: 95% of the runs ended in a tactical nuclear strike. Three scenarios even ended in a strategic nuclear strike that would have ended the world.

But even though these AI tools are limited to analysis and support, with a human operator at the helm still responsible for every decision, there’s also the risk of automation bias. This is the human tendency to defer to a computer’s suggestion even in the face of contradictory information, especially when AI systems can process far more data, far faster, than any human could. However, the data the AI relies on could be false, erroneous, or misinterpreted, so it’s crucial that humans apply their intuition and experience before accepting AI suggestions at face value.

The U.S. military isn’t the only one experimenting with and deploying AI in operational use. China, for example, has been showing off a 200-strong AI drone swarm that can be controlled by a single soldier, as well as ground-based drone wolfpacks armed with machine guns and grenade launchers for urban combat. We cannot stop these militaries from deploying AI tools for intelligence-gathering, reconnaissance, and battlefield decision-making; we can only hope that they do not ignore safeguards, and that they never hand AI the trigger to any weapon.
