
It seems that the AI tools these companies offer will, for now, be limited to data analysis and decision support, helping make decision-making faster and easier as the U.S. faces complex situations. These tools are accessible via GenAi.mil, the Pentagon's official AI platform, on the Department of War's network, and are widely available to its personnel.
“Over 1.3 million Department personnel have used the platform, generating tens of millions of prompts and deploying hundreds of thousands of agents in only five months,” the Pentagon said. “Warfighters, civilians and contractors are putting these capabilities to practical use right now, cutting many tasks from months to days.”
Nevertheless, there have been concerns about the use of AI in military applications. Anthropic has famously refused to budge on the Department of War's demand that it lower its safeguards, saying that doing so could allow its AI products to be used for mass surveillance or to create autonomous weapons. That refusal resulted in President Donald Trump banning the company from federal agencies, even going as far as designating it a supply chain risk for refusing to bow to the federal government's demands.
While AI is certainly useful for distilling massive amounts of information and spotting patterns that humans can miss, it's still not a 100% reliable tool for making decisions that could have a global impact. A researcher discovered this when they pitted GPT-5.2, Claude Sonnet 4, and Gemini 3 against each other in a wargame: 95% of the runs ended in a tactical nuclear strike. Three scenarios even escalated to a strategic nuclear strike that would have ended the world.
But even though these AI tools are limited to analysis and support, with a human operator at the helm still responsible for every decision, there's also the risk of automation bias: a person's tendency to follow a computer's suggestion despite contradictory information, especially as AI systems can process far more data, far more quickly, than any human could. However, the data the AI relies on could be false, erroneous, or misinterpreted, so it's crucial that humans apply their intuition and experience before accepting AI suggestions at face value.
The U.S. military isn't the only one experimenting with and deploying AI technologies in operational use. China, for example, has been showing off a 200-strong AI drone swarm that can be controlled by a single soldier, as well as ground-based drone wolfpacks armed with machine guns and grenade launchers for urban combat. While we cannot stop these militaries from deploying AI tools for intelligence-gathering, reconnaissance, and battlefield decision-making, we can only hope that they do not ignore safeguards, and that they never hand AI the trigger to any weapon.