The Pentagon Is Using AI Chatbots to Pick Military Targets

A defense official reveals how generative AI ranks and prioritizes strike targets for the US military, as the Iran school bombing fuels oversight demands.

Pentagon uses AI chatbots for military targeting decisions
  • A Defense Department official confirmed generative AI systems rank and prioritize strike targets for human review, adding a chatbot layer on top of Project Maven.
  • Palantir’s Maven Smart System uses Anthropic’s Claude to synthesize satellite, drone, and intelligence data into prioritized target lists with GPS coordinates.
  • The US struck over 1,000 targets in 24 hours during Operation Epic Fury — a task that required roughly 2,000 analysts during the 2003 Iraq invasion now takes about 20.
  • A strike on an Iranian school that killed over 170 people, mostly children, has triggered Congressional demands for AI oversight in military operations.

From Computer Vision to Conversational Chatbots in the Kill Chain

The US military has quietly added generative AI to the machinery of war. A Defense Department official told MIT Technology Review that target lists are now fed into large language models deployed in classified settings. Humans ask the system to analyze intelligence, prioritize targets based on factors like aircraft positioning, and generate recommendations — which are then vetted by military personnel before any strike proceeds.

This is a fundamentally different technology from what the Pentagon has relied on for the past eight years. Project Maven, the military’s flagship AI initiative since 2017, uses computer vision to sift through satellite imagery and drone footage — identifying objects, flagging patterns, highlighting potential targets on a map. Generative AI adds an interpretive layer on top: a chatbot that synthesizes data, answers questions in natural language, and produces ranked strike lists. Palantir demonstrated this integration at its AIPCON conference on March 13, showing how Maven ingests classified satellite and surveillance feeds, then uses Claude to output prioritized targets with GPS coordinates, weapons recommendations, and automated legal justifications.

The scale is staggering. US forces struck over 1,000 targets in the first 24 hours of Operation Epic Fury against Iran. Defense experts estimate the AI-driven pipeline has replaced the equivalent of roughly 2,000 intelligence analysts — the staffing level required for comparable operations during the 2003 Iraq invasion — with approximately 20 people. “AI precision targeting has fundamentally shifted modern warfare,” Palantir CEO Alex Karp said.

The Iran School Strike and the Fight Over Who Controls Military AI

The efficiency gains come at a devastating cost. On February 28, a US strike hit the Shajareh Tayyebeh elementary school in Iran, killing over 170 people — most of them children. The Washington Post reported that Maven and Claude were involved in targeting decisions in Iran, and a preliminary investigation found that outdated intelligence data from the Defense Intelligence Agency was partly responsible for the strike. More than 120 Democratic members of Congress have since demanded answers from the Pentagon about AI’s role in civilian casualties.

The political fallout extends far beyond the battlefield. The Pentagon designated Anthropic a national security supply chain risk after the company refused to grant unrestricted military access to Claude — specifically blocking its use for autonomous weapons and domestic mass surveillance. Anthropic sued the Pentagon on March 9, calling the designation unconstitutional. Yet Claude remains active inside Palantir’s tools and was reportedly used in operations in Iran and the capture of Venezuelan leader Nicolas Maduro.

Three AI Companies, Zero Consensus on Guardrails

The Pentagon has moved fast to replace Anthropic’s cooperation with alternatives. OpenAI signed a classified-use agreement on February 28, though its standard ChatGPT guardrails remain nominally in place — how effectively those guardrails function in a military context is an open question. Elon Musk’s xAI followed days later with a deal granting Grok access to classified systems under an “all lawful use” standard — the exact language Anthropic rejected.

The result is a fractured landscape where the three leading AI companies occupy radically different positions on military use. Anthropic is in court fighting a government blacklist. OpenAI is inside the Pentagon with theoretical limitations. xAI has signed away any pretense of restriction. Meanwhile, the technology that all three companies built is already accelerating the speed at which humans decide who lives and who dies — and the school in Iran suggests the humans in the loop may not be checking fast enough.

MIT Technology Review | Bloomberg

Tags

#Pentagon #Military #Anthropic #OpenAI #Palantir
