Despite a federal ban, the US military reportedly used Anthropic’s AI for intelligence and target analysis in strikes on Iran, fueling debate over ethics and security
Washington, D.C.: The US military reportedly employed Anthropic’s Claude AI system during recent air strikes on Iran, even after President Donald Trump issued a federal ban on its use. According to multiple sources, including The Guardian, the AI was used in coordination with Israeli forces for tasks such as intelligence analysis, target identification, and combat simulations, all critical components of military planning.
Trump had announced the ban only hours before the operation, labeling Anthropic a national security risk and ordering all federal agencies to stop using its technology immediately. The Department of Defense, however, was granted a transitional period of up to six months because of the model’s deep integration into military systems.
The conflict escalated after Anthropic declined Pentagon requests for unrestricted use of its models, citing ethical concerns over applications in surveillance and autonomous weapons. The standoff highlighted the tension between AI safety commitments and military necessity.
In response, OpenAI signed a deal with the Pentagon to supply its models for classified defense networks, positioning itself as an alternative to Anthropic’s Claude. Experts warn that the episode underscores the difficulty of balancing ethical AI guidelines with national security demands.

