US military used Anthropic AI in Iran strikes despite Trump ban

The growing use of artificial intelligence in military planning has come under renewed scrutiny after reports that the United States military relied on AI tools developed by San Francisco-based company Anthropic during recent air operations targeting Iran — despite a presidential directive ordering federal agencies to halt the technology’s use.

According to international media reports, U.S. Central Command incorporated Anthropic’s AI cloud system into various operational tasks during the strikes. The technology was reportedly used to assist with intelligence assessment, identification of potential targets and the simulation of combat scenarios to support strategic planning. The development has raised questions about coordination within the federal government, as the operations allegedly continued hours after President Donald Trump issued instructions banning the company’s services across federal agencies.

The administration’s decision to block Anthropic’s AI systems followed a dispute between the company and the Pentagon. Officials said the ban was imposed after Anthropic declined to provide the Defense Department with unrestricted access to its advanced AI models. The White House reportedly categorized the company as posing a potential national security concern, arguing that limitations placed on the military’s use of the software could hinder operational flexibility in critical situations.

Defense officials have emphasized that artificial intelligence tools are increasingly central to modern warfare, particularly in processing large volumes of battlefield data and accelerating decision-making. Military planners argue that AI-assisted systems can analyze satellite imagery, intercepted communications and other intelligence streams more quickly than human analysts alone, thereby enhancing response times in fast-moving conflicts.

Anthropic, however, has rejected suggestions that it poses any security threat and has announced plans to challenge the federal ban in court. Company executives have maintained that their refusal to grant unlimited usage rights was rooted in safety and ethical considerations. They contend that AI systems should not be deployed in ways that could enable fully autonomous weapons or mass surveillance without meaningful oversight. The firm has warned that banning even limited, controlled use of its technology could be unlawful and counterproductive.

The dispute highlights a broader debate over how emerging technologies should be governed in the defense sector. Artificial intelligence is playing an expanding role in intelligence gathering, threat modeling and operational simulations, but the rapid pace of adoption has outstripped the development of clear regulatory frameworks. Experts note that tensions between private technology firms and government agencies are likely to intensify as AI becomes more deeply embedded in national security infrastructure.

Meanwhile, defense authorities are reportedly evaluating alternative AI platforms to replace Anthropic’s cloud system. Officials acknowledge that transitioning to a new provider could take several months, potentially affecting ongoing modernization efforts within the military’s digital command systems.
