War in Iran

Artificial intelligence boosts the killing capacity of the US and Israel

The Project Maven system, used by the Pentagon, can process a thousand images per second and turn the information into potential targets.

Rescue teams from the Iranian Red Crescent Society (IRCS) work on a building damaged by an airstrike in Tehran's Resalat Square on March 10.
14/03/2026

London. The human eye can take about ten minutes to analyze a satellite image. Project Maven, the Pentagon's AI-powered image analysis program, in which companies such as Palantir Technologies participate, is a different story: it can process a thousand images per second. This enormous computing power has made the war in Iran one of the first conflicts in which artificial intelligence systems play a central role in identifying and prioritizing targets, and in which processing speed dictates the pace of operations, far more so than in any previous conflict. The war waged by Tel Aviv and Washington against Tehran marks the large-scale debut of a technology that until recently was little more than an experiment, although Ukraine and Gaza had already witnessed some early trials.

Provisional casualty count by country
Approximate official figures for confirmed deaths and injuries

The capabilities of the Maven system, combined with generative AI models such as Claude, from the company Anthropic, allowed the Americans and Israelis to attack 3,000 targets during the first 24 hours of the offensive. This represents a true "metamorphosis of the battlefield," in the words of retired Royal Air Force Marshal Martin "Sammy" Sampson. "We are facing an unprecedented phase of warfare, in which decision-making has shifted towards large-scale algorithmic architectures that are the culmination of years of secret experiments," he says. Sampson, executive director of the Middle East branch of the London-based think tank the International Institute for Strategic Studies (IISS), discussed this with ARA during a briefing this week.

The technology, which Israel has already used in Gaza, has helped make it possible to carry out as many strikes in a single day as in months of a conventional campaign. A constant flow of data is transformed into potential targets in a process where, according to several specialists, human oversight can be reduced to a simple validation procedure.

How does Project Maven work?

Military history thus enters a phase in which, as Sampson puts it, what counts is "the cold speed of calculation." The marshal describes the system as "24/7 planning and execution": in short, the barrage of fire never stops. The pace surpasses even the Pentagon's most ambitious simulation exercises, such as the "thousand decisions" concept, designed to train commanders to identify a thousand targets per hour. In Iran, the scale has been double that of the famous Shock and Awe campaign of 2003, during the Iraq War, which involved 1,700 sorties in 48 hours. According to Sampson, AI "is not an add-on; it is the heart of the system, optimizing the systematic elimination of targets."

Project Maven began development in 2017 as an experiment to classify drone videos. Over time, it has evolved into a platform capable of interpreting the overwhelming volume of data generated by satellites, digital maps, geolocation data, telemetry, and commercial imagery.

Using computer vision, it can classify objects with a high degree of probability—distinguishing, for example, a military vehicle from a civilian truck—identify heat sources, or detect communications infrastructure on the battlefield. In other words, it does what no human team can: process thousands of hours of video and signals in a timely manner.

Integration with AI models like Claude adds a new layer of sophistication: the ability to synthesize and interpret this enormous flow of information. Palantir Technologies provides the system's raw data—from satellites, sensors, and intercepted communications—while Claude acts as a kind of cerebral cortex, allowing the platform to be interrogated in plain language and to return operational answers in a matter of seconds, Sampson explains. A commander might ask, for example, which enemy logistics centers are most vulnerable within a given radius. Claude cross-references the available data and generates a clear operational response, while the analysis systems filter out potential false positives and present commanders with a list of priority targets. The system can analyze thousands of images in seconds and correlate thermal signatures and radio transmissions in real time. The whole process follows the digital F2T2EA chain (find, fix, track, target, engage, and assess) at a speed no human could match.
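The pipeline described above (classify, corroborate across sensors, filter out false positives, rank by priority) can be caricatured in a few lines of code. The sketch below is a purely illustrative toy: every name, score, and threshold is invented here, and it bears no relation to the real, classified system.

```python
from dataclasses import dataclass

# Toy model only: all names, weights, and thresholds are invented
# for illustration and do not reflect any real system.

@dataclass
class Detection:
    object_id: str
    classifier_confidence: float   # computer-vision score, 0..1
    thermal_match: bool            # corroborating heat signature
    radio_match: bool              # corroborating radio emissions

def prioritize(detections, threshold=0.8):
    """Fuse sensor cues into one score and return a ranked list of IDs.

    Each corroborating cue raises the score; anything below the
    threshold is treated as a likely false positive and dropped.
    """
    scored = []
    for d in detections:
        score = d.classifier_confidence
        score += 0.1 if d.thermal_match else 0.0
        score += 0.1 if d.radio_match else 0.0
        if score >= threshold:
            scored.append((min(score, 1.0), d.object_id))
    return [oid for _, oid in sorted(scored, reverse=True)]

candidates = [
    Detection("vehicle-A", 0.85, True, False),
    Detection("truck-B", 0.55, False, False),   # weak, uncorroborated: dropped
    Detection("site-C", 0.78, True, True),
]
print(prioritize(candidates))  # ['site-C', 'vehicle-A']
```

The point of the caricature is the one the specialists make: the human appears only after this loop has run, reviewing a list a machine has already scored and ordered.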

The even darker side

And, as if it were some kind of video game, the operator-executor only has to review and authorize the strike. At least in theory. In practice, the speed of the system and the demands of command can turn that review into systematic rubber-stamping. This eliminates what the military call "cognitive bottlenecks." The result is a thousand decisions made in an hour. The margin for human deliberation is minimized, of course, and the possibility of errors increases.

In statements to Bloomberg last week, Captain Timothy Hawkins, spokesman for the United States Central Command (CENTCOM), insisted that AI does not decide what constitutes a target nor does it replace humans in decision-making. However, he acknowledged that it helps to "make smarter decisions faster." The time between detection and attack is reduced.

However, this efficiency has an even darker side. The same technology that promises surgical precision can catastrophically amplify errors. The strike on the Minab school in the early hours of the war, in which 168 people died—at least 110 of them girls—has become a symbol of this drift. According to several analyses, the intelligence systems may have interpreted a concentration of thermal signatures—the heat pattern emitted by objects or human bodies and detected by infrared sensors—and electronic emissions as consistent with a military command center. They failed to detect that it was an auditorium full of girls. The tragedy was compounded by the logic of the double tap: a second wave of attacks timed to strike when rescue teams are already on the scene, a tactic the Israelis have used repeatedly in Gaza. The fact that a Revolutionary Guard facility was nearby strengthens the hypothesis of a US attack, although the Pentagon and President Donald Trump have denied responsibility. Even so, a report published last Wednesday by The New York Times indicated that the Pentagon itself had detected that the data used by the artificial intelligence was outdated. Another error, then.

Faced with allied technological superiority, Iran has opted for an asymmetric strategy that Sampson describes as an "anarchic escalation." While the US AI seeks supposedly strategic targets, Tehran attempts to provoke social unrest in the region.

The absence of cruise missiles from the battlefield suggests, the marshal said, that Iran is stockpiling them, not that it has already used up 90% of its resources, as Secretary of Defense Pete Hegseth claimed yesterday. These low-flying missiles are especially difficult for radar to detect. At the same time, Sampson points out, Tehran is exploiting the errors of Western algorithms and turning episodes like Minab—tragedy or mass murder—into narrative ammunition to erode the already highly dubious legitimacy of an illegal offensive that is running rampant.

There remain, however, two major unknowns, impossible to verify at this stage. The first is whether the artificial intelligence is as effective as its creators claim: some analysts maintain that Iranian forces have spent years building decoys precisely to confuse the algorithms of systems that are still largely experimental. The second is whether Iran's so-called underground missile cities have been seriously damaged by two weeks of air campaign. For the moment, it is difficult to say.

The battle between Anthropic and the White House

Alongside the technological innovation revealed by the war, a seismic rift has opened in Washington. The White House has classified Anthropic as a "supply chain risk." The decision is extraordinary because the US government is usually the staunch protector of its technology companies against international regulation; now, however, it has placed Anthropic under the same level of restriction as the Chinese giant Huawei. The reason for the discord is the stance of its CEO, Dario Amodei, who has imposed strict clauses: Anthropic's technology cannot be used for lethal autonomous warfare or for the mass surveillance of citizens. The White House has responded with unprecedented harshness, labeling the company "left-wing radical and 'woke'" and asserting that it is not up to corporations to dictate the behavior of the military. Amodei, for his part, maintains an ambiguous position: on the one hand, he has legally challenged the government's decision; on the other, he claims that the company's technology "is not yet good enough" to do the things the army wants it to do.
