Apparent AI use in Iran war raises daunting questions: expert
GENEVA, Switzerland (AFP) — Suspected widespread use of artificial intelligence (AI) to select targets and launch attacks on Iran raises many questions, and fears that human control of war machinery could be slipping, a leading expert said Wednesday.
The United States (US) and Israel have carried out thousands of strikes across Iran since launching their offensive, including one that killed Iran’s supreme leader, Ayatollah Ali Khamenei, on Saturday, the first day of the war.
Peter Asaro, an expert on artificial intelligence and robotics, told AFP it appeared likely the two countries had used AI to identify targets in Iran, pointing to what seemed to be a very short planning phase and large number of targets.
But while AI can speed things up, it also raises a host of moral and legal questions, he said.
“You can rapidly produce long lists of targets much faster than humans can do it, by automating that process,” said the associate professor of media studies at The New School in New York, who also serves as vice chair of the Stop Killer Robots campaign.
But then “the ethical and legal question is: to what degree are those humans actually reviewing the specific targets that have been listed, verifying their legality and their value militarily before authorising?”
“The desire (with) all those systems is to be able to make decisions and move faster than your enemy,” he said, adding though that the question arises: “Are you actually still in control of what’s happening?”
Discussions have been running for a decade around a possible future treaty regulating automated weapons use. Countries are due to decide later this year whether to launch formal treaty negotiations.
But while there is no current specific treaty on AI and autonomous weapons, that does not mean these systems are operating in a legal vacuum: existing international law applies.
Speaking on the sidelines of discussions at the United Nations (UN) in Geneva, Asaro said a crucial part of the debate revolved around the selection of targets, and fears that meaningful human control could be lost.
While the “sales pitch” for using AI in warfare is typically that “these things are highly accurate and make fewer mistakes than humans”, he stressed that “we don’t actually know how these systems work”.
He pointed to how the AI runs on opaque classified systems, providing little insight into how they function and how they reach their conclusions.
There is no “easy way of evaluating the output of these systems” or determining what went wrong when mistakes are made, Asaro said.
“If something does go wrong, then who’s responsible?” he asked.
“How do you define this legally? Where are the moral lines?”
He pointed to the case of the school in the city of Minab that was hit on Saturday, killing more than 150 people, according to Iran.
Tehran has blamed the United States and Israel, but neither has confirmed the attack, and AFP has been unable to independently verify the toll or visit the site.
AFP has confirmed the building was located in close proximity to two sites controlled by the powerful Islamic Revolutionary Guard Corps.
Asaro highlighted reports about the strike that indicated the school had been clearly separate from the adjacent military site for at least a decade.
If a mistake was made, he said, it was far from obvious what caused it.
“They didn’t distinguish it from the military base as they should have, (but) who is they?” he asked — human or machine?
If AI was used for the attack, he said the question was: “How old is the data?”, and was this a “database error”?
Or was the targeting accurate, “but (had) just fallen short?” he asked.
“There are all sorts of ways for things to fail.”
Another perhaps more frightening possibility, he said, would be that “the system actually reached some conclusion that … the school was a threat”.
That would in turn raise a bigger question of what the reasoning system was behind that conclusion.
“You have to really worry about how it is making these decisions,” Asaro said.