Austria yesterday called for new efforts to regulate the use of artificial intelligence in weapons systems that can create so-called "killer robots", as it hosted a conference aimed at reviving discussions on the topic.
While AI technology is developing rapidly, weapons systems that can kill without human intervention are drawing closer, presenting an ethical and legal challenge that many states say must be addressed as soon as possible.
"We cannot let this moment pass without taking action. Now is the moment to agree on international rules and norms in order to ensure human control", said Austrian Foreign Minister Alexander Schallenberg at a meeting with representatives of non-governmental and international organizations, as well as with delegates from 143 countries.
"At least allow us to make sure that the most important decisions with far-reaching consequences, about who will live and who will die, remain in the hands of people, and not be decided by machines", he said in his opening address at the conference called "Humanity at the Crossroads: Autonomous weapon systems and regulatory challenges".
Years of talks at the United Nations have yielded few tangible results, and many participants at the two-day conference in Vienna said the window for action is closing fast.
"It is important to act and to act very quickly," said the president of the International Committee of the Red Cross, Mirjana Spoljarić.
"What we see today in different contexts of violence are the moral failures of the international community. And we don't want to accelerate those failures by handing over responsibility for violence, control over violence to machines and algorithms," she added.
Artificial intelligence is already being used on the front lines. Drones in Ukraine are designed to find their own way to a target when jamming technology cuts off their communication with the operator, diplomats said.
The United States said this month it was investigating media reports that the Israeli military was using AI to identify targets for bombing in Gaza.
"We have already seen artificial intelligence make selection errors in ways both small and large, from mistaking a referee's bald head for a football to the deaths of pedestrians struck by autonomous vehicles that failed to recognize them crossing the road," said Jaan Tallinn, software developer and technology investor, in his opening remarks.
"We must be extremely careful when relying on the accuracy of these systems, whether in the military or civilian sectors," he said.