Image Credits: Curto News/BingAI

'Killer robots': Austria calls for regulation on the use of AI in weapons

On Monday (29), Austria called for new efforts to regulate the use of artificial intelligence (AI) in weapons systems that could create so-called 'killer robots', hosting a conference aimed at reviving largely stagnant discussions on the subject.


With AI technology advancing rapidly, weapons systems that could kill without human intervention are getting closer and closer, presenting ethical and legal challenges that most countries say need to be addressed soon.

“We cannot let this moment pass without taking action. Now is the time to agree on international rules and norms to ensure human control,” said Austrian Foreign Minister Alexander Schallenberg.

“At least let us ensure that the most profound and comprehensive decision, who lives and who dies, remains in the hands of humans and not machines,” he said in an opening speech at the conference titled “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation”.


Years of discussions at the United Nations have produced few tangible results and many participants at the two-day conference in Vienna said time for action is quickly running out.

“It is so important to act and act very quickly,” International Committee of the Red Cross President Mirjana Spoljaric said in a panel discussion at the conference.

“What we see today in different contexts of violence are moral failures before the international community. And we do not want to see such failures accelerating by placing the responsibility for violence, for control over violence, on machines and algorithms,” she added.


AI is already being used on the battlefield. Drones in Ukraine are designed to find their own way to their target when signal-jamming technology disconnects them from their operators, diplomats say. The United States said this month that it was investigating a media report that the Israeli army has been using AI to help identify bombing targets in Gaza.

“We’ve already seen AI make selection errors in ways big and small, from misrecognizing a referee’s bald head as a football, to pedestrian deaths caused by self-driving cars unable to recognize pedestrians crossing outside the lane,” said Jaan Tallinn, a software programmer and technology investor, in a keynote speech.

“We must be extremely cautious about the reliability of these systems, whether in the military or civilian sectors.”

