Monday, February 6, 2023

Killer robots are not science fiction. The impetus for banning them is growing.

It might sound like a little-known United Nations conclave, but this week’s meeting in Geneva was closely watched by experts in artificial intelligence, military strategy, disarmament and humanitarian law.

The reason for the interest? Killer robots, the drones, guns and bombs that use an artificial brain to decide for themselves whether to attack and kill, and the question of what, if anything, should be done to regulate or ban them.

Once confined to science fiction films such as The Terminator and RoboCop, killer robots, more formally known as lethal autonomous weapons systems, are being developed and tested at an accelerated pace with little oversight. Some prototypes have even been used in actual conflicts.

The development of these machines is considered a potentially seismic event in war, akin to the invention of gunpowder and nuclear bombs.

For the first time this year, a majority of the 125 countries party to an agreement called the Convention on Certain Conventional Weapons (CCW) said they wanted curbs on killer robots. But they were opposed by members that are developing these weapons, most notably the United States and Russia.

The group’s conference ended on Friday with only a vague statement about considering possible measures acceptable to all. The Campaign to Stop Killer Robots, a disarmament group, said the outcome fell “drastically short.”

The CCW, sometimes known as the Inhumane Weapons Convention, is a framework of rules that ban or restrict weapons considered to cause unnecessary, unjustifiable and indiscriminate suffering, such as incendiary explosives, blinding lasers and booby traps that do not distinguish between combatants and civilians. The convention has no provisions for killer robots.

Opinions differ on a precise definition, but killer robots are widely regarded as weapons that make decisions with little or no human involvement. Rapid advances in robotics, artificial intelligence and image recognition have made such weapons possible.

The drones the United States has used extensively in Afghanistan, Iraq and elsewhere are not considered robots because they are operated remotely by humans, who choose targets and decide whether to shoot.

For war planners, these weapons promise to keep soldiers out of harm’s way and to make decisions faster than a human could, by giving more battlefield responsibility to autonomous systems, such as pilotless drones and driverless tanks, that decide independently when to strike.

Critics argue that delegating lethal decisions to machines, whatever their technological sophistication, is morally repugnant. How would a machine distinguish an adult from a child, a fighter with a bazooka from a civilian with a broom, a hostile combatant from a wounded or surrendering soldier?

“Fundamentally, autonomous weapons systems raise ethical concerns for society about substituting human decisions about life and death with sensor, software and machine processes,” Peter Maurer, president of the International Committee of the Red Cross and an outspoken opponent of killer robots, told the Geneva conference.

Ahead of the conference, Human Rights Watch and the Harvard Law School International Human Rights Clinic called for steps towards a legally binding agreement that requires constant human monitoring.

“Robots lack the compassion, empathy, mercy and judgment necessary to treat humans humanely, and they cannot understand the inherent worth of human life,” the groups argued in a briefing paper supporting their recommendations.

Others said autonomous weapons, rather than reducing the risk of war, could do the opposite, giving adversaries ways to inflict harm that minimize the risks to their own soldiers.

“Killer robots could lower the threshold of war by removing humans from the kill chain and unleashing machines that can strike a human target without any human at the controls,” said Phil Twyford, New Zealand’s disarmament minister.

The conference was widely seen by disarmament experts as the best opportunity so far to develop ways to regulate, if not ban, the use of killer robots in accordance with the CCW.

This was the culmination of years of discussion by a group of experts who were asked to identify problems and possible approaches to reduce the threats posed by killer robots. But the experts could not come to an agreement even on basic issues.

Some, such as Russia, insist that any decisions on restrictions must be taken unanimously, effectively giving opponents a veto.

The United States argues that existing international laws are sufficient and that a ban on autonomous weapon technology would be premature. Chief US delegate to the conference, Joshua Dorosin, proposed a non-binding “code of conduct” for the use of killer robots – an idea that disarmament advocates dismissed as a delaying tactic.

The US military has invested heavily in artificial intelligence, working with major defense contractors including Lockheed Martin, Boeing, Raytheon and Northrop Grumman. According to research by opponents of weapons systems, the work included projects to develop long-range missiles that detect moving targets based on radio frequency, unmanned aerial vehicles that can identify and attack a target, and automated missile defense systems.

According to Maaike Verbruggen, an expert on emerging military security technology at the Centre for Security, Diplomacy and Strategy in Brussels, the complexity and varied uses of artificial intelligence make it harder to regulate than nuclear weapons or landmines. The lack of transparency about what different countries are building has created “fear and anxiety” among military leaders, who feel they must keep up, she said.

“It’s very difficult to get a sense of what another country is doing,” said Ms. Verbruggen, who is working toward a Ph.D. on the topic. “There is a lot of uncertainty, and that drives military innovation.”

Franz-Stefan Gady, a research fellow at the International Institute for Strategic Studies, said the “arms race for autonomous weapons systems is already underway and won’t be called off any time soon.”

Still, even as the technology becomes more advanced, there has been a reluctance to use autonomous weapons in combat because of fears of mistakes, Mr. Gady said.

“Can military commanders trust the judgment of autonomous weapons systems? Here the answer at the moment is clearly ‘no,’ and it will remain so for the near future,” he said.

The debate over autonomous weapons has spilled over into Silicon Valley. In 2018, Google said it would not renew its contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work on a program that uses artificial intelligence to interpret images that could be used to target drones. The company has also developed new ethical guidelines to prohibit the use of its technology for weapons and surveillance.

Others believe the United States is not going far enough to compete with rivals.

In October, Nicolas Chaillan, a former Air Force software chief, told the Financial Times he had resigned over the slow pace of technological progress in the American military, particularly in artificial intelligence. He said policymakers were being slowed by questions about ethics while countries like China pressed ahead.

There aren’t many proven battlefield examples, but critics point to several incidents that demonstrate the technology’s potential.

In March, United Nations investigators said a “lethal autonomous weapons system” had been used by government-backed forces in Libya against militia fighters. A drone called Kargu-2, made by a Turkish defense contractor, tracked and attacked the fighters as they fled a rocket attack, according to the report.

In the 2020 war in Nagorno-Karabakh, Azerbaijan fought Armenia with attack drones and loitering missiles that hover in the air until they detect the signal of an assigned target.

Many disarmament advocates said the conference’s outcome had hardened what they described as a resolve to push for a new treaty in the coming years, like those that ban landmines and cluster munitions.

Daan Kayser, an autonomous weapons expert at PAX, a peace advocacy group in the Netherlands, said the conference’s failure even to agree to negotiate on killer robots was “a really clear signal that the CCW isn’t up to the job.”

Noel Sharkey, an artificial intelligence expert and chairman of the International Committee for Robot Arms Control, said the meeting had demonstrated that a new treaty was preferable to further CCW deliberations.

“There was a sense of urgency in the room,” he said. “If there’s no movement, we’re not prepared to stay on this treadmill.”

John Ismay contributed reporting.

World Nation News Desk