As AI evolves, pressure mounts to regulate ‘killer robots’

Every day, we voluntarily give up information about ourselves to machines. This happens when we accept an online cookie or use a search engine. We barely think about how our data is sold and used before clicking “agree” to get to the page we want, dimly aware that it will be used to target us as consumers and convince us to buy something we didn’t know we needed.

But what if the machines were using the data to decide who to target as enemies that need to be killed? The UN and a group of non-governmental organisations are worried that this scenario is close to being a reality. They are calling for international regulation of Lethal Autonomous Weapons Systems (LAWS) to avoid a near-future where machines dictate life-and-death choices.

Large-scale drone warfare unfolding in Ukraine

For several months, the Kherson region of Ukraine has come under sustained attack from weaponised drones operated by the Russian military, principally targeting non-combatants. More than 150 civilians have been killed, and hundreds injured, according to official sources. An independent UN-appointed human rights investigation has concluded that these attacks constitute crimes against humanity.

The Ukrainian army is also heavily reliant on drones and is reportedly developing a “drone wall” – a defensive line of armed Unmanned Aerial Vehicles (UAVs) – to protect vulnerable sections of the country’s frontiers.

Drone warfare was once the preserve of the wealthiest nations, which could afford the most high-tech and expensive UAVs. But Ukraine has proved that, with a little ingenuity, low-cost drones can be modified to lethal effect. As conflicts around the world mirror this shift, the nature of modern combat is being rewritten.

Kharkiv civilians have been hit by hundreds of Russian drone attacks (file, Feb 2025)

© UNICEF/Oleksii Filippov

Creeping ‘digital dehumanisation’

But, as devastating as this modern form of warfare may be, the rising spectre of unmanned drones or other autonomous weapons is adding fresh urgency to ongoing worries about ‘killer robots’ raining down death from the skies, deciding for themselves who they should attack.

“The Secretary-General has always said that using machines with fully delegated power, making a decision to take human life, is just simply morally repugnant,” says Izumi Nakamitsu, the head of the UN Office for Disarmament Affairs. “It should not be allowed. It should be, in fact, banned by international law. That’s the United Nations position.”

Human Rights Watch, an international NGO, has said that the use of autonomous weapons will be the latest, most serious example of encroaching “digital dehumanisation,” whereby AI makes a host of life-altering decisions on matters affecting humans, such as policing, law enforcement and border control.

“Several countries with major resources are investing heavily in artificial intelligence and related technologies to develop air-, land- and sea-based autonomous weapons systems. This is a fact,” warns Mary Wareham, advocacy director of the Arms Division at Human Rights Watch. “It’s being driven by the United States, but other major countries such as Russia, China, Israel and South Korea have been investing heavily in autonomous weapons systems.”

Advocates for AI-driven warfare often point to human limitations to justify its expansion. Soldiers can make errors in judgment, act on emotion, require rest, and, of course, demand wages – while machines, they argue, improve every day at identifying threats based on behavior and movement patterns. The next step, some proponents suggest, is allowing autonomous systems to decide when to pull the trigger.

A UAV (Unmanned Aerial Vehicle) is pictured airborne over Afghanistan.

There are two main objections to letting the machines take over on the battlefield: first, the technology is far from foolproof; second, the UN and many other organisations see the use of LAWS as unethical.

“It’s very easy for machines to mistake human targets,” says Ms. Wareham of Human Rights Watch. “People with disabilities are at particular risk because of the way they move. Their wheelchairs can be mistaken for weapons. There’s also concern that facial recognition technology and other biometric measurements are unable to correctly identify people with different skin tones. The AI is still flawed, and it brings with it the biases of the people who programmed those systems.”

As for the ethical and moral objections, Nicole Van Rooijen, Executive Director of Stop Killer Robots, a coalition campaigning for a new international law on autonomy in weapons systems, says that they would make it very difficult to ascertain responsibility for war crimes and other atrocities.

“Who is accountable? Is it the manufacturer? Or the person who programmed the algorithm? It raises a whole range of issues and concerns, and it would be a moral failure if they were widely used.”

A ban by 2026?

The speed at which the technology is advancing, and evidence that AI-enabled targeting systems are already being used on the battlefield, are adding to the urgency behind calls for international rules governing the technology.

In May, informal discussions were held at UN Headquarters, at which Secretary-General António Guterres called on Member States to agree to a legally binding agreement to regulate and ban the use of LAWS by 2026.

Attempts to regulate and ban LAWS are not new. In fact, the UN held the first meeting of diplomats in 2014, at the Palais des Nations in Geneva, where the chair of the four-day expert talks, Ambassador Jean-Hugues Simon-Michel of France, described LAWS as “a challenging emerging issue on the disarmament agenda right now,” even though no autonomous weapons systems were being used in conflicts at the time. The view then was that pre-emptive action was needed to get rules in place in the eventuality that the technology would make LAWS a reality.

Eleven years later, talks are ongoing, but there is still no consensus over the definition of autonomous weapons, let alone agreed regulation on their use. Nevertheless, NGOs and the UN are optimistic that the international community is inching slowly towards a common understanding on key issues.

“We’re not anywhere close to negotiating a text,” says Ms. van Rooijen from Stop Killer Robots. “However, the current chair of the Convention on Certain Conventional Weapons (a UN humanitarian law instrument to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately) has put forward a rolling text that is really quite promising and that, if there is political will and political courage, could form the basis of negotiations.”

Ms. Wareham from Human Rights Watch also sees the May talks at the UN as an important step forward. “At least 120 countries are fully on board with the call to negotiate a new international law on autonomous weapons systems. We see a lot of interest and support, including from peace laureates, AI experts, tech workers, and faith leaders.”

“There is an emerging agreement that weapon systems that are fully autonomous should be prohibited,” says Ms. Nakamitsu, from the UN Office for Disarmament Affairs. “When it comes to war, someone has to be held accountable.”


‘Politically unacceptable, morally repugnant’: UN chief calls for global ban on ‘killer robots’

“There is no place for lethal autonomous weapon systems in our world,” Mr. Guterres said on Monday, during an informal UN meeting in New York focused on the use and impact of such weapons.

“Machines that have the power and discretion to take human lives without human control should be prohibited by international law.”

The two-day meeting in New York brought together Member States, academic experts and civil society representatives to examine the humanitarian and human rights risks posed by these systems.

The goal: to lay the groundwork for a legally binding agreement to regulate and ban their use.

Human control is vital

While there is no internationally accepted definition of autonomous weapon systems, the term broadly refers to weapons, such as advanced drones, that select targets and apply force without human instruction.

The Secretary-General said in his message to the meeting that any regulations and prohibitions must make people accountable. 

“Human control over the use of force is essential,” Mr. Guterres said. “We cannot delegate life-or-death decisions to machines.”

There are substantial concerns that autonomous weapon systems violate international humanitarian and human rights laws by removing human judgement from warfare.

The UN chief has called for Member States to set clear regulations and prohibitions on such systems by 2026.

Approaching a legally binding agreement

UN Member States have considered regulations for autonomous weapons systems since 2014 under the Convention on Certain Conventional Weapons (CCW) which deals with weapons that may violate humanitarian law.

Most recently, the Pact for the Future, adopted in September last year, included a call to avoid the weaponization and misuse of constantly evolving weapons technologies.

Stop Killer Robots – a coalition of approximately 270 civil society organizations – was one of the organizations speaking out during this week’s meeting. 

Executive Director Nicole van Rooijen told UN News that consensus was beginning to emerge around a few key issues, something which she said was a “huge improvement.”

Specifically, there is consensus on what is known as a “two-tiered” approach, meaning that there should be both prohibitions on certain types of autonomous weapon systems and regulations on others.

However, there are still other sticking points. For example, it remains unclear what precisely characterizes an autonomous weapon system and what it would look like to legislate “meaningful human control.”

Talks so far have been consultations only and “we are not yet negotiating,” Ms. van Rooijen told UN News: “That is a problem.”

‘Time is running out’

The Secretary-General has repeatedly called for a ban on autonomous weapon systems, saying that the fate of humanity cannot be left to a “black box.”

Recently, however, there has been increased urgency around this issue, in part due to the quickly evolving nature of artificial intelligence, algorithms and, therefore, autonomous systems overall.

“The cost of our inaction will be greater the longer we wait,” Ms. van Rooijen told us.

Ms. van Rooijen also noted that these systems are becoming less expensive to develop, something which raises concerns about proliferation among both State and non-State actors.

The Secretary-General, in his comments on Monday, also underlined the “need for urgency” in establishing regulations around autonomous weapon systems.

“Time is running out to take preventative action,” Mr. Guterres said. 
