EGP Resolution adopted at the 37th EGP Congress, Vienna, 2-3 June 2023
The Prohibition of Lethal Autonomous Weapon Systems
Because of the rapid advancement of digital technology, our everyday lives and society are increasingly confronted with automated decision-making.
In relation to defence and security this has led to the development of lethal autonomous weapon systems (LAWS).
There is currently no agreed definition of autonomous weapon systems, but the International Committee of the Red Cross defines them as “any weapons that select and apply force to targets without human intervention”.
A United Nations Security Council report on the Libyan civil war indicated that lethal autonomous weapon systems had been used in combat in March 2020. The report stated that these weapons had the ability to attack targets without requiring data connectivity between the operator and the munition.
On 21 May 2021, Russian Defence Minister Army General Sergei Shoigu said at a lecture: "The serial production of combat robots has begun. What has emerged are not simply experimental, but robots that can be really shown in science-fiction films as they are capable of fighting on their own”. Other countries, such as China, Israel and the US, are also investing in the development of weapon systems with increasing autonomy.
The use of lethal autonomous weapon systems or "killer robots” poses serious legal, ethical and humanitarian concerns and constitutes a serious threat to humanity.
First of all, there is a lack of accountability. The absence of human intervention is the key difference from other weapon systems. Who is responsible for the actions of an autonomous weapon system when it leads to violations of international humanitarian law or international human rights law? When an attack occurs without human oversight, it becomes difficult to assign responsibility.
And what if a lethal autonomous weapon system makes a mistake? Misjudgement of an approaching threat could lead to target overkill and disproportionate retaliation. In addition, the inaccuracy of facial recognition technology, for example, is often cited. As a report of the European Union Agency for Fundamental Rights states: “facial recognition technology has higher error rates when used on women and people of colour, producing biased results.”
Further, there is the ethical concern of dehumanising the decision to use force. An algorithm should not have the power to decide over life and death. How could it correctly distinguish between civilians and combatants, a fundamental principle of international humanitarian law? Empathy, an element often already lacking in armed conflicts, is completely absent in lethal autonomous weapon systems.
Additionally, lethal autonomous weapon systems reduce the need to deploy soldiers, which might lower the threshold for launching attacks. This could lead to more armed conflicts, as the decision to fight would no longer be restrained by the risk to soldiers' lives.
Finally, there is the risk of proliferation. The development of lethal autonomous weapon systems can start a new arms race, leading to additional risks to human security and more conflicts. There is an additional risk of these weapon systems falling into the hands of groups not under government control, such as terrorist groups or private military companies.
A call to ban “killer robots” was issued by United Nations Secretary-General António Guterres, on 11 November 2018 at the Paris Peace Forum: “Imagine the consequences of an autonomous system that could, by itself, target and attack human beings. I call upon States to ban these weapons, which are politically unacceptable and morally repugnant.”
However, the States Parties to the UN Convention on Certain Conventional Weapons have so far only agreed to continue discussions.
Several times, such as in 2018 at the initiative of the Greens/EFA group, the European Parliament voted to work towards an international ban on “killer robots”. More recently, in its 2022 resolution on artificial intelligence in a digital age, the Parliament repeated its call for a ban and called on the Council to adopt a joint position on autonomous weapon systems that ensures meaningful human control over their critical functions. It insisted on the launch of international negotiations on a legally binding instrument prohibiting fully autonomous weapon systems, and stated that such an international agreement should determine that all lethal AI weapons must be subject to meaningful human oversight and control, meaning that human beings remain in the loop and are therefore ultimately responsible for the decision to select a target and take lethal action.
In the European Defence Fund Regulation, the Greens/EFA group managed to ensure that none of the Defence Fund's €13 billion can be spent on “killer robots”.
Taking into consideration all these factors, the European Green Party calls on:
The European Union as an international actor, its Member States and all European countries to always strive primarily towards peace, human security, non-proliferation, global disarmament and the protection of human rights, while supporting the right of states to defend themselves in accordance with Article 51 of the UN Charter. In times of rapidly rising defence budgets and investment in military technology, we need to make sure that no new military systems are developed that fail to meet basic legal and ethical standards, such as meaningful human control, and therefore pose great risks to humanity.
The international community to start negotiations and work on an international treaty prohibiting the development, production, acquisition, transfer, stockpiling and use of lethal autonomous weapon systems that enable strikes to be carried out without meaningful human control, including a prohibition on any preparations to use such systems.
The EU, its Member States and all European countries to actively work towards this international ban. Since there are currently no hopeful developments within the UN Convention on Certain Conventional Weapons, the EU should take the lead in developing a multilateral ban with like-minded countries and support all efforts in other forums that aim to advance the regulation of lethal autonomous weapon systems, such as the Human Rights Council and the United Nations General Assembly.
MPs and MEPs to support the parliamentary pledge of the Stop Killer Robots campaign.
Its member parties to take a leading role in the efforts to establish national and international bans on lethal autonomous weapon systems that are beyond human control, since their use violates international law and ethical principles.
An overview of states’ positions can be found at https://automatedresearch.org/state-positions/?_state_position_negotiation=yes.