FOR THE CONTEST

AUTONOMOUS WEAPONS SYSTEMS:
Unresolved Questions of Ethics, Morals, International Law and Geopolitics.
Part 2

Nadiya Serbenko

 

4. Fundamental Principles of International Humanitarian Law Regarding the Conduct of Hostilities and the Use of AWS

These are the principle of distinction and the principle of proportionality. AWS must operate in compliance with these principles, and ensuring such compliance is the responsibility of any state planning to use such a system.

According to the first principle, such weapons must be able to distinguish combatants from civilians, armed civilians (such as law enforcement officers or hunters), those who are hors de combat (the wounded or sick), and surrendering prisoners.

As for the second principle, civilian casualties must be kept to a minimum: incidental losses, if they occur, must not be excessive in relation to the concrete and direct military advantage anticipated.

From this it follows that in the near future developers are unlikely to be able to create machines whose programmed functions take these principles into account.

Supporters of the use of lethal autonomous weapons in armed conflict hold that such systems can be programmed to act “cautiously”, so that civilian casualties, even accidental ones, will be minimal. That is, they believe that autonomous weapons systems will possess a kind of analytical capability enabling them to determine whether the harm caused is proportionate to the military advantage gained. They also believe that the use of robots will significantly reduce losses among military personnel. In addition, robots will be able to conduct operations in biologically, chemically or radioactively contaminated environments, as well as in space and under water.

Opponents, on the contrary, insist that the use of AWS will never meet the fundamental principles of international humanitarian law, or of international law in general. In their view, such machines will never have the common sense that people have, and this can lead to highly undesirable consequences. Peter Maurer, President of the International Committee of the Red Cross, points out that today there is no guarantee against failures in the operation of such robots. In addition, they may fall into the hands of criminals or terrorists, or be subjected to a hacker attack. The risk of being killed by robots, he emphasizes, will significantly raise anxiety and create panic among the civilian population in a conflict zone.

That is why the ICRC calls on states to address the problems that may arise, and are already arising, in connection with AWS. First of all, their use in armed conflict must be reconciled with the principles and norms of international humanitarian law. The key problem is that at present there is no specific provision relating to autonomous weapons systems. The only guidance international humanitarian law offers is the following: “… States should find out whether the use of some new weapon or means or method of warfare that it develops or acquires is not prohibited in some or all circumstances”. This requirement can be found in Additional Protocol I to the Geneva Conventions.

The use of AWS in conflicts is thus a real gap in international law, one that must be filled with a specific legal norm; this is very important and at the same time difficult. Of course, everything new is difficult to regulate, because representatives of states must reach a consensus and work together to make a decision. Yet it is equally clear that every state possessing AWS ultimately aims to satisfy its own national interests (including ensuring its national security) and will not be eager to make concessions in reaching a consensus decision.

Today we must note with great regret that, in the geopolitical plans (projects) of a number of states concerning the use of AWS in modern wars and conflicts, purely national geopolitics and a somewhat modified law of force will increasingly prevail. One of the main reasons is that international humanitarian law (and international law generally) today has de facto neither legal mechanisms nor legal instruments to regulate the use of AWS.

And, as we have already seen, autonomous weapons systems were once developed almost openly, are today developed quite secretly (including under the guise of industrial and even humanitarian projects), and will continue to be developed very actively, but in complete secrecy.

For example, the well-known company Google recently obtained a patent for a technology it has developed that allows robots and robotic devices to be upgraded. The patent describes methods by which a vast number of robots interact through a “tag cloud” to carry out different tasks, without limiting their maximum number. This means that the owner can configure his devices as he sees fit using a smartphone or any other device running Android. Their actions and behavior can be controlled from wherever the owner happens to be, as long as he specifies the necessary program or code in the gadget and its settings. The company insists that “… their technology is not a threat to society and global security”. According to the company, it is intended solely for the automation and optimization of services. Its management has also announced that it will try to protect the technology from spreading and being used in an “arms race”.

Yet it is precisely in connection with this technology that many websites have been flooded with the message: “Google is preparing a robot army”. The UN Commission on Peacekeeping is also very concerned by this news, because such robots could be used as weapons and take on the character of militarized communities, which could lead to the creation of an entire army of “autonomous and deadly” participants lacking real intelligence, a situation fraught with unexpected technical failure and with this “robot army” going out of control. Paradoxically, the United Nations is currently holding consultations on whether to limit or even completely ban the development of autonomous lethal weapons, and on what level of “intelligent human control” such weapons should have. As for official statements, a Google spokesman has said that “… some ideas of this patent in the future will become real products and services, some will not”, and, of course, not a word about any “robot army” of their own.

 

5. Some Ideas for the Improvement and Development of Legal Norms of Creation and Use of AWS

The well-known international NGO Amnesty International has constantly and emphatically declared that governments are obliged to prohibit the development of high-tech killer robots, which can pose a serious danger to people.

Amnesty International defends the following position: a preventive ban should be adopted on the development, stockpiling, transfer, deployment and use of fully autonomous weapons systems. Moreover, it insists, the presence of such weapons would negatively affect the development of the geopolitical situation and thus both regional policy and international policy as a whole.

One option considered possible is to adopt a Protocol to the UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons, often referred to as the Convention on Certain Conventional Weapons or the Inhumane Weapons Convention (IWC). Since a Protocol is an integral part of the IWC, it would have the same force as the Convention, in accordance with Article 4, paragraph 5 (under international law, a Protocol is an integral part of any Convention). Also, under Article 8 (Review and Amendments), the parties may in the future amend the IWC. Alternatively, work could continue on the idea of a preventive ban on AWS.

Amnesty International also holds periodic briefings on the use of specialized robots by law enforcement agencies for the protection of law and order. According to its forecasts, fully autonomous weapons systems based on new technologies are expected to appear within 20-30 years.

In reality, however, there is only one step from the existing prototypes of UAVs and robotic systems to fully fledged killer robots. Companies in the USA, UK, Russia, Switzerland, Israel, Japan, China and other countries are already developing so-called “less lethal” robotic weapons for use in the law enforcement and military spheres, which are either controlled remotely or fire automatically upon detecting a target. According to the ICRC, such weapons include unmanned aerial vehicles (drones) and ground-based complexes and platforms that can fire electric darts, tear gas and other less lethal agents, which can nonetheless cause death or serious injury.

 

For information:

One example is the unmanned aerial vehicle “Shadow Hawk”, developed in the USA by the company Vanguard Defense Industries. It has already been used in Afghanistan and East Africa against persons suspected of subversive and terrorist activities. It can deliver an electric shock to suspects from above, and is equipped with a 12-gauge firearm (automatic cannon) and 37 mm or 40 mm grenade launchers.

The United Kingdom has also recently introduced its new autonomous unmanned supersonic aircraft “Taranis”. This aircraft carries on board various types of high-precision homing weapons.

 

As for the irreversible consequences of such technologies, according to the Bureau of Investigative Journalism, the use of drones in 2004-2014 killed between 2,500 and 3,500 people (among them several hundred civilians and about 200 children) and wounded more than a thousand.

So, once again we see that the problem of AWS is no contrived issue, and close attention should be paid to all the questions arising in this context. The ICRC emphasizes that the main problem is that decisions will be made not by people but by machines, and this, in turn, will weaken human control over the use of force.

In these circumstances, it would be appropriate to entrust the International Law Commission with preparing draft articles that could in the future become the foundation for a new Convention or for a Protocol to the IWC. Admittedly, many such projects (among them the draft articles on the responsibility of international organizations, on reservations to treaties, on diplomatic protection, and on the law of transboundary aquifers) have remained drafts, but it is still worth a try.

It should also be noted that in recent years several international meetings and conferences dedicated to the use of combat autonomous robotic systems (CARS) have been held in Geneva.

One participant in these conferences, Juergen Altmann (a co-founder of the International Committee for Robot Arms Control), has said: “I was trying to establish a dialogue with arms manufacturers, but they are frankly not interested in complying with international humanitarian law”.

The UN expert Christof Heyns has proposed introducing a moratorium on the production of fully autonomous combat robots: “It is urgent to introduce a moratorium, because strict international norms to monitor the use of such ultra-modern weapons have not yet been developed”.

At the international meeting on CARS, held on April 13-17, 2015 under the auspices of the United Nations, several special reports were presented on the development and application of AWS.

Thus Stephen Goose, Director of Human Rights Watch’s Arms Division, stated in his report that “…killer robots pose a threat to the basic provisions of international humanitarian law”. In his view, it is extremely important that “… both in police operations and in war, a human being should decide when to use weapons”. He is worried that robots will be able to make decisions in “questions of life and death”.

Let us recall that Human Rights Watch is a co-founder and the coordinator of the Campaign to Stop Killer Robots. It is one of the few international organizations that call for a preventive ban on the development, production and use of fully autonomous weapons systems. The campaign brings together more than 50 non-governmental organizations.

Not long ago, together with the International Human Rights Clinic of Harvard Law School, it published a report entitled “Mind the Gap: The Lack of Accountability for Killer Robots”, which addresses accountability for the actions of fully autonomous weapons in both criminal and civil law. Its conclusions point to the need for a preventive ban on AWS, to be established by an international treaty, for the improvement of state (national) laws, and for the development of the relevant directives.

The lead author of the report, Bonnie Docherty, adds: “The lack of accountability means that there is no instrument to deter crimes, no compensation for victims, no social condemnation of the guilty. The many barriers standing between potential victims and justice point to the urgent need to ban fully autonomous weapons systems”.

An earlier report, “Losing Humanity: The Case against Killer Robots”, discusses the many legal, ethical, moral and other problems associated with fully autonomous weapons systems. It concludes that fully autonomous weapons on the modern battlefield are unlikely to be consistent with the basic provisions of international humanitarian law, and that their use threatens to create problems of responsibility, since it will be unclear who is legally liable for the actions of killer robots.

 

For information: the U.S. Department of Defense Directive 3000.09 (http://www.dtic.mil/)

Also noteworthy is the U.S. Department of Defense Directive 3000.09 of November 21, 2012, which requires that a human being retain control over a robot’s actions at the moment the decision to open fire is made. The Directive also stipulates that for the next 10 years the US Department of Defense may develop and use only fully autonomous weapons systems that do not shoot to kill, unless the Department abandons this policy at a high level.

In effect, this Directive introduces the world’s first moratorium on lethal autonomous weapons systems. The trouble is that the United States often acts rather cunningly: it signs the most important conventions, albeit with a number of reservations, but does not ratify them; or it circumvents ratified conventions by all possible means (they become part of domestic legislation, and each subsequent law, as we know, supersedes the previous one). In other words, the above-mentioned Directive is, in fact, nothing more than a demonstration of good will. The arms race continues regardless, and in the USA it is one of the most important aspects of the country’s geopolitics.

 

6. Conclusion

Scientific and technological progress in the sphere of arms gives rise to a number of issues relating to international law, and to international humanitarian law in particular.

The first question demanding special attention is the use and further development of autonomous weapons systems, whose hallmark is their ability to perform military tasks without human intervention. The use of these new technologies has so far not been regulated by international humanitarian law. All that can be said is that, in developing and using new weapons, a state must take into consideration all the warnings and prohibitions of international humanitarian law. As of today, fully autonomous weapons systems have not yet been created, but with each new invention their degree of autonomy increases, and in the future autonomous weapons systems will be able to function outside a hard-coded time-space framework, coping with varied and rapidly changing circumstances while selecting and engaging targets without direct human control.

Leading experts in robotic technology and international humanitarian law believe that the use of robotized weapons by law enforcement agencies and armed forces is wholly inconsistent with the provisions of international human rights law and would lead to unlawful killings and injuries through excessive use of force, and to the undermining of the right to human dignity. Indeed, unlike highly qualified personnel, robots are unable to settle conflicts peacefully, to distinguish lawful orders from unlawful ones, or to decide on the differentiated responses needed to minimize harm; moreover, they cannot be held accountable for mistakes or failures in their operation that may result in loss of life or serious injury.

Secondly: today the most promising development of combat robots, UAVs and autonomous robotic combat platforms is being done in the United States, Britain, Russia, Israel, Switzerland, Japan, China and other countries. And we objectively understand that if one or more states decide to use fully autonomous weapons systems, others may find it necessary to abandon the policy of deterrence, and this will lead to uncontrolled AWS arms race.

Thirdly: serious concerns are raised by the continuing race in robotized weapons, which states will find difficult to stop, by the resulting changes in the geopolitical situation in the world and its regions, and by the ethics of giving machines the right to make life-and-death decisions on the battlefield. Human rights activists object to the use of combat robots because of their potential uncontrollability: they may kill wounded or surrendering enemies, and they also find it difficult to distinguish enemy combatants from armed civilians.

Fourthly: in the development and use of autonomous weapons systems, it is very important to take into consideration, in their entirety, the fundamental principles and norms of international law (non-use of force or the threat of force; peaceful settlement of international disputes; non-interference; cooperation; equality and self-determination of peoples; sovereign equality of states; fulfillment in good faith of obligations under international law; territorial integrity; respect for human rights; inviolability of borders) and of international humanitarian law (the principle of distinction and the principle of proportionality), including the UN Charter, human rights (especially the right to life), and the rules of warfare.

Fifthly: international human rights and anti-war organizations rightly warn that the quite dubious progress in remotely controlled military equipment could call into question the whole system of the Geneva Conventions. That is why the United Nations should today maintain close and productive contacts with regional and national human rights initiatives, with Human Rights Watch, Amnesty International, the International Committee of the Red Cross and the International Committee for Robot Arms Control, and with initiatives of the scientific community, in order to build capacity, ensure better compliance with international standards and raise awareness of them.

Sixthly: in these circumstances, states should bring domestic legislation governing the use of force by autonomous weapons systems, including their combat use, into line with international standards. To increase accountability, all operations using UAVs and AWS should be fully transparent. It is also necessary to consider the adoption of a new convention, of draft articles, or of a protocol to the Convention on Certain Conventional Weapons.

Finally, we should not exclude the possibility that new technologies can bring positive results in specific circumstances, provided international law is observed in their use. For example, disrupting the functioning of certain services and systems used for both military and civilian purposes by means of a special cyber operation would cause less damage than the complete destruction of such infrastructure by air or artillery bombardment. In theory, autonomous weapons systems could even be programmed to behave more ethically and more carefully on the battlefield than humans. But today this is possible only in theory, and time will show whether and how individual states take advantage of this opportunity.
