Et voila. An entire new section of my thesis dedicated to the ethical aspects of employing autonomous robots in the military domain. Since the post is rather long, I'll skip the pleasantries and leave you directly to the reading. Enjoy.
As we have seen in the previous chapter, the aerial autonomous robotics domain is characterised by a strong military footprint. Most of the latest developments in the field come from military research, and battlefields are the arenas in which the technological innovations are put to the test. The US, which has deployed several thousand military robots during operations Enduring Freedom and Iraqi Freedom, has launched several plans (see for example the Future Combat Systems1 programme, running from 2003 to 2009, the Unmanned Systems Roadmap 2007-2032, and the Unmanned Aircraft Systems Roadmap 2005-2030) aiming to increase the proportion of autonomous robots within the American military apparatus. According to these plans, one third of US operational ground combat vehicles will be unmanned by 2015. No exact figures have been provided for UAVs, but it is not unrealistic to believe that these numbers could be even higher for unmanned aircraft.
At the current stage, the widely employed Predator and Reaper UAVs, as well as their counterparts employed by other militaries, are only semi-autonomous robots: they rarely fly in complete autonomy and they still require a man-in-the-loop to decide when to perform a potentially lethal operation (e.g. to fire a missile). The legal and ethical implications for the men in control of these UAVs are the same as for piloting an aircraft or calling in the coordinates for a traditional air strike. However, one of the new goals of military research consists in getting rid of the man-in-the-loop and letting military robots, whether aerial, underwater or terrestrial, acquire and fire at their targets autonomously [4, 5, 6]. Apart from the technological challenges that this plan implies, there are several ethical considerations that not only have to be taken into account, but must also be promptly addressed.
In the next pages we are not going to discuss the morality of research in autonomous robotics applied to the military world. Everyone is entitled to an opinion on the topic, which I personally respect and do not intend to influence in any way. Rather, we will analyse the ethical implications of having autonomous robots on the battlefield in the light of the modern laws of war, keeping in mind that, as suggested by Arkin, properly functioning robots can be even “more ethical” than humans, as they are unlikely to imitate the countless war atrocities committed by human soldiers throughout history.
The laws of war and the ethics of modern conflicts
Though naive eyes might not recognise it, modern armed conflicts adhere to a rigorous set of rules concerning both the conditions under which wars can be started and how, once begun, they must be conducted. This body of law is generally referred to as the “laws of war.”
Modern laws of war take inspiration from “Just War theory”, a doctrine of military ethics whose roots date back some 2,000 years2. The main goal of this theory is to define the criteria under which a war can be considered “just”, and thus started, as well as those according to which it must be carried out. In its very essence, Just War theory holds that a violent conflict ought to meet philosophical, religious or political criteria, reflecting the footprint left over the years by several Christian philosophers3.
Just War theory consists of two main principles: jus ad bellum and jus in bello4. The criteria belonging to the jus ad bellum category define the right to wage war:
- just cause: innocent human lives must be in imminent danger and intervention must be a means of protecting them. The reason for going to war cannot solely be recapturing things or punishing people who have misbehaved;
- comparative justice: the injustice suffered by one of the parties involved in a conflict must be significantly greater than that suffered by the other(s);
- competent authority: a genuine war must be paired with genuine justice. Thus a just war can only be initiated by a political authority within a political system that allows distinctions of justice;
- right intention: force must only be used for the purpose of correcting a suffered wrong, without any material or economic motive;
- probability of success: the use of weapons must not be advocated in futile causes or where disproportionate measures would be required to achieve success;
- last resort: force is the last resort. It must only be used when every peaceful and viable alternative has been seriously tried and exhausted, or is clearly impractical;
- macro-proportionality: the anticipated benefits of waging a war must be proportionate to its expected evils or harms.
The principles of jus in bello, on the other hand, dictate how combatants are expected to act once war has begun:
- discrimination: acts of war must be directed towards enemy combatants only, and not towards non-combatants (e.g. civilians);
- proportionality: an attack can only be launched against military objectives in the knowledge that the incidental civilian injuries would not be clearly excessive in relation to the estimated military advantage;
- military necessity: the governing principle of a just war must be that of minimum force. An attack must be targeted at a military objective and intended to help in the military defeat of the enemy. The harm caused to civilians and to their property must be proportional and not excessive in relation to the direct military advantage anticipated;
- fair treatment of prisoners of war: any soldier who has been captured or has surrendered no longer poses a threat, and therefore must not be tortured or mistreated in any way;
- no means malum in se: combatants must not use weapons or other methods of warfare considered as evil (e.g. mass rape, forcing soldiers to fight against their own side, or using weapons whose effect cannot be controlled).
In modern times, two widely adhered-to international treaties have implemented the principles of Just War. The first is the Hague Convention, signed in 1899 and further extended in 1907; the second is the 1949 Geneva Convention.
The experts’ point of view on the ethicality of military autonomous robotics
Why the above parenthesis about how modern wars are regulated? Because this is the context that several scientists fear autonomous robots might not be able to cope with once humans are taken out of the control loop. Amongst all the roboticists, philosophers and war strategists who have studied the potential impact of autonomous military systems, a prominent role has been played by the British scientist Noel Sharkey. For several years now, Sharkey has been engaged in a fierce campaign aimed at convincing policymakers around the world that today’s robots are nowhere near fulfilling such expectations. In his work “Weapons of Indiscriminate Lethality”, published in 2009, Sharkey specifically addresses two elements of the jus in bello principles that he believes are still out of reach for modern robotics systems: discrimination and proportionality.
Concerning discrimination, the Geneva Convention suggests the use of “common sense” in discriminating between civilians and combatants. An additional protocol, ratified in 1977, specifies that whoever is not a combatant must be classified as a civilian. How to instill “common sense” in an artificial system is one of the most challenging issues faced by modern AI. How could an artificial system autonomously distinguish between combatants and civilians? The task is not easy by any means. Is anyone wearing a uniform a combatant? Surely not. But even if that were the case, what classifies a certain garment as a uniform? Should we state instead that anyone carrying a weapon is a combatant? Not necessarily, as anything could be considered a weapon depending both on the context and on the intentions of whoever holds it. In other words, applying human-like “common sense” is a hugely complex task, which requires both a vast amount of available information and the ability to process this information in the light of a wider environmental context.
Even assuming access to extremely reliable robot sensor systems, sophisticated enough to extract every useful piece of information from the environment (something that current technology does not yet allow), matched with algorithms that can use this information together with previously learned knowledge to classify civilians and combatants in real time, the discrimination problem would still not be completely solved. On one hand, the friendly fire issue remains a concern. How should a robot discriminate between an allied and an enemy soldier and act accordingly? Some authors, such as Garfinkel, have suggested equipping every soldier with an RFID tag, making the recognition task as simple as possible. This solution nonetheless has a number of drawbacks. For example, an enemy unit might get rid of its RFID tag or, even worse, produce a fake one, pretending to be an ally rather than an enemy5. On the other hand, the status of the combatant in front of the robot has to be taken into account. An enemy soldier may be wearing a uniform, carrying a weapon and bearing the proper RFID tag, but his intention could be to surrender rather than to fight. How can a robot understand that without a proper theory of mind embedded in its circuits? Some authors, such as Canning, have found a shortcut to this problem, proposing a working principle for military robots that can be summarised in the sentence “let machines target other machines only”. Sharkey, more radically, has proposed banning the military use of autonomous robots until they can pass a sort of “innocent discrimination test”.
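To make concrete how brittle an explicit rule-based discriminator would be, consider the following minimal sketch (the rules, cues and data structure are entirely hypothetical, invented purely for illustration; no fielded system is being described). Every branch hard-codes an assumption that an adversary, or simply an unusual situation, can violate:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Hypothetical sensor-level description of a detected person."""
    wears_uniform: bool
    carries_weapon: bool
    valid_rfid_tag: bool  # trivially spoofable or discardable, as noted above
    hands_raised: bool    # a crude proxy for an intent to surrender

def classify(obs: Observation) -> str:
    """Naive rule-based discriminator. Each branch encodes an assumption
    that fails in practice: surrender cannot be reduced to one posture cue,
    tags can be forged, uniforms removed, and weapons are context-dependent."""
    if obs.hands_raised:
        return "non-combatant"  # a raised hand may also be a feint
    if obs.valid_rfid_tag:
        return "ally"           # or an enemy carrying a stolen/forged tag
    if obs.wears_uniform or obs.carries_weapon:
        return "combatant"      # a farmer carrying a rifle is still a civilian
    return "civilian"           # an unarmed, un-uniformed insurgent slips through

# A surrendering enemy soldier: uniformed, armed, forged tag, hands raised.
# The answer depends entirely on the arbitrary ordering of the rules.
print(classify(Observation(True, True, True, True)))  # -> non-combatant
```

The sketch is deliberately simplistic, but the underlying difficulty does not disappear with better sensors or learned classifiers: whichever cues the system relies upon, the mapping from observable features to combatant status remains ambiguous without the contextual, common-sense reasoning discussed above.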
The second potential source of trouble identified by Sharkey is the principle of proportionality, which requires that the anticipated loss of life and damage to property incidental to attacks must not be excessive in relation to the concrete and direct military advantage expected to be gained. In other words, the “force” used during a military action must be “proportionate”, i.e. neither excessive nor insufficient, to the advantages that can be achieved. How should the right amount of force to apply in a certain operation be calculated? Unfortunately there is a lot of uncertainty about how to make such a calculation. Military officials are specifically trained for years for this purpose. The difficulty involved is partly due to the fact that the entire process relies on an extremely wide array of factors, such that it has never been possible to capture all of them in an algorithm (so that they could be implemented on a computer). Furthermore, military decision-makers, in performing their calculations, must also take into account the possibility that at least some of the intelligence at their disposal is inaccurate (as has been proven to generally be the case).
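Purely as an illustration of why inaccurate intelligence undermines even a mechanised proportionality calculation, the following sketch treats the problem as a Monte Carlo expected-value comparison. Every quantity, scale and noise model here is invented; indeed, reducing “advantage” and “harm” to commensurable numbers is itself one of the unsolved problems just mentioned:

```python
import random

def proportionality_check(mil_advantage: float,
                          reported_civilian_harm: float,
                          intel_accuracy: float,
                          trials: int = 10_000) -> float:
    """Monte Carlo sketch: treat the intelligence estimate of civilian harm
    as a noisy quantity and ask how often the strike would still look
    proportionate. All numbers and the noise model are illustrative."""
    proportionate = 0
    for _ in range(trials):
        if random.random() < intel_accuracy:
            harm = reported_civilian_harm
        else:
            # Intelligence was wrong: actual harm up to 5x the reported value.
            harm = reported_civilian_harm * random.uniform(1.0, 5.0)
        if harm <= mil_advantage:
            proportionate += 1
    return proportionate / trials

# With perfect intelligence the comparison is clear-cut...
print(proportionality_check(10.0, 8.0, intel_accuracy=1.0))  # -> 1.0
# ...but with 70%-accurate intelligence the same strike often is not (~0.72).
print(proportionality_check(10.0, 8.0, intel_accuracy=0.7))
```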
Alongside discrimination and proportionality there is nonetheless another very important factor to consider when thinking about the introduction of military robots into warfare environments. This factor, which has been extensively studied by Sparrow, is responsibility. Who is to be considered responsible in case something goes wrong? If, for example, a robot such as the SWORDS6 decides to exterminate the civilian population of a village? Or, more simply and much more likely, if it fires a single bullet which misses its designated target and ends up injuring an unfortunate allied soldier? Again we are facing a tough scenario. The entire chain that brings a robot to the battlefield is a long one (it includes manufacturers, programmers, designers, etc.), and errors can take place at any stage. Even a well-designed robot might suddenly behave unexpectedly because of some unavoidable hardware failure. Modern militaries rely on rigorous procedures to determine who is responsible for any sort of adverse event that could potentially occur during a conflict. But machines have never been considered anything other than tools, and have thus been exempt from any possible attribution of responsibility. Autonomous robots require military theorists to elaborate new procedures for attributing responsibility. As Sharkey ironically put it:
“Who is to be held responsible for the lethal mishaps of a robot? Certainly not the machine itself. There is no way to punish a robot. We could just switch it off but it would not care anymore about that than my washing machine would care. Imagine telling your washing machine that if it does not remove stains properly you will break its door off. Would you expect that to have any impact?”
Although the author agrees with several of the issues raised by Sharkey, he also believes that the British scientist is somewhat too pessimistic in his views. It is certainly true that robotics is a growing but not yet mature field of study. It is true as well that today’s robots are not capable of performing tasks that government decision-makers believe are within their reach. Are these solid enough reasons for banning robots entirely from warfare scenarios? Probably not. Nonetheless, they can surely serve as useful warnings that every person working in the field should take into proper consideration. There is no reason, in the author’s opinion, to halt research on such robotic systems and the associated field tests, as long as military planners do not expect to see robots performing Hollywood-like operations on the battlefield.
Furthermore, a few flaws can be found in Sharkey’s reasoning. First of all, the British scientist always seems to refer in his publications to AI systems based on explicit knowledge representation, thus implicitly restricting the entire Artificial Intelligence arena to symbolic approaches only. He plainly seems to be unaware (although, as an expert in the field, he surely is not) of the several design methodologies for intelligent systems developed in recent decades that rely neither on explicit representations of knowledge, nor on formal decision trees, nor on rule-based systems. The work we are presenting in this thesis constitutes a perfect example in this sense. We will see autonomous controllers for unmanned aerial vehicles based on evolved neural networks that, by definition, can perform complex tasks without the need for any formal representation of knowledge (a minimal sketch of the idea is given below).
Second, in pointing out the limitations of modern robots, Sharkey (especially in ) likes to think of military autonomous systems dealing with irregular insurgents. It is certainly true that a clearly identifiable post-Cold War trend is the one towards asymmetric warfare. As the continuous advances in military technologies tremendously widen the gap between the war capabilities of different nations (and the militarily most advanced countries prefer to fight each other through diplomatic channels rather than on the field), fewer and fewer countries are inclined to wage war on each other. Much more common is the case in which a regular army has to face insurgents rather than another conventional army, as recently happened in Afghanistan during operation Enduring Freedom. At the same time, the existence of this trend does not imply that research into military equipment for “conventional” wars has to be stopped. Political equilibria, as history demonstrates, can change suddenly. It is of fundamental importance for the military forces of every country to be ready and well equipped in case the unexpected happens. Autonomous robots, as we have extensively discussed in previous sections, can constitute a very strong asset for any military force. And even if military robots arguably perform best in a “regular” war, this does not prevent them from being potentially very useful in different warfare environments as well, particularly once ongoing and future research delivers its outcomes.
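For concreteness, here is a highly simplified sketch of the design philosophy just mentioned: a tiny fixed-topology neural network whose weights are evolved with a basic (mu + lambda) evolutionary strategy. The network size, fitness function and evolutionary parameters are invented for the example and do not correspond to the actual controllers developed in this thesis; the point is only that nowhere in the controller is there a rule base or an explicit representation of knowledge:

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 4, 6, 2        # sensor inputs, hidden units, motor outputs
N_W = N_IN * N_HID + N_HID * N_OUT  # total number of evolvable weights

def forward(weights, x):
    """Feed-forward pass of a tiny two-layer network whose weights form a
    flat genome. Behaviour is implicit in the weights: no symbolic rules."""
    w1 = weights[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(x @ w1) @ w2)

def fitness(weights):
    """Toy stand-in for an evaluation in a flight simulator: reward
    controllers whose outputs track a simple function of the inputs."""
    xs = rng.uniform(-1, 1, size=(32, N_IN))
    target = np.stack([xs[:, 0], -xs[:, 1]], axis=1)
    return -np.mean((forward(weights, xs) - target) ** 2)

MU, LAM, SIGMA, GENS = 10, 40, 0.1, 100
pop = rng.normal(0, 1, size=(MU, N_W))
for gen in range(GENS):
    # Each parent produces LAM/MU Gaussian-perturbed offspring.
    offspring = np.repeat(pop, LAM // MU, axis=0) + rng.normal(0, SIGMA, (LAM, N_W))
    everyone = np.vstack([pop, offspring])
    scores = np.array([fitness(ind) for ind in everyone])
    pop = everyone[np.argsort(scores)[-MU:]]  # (mu + lambda) survivor selection

print("best fitness:", fitness(pop[-1]))
```

The controller’s behaviour emerges from weights shaped purely by selection pressure, so critiques aimed at explicit knowledge representation simply have nothing to latch onto in such systems.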
2: Cicero’s “De Officiis” discussed “just war” in 44 BC.
3: Amongst the various contributors, a crucial role was played by Thomas Aquinas.
4: Although some theorists have recently proposed an additional third category, jus post bellum.
5: Similar topics are covered by Richard Clarke and Robert Knake in their book “Cyber War” .
 Sharkey, N. Automated Killers and the Computing Profession. IEEE Computer (Nov. 2007), 122–124.
 Clapper, J., Young, J., Cartwright, J., and Grimes, J. Unmanned Systems Roadmap 2007-2032. Tech. rep., U.S. Department of Defense, Dec 2007.
Cambone, S., Krieg, K., Pace, P., and Wells II, L. Unmanned Aircraft Systems Roadmap 2005-2030. Tech. rep., U.S. Department of Defense, Office of the Secretary of Defense, 2005.
 Borenstein, J. The Ethics of Autonomous Military Robots. Studies in Ethics, Law, and Technology 2, 1 (Apr. 2008).
 Sharkey, N. Grounds for Discrimination: Autonomous Robot Weapons. RUSI Defence Systems (Oct. 2008), 86–89.
 Sharkey, N. Saying ‘No!’ to Lethal Autonomous Targeting. Journal of Military Ethics 9, 4 (2010), 369–383.
 Arkin, R. C. Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative / Reactive Robot Architecture – PART I: Motivation and Philosophy. In Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction (2008), pp. 121–128.
 Childress, J. F. Just-War Theories: The Bases, Interrelations, Priorities, and Functions of Their Criteria. 1978.
 Sharkey, N. Weapons of Indiscriminate Lethality. FIfF-Kommunikation, 1 (2009), 26–29.
 McCarthy, J. Artificial Intelligence, Logic and Formalizing Common Sense. Philosophical Logic and Artificial Intelligence (1989), 161–189.
 Garfinkel, S. L., Juels, A., and Pappu, R. RFID Privacy: An Overview of Problems and Proposed Solutions. IEEE Security & Privacy (2005), 34–43.
 Canning, J. S. A Concept of Operations for Armed Autonomous Systems, 2006.
Betts, R. K. Analysis, War, and Decision: Why Intelligence Failures Are Inevitable. World Politics 31, 1 (1978), 61–89.
 Sparrow, R. Killer Robots. Journal of Applied Philosophy 24, 1 (Feb. 2007), 62–77.
 Clarke, R. A., and Knake, R. Cyber War. The Next Threat to National Security and What to Do About It. Ecco Press, Manhattan, NY, 2010.