Autonomous Weapons Systems in the Arms Race

Introduction:

Research and development companies are continuously working toward the next great invention. The goal is to introduce something new, functional, and efficient that the market will desire. Producing a new invention and releasing it into the market can take decades. Much of the time, the market is unaware of what a company is truly working on until its prototypes are proven successful and the company nears the final steps toward distribution. Research and development companies focused on military advancement are no different.

Many designs are either the central focus of research and development companies or in the beginning stages of entering the market: robots able to identify outlines and movement, distinguish friend from foe, and shoot at threats to a perimeter;[1] remotely piloted aircraft under the control of personnel on a different continent; autonomous flying drones, also unmanned, but programmed to observe and track threats independently above adversarial territory; and miniature mobile machines capable of swarming and working in sync with large numbers of their kind through advanced algorithms, transforming into an attack at a moment’s notice.[2] These are only a few designs. Some are complete and already in action, such as the perimeter-defending robot, while others will soon enter the battlefield.[3]

There is a supply and demand aspect to the incorporation of autonomous robots in war. On the supply side, advanced sensors and computational technologies simply need an efficient platform. Once the platform is developed and connected with the sensors, the final step is assigning an appropriate algorithm directing the robot to kill or destroy. On the demand side, there is pressure from both the military and the political realm to protect combatants, civilians, and property. Research and development is a prime avenue for reducing deaths directly related to war.

The process of incorporating robots into war will likely be slow. Non-lethal autonomous robots will transition into war more easily, through both defensive and offensive roles such as observation, data collection, and sensory warnings. On the flip side, the introduction of lethal autonomous robots will likely include a fail-safe, such as a person in the decision-making loop. Once more advances are made in artificial intelligence, the human role will likely be stripped from the robot’s decision-making loop.[4]

Autonomous robots entering war is inevitable. It is only a matter of time before robots are equipped with the proper weapons and the appropriate algorithms to become lethal. Coupled with the entrance of autonomous robots are many questions. The appropriate questions surround the military’s tactical agenda, but they also include legal and ethical questions. Answering the legal and ethical questions relies on society first acknowledging the evolution of autonomous robots and their slow but inevitable transition into war. The United States’ policies for resolving these questions should rest on the same observations. There is a high possibility that the development and deployment of autonomous robots, along with the humanitarian benefits that run parallel to their precision, will call into question the ethics and applicability of some of the suggested responses.[5] It is necessary that the United States forgo the urge toward concealment and silence regarding applicable military technological advancements, because there is a strong United States interest in constructing and molding the normative legal terrain that permits and restricts specific uses of autonomous robotics in war.[6] The expectations and foundations for the use of autonomous robots and machines in war are as important as the actual development of the robots and machines. It is essential that the United States act before international expectations about the use of these advanced technologies in war solidify, so that those expectations are not formed on impractical, useless, or hazardous legal grounds, or shaped by the desires of actors who prefer little to no restriction.

  1. Accumulative Automation of Remotely Piloted Aircraft:

The accumulative advancement of future automated robots capable of killing humans, as well as the legal and ethical hurdles they face, is arguably illuminated through modern remotely piloted aircraft.[7] The United States has led the way in piloting unmanned aerial vehicles from different continents.[8] Remotely piloted aircraft are currently a critical part of the United States’ military and political agenda. As of 2012, an estimated one in three aircraft in the United States’ aerial arsenal was unmanned.[9] Continued research and development has likely increased that share since then and will continue to do so. Human decision-making is currently involved in all remotely piloted aircraft; no fully autonomous aircraft capable of targeting and releasing weapons is currently used in war.[10] Furthermore, there are no unclassified documents signaling a desire to withdraw humans from the firing decision-making loop.[11]

Although remotely piloted aircraft are not truly autonomous, this already-new technology has seen significant and successful advancements. Remotely piloted aircraft can now land efficiently without human interference.[12] After the realization that the ever-so-slight lag between the controlling pilot and the aircraft itself was causing millions of dollars’ worth of damage, the development of autonomous landing solved the problem and placed the United States’ MQ-1 and MQ-9 aircraft at a significant advantage.[13] Another example of autonomous advancement is that a single pilot can now control multiple remotely piloted aircraft at one time.[14] This is, of course, due to modifications of already existing and deployed unmanned aircraft. Sensors and aircraft drastically improve their capabilities with each stage of advancement in computing systems, and the advancement is likely to continue with no end in sight.

There are many theories about what the future of fighter aircraft will look like. Although some believe manned fighter jets, such as the F-15 Eagle, F-16 Falcon, and F-22 Raptor, will continue to soar in the sky, others believe manned aircraft will either be joined by unmanned aircraft or become a thing of the past.[15] Speed, maneuverability, and air flexibility are key to an efficient aircraft with an edge over adversarial aircraft. At the current rate of technological advancement, it is highly probable our generation will see, or possibly invent, aircraft (whether remotely piloted or autonomous) capable of faster speeds, sharper turns, higher g-forces, and other physical stressors unendurable by a human pilot, possibly at a much cheaper cost. It is an advantage for aircraft designs to emphasize as many autonomous functions as possible, furthering the United States’ edge over adversarial aerial systems.[16]

Aerial weapons may also have to be controlled at a pace much quicker than a human pilot can respond.[17] The enemies an aircraft battles are typically other aircraft or anti-aircraft systems such as surface-to-air missiles (SAMs).[18] Responses to these enemies may have to match their speeds. The idea is that there must be no communication lag, or there is a high probability the unmanned aircraft facing an adversarial aircraft will be blown out of the sky.

For illustrative purposes, a similar concept can be seen in the Aegis ballistic missile defense system aboard existing United States naval vessels.[19] The United States Navy has for many years been able to target and deflect incoming missiles autonomously.[20] The vessel’s missile system is capable of searching for, identifying, targeting, and warning military personnel of incoming missiles.[21] The military personnel simply monitor and confirm the system’s analysis prior to giving the okay.[22] The importance of the naval missile system rests on the recognition that a human’s decision-making process is far too slow when faced with multiple incoming missiles.[23]

There are many differences between current remotely piloted aircraft and future autonomous robots, but the legal and ethical concerns are much the same. Current debates about the legitimacy of remotely piloted aircraft, and of operations using them, foreshadow the heated debates that will surround the weapons systems, robots, and other autonomous tools that will inevitably enter the battlefield.[24] The United States can and should hone in on these debates and extract worthwhile lessons to guide its short-term and long-term policy on autonomous robots. The systems may be drastically different, but the concerns are much the same.

It is arguable that the imminent autonomous weapons now developing are an intelligently advanced modification of current self-guided missiles that do not require a line of sight to strike a specific target.[25] Self-guided missiles require no interaction with the launcher after being fired.[26] The operator does not need line of sight to the target.[27] Instead, the information the missile needs to strike its target effectively is programmed into the missile prior to launch.[28] Once launched, the missile guides itself with a combination of gyroscopes, accelerometers, global positioning systems, radar, infrared systems, and similar components built into it.[29] The built-in systems and the programmed information together provide enough information to home in on a target. An autonomous weapon system, such as a robot, would logically comprise similar systems, and the human weaponeer would assemble and input algorithms into a program the robot would adhere to.
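
For illustration only, the following is a minimal sketch of such a pre-programmed guidance loop. Every name and number in it is a hypothetical assumption rather than a description of any real missile, and the onboard sensor fusion described above is reduced to perfect position knowledge for brevity.

```python
# A minimal, purely illustrative guidance loop: a target programmed before
# launch plus simple onboard state. All names and numbers are hypothetical;
# real systems fuse gyroscope, accelerometer, GPS, radar, and infrared data.
import math
from dataclasses import dataclass

@dataclass
class NavState:
    x: float        # position east (km)
    y: float        # position north (km)
    heading: float  # radians

def guidance_step(state: NavState, target: tuple, speed: float, dt: float) -> NavState:
    """One update cycle: steer toward the pre-programmed target, then advance."""
    desired = math.atan2(target[1] - state.y, target[0] - state.x)
    # Wrap the heading error into [-pi, pi) and limit the turn rate,
    # as a physical airframe would.
    error = (desired - state.heading + math.pi) % (2 * math.pi) - math.pi
    max_turn = 0.2 * dt
    state.heading += max(-max_turn, min(max_turn, error))
    state.x += speed * dt * math.cos(state.heading)
    state.y += speed * dt * math.sin(state.heading)
    return state

if __name__ == "__main__":
    target = (10.0, 5.0)            # coordinates loaded prior to launch
    state = NavState(0.0, 0.0, 0.0)
    for _ in range(200):
        state = guidance_step(state, target, speed=0.3, dt=0.5)
    print(f"final position: ({state.x:.2f}, {state.y:.2f})")
```

The point of the sketch is only that, once the target data is loaded, the loop needs no further input from the launcher, which is the property the text attributes to self-guided missiles.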

  2. Arms Racing – Autonomous Weapons Systems:

The United States and other parties to conflict will deem advanced lethal autonomous robots and other autonomous weapons systems immensely attractive as the technology matures and enters the international market. It is arguable that as artificial intelligence advances, we will see a shift from automated robots to autonomous robots. The difference is that automated robots revolve around the algorithms specifically programmed into them, while autonomous robots are capable of adapting their “thinking” and acting appropriately in unpredictable situations.[30]

There are many different steps in between an automated robot and an autonomous robot. Consider the efforts to protect combatants on the ground. Miniature lethal robots could play a look-out role for the combatants. These miniature robots would be automated in the sense that they would be pre-programmed to decipher adversarial threats and signatures and then warn the combatant, who would hold the sole decision-making power to attack the adversaries. On the other end of the spectrum, a miniature autonomous robot would detect and decipher the adversary, then attack of its own accord with no human interaction. I raise here two middle grounds. In the first, the miniature robot does not have to ask the combatant for permission to fire, but simply reports the threat as an indication that it is about to act, and the combatant has the power to override the robot. In the second, similar to current remotely piloted aircraft, the human controller can be located away from the hostilities but has full or partial control over how the robot responds to the detection. In all four of these situations, there is a significant communication link between the human controller and the lethal robot.
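
As a rough illustration of these four configurations, consider the following sketch; the mode names and decision rules are my own hypothetical framing, not drawn from any fielded system.

```python
# Hypothetical framing of the four control configurations described above;
# the mode names and decision rules are illustrative assumptions, not any
# fielded doctrine.
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()  # robot warns; the combatant alone decides
    HUMAN_ON_THE_LOOP = auto()  # robot announces intent; human may override
    REMOTE_OPERATOR = auto()    # distant human keeps full or partial control
    FULLY_AUTONOMOUS = auto()   # robot detects, decides, and acts alone

def handle_threat(mode: ControlMode, human_approves: bool = False,
                  human_overrides: bool = False) -> str:
    """Return the engagement decision for a detected threat under each mode."""
    if mode in (ControlMode.HUMAN_IN_THE_LOOP, ControlMode.REMOTE_OPERATOR):
        return "fire" if human_approves else "hold"
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Default is to act unless the human vetoes in time.
        return "hold" if human_overrides else "fire"
    return "fire"  # FULLY_AUTONOMOUS: no human check at the moment of firing
```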

A problem with a communication link between humans and machines is a third-party hack. The link connecting the lethal machine to the human is not hack- or jam-proof, which deepens the legal and ethical concerns. Countries interested in artificial intelligence and autonomous weapons systems may spot this problem and push to remove the human connection altogether. One argument for severing the link, beyond the hacking problem, is the idea that the increasing speed and complexity of the algorithms may be better left to the robot itself. If an initial algorithm could be programmed into the robot, and the robot could then continuously reconstruct and mold that algorithm to mimic human learning, the robot would essentially be dependent on the execution of its own programming and therefore autonomous.[31] This concept may seem far off, but realistically the first crucial step toward achieving it already exists and is in play in the United States’ arsenal of aircraft.

The RQ-170 Sentinel operates in a non-continuous, only-when-necessary communication pattern with its controlling unit.[32] The controlling unit sends sporadic signals to the Sentinel when necessary. Each signal redirects the Sentinel’s programming to other pre-configured commands, creating choices from which the Sentinel deduces the best option. The short, sporadic signals make it less likely that adversarial operatives will detect the Sentinel’s presence or interfere with it by hacking or jamming its signals.
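
The Sentinel’s actual protocol is classified, so the following sketch only guesses at the general pattern just described: sporadic uplink messages that select among pre-configured behaviors while the aircraft carries on autonomously in between. Every identifier in it is invented.

```python
# Invented sketch of a sporadic command pattern: rare uplink messages select
# among pre-configured behaviors; the aircraft continues the last selection
# autonomously while the link stays silent.
from typing import Optional

PRECONFIGURED = {
    "ORBIT": "hold position and observe",
    "TRACK": "follow the designated signature",
    "RTB": "return to base",
}

class SporadicController:
    def __init__(self) -> None:
        self.current = "ORBIT"  # default behavior while the link is silent

    def on_tick(self, uplink: Optional[str] = None) -> str:
        """Apply a command only when one actually arrives; otherwise keep
        executing the last selection, letting the link stay mostly silent."""
        if uplink in PRECONFIGURED:
            self.current = uplink
        return PRECONFIGURED[self.current]
```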

Conventional hostilities are not the only place we will likely see autonomous robots. Another area where autonomous robotics could thrive is covert and special operations. Think back to Operation Neptune Spear, the Osama bin Laden raid in May 2011.[33] Imagine how differently the raid might have gone if SEAL Team Six had had miniature robots equipped with facial recognition and other surveillance technology. The positive identification of Osama bin Laden could possibly have happened much earlier, and the killings of the other non-combatants in the house might not have happened. The technology to turn such a hypothetical into reality is already among us.[34] It is not a giant leap to take the technology embedded in devices common consumers already own and employ it in other devices, creating new weapons systems. The next step will simply be to make the systems autonomous.

Now, consider an example that lacks precise control. At some point in the near future, a country other than the United States (e.g., China, Russia, or India) will likely research, model, fabricate, deploy, and market autonomous weapons systems. It is possible the programming will direct the system to target anything that is firing a weapon or marked as unfriendly, and will exclude any coding that considers collateral damage such as harm to non-combatants. Obviously, this raises extreme legal and humanitarian issues, not only for designing such a system, but also for selling it on the international market.

The United States, an actor commonly involved in wars, would face a double-edged-sword decision. It could fold and use a similar weapons system it believes is illegitimate, or it could continue to follow the laws and sacrifice combatants’ lives while its research and development companies design, fabricate, and test technological counters and defenses to the illegal weapons system. Such a circumstance illustrates the implications an autonomous arms race creates in hostilities. At the holistic level, the arms race would center on countering and defending against threats. At the same time, all the countries involved would race to place their mark on international norms and diplomacy.

  3. Law & Ethics Conditions:

Any new weapons system invites legal and ethical analysis.[35] Geneva Conventions Additional Protocol I, article 36 provides:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.[36]

Although the United States is not a party to the Protocol, it can be argued that the United States abides by this article as customary law. For instance, the United States went through extensive legal analysis when considering the legitimacy of weapons systems in the cyber warfare realm.[37]

Determining whether a weapons system is legal is not a new concept. Many common weapons systems underwent the same legal and ethical analysis: poison, the crossbow, the submarine, the landmine, and so on.[38] Like the prior weapons systems in the spotlight, autonomous weapons systems will likely generate similar responses. First is the extreme response that a new weapons system should be banned from the battlefield simply because it is new and falls outside of what is currently allowed. Although this argument is circular, it goes to show that people are scared of new things. Second is the idea that if a weapons system would drastically make war one-sided, and those analyzing it believe it could benefit them at some point in the future, then it is more likely to be adopted into the batch of acceptable weapons.

When a new weapons system comes under scrutiny, it is not always clear whether it will be permitted or prohibited. For instance, when the submarine was being analyzed, there was a strong push to prohibit it.[39] There was considerable diplomatic pressure to ban submarines wholesale.[40] It was not until countries realized a submarine ban was impossible to enforce that they instead created restrictions to govern the use of submarines.[41] In that instance, the legal prohibition deteriorated, leaving instead a handful of legal rules governing the use of the weapon in certain situations. On the flip side, some weapons are fabricated in a new form but are equivalent to a weapon prohibited in the past. A common example is the use of poisonous gas.[42] In these situations, arguably, the weapons are drastically unethical and a violation of human dignity.

A line of questions emerges. Where might autonomous robots fit into this sequence? Are there specific features of the weapon that raise ethical or legal concern? What is the best way to address the concerns as a matter of law and policy? For the most part, these questions are hard to answer without a specific new weapons system to analyze. Autonomous weapons systems are unlike any system we have encountered before, and a universal major concern is taking the human mind out of the lethal weapon’s decision-making loop.

Lacking an autonomous weapon openly available to analyze, one answer to the above questions is simply to wait and see. Technology is an overarching and ever-changing area of research. Different countries are at different stages of research and development; some may be further along than they claim to be. It is far too difficult to determine where technology will take autonomous robots. That being said, we should defer answering such questions until a system exists and is presented to the analyzing committee. Analyzing merely theorized systems would be the equivalent of assigning legality to imaginative science fiction and fantasy movies, answering questions such as, “Is Bumblebee, from the movie Transformers, a legitimate weapons system in the United States’ arsenal of ground equipment, or is Bumblebee a combatant who should be held to the laws of war like any other human combatant?” Clearly, answering this question requires going down so many rabbit holes and considering so many what-ifs that it is not worth our time or energy unless we were actually faced with a real ‘Bumblebee.’

Although the ‘Bumblebee’ example may be extreme, in reality some autonomous innovations are very near at hand. The wait-and-see approach may not be the most efficient for them. In those instances, it might be better for the United States to dive into the what-ifs and prepare answers to questions it knows concerned members of many countries will soon ask.[43]

Another response, regarding the long-term scope of autonomous weapons, is that technology and modernization do not occur in a vacuum. Now is the ideal time for countries that want to shape the law permitting autonomous weapons to act, and to have input on how legal and ethical considerations should guide and control these systems. Soon enough, autonomous systems will enter the battlefield, and the international community needs to be ready. If the laws begin developing now, there may still be time to build some of the restrictions into the programming of the weapons systems before the systems become difficult to change. The international community can use this opportunity to propose and stand behind a method for analyzing these systems that accounts for both strategic and moral interests.[44]

  4. Principles of Distinction & Proportionality:

Two specific principles govern new weapons before they are permitted in war: distinction and proportionality. Geneva Conventions Additional Protocol I, article 48 provides, “[T]he Parties to the conflict shall at all times distinguish between the civilian population and combatants.”[45] Although the various legal documents define the principle of distinction by reference to the act of using a weapon, the principle can in extreme cases be applied to a weapon itself. In such cases, the principle of distinction can require that a weapon be capable of being aimed at lawful targets (i.e., combatants) in a manner that discriminates between lawful targets and unlawful targets (i.e., non-combatants or non-military objects). To be lawful, autonomous robots must be able to distinguish between legal and illegal targets and must be able to direct their attacks accordingly.

The second principle that governs new weapons is the principle of proportionality. Article 51(5)(b) of Protocol I defines a disproportionate attack as “an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated.”[46] The principle of proportionality essentially seeks to limit damage caused by military operations by requiring that the effects of the means and methods of warfare used not be disproportionate to the military advantage underlying the attack. If an autonomous robot meets the principle of distinction test, it must also be able to analyze the situation and balance the anticipated military benefit against the foreseen harm to non-combatants and non-military objects. Although this principle may seem easier to quantify and program into a machine, there is no specific or accepted formula for doing so. One major hurdle in creating a formula capable of weighing the anticipated outcomes is that the comparison is between two distinctly different categories with unlike quantities and values.[47] It is like comparing the number of squares to the number of triangles; they do not add up. Despite that difficulty, the principle of proportionality remains one of the two fundamental requirements an autonomous robot must meet to be considered legitimate under the law of war.

An autonomous robot taking a combatant role must clear both the principle of distinction and the principle of proportionality. When thinking about a true robot combatant and the requirements it must satisfy to act within the law, it is useful to think about what society would require of a human combatant doing the same thing. There are currently specialists in robotic algorithms trying to capture the right algorithm for each of distinction and proportionality.[48] Principle of distinction programming includes using a fixed list of lawful targets and a no-strike list.[49] One example is an algorithm allowing a robot to return fire if a person or an object is specifically firing at it. After the inclusion of the lawful target list and the no-strike list, the algorithms start to dig into the weeds by introducing inductive reasoning patterns that characterize other lawful targets outside the list.[50]
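
A toy sketch of this list-based distinction logic might look like the following; the target categories, list contents, and return-fire rule are illustrative assumptions only.

```python
# A toy sketch of the list-based distinction logic described above; the
# target categories, list contents, and return-fire rule are illustrative
# assumptions, not any real rule set.
LAWFUL_TARGETS = {"tank", "artillery", "combatant"}
NO_STRIKE = {"hospital", "school", "ambulance", "civilian"}

def may_engage(observed_class: str, is_firing_at_robot: bool) -> bool:
    """Permit engagement only for listed lawful targets, never for no-strike
    entries, with a narrow return-fire rule for unlisted objects."""
    if observed_class in NO_STRIKE:
        return False  # absolute bar, even under fire
    if observed_class in LAWFUL_TARGETS:
        return True
    # Unlisted object: engage only if it is actively firing on the robot.
    return is_firing_at_robot
```

The inductive reasoning patterns the text mentions would sit beyond this sketch, classifying objects that appear on neither list.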

The principle of proportionality algorithm is built around the legal test applied to human combatants; it is a relative, objective judgment.[51] The first part of the test is common sense: measure foreseeable military advantage against non-combatant harm.[52] The robot must then compare that outcome to a fixed measure of excessiveness.[53] If the overall outcome is excessive, the robot cannot and will not attack. The engineering behind the proportionality algorithms is essential to whether an autonomous robot meets the ethical and legal conditions.
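
A hedged sketch of that test might look like the following; the scoring scale and the fixed ‘excessive’ threshold are invented placeholders, since, as noted above, no accepted formula exists for comparing these unlike quantities.

```python
# A hedged sketch of the two-step proportionality test described above. The
# scoring scale and the fixed 'excessive' threshold are invented
# placeholders; no accepted formula exists for this comparison.
EXCESSIVE_RATIO = 1.0  # placeholder: harm beyond this multiple is excessive

def proportionality_permits(military_advantage: float,
                            expected_civilian_harm: float) -> bool:
    """Allow an attack only when foreseen harm to non-combatants is not
    excessive relative to the concrete military advantage anticipated."""
    if military_advantage <= 0:
        return False  # no advantage anticipated: no collateral harm allowed
    return expected_civilian_harm / military_advantage <= EXCESSIVE_RATIO
```

The hard engineering problem is not this arithmetic but producing the two input numbers, which is exactly the squares-versus-triangles difficulty noted above.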

  5. Criticisms:

I raise here four main criticisms, focusing on a version of an autonomous robot that has had many years to fix the problems associated with new technology. First is the skepticism that a human cannot possibly program a robot well enough to satisfy the principles of distinction and proportionality.[54] This is not the first time engineering programmers have made promises while looking at potential artificial intelligence machines through an optimistic lens.[55] What could possibly be different about the outcome of these algorithms other than time? Is time the critical component that will provide a solution to programming these two principles into robots?

The question has merit. Time certainly is the key to programming a successful algorithm capable of capturing both the principle of distinction and the principle of proportionality. Technology unfolds over time. Consider, for example, the cell phone. The cell phone has only been around for about 45 years, and we have already seen it transform from a brick into a watch, and soon into a device capable of folding into a pocket-sized wallet.[56] It is just as possible that autonomous robots will never satisfy both principles as it is possible they will. Just because engineering programmers may not achieve what they claim society is on the verge of achieving does not mean we should rule it out as a possibility. If we rule out such possibilities, then we might as well rule out the advancement of technologies within war that can limit non-combatant risk by further refining targeting. Without such hope and community support, it is less likely we will achieve such ingenious constructs. Clear research standards, and clear legitimacy tests for autonomous robots to adhere to, help guide technological development along the path to the law of war’s vigilant goals.

The second criticism focuses on the morality of withdrawing the human from the lethal robot’s decision-making loop. The idea is that a human must be involved in the loop in some way for the killing not to be wrong.[57] A machine does not have feelings. It cannot reason through a conscience that guides moral judgments. No matter how effective or efficient the algorithms pre-programmed into the robot are, the robot will never adequately replace a human conscience.[58]

This argument is much more challenging to respond to because it rests on subjective views of a moral principle. Either you agree with the moral principle or you do not. Some people believe humans are better left outside the decision-making chain because they add emotions and feelings that can disrupt the cause they are fighting for, believing the cause is greater than the lives of individual humans. On the other hand, others believe emotions, compassion, and a conscience are key factors distinguishing killing in hostilities from ordinary criminal murder or war crimes. No matter which side you find more appealing, there is no absolutely right or absolutely wrong answer; it is a matter of personal opinion. Although there is no solid answer, it does beg the question: where should the line distinguishing impermissible autonomy be drawn, given that autonomous robots and other autonomous weapons will likely develop in accumulative steps?

The third criticism forces the question: who is to blame for war crimes committed by autonomous robots, given that no human is involved in the decision-making process?[59] If the robot truly is autonomous and there is no human interaction at the time of firing, who is ultimately culpable? Seeing as most criminal law focuses on deterrence and rehabilitation, a machine will certainly not be affected by either. How far back should culpability reach?[60] The programming engineers? The design engineers? The supplier of the fabrication materials? A slippery slope argument encompasses this criticism.[61]

This criticism is central for people who place their faith in the laws of war as the omnipotent body of law governing all connections to war, both direct and indirect. They want to see individual criminal liability and punishment for moral wrongdoing. However, judicial accountability after illegal actions are committed in war is merely one of a handful of ways to hold people responsible for wrongful acts on the battlefield. A blind allegiance to individual criminal liability under the laws of war as the primary way of holding culpable people responsible risks deterring the progression of robotic systems that could lessen non-combatant injuries in and around armed conflict.

The last criticism I raise in this paper is the idea that withdrawing humans from the dangers of war, and increasing the precision of weapons, can lead political players to turn to war more readily.[62] One of the primary incentives steering political players away from war is the risk to the lives of combatants and non-combatants involved in or around the hostilities.[63] Without that risk, political players may be more apt to rely on military force and wage war.[64]

This moral criticism invites a counter-criticism, for it would trade a relatively certain reduction of risk for combatants and non-combatants for the uneasy speculation that politics will push for more armed conflict. Even if this were an evenly balanced concern, the exact same political fear has already been raised about remotely piloted aircraft.[65]

The four criticisms listed above are important to autonomous robot research and development. Although they are only briefly mentioned in this paper, each concern is far more complex than I have laid out here, and each could have an entire paper dedicated to it. Even taken in full, however, each criticism faces a practical complication in how autonomous robots will likely come into existence, as I tried to illustrate above. Each critique seems to presume an abrupt separation of the human from the robot’s decision-making loop. Logistics and modern technological advancement indicate a far more gradual process, with the human remaining in the decision loop for the majority of the transition (both society’s transition to the concept and the robot’s transition into a fully autonomous role, assuming the technology advances to that point). The gradual withdrawal of the human from the decision-making loop does not inherently make any of these arguments less credible. It does, however, call for a discussion about managing the slow transition into an autonomous robotic war realm.

  6. The Difficulty of Accumulative Progression:

Some may argue the United States is on a rigorous and reckless path of modern, highly technical military superiority, and that the path will likely be short-lived as other actors in war – whether state actors or non-state actors – develop, or seize and reverse engineer, the advanced weaponry.[66] The idea is that if the United States were to cease its research and development in autonomous robotics and weaponry, the battlefield would evolve much more slowly, if at all.

I disagree with this view because the technology being researched and developed for the battlefield is the same technology being researched and developed for common consumer uses, such as remotely piloted aircraft that deliver packages more efficiently or self-driving cars designed around enhancing public safety. These advanced technologies will continue to grow and disseminate around the world outside the battlefield, and they will inevitably be integrated onto battlefields regardless of whether the United States’ military research and development companies put in the hours. There is not much difference between civilian and military use of the platforms, or of the algorithms programmed into the systems attached to them. For instance, a robotics platform can be programmed to perform a life-saving function at the slightest detection of an emergency, but it can also carry another algorithm that enables and instructs the platform to use a weapon in an attacking mode in response to a perceived threat.[67]

It is easy to overlook the technological advancement the United States hones when it is trying to develop solutions to other actors’ war crimes. The United States faces the challenge of researching and developing solutions, typically advanced technologies, to opposition forces and their quickly changing criminal behaviors. The United States generally steers away from committing war crimes itself, and instead uses innovation, research, and design to fabricate new physical solutions either completely within the law of war or in a gray area not yet addressed.

The United States has a strong interest in the legal and ethical approval of autonomous robots directly involved in war. One reason is that its adversaries often field systems that the United States regards as lacking legal and ethical standing in hostilities, and the United States needs a plan of attack for such situations. It is arguable that withholding research and development of these highly technical machines would itself be reckless and unreasonable. It is foreseeable that advanced machines (possibly autonomous machines) will enter the battlefield. If the United States decides to watch instead of act, it is putting the lives of its citizens directly in danger’s path. There is, and will always be, an autonomous technological arms race. It is necessary that the United States continue to develop its strategy and stance within the robotics field.

Many observers think the solution to the autonomous technological arms race is another multilateral treaty.[68] Given the many debates and failures to agree on specific interpretations of such a divisive topic in an environment of accumulative progression, an efficient treaty would likely be a regulating treaty that restricts the uses of tolerable machines. The other option is a prohibitory treaty, sought by actors who believe autonomous robots on a battlefield will never be universally ethically or legally acceptable, so why waste time trying.[69]

There are problems with utter reliance on a multilateral treaty to solve the concerns about autonomous robots. Regulative treaties may be harder to find support for than observers believe. Limiting autonomous robots and other advanced technologies in hostilities may find traction among groups that typically stay outside of hostilities or outside government, because these actors lack ready access to the means and finances that would allow them to contribute directly to autonomous progression. Smaller actors (i.e., not leading or powerful states) that rely on the United States as a counterweight to other powerful states, such as China or Russia, will likely prefer that the United States take a powerful, driven stance in developing and fabricating autonomous robots, because the other powerful states may be less inclined to heed ethical and legal concerns. On the other end of the spectrum, the powerful states capable of researching, developing, and fabricating autonomous robots will likely oppose limitations on the machines so that they can enhance their capabilities and advantages over opposition forces.

It is hard to say that states supporting either a restrictive treaty or a prohibitive treaty will find it easy to agree on the treaty’s terms or scope. One major problem is accumulative progression: the technology, in theory, will advance and drastically change, and the treaty’s terms, conditions, and scope will likely become obsolete and need amendment. Even if a prohibitive treaty is the proposed answer, it does not adequately account for future humanitarian concerns. The advanced technology could morph into a more discriminating and ethically preferable alternative to human-to-human combat, and an end-all-be-all prohibitive net cast over the situation would likely inhibit these ethical benefits. The last major concern worth mentioning is compliance. Just because a law is in effect does not mean all actors will abide by it; there is bound to be an imbalance between those who abide and those who do not. Of course, the compliance concern affects all treaties and should simply be noted.

  7. Regulations, Rules, & Methods:

Many hazards accompany the progression of technology, and the United States has a strong interest in influencing the advancement of autonomous technology in the international sphere. The influence does not necessarily have to come through binding legal documents. Instead, the United States should focus on tackling tough questions about the legal and ethical concerns, and about how states that engage in autonomous technologies should strategize and operate. The United States should consider how its policies will, and should, guide or restrain research and development companies, the initial introduction of the advanced technology onto the battlefield, and the incorporation of the technology into the world weapons market. The United States has the opportunity to shape the direction of legal governance of autonomous robots and other autonomous weapons. It can establish common standards that promote a more cooperative war environment within the international community. Furthermore, it can advance an international community that holds adversarial actors to the accepted standards, raising the political costs of researching, developing, fabricating, selling, and using advanced technologies that fall outside accepted norms.

Rather than immediately relying on treaties to govern or prohibit the use of autonomous robots on the battlefield, or the ways autonomous weapons are researched and developed prior to deployment, the United States should focus on domestic governance that other states can observe and then adapt, or pull directly, for international governance. Taking this route would allow the initial problems to be worked out, and the ethical and legal concerns to be publicly debated from a diversity of viewpoints, creating a myriad of possible solutions. This is a long-term process; however, it is necessary for a more universally accepted, optimal result.

There are two challenges the United States must overcome to do this. First, it must resist the common urge to conceal the development process. Instead, the United States should openly discuss the obstacles it faces in development and openly receive criticism from a variety of observers. Second, the United States must not allow skeptics to persuade it to cast a full prohibition net over the advancement of autonomous technologies, or simply to wait for a treaty to provide restrictions. The United States must continue to internally develop criteria, through various regulations, rules, and other governing methods, that it regards as right for the research and development, as well as the introduction, of autonomous lethal systems onto the battlefield. One key component is focusing rhetorically on clear and precise language illustrating the baseline legal and ethical principles, to pave the path to a more accepted view of autonomous weapons and the policies governing them.

The fundamental principles the United States chooses to focus on should be derived as variants of the current customary legal rules. The two principles I recommend focusing on are the principle of distinction and the principle of proportionality. It is necessary that an autonomous robot be able to distinguish and target lawful combatants and objects. The United States needs to address how good the robot’s capability to distinguish must be, whether there are exceptions, and, if so, under what circumstances exceptions would be allowed. The legal standard has historically depended on both the technology and its intended use. The adaptation and slight modification of the legal standards should contemplate the inevitable technology-driven autonomous capabilities matched against non-autonomous capabilities.

The principle of proportionality mandates that a person, or in our case a weapon, consider the collateral damage and balance that damage against the potential military gain. The proportionality balancing test will eliminate the easy technological shortcut of a tunnel-visioned robot that automatically returns fire. The United States should focus on the standard of care: narrowing the standard, but also applying and testing it in specific situations. This is a hurdle the programming engineers will face. They must develop an algorithm and a functioning programming system capable of balancing collateral damage to non-combatants and non-military objects against military advantage. The very different variables have varying degrees of value, and it will be extremely difficult to capture this concept in a series of algorithms.

The legal and ethical concerns should transform into legal and ethical methods that fully account for the concerns in all aspects of the advanced technology, including the research and development stages. If the United States focused solely on research and development and did not consider the legal and ethical concerns until after an autonomous system had been fabricated, there would be more work in the long run: adapting the systems to the policies, or forming widely accepted policies around already existing advanced technology. This raises the concern of whether adapting the systems is even possible at all. The programming constituting the autonomous artificial intelligence would already be in place, consisting of millions of interplaying algorithms in both the hardware and the software. There could be political backlash over the potential write-off of time already devoted to the artificial intelligence platforms, and the United States’ reputation could face scrutiny. Considering the legal and ethical concerns while the platforms are being developed is crucial to shaping the future international legal ground.

It is in the United States’ best interest to shape the future international law on autonomous artificial intelligence through policies and methods that both govern and assess the platforms, rather than merely guiding its own specific platforms. It is ideal for the United States to include its allies and fellow North Atlantic Treaty Organization members in developing semi-universal tolerances and best practices regarding the accumulative progression of artificial intelligence. The construction of policies reflecting semi-universal acceptance will also be a slow, evolving process. It is best to unfold the policies as the United States faces each major material decision within the research and development stage.

When the United States decides on a policy, it should make the policy, and the thought process behind it, publicly known. Controversial technologies, such as purely autonomous artificial intelligence, will arguably transition into practical, useful military tools only after the public understands the reasoning behind the government’s decision on the specific policy. The thought process can be conveyed simply, through Secretary of Defense statements or other applicable means. The United States has expressed its views on new, controversial weapons systems to the public multiple times in the past; for example, it openly discussed the thought process behind cluster munitions, guided missiles, and landmines.[70] Illustrating the balance between military operational necessity and humanitarian necessity alleviates a great deal of concern among observers and thus allows more observers to accept the new concepts more easily.

Like most other nations, the United States will feel the urge to conceal and disguise all information pertaining to the advancement of technology toward autonomous artificial intelligence. There is a universal fear that if a state were to disclose what is happening behind the closed doors of its research and development companies, other states would gain insight into its capabilities or complicated algorithms. That insight could either close the technological gap between states by inviting reverse engineering, or forewarn observing states of new tactical advantages, giving opposing states additional time to develop counter-defenses. Realistically, statements released to the public will be vaguer and less informative than observers desire. One foreseeable desire is insight into how a machine can satisfy the principle of proportionality and the principle of distinction solely through complicated algorithms. The United States will likely refrain from such disclosure to protect classified information and its future weapons systems’ effect on adversaries.

These contradicting desires create a true ethical concern, and there are two plausible solutions. The first is for the United States to withstand the urge to conceal its advancement of technologically weaponized systems. Its interest in concealment is matched by its interest in developing and directing the international legal framework for all future actors, and the United States should also weigh society’s regard for, and concerns surrounding, the matter. The best move the United States can make during the development and initial testing period is to continuously and devotedly explain its establishment of, methodology for, and adherence to a semi-universal international standard. If the United States forgoes this, it clears the way for other actors to shape the laws, and there is no way to know how those states might shape them: whether they would push for destructive and hazardous laws or simply unrealistic ones. The United States needs to act on the opportunity while it is available.

The second solution is for the United States to openly discuss its step-by-step research and development process. This includes revealing the ways in which it brainstorms, researches, develops, and evaluates a newly constructed autonomous artificial intelligence weapon. International law requires legal analysis of all new weapons systems.[71] At some point along the weapon’s development and integration timeline, the United States would disclose this information to a committee. Notwithstanding restrictions on disclosing the internal programming and algorithms of the autonomous artificial intelligence due to classification concerns, the United States ought to reveal relevant information about its vetting methods, from the initial development stage through actual engagement in hostilities.

Unfortunately, the United States will not likely discuss the outcome of each evaluation, because of the lurking fear that adversaries may gain an advantage over it. The United States ought, however, to discuss this information with its allies and other trusted states. Being open about this information could help universalize and create normative standards. Looking ahead, the guidelines and principles the United States abides by while researching and developing autonomous artificial intelligence may transform into the fundamental international control guidelines. If the United States aids its allies by supplying this information, then the allies, and other states that fabricate their own autonomous artificial intelligence in the future, will rely on the United States’ original guidelines, making a more unified artificial intelligence framework likely.[72]

  8. Conclusion:

After reading this paper, an observer may raise a more fundamental concern beyond all the concerns already mentioned: that the United States should not proactively restrict itself to specific conditions when it does not have to, because there is ultimately uncertainty about how, when, and whether technology will improve in this manner, and how society will respond to the changes. These observers may argue the United States should hold off and commit itself only when absolutely necessary.

This criticism does not fully acknowledge that autonomous artificial intelligence platforms, and the platforms leading up to fully autonomous weaponry, are already key projects in the research and development companies of the United States and other countries, and already hold society’s interest. The placement of these systems on the battlefield may be a long-term consideration, but the beginning stages of the new technologically autonomous world are occurring now. Creating the legal and ethical principles is a time-consuming process. The United States needs to be fully devoted to the development of both the fabrication and the principles, or other states may take this privilege from the United States without thinking twice.

Everything I have discussed in this paper relies on the historical, yet current, law of war doctrines. I have applied concepts derived from these documents to govern what appears to be an innovative, technology-driven moral issue. I chose to rely on the current law of war principles because the difficulty of governing extreme advancement in lethal weapons systems is not new. It is arguable that the accumulative progression of automated artificial intelligence is an issue solely for the laws of war. Instead, I believe that if the United States integrates both ethical and legal standards into artificial intelligence research, development, and fabrication, then it is possible the accumulative progression from automation to fully autonomous artificial intelligence will comply with the law of armed conflict.


[1] https://www.engadget.com/2010/07/13/south-korea-enlists-armed-sentry-robots-to-patrol-dmz/ ; and https://www.dailymail.co.uk/sciencetech/article-2756847/Who-goes-Samsung-reveals-robot-sentry-set-eye-North-Korea.html . In 2010, South Korea developed and strategically placed a team of sentry robots along the perimeter of the demilitarized zone (DMZ). The robot is capable of detecting and shooting at active threats two miles away with little human interaction.

[2] http://www.slate.com/articles/technology/technology/2012/03/quadrotor_drones_are_amazing_and_cute_and_they_will_probably_destroy_us_all_.html.

[3] See generally Peter Singer, Wired for War: The Robotics Revolution and Conflict in the 21st Century.

[4] See generally http://www.fas.org/irp/program/collect/uas_2009.pdf

[5] http://stlr.org/download/volumes/volume12/marchant.pdf

[6] The legal terrain referred to includes, but is not limited to, the outline of international law and the international outlook on appropriate autonomous robotic conduct.

[7] http://media.hoover.org/sites/default/files/documents/EmergingThreats_Harris.pdf

[8] https://science.howstuffworks.com/predator.htm

[9] https://www.wired.com/2012/01/drone-report/

[10] ADD FOOTNOTE

[11] ADD FOOTNOTE

[12] See _____, supra note 8.

[13] See Singer, supra note 3 at 36; also see https://www.wired.com/2010/01/us-drone-goes-down-over-pakistan-again/

[14] See _____, supra note 8.

[16] AF Flight Plan, p. 16 & p. 41

[18] See Id.

[19] https://www.mda.mil/system/aegis_bmd.html

[20] See Singer, supra note 3 at 124.

[21] Id.

[22] Id.

[23] Id.

[24] One large concern surrounding the use of remotely piloted aircraft to target and kill people in other countries is that the United States is using technology to shift the risk of losing U.S. combatants’ lives to the risk of killing civilians outside of the United States.

[25] http://datagenetics.com/blog/august22014/index.html

[26] Id.

[27] Id.

[28] Id. Some examples of information the self-guided missile can be programmed with include, but are not limited to, coordinates, radar measurements, velocity, and infrared imagery.

[29] Id.

[30] See generally https://link.springer.com/article/10.1007/s10676-013-9335-0

[31] https://www.rollingstone.com/politics/politics-news/the-rise-of-the-killer-drones-how-america-goes-to-war-in-secret-231297/

[33] Thomas P. Athridge, American Presidents at War, p. 248, https://books.google.com/books?id=x6g8DwAAQBAJ&pg=PA248&lpg=PA248&dq=Osama+bin+Laden-compound+raid&source=bl&ots=DM1rnfwPT1&sig=c3lNQugMWhLP3v2uaEW3wSWIEx8&hl=en&sa=X&ved=2ahUKEwigtLCbpv_dAhUS-6wKHTPPAk0Q6AEwGHoECAEQAQ#v=onepage&q=Osama%20bin%20Laden-compound%20raid&f=false

[34] https://www.macworld.com/article/3225406/iphone-ipad/face-id-iphone-x-faq.html . Facial recognition technology is now common in a typical Apple iPhone X or newer.

[35] Protocol I, art. 36 (1977).

[36] Id.

[37] See generally https://nsarchive2.gwu.edu/NSAEBB/NSAEBB424/docs/Cyber-053.pdf

[38] ADD FOOTNOTE

[39] See generally Natalino Ronzitti, The Law of Naval Warfare: A Collection of Agreements and Documents With Commentaries, https://books.google.com/books/about/The_Law_of_Naval_Warfare.html?id=efrj_WFbpO4C

[40] Id.

[41] Id.

[42] See generally Joost Hiltermann, A Poisonous Affair: America, Iraq, and the Gassing of Halabja, https://books.google.com/books?id=5JGldonhk6wC&printsec=frontcover&dq=Joost+Hiltermann,+A+Poisonous+Affair:+America,+Iraq,+and+the+Gassing+of+Halabja&hl=en&sa=X&ved=0ahUKEwjWwOKpvP_dAhUEXKwKHX4dCMAQ6AEIKTAA#v=onepage&q=Joost%20Hiltermann%2C%20A%20Poisonous%20Affair%3A%20America%2C%20Iraq%2C%20and%20the%20Gassing%20of%20Halabja&f=false

[43] See generally Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, https://books.google.com/books?id=ZS0HDAAAQBAJ&pg=PR4&dq=Killer+Robots:+Legality+and+Ethicality+of+Autonomous+Weapons&hl=en&sa=X&ved=0ahUKEwjJ57Hcwf_dAhUQPa0KHfIEDlIQ6AEIKTAA#v=onepage&q=Killer%20Robots%3A%20Legality%20and%20Ethicality%20of%20Autonomous%20Weapons&f=false

[44] https://www.gov.uk/government/publications/unmanned-aircraft-systems-jdp-0-302

[45] Protocol I, art. 48; see also The Obama Administration and International Law, speech by the Legal Adviser of the U.S. Department of State, given at the annual meeting of the American Society of International Law, Washington, D.C., 25 March 2010. The Obama administration defined the United States’ understanding of the principle of distinction as “require[ing] that attacks be limited to military objectives and that civilians or civilian objects shall not be the object of the attack . . .”

[46] Protocol I, art. 51(5)(b); Customary International Law Rule 14. Customary international law defines the principle of proportionality as, “Launching an attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated, is prohibited.”

[47] International Criminal Tribunal for the former Yugoslavia (ICTY), Final Report to the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against the Federal Republic of Yugoslavia (2000), at paragraph 48. The court states, “One cannot easily assess the value of innocent human life as opposed to capturing a particular military objective.”

[48] https://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf ; see also Ronald C. Arkin, Governing Lethal Behavior in Autonomous Robots, https://www.crcpress.com/Governing-Lethal-Behavior-in-Autonomous-Robots/Arkin/p/book/9781420085945

[49] See generally Id.

[50] See generally Id.

[51] See generally Id.

[52] See generally Id.

[53] See generally Id.

[54] See Economist, supra note 15.

[57] http://www.article36.org/statements/ban-autonomous-armed-robots/

[58] https://philosophynow.org/issues/71/Moral_Machines_Teaching_Robots_Right_from_Wrong_by_Wendell_Wallach_and_Colin_Allen

[59] http://articles.latimes.com/2012/jan/26/business/la-fi-auto-drone-20120126

[61] See Sparrow, supra note ___, at 178-79.

[62] See Economist, supra note 15.

[63] Id.

[64] Singer, supra note ___, at 431-33; also http://www.cybersophe.org/writing/Asaro%20Just%20Robot%20War.pdf .

[66] See Billy Perrigo, A Global Arms Race for Killer Robots Is Transforming the Battlefield, http://time.com/5230567/killer-robots/

[67] See Sparrow, supra note ___ at 28.

[68] See generally https://www.icrac.net/about-icrac/ . The International Committee for Robots Arms Control advocates for a multilateral treaty as a solution. Their mission statement illustrates this solution more specifically as:

Given the rapid pace of development of military robotics and the pressing dangers that these pose to peace and international security and to civilians in war, we call upon the international community to urgently commence a discussion about an arms control regime to reduce the threat posed by these systems.
We propose that this discussion should consider the following:
* Their potential to lower the threshold of armed conflict;
* The prohibition of the development, deployment and use of armed autonomous unmanned systems; machines should not be allowed to make the decision to kill people;
* Limitations on the range and weapons carried by “man in the loop” unmanned systems and on their deployment in postures threatening to other states;
* A ban on arming unmanned systems with nuclear weapons;
* The prohibition of the development, deployment and use of robot space weapons.

[69] http://www.article36.org/statements/ban-autonomous-armed-robots/

[71] ADD FOOTNOTE
