Artificial intelligence in weapons of war rarely makes it into the news.  Yet it requires our careful attention because it is already part of international and national lawmaking.  The United Nations (UN) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), convened under the Convention on Certain Conventional Weapons (CCW), has been meeting regularly on this concern since at least 2014.1  The US Department of Defense (DoD) has issued Directive 3000.09, entitled “Autonomy in Weapon Systems,” to govern such systems in the military.2  Yet when artificial intelligence in weapons is covered in news articles, its treatment belies its seriousness.  A recent article referenced “Killer AI Robots” and ran a photo of a humanoid robot from an early sci-fi thriller.3  Using terms like “killer robots” or “slaughterbots”4 is intended to catch our attention.  By relegating these weapons to the realm of fictional horror shows, however, such terms and images minimize the sophistication of their deadly capabilities and obscure the critical crossroads at which we stand.

Human accountability is one of the critical issues raised by autonomous weapons.  Rather than picturing outdated science-fiction humanoid robots, we should associate these weapons with weaponized drones, Moon-Rover-type vehicles, and the Navy’s new drone ship.5  These are more realistic present-day portrayals, and they capture the actual lethality inherent in these machines.  Weaponized drones already have some level of autonomy in their functions, and the complexity of that autonomy is rapidly advancing.

Why Did Google Want to Keep Its DoD Maven Contract Secret?

One recent news item regarding autonomous weapons was the petition signed by thousands of Google employees, and the walkout of some of them, that eventually led Google to announce it would not seek to renew its Maven contract.6  Google had worked hard just to obtain the security certification necessary to be eligible for that contract7 and had competed against companies like IBM and Amazon for it.8

The goal of Project Maven is to automate, using artificial intelligence, some of the more mundane parts of analyzing surveillance footage.9  Automating this function is a key stepping stone toward the ability to fully automate target acquisition.  “Targets” are often humans and, at times, any “military-age” male in a strike zone has been deemed by the United States to be a legitimate target.10  As Quaker House counselors to the GI Rights Hotline have learned in working with clients, intensive analysis of surveillance material to learn everything about a human target creates a connection to that person and to that person’s family and loved ones.11  Now that the military knows this and understands that it can lead to moral injury for its analysts and drone operators, it is not surprising that the objective of this early-stage artificial intelligence project is to remove as much of that human connection as possible.

Part of the Maven contract specified that the Department of Defense would not identify Google as working on the project without its permission.12  I would hope that Google wanted to keep it secret because the American public would not approve of autonomous weapons.  But I do not think that is the case.  Americans have accepted weaponized drones because they terrify and kill in lands far away.  Instead, I think Google knew that people would react with outrage and with fear of losing more of their sense of privacy13 in their own neighborhoods and near their own homes.

Understanding the implications of the work assigned to them, Google employees took a stand by petitioning their employer to abandon the project, and several of them resigned from their jobs.  Google has since released guiding principles14 for its future work on artificial intelligence (it will finish its Maven contract).15  These principles include the assertion that “technologies will be subject to appropriate human direction and control,” and that Google will not pursue contracts for “weapons or other technologies . . . to cause or directly facilitate injury to people” or that “contravene widely accepted principles of international law and human rights.”16  Maven is only developing automation of the analysis of surveillance footage, though.  Apparently, even that comes too close to the line for Google’s comfort: the company has said it would likely not have taken on this contract under these principles.17  That should tell us something about the direction and capabilities of the extended project.

Humanity

Quaker House advocates for more diligence in building peace and in ending war.  At the same time, we must work to ensure that the conduct of war remains restricted within some boundaries, by law, by international norms and treaties, and by some code of conduct.  Abandoning attention to that line risks catastrophic steps backwards.

The United States is resisting regulation, and even definition, of these weapons.  In its statement to the UN Group of Governmental Experts, it asserts, “Of course, humans cannot maintain complete control over weapons at all times.  Once a bullet leaves a gun, the rifleman ceases to have control over that bullet.”18  It is disappointing and distressing to see the US attempt this type of analogy to buttress its position.  The operation of the laws of physics is fundamentally different from the programming and initiation of software that continuously refines its own output.

The focus of much of the discussion by governments, experts, the concerned public, nonprofit organizations, and the United Nations is the accountability concern.  The more autonomy is shifted from people to weapons, the more the accountability of nations for the resulting harm is degraded and the more tenuous that link becomes.  It is telling that one of the “Possible Guiding Principles” the UN working group published from its August 2018 session was that “lethal autonomous weapons systems should not be anthropomorphized.”19  Anthropomorphizing weapons would be a first mental step toward transferring accountability from humans to machines,20 falsely imbuing them, in our perception, with life and, subsequently, with judgment-making capacity.

Quaker House sees an additional reason to insist on a chain of personal accountability with these weapons.  There is that of God (or Light, conscience, heart, universal love) in everyone, but not in algorithms or machines.  That Light is a double-edged sword.  It can lead to moral injury and overwhelming depression when it confronts war.  But it is only if the horrors of war are confronted by that Light within each of us that we can ever hope to bring an end to war.21

~Kindra, Quaker House Executive Director

This post originally appeared as an article in our Autumn 2018 newsletter. Contact us or fill out the form on this website (home page, at the bottom) if you would like to be added to our mailing list.

_______________

  1. United Nations Office at Geneva (UNOG), “2014 Meeting of Experts on LAWS.”
  2. U.S. Department of Defense, “Department of Defense Directive Number 3000.09: Autonomy in Weapon Systems,” November 21, 2012, incorporating Change 1, May 8, 2017.
  3. ZDNet, “Killer AI Robots must be outlawed, says UN Chief,” Steve Ranger, November 6, 2018, 10:27 GMT. Interestingly, UN Secretary-General António Guterres did not use the term “killer robots” in his quoted statement.
  4. IEEE Spectrum, “Why You Shouldn’t Fear ‘Slaughterbots,’” Paul Scharre, December 22, 2017, 14:45 GMT.
  5. Stars and Stripes, “Navy’s revolutionary Sea Hunter drone ship being tested out of Pearl Harbor,” Wyatt Olson, November 7, 2018.
  6. Gizmodo, “Google Plans Not to Renew Its Contract for Project Maven, a Controversial Pentagon Drone AI Imaging Program,” Kate Conger, June 1, 2018, 2:38 pm.
  7. Ibid.
  8. The Verge, “Google pledges not to develop AI weapons, but says it will still work with the military,” Nick Statt and James Vincent, June 7, 2018, 3:31 pm EDT.
  9. Ibid.
  10. The New York Times, “Secret ‘Kill List’ Tests Obama’s Principles and Will,” Jo Becker and Scott Shane, May 29, 2012.
  11. The Quaker House Blog, “Spiritual Re-Awakenings,” Steve Woolford and Lenore Yarger, December 9, 2017.
  12. Gizmodo, “Google Plans Not to Renew Its Contract,” Kate Conger.
  13. Ibid.
  14. Google AI, “Artificial Intelligence at Google–Our Principles,” accessed November 12, 2018.
  15. The Verge, “Google pledges not to develop AI weapons,” Nick Statt and James Vincent.
  16. Google AI, “Artificial Intelligence at Google–Our Principles,” accessed November 12, 2018.
  17. The Verge, “Google pledges not to develop AI weapons,” Nick Statt and James Vincent.
  18. U.S. Mission to International Organizations in Geneva, “Meeting of the Group of Governmental Experts to the CCW on Lethal Autonomous Weapons Systems (LAWS): Statement by the United States Delegation as Delivered by Shawn Steene, Senior Force Planner, U.S. Department of Defense,” Geneva, August 27, 2018.
  19. United Nations Office at Geneva (UNOG), “Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems,” Section IIIA, “Possible Guiding Principles,” 26(h), October 23, 2018.
  20. Ibid., at 26(b).
  21. Some readers may recall the perceptive treatment of this concept in the original Star Trek episode “A Taste of Armageddon,” S01E23, originally aired February 23, 1967. This episode was brought to my attention by Roderick Lewis upon his learning of the subject matter of this article.