[Kara Slade] Unauthorized disclosures: Unmanned aerial vehicles, aerospace systems design, and the problem of “engineering ethics,” Part 1
I wrote this paper last spring and proposed it for the annual meeting of the Society of Christian Ethics. It wasn’t accepted, and I’m not completely sure why, but the only comment I received from the reviewers was “What about the voices of the victims?” One of the points I was trying to make in the paper was that those voices are faintly heard, if at all, by those who design and build UAVs. There are many aspects of the problem of drone use that I don’t address here – I’m writing as an engineer who cares deeply about engineering education and the moral formation that happens (or doesn’t happen) in that environment. I’ll be posting an edited version here as a series, in the hope that someone may find something in it useful as Christian ethics tries to speak to the increasing reliance on this problematic technology.
Let us pretend we’ve got it together,
Let us ignore the coming sun
We’ll sing the body electric
Until machine and soul are one.
– The Lonely Forest, “We Sing in Time”
A brief story on a new U.S. military action in Pakistan appeared in the April 22, 2011 edition of the New York Times, but its substance was predictably similar to many previous reports from the region. It reported that “an American drone attack killed 23 people in North Waziristan on Friday . . . in a strike against militants that appeared to signify unyielding pressure by the United States on Pakistan’s military.”[1] While the targets of this action were, as always, “militants,” it also killed “five women and four children,” according to a Pakistani official.[2] The use of unmanned aerial vehicles (UAVs) has been a prominent but controversial feature of American involvement in Afghanistan and Pakistan since 2001, with the controversy stemming primarily from the number of civilian deaths associated with them. Despite ongoing ethical and legal questions, military and political leaders have repeatedly expressed the need to expand the role of unmanned aircraft and other robots in combat, with an emphasis on the development of autonomous decision-making capabilities.
This class of weapons is but one example of a broader problem in the story of modern military technology: the design of ethically dubious objects by professional engineers who are unwilling or unable to object to participation in such projects based on their own moral convictions. While traditional approaches to “engineering ethics” have focused on the decision to object, or on the development of proper moral convictions (and methodologies of moral reasoning), it is my contention that the true problem lies elsewhere – in the narrative of progress underlying the profession, in the habituation into certain patterns of analytical thought that occurs in engineering education, and in the professional and personal isolation, common to the profession, that makes an act of true moral courage difficult.
I. FOREVER NEW FRONTIERS: AUTONOMOUS UNMANNED COMBAT AERIAL VEHICLES AND THE RHETORIC OF PROGRESS
In a 2005 article in Air and Space Power Journal, Maj. James Hoffman and Charles Tustin Kamps described the increasing importance of unmanned aircraft for combat roles in terms of a future in which “technological factors will no longer restrain the development of unmanned aircraft,” with the only “impediment” to “progress” being resistance to “cultural change” on the part of some pilots. The authors call for “a new generation of leaders” to ensure that further UAV “development” continues.[3] From the perspective of United States doctrine and policy, the desirability of that development is assumed, while official reflection on the moral implications of the technology and its use is, for the most part, strangely absent. In both the 2007 and 2009 editions of the 200-page Unmanned Systems Integrated Roadmap, the words “ethics,” “moral,” and “social” do not appear at all. The “public” is referred to only insofar as the government perceives a need to “increase . . . positive public attitude” to “foster greater trust in unmanned systems.”[4] Any expression of negativity, whether on the part of military professionals or members of the public, is attributed to ignorance or cultural resistance that must be overcome for the sake of progress and the desires of the state.
There are several directions in which UAV technology is being pushed simultaneously, based on the government’s desire for more, faster, and smarter. The General Atomics MQ-9 “Reaper,” shown in Figure 1 below, is the current state-of-the-art operational UAV in the United States inventory. It is capable of carrying fourteen AGM-114 “Hellfire” missiles or, alternatively, four missiles and two 500-pound “Paveway II” laser-guided bombs. It represents a marked increase in payload capacity over its predecessor, the MQ-1 “Predator,” which ordinarily carries two missiles. These two aircraft represent a longstanding facet of military technology development: the desire for more. In this case, the contractor anticipated the government’s desire and financed the design of the MQ-9 from its own funds. It was rewarded with an order for over 300 vehicles.
Figure 1. MQ-9 Reaper firing missile
In addition to more, new aerospace technologies may be developed in response to a desire for faster or farther, particularly in terms of propulsion. The propeller-driven MQ-9, with a cruise speed of 200 mph, is not a rapid-response vehicle, and its range is limited. As a long-range goal, the Department of Defense has proposed delivering a large payload from within the continental United States to anywhere in the world in “less than two hours.”[5] The U.S. research effort in hypersonic weapons delivery has been plagued with high-profile mishaps: one test vehicle exploded in 2001, and another was lost in 2010. Research in this direction continues under the Defense Advanced Research Projects Agency’s FALCON program, although its current scope is difficult to discern.
Figure 2. Artist’s conception of X-43A hypersonic vehicle
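To put the two-hour goal in perspective, a rough calculation of my own (not drawn from the Roadmap or from DARPA documents) shows the kind of speed it implies; the numbers below are approximations only.

```python
# Back-of-envelope illustration only: what average speed does "anywhere in the
# world in less than two hours" imply? All figures are rough approximations.
antipodal_distance_km = 20_000        # roughly half of Earth's circumference
flight_time_hr = 2.0
required_speed_kmh = antipodal_distance_km / flight_time_hr    # 10,000 km/h
speed_of_sound_kmh = 1_060            # approximate value at high altitude
required_mach = required_speed_kmh / speed_of_sound_kmh        # roughly Mach 9

print(f"Required average speed: {required_speed_kmh:,.0f} km/h, about Mach {required_mach:.0f}")
```

No propeller-driven or conventional jet aircraft operates anywhere near that regime, which is why the two-hour goal pushes research toward hypersonic propulsion and boost-glide vehicles, and why it has proven so difficult to achieve.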
The third leg of UAV development, smarter, is the aspect of the technology that has generated the most vigorous debate, largely due to the novelty of the issues it presents. The National Defense Authorization Act of 2000 introduced a Congressional mandate for the development and use of unmanned deep-strike aircraft and ground vehicles, creating “increasing pressure to develop and deploy robotics, including autonomous vehicles.”[7] Along with the “compelling military utility” of the use of robots for “dull, dirty, and dangerous” tasks, some researchers have identified the repeated commission of atrocities by soldiers in Iraq and Afghanistan as a reason to pursue the alternative of robotic weapons.[8] Ronald Arkin, a computer scientist at the Georgia Institute of Technology, argues that “an unmanned system can perform more ethically than human soldiers,” although he admits that it would not be possible to create a “perfectly ethical” system.[9] He advocates the implementation of a deontic logic based on the adaptation of the categorical imperative to “a set of more direct and relevant assertions regarding acceptable actions towards noncombatants and their underlying rights.”[10] Tellingly, he selects this strategy based not on its suitability as an ethical system but on “computational tractability.”[11] In the development of a proposed architecture, Arkin admits that he has made “strong (and limiting) simplifying assumptions” about the actual functioning of the system.[12]
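To make concrete what a “computationally tractable” set of assertions might look like, the sketch below is my own illustration of a rule-based constraint check, not Arkin’s published architecture; every field, rule, and threshold in it is hypothetical.

```python
# A deliberately crude sketch of a rule-based ("deontic") constraint check on a
# proposed engagement. This is my own illustration, not Arkin's architecture;
# the fields and thresholds are hypothetical, and each input conceals the
# genuinely hard problems of perception and estimation.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    positively_identified_combatant: bool   # output of some upstream classifier
    near_protected_site: bool               # e.g., hospital, school, place of worship
    estimated_noncombatant_casualties: int  # predicted collateral harm

def engagement_permitted(t: TargetAssessment, casualty_threshold: int = 0) -> bool:
    """Permit engagement only if every hard ("forbidden action") assertion is satisfied."""
    if not t.positively_identified_combatant:
        return False    # never engage without positive identification
    if t.near_protected_site:
        return False    # never engage near a protected site
    if t.estimated_noncombatant_casualties > casualty_threshold:
        return False    # never exceed the casualty threshold
    return True
```

The tractability comes entirely from the simplification: each boolean and integer input presupposes reliable classification and casualty estimation, which is precisely where the “strong (and limiting) simplifying assumptions” do their work.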
Another analysis argued that this quasi-Kantian system would require “an impossible computational load due to the requirements for knowledge . . . and the difficulty of estimating the sufficiency of initial information.”[13] The authors recommend a virtue ethics approach that would enable robots to “embody the right tendencies in their reactions to the world and other agents in the world.”[14] Leaving aside the question of whether or not a machine can embody anything at all, the admission that “morally intelligent behaviour may require much more than being rational” would seem to foreclose any chance that a computing device could be programmed for moral intelligence.[15] Nevertheless, the authors recommend continued work towards that end, “before irrational public fears or accidents arising from military robotics derail research progress and national security interests.”[16] In doing so, they ask that engineers conduct “extensive pre-deployment testing” and “think carefully about how the subsystem they are working on could interact with other subsystems . . . in potentially harmful ways,” while simultaneously ensuring that they can confidently certify safety.[17] I would argue that those requirements cannot be met by any engineer, if for no other reason than the formal limits on the predictability of complex systems. The existence of this very significant problem, along with several other issues, does not, however, seem to dissuade researchers from continuing.
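One simple way to see the scale of the problem with “extensive pre-deployment testing” of subsystem interactions, again my own illustration rather than anything drawn from the report: even before timing, environment, and adversary behavior are considered, the space of possible interactions grows explosively with the number of subsystems.

```python
# Illustration only: counting the subsystem combinations that could, in
# principle, interact in a complex vehicle. The subsystem count is hypothetical.
from math import comb

n_subsystems = 30                   # hypothetical count for a complex vehicle
pairwise = comb(n_subsystems, 2)    # 435 pairwise interactions
all_subsets = 2 ** n_subsystems     # over a billion possible interacting subsets

print(f"{pairwise} pairwise interactions; {all_subsets:,} possible subsets")
```

Exhaustive testing of even this idealized space is not feasible, and the real difficulty of certifying a system that must make lethal decisions in an open-ended environment is far worse.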
Next time: Part 2, “When did you know?”: The design process and the mainstream of ‘engineering ethics’
[1] Jane Perlez and Ismail Khan, “Deadly Drone Strike by U.S. May Fuel Anger in Pakistan,” The New York Times, April 22, 2011, http://www.nytimes.com/2011/04/23/world/asia/23pakistan.html.
[2] Ibid.
[3] James C. Hoffman and Charles Tustin Kamps, “At the Crossroads: Future ‘Manning’ for Unmanned Aerial Vehicles,” Air and Space Power Journal, Spring 2005, http://www.airpower.au.af.mil/airchronicles/apj/apj05/spr05/hoffman.html.
[4] Department of Defense, FY 2009-2034 Unmanned Systems Integrated Roadmap, http://www.aviationweek.com/media/pdf/UnmannedHorizons/UMSIntegratedRoadmap2009.pdf.
[5] “Andrews Space Wins Two DARPA FALCON Contracts,” Space Daily, December 3, 2003, http://www.spacedaily.com/news/falcon-03b.html.
[6] (Deleted)
[7] Patrick Lin, George Bekey, and Keith Abney, Autonomous Military Robots: Risk, Ethics, and Design, California Polytechnic State University San Luis Obispo, Ethics + Emerging Sciences Group, Report for Office of Naval Research, December 20, 2008, 6.
[8] Ibid., 7.
[9] R. C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture – Part I: Motivation and Philosophy”, Proc. Human-Robot Interaction 2008, Amsterdam, NL, March 2008, 124.
[10] R. C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture – Part III: Representational and Architectural Considerations,” Proceedings of Technology in Wartime Conference, Palo Alto, CA, January 2008, 4.
[11] Ibid.
[12] Ibid., 9.
[13] Lin et al., 34.
[14] Ibid., 38.
[15] Ibid., 37.
[16] Ibid., 91.
[17] Ibid., 69.