
Ethics and Information Technology (2020) 22:307–320

DOI 10.1007/s10676-017-9428-2

ORIGINAL PAPER

Mind the gap: responsible robotics and the problem of responsibility

David J. Gunkel¹

Published online: 19 July 2017


© Springer Science+Business Media B.V. 2017

Abstract  The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. (1) It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (2) It then considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. (3) The essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in the face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.

Keywords  Robot · Robotics · Ethics · Machine ethics · Technology · Responsibility · Philosophy

* David J. Gunkel
[email protected]
http://gunkelweb.com

¹ Northern Illinois University, DeKalb, IL 60115, USA

Introduction

However it comes to be defined and characterized, "responsible robotics" is about responsibility of and for emerging technology. But "the concept of responsibility," as Ricoeur (2007, p. 11) pointed out in his eponymously titled essay, is anything but clear and well-defined. Although the classical juridical usage of the term, which dates back to the nineteenth century, seems rather well-established—with "responsibility" characterized in terms of both civil and penal obligations (either the obligation to compensate for harms or the obligation to submit to punishment)—the philosophical concept is confused and somewhat vague.

    In the first place, we are surprised that a term with such a firm sense on the juridical plane should be of such recent origin and not really well established within the philosophical tradition. Next, the current proliferation and dispersion of uses of this term is puzzling, especially because they go well beyond the limits established for its juridical use. The adjective 'responsible' can complement a wide variety of things: you are responsible for the consequences of your acts, but also responsible for others' actions to the extent that they were done under your charge or care…In these diffuse uses the reference to obligation has not disappeared, it has become the obligation to fulfill certain duties, to assume certain burdens, to carry out certain commitments (Ricoeur 2007, pp. 11–12).

Ricoeur (2007, p. 12) traces this sense of the word through its etymology (hence the subtitle to the essay "A Semantic Analysis") to "the polysemia of the verb 'to respond'," which denotes "to answer for…" or "to respond to… (a question, an appeal, an injunction, etc.)." It is in this sense of the word that the question concerning responsibility has come to be associated with robotics. One of the principal issues for responsible robotics, if not the principal issue, is to decide who or what can be or should be responsible for the consequences of decisions and actions instituted by robots or robotic systems? Who or what, in other words, can or should assume the obligations—the burden or duty—of answering for what a robot does or does not do?


The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly autonomous, interactive, and sociable mechanisms. In order to get at this, the following will proceed through three steps or movements. (1) I begin by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (2) I then consider three instances where recent innovations challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. (3) I conclude by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in the face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics. The analysis is designed to be critical and not normative. The goal of the effort, therefore, is not to condemn instrumentalism per se but (1) to diagnose the challenges the instrumentalist way of thinking is now under due to recent innovations in information technology and (2) to evaluate the range of possible responses that can be made in the face of these challenges.¹

¹ This effort is informed by and consistent with the overall purpose and aim of philosophy, strictly speaking. Philosophers as different (and, at times, even antagonistic, especially to each other) as Heidegger (1962), Dennett (1996), Moore (2005), and Žižek (2006), have all, at one time or another, described philosophy as a critical endeavor that is more interested in developing questions than in providing definitive answers. "There are," as Žižek (2006, p. 137) describes it, "not only true or false solutions, there are also false questions. The task of philosophy is not to provide answers or solutions, but to submit to critical analysis the questions themselves, to make us see how the very way we perceive a problem is an obstacle to its solution." This is the task and objective of the essay—to identify the range of questions regarding responsibility that can and should be asked in the face of recent technological innovation. If, in the end, readers emerge from the experience with more questions—"more" not only in quantity but also (and more importantly) in terms of the quality of inquiry—then it will have been successful and achieved its end.

Default settings

When it comes to the question of responsibility regarding technology, the matter seems rather clear and indisputable. "Morality," as Hall (2001, p. 2) points out, "rests on human shoulders, and if machines changed the ease with which things were done, they did not change responsibility for doing them. People have always been the only 'moral agents.'" This seemingly intuitive and common sense response is persuasive precisely because it is structured and informed by the answer that is typically provided for the question concerning technology. "We ask the question concerning technology," Heidegger (1977, pp. 4–5) writes, "when we ask what it is. Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity." According to Heidegger's analysis, the presumed role and function of any kind of technology—whether it be a simple hand tool, jet airliner, or a sophisticated robot—is that it is a means employed by human users for specific ends. Heidegger calls this particular characterization of technology "the instrumental definition" and indicates that it forms what is considered to be the "correct" understanding of any kind of technological contrivance.

As Feenberg (1991, p. 5) summarizes it, "The instrumentalist theory offers the most widely accepted view of technology. It is based on the common sense idea that technologies are 'tools' standing ready to serve the purposes of users." And because a tool or instrument "is deemed 'neutral,' without valuative content of its own," a technological artifact is evaluated not in and of itself, but on the basis of the particular employments that have been decided by its human designer or user. Consequently, technology is only a means to an end; it is not and does not have an end in its own right. "Technical devices," as Lyotard (1993, p. 33) writes, "originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context. They follow a principle, and it is the principle of optimal performance: maximizing output (the information or modification obtained) and minimizing input (the energy expended in the process). Technology is therefore a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical 'move' is 'good' when it does better and/or expends less energy than another." According to Lyotard's analysis, a technological device, whether it be a cork screw, a clock, or a digital computer, is a mere instrument of human action. It therefore does not in and of itself participate in the big questions of truth, justice, or beauty. It is simply and indisputably about efficiency. A particular technological innovation is considered "good," if, and only if, it proves to be a more effective instrument (or means) to accomplishing a humanly defined end.


This formulation not only sounds level-headed and reasonable, it is one of the standard operating presumptions of computer ethics. Although different definitions of "computer ethics" have circulated since Walter Maner first introduced the term in 1976, they all share a human-centered perspective that assigns responsibility to human designers and users. According to Deborah Johnson, who is credited with writing the field's agenda-setting textbook, "computer ethics turns out to be the study of human beings and society—our goals and values, our norms of behavior, the way we organize ourselves and assign rights and responsibilities, and so on" (Johnson 1985, p. 6). Computers, she recognizes, often "instrumentalize" these human values and behaviors in innovative and challenging ways, but the bottom line is and remains the way human beings design and use (or misuse) such technology. And Johnson has stuck to this conclusion even in the face of what appears to be increasingly sophisticated technological developments. "Computer systems," she writes in a more recent article, "are produced, distributed, and used by people engaged in social practices and meaningful pursuits. This is as true of current computer systems as it will be of future computer systems. No matter how independently, automatic, and interactive computer systems of the future behave, they will be the products (direct or indirect) of human behavior, human social institutions, and human decision" (Johnson 2006, p. 197). Understood in this way, computer systems, no matter how automatic, independent, or seemingly intelligent they may become, "are not and can never be (autonomous, independent) moral agents" (Johnson 2006, p. 203). They will, like all other technological artifacts, always be instruments of human value, decision making, and action.

According to the instrumentalist definition, therefore, any action undertaken via a technological system is ultimately the responsibility of some human agent—the designer of the system, the manufacturer of the equipment, or the end-user of the product. If something goes wrong with or someone is harmed by the mechanism, "some human is," as Goertzel (2002, p. 1) accurately describes it, "to blame for setting the program up to do such a thing." Consequently, holding a robot responsible for the decisions it makes or the actions that it is instrumental in deploying is to make at least two errors. First, it is logically incorrect to ascribe agency to something that is and remains a mere object under our control. As Sullins (2006, p. 26) concludes by way of the investigations undertaken by Bringsjord (2007), computers and robots "will never do anything they are not programmed to perform" and as a result "are incapable of becoming moral agents now or in the future." This insight is a variant of one of the objections noted by Alan Turing in his agenda-setting paper on machine intelligence: "Our most detailed information of Babbage's Analytical Engine," Turing (1999, p. 50) wrote, "comes from a memoir by Lady Lovelace. In it she states, 'The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform' (her italics)." This objection—what Turing called "Lady Lovelace's Objection"—has often been deployed as the basis for denying independent agency or autonomy to computers, robots, and other mechanisms. Such instruments, it is argued, only do what we have programmed them to perform. Since we are the ones who deliberately design, develop, and deploy these mechanisms—or as Bryson (2010, p. 65) describes it, "there would be no robots on this planet if it weren't for deliberate human decisions to create them"—there is always a human that is if not in the loop then at least on the loop.

Second, there are moral problems. That is, holding a robotic mechanism or system culpable would not only be illogical but also irresponsible. This is because ascribing moral responsibility to machines, as Siponen (2004, p. 286) argues, would allow one to "start blaming computers for our mistakes. In other words, we can claim that 'I didn't do it—it was a computer error', while ignoring the fact that the software has been programmed by people to 'behave in certain ways', and thus people may have caused this error either incidentally or intentionally (or users have otherwise contributed to the cause of this error)." This line of thinking has been codified in the popular adage, "It's a poor carpenter who blames his tools." In other words, when something goes wrong or a mistake is made in situations involving the application of technology, it is the operator of the tool and not the tool itself that should be blamed. "By endowing technology with the attributes of autonomous agency," Mowshowitz (2008, p. 271) argues, "human beings are ethically sidelined. Individuals are relieved of responsibility. The suggestion of being in the grip of irresistible forces provides an excuse of rejecting responsibility for oneself and others." This maneuver, what Nissenbaum (1996, p. 35) terms "the computer as scapegoat," is understandable but problematic, insofar as it allows human designers, developers, or users to deflect or avoid taking responsibility for their actions by assigning accountability to what is a mere object. "Most of us," Nissenbaum (1996, p. 35) argues, "can recall a time when someone (perhaps ourselves) offered the excuse that it was the computer's fault—the bank clerk explaining an error, the ticket agent excusing lost bookings, the student justifying a late paper. Although the practice of blaming a computer, on the face of it, appears reasonable and even felicitous, it is a barrier to accountability because, having found one explanation for an error or injury, the further role and responsibility of human agents tend to be underestimated—even sometimes ignored. As a result, no one is called upon to answer for an error or injury." It is precisely for this reason that Johnson and Miller (2008, p. 124) argue that "it is dangerous to conceptualize computer systems as autonomous moral agents." Assigning responsibility to the technology not only sidelines human involvement and activity, but leaves questions of responsibility untethered from their assumed proper attachment to human decision making and action.


The robot apocalypse

The instrumental theory not only sounds reasonable, it is obviously useful. It is, one might say, instrumental for responding to the opportunities and challenges of increasingly complex technological systems and devices. This is because the theory has been successfully applied not only to simple devices like corkscrews, toothbrushes, and garden hoses but also sophisticated technologies, like computers, smart phones, drones, etc. But all of that appears to be increasingly questionable or problematic, precisely because of a number of recent innovations that effectively challenge the operational limits of the instrumentalist theory.

Machine != Tool

The instrumental theory is a rather blunt instrument, reducing all technology, irrespective of design, construction, or operation, to the ontological status of a tool or instrument. "Tool," however, does not necessarily encompass everything technological and does not, therefore, exhaust all possibilities. There are also machines. Although "experts in mechanics," as Marx (1977, p. 493) pointed out, often confuse these two concepts, calling "tools simple machines and machines complex tools," there is an important and crucial difference between the two. Indication of this essential difference can be found in a brief parenthetical remark offered by Heidegger in "The Question Concerning Technology." "Here it would be appropriate," Heidegger (1977, p. 17) writes in reference to his use of the word "machine" to characterize a jet airliner, "to discuss Hegel's definition of the machine as autonomous tool [selbständigen Werkzeug]." What Heidegger references, without supplying the full citation, are Hegel's 1805–07 Jena Lectures, in which "machine" had been defined as a tool that is self-sufficient, self-reliant, and independent. Although Heidegger immediately dismisses this alternative as something that is not appropriate to his way of questioning technology, it is taken up and given sustained consideration by Langdon Winner in his book-length investigation of Autonomous Technology.

    To be autonomous is to be self-governing, independent, not ruled by an external law or force. In the metaphysics of Immanuel Kant, autonomy refers to the fundamental condition of free will—the capacity of the will to follow moral laws which it gives to itself. Kant opposes this idea to "heteronomy," the rule of the will by external laws, namely the deterministic laws of nature. In this light the very mention of autonomous technology raises an unsettling irony, for the expected relationship of subject and object is exactly reversed. We are now reading all of the propositions backwards. To say that technology is autonomous is to say that it is nonheteronomous, not governed by an external law. And what is the external law that is appropriate to technology? Human will, it would seem (Winner 1977, p. 16).

"Autonomous technology," therefore, refers to technological devices that directly contravene the instrumental definition by deliberately contesting and relocating the assignment of agency. Such mechanisms are not mere tools to be used by human beings but occupy, in one way or another, the place of the human agent. As Marx (1977, p. 495) described it, "the machine, therefore, is a mechanism that, after being set in motion, performs with its tools the same operations as the worker formerly did with similar tools." Understood in this way, the machine does not occupy the place of the tool used by the worker; it takes the place of the worker him/herself. This is precisely why the question concerning automation, or robots in the workplace, is not merely able to be explained away as the implementation of new and better tools but raises concerns over the replacement of human workers or what has been called, beginning with the work of Keynes (2010, p. 325), "technological unemployment."

Perhaps the best example of the difference Marx describes is available to us with the self-driving car or autonomous vehicle. The autonomous vehicle, whether the Google Car or one of its competitors, is not designed for and intended to replace the automobile. It is, in its design, function, and materials, the same kind of instrument that we currently utilize for the purpose of personal transportation. The autonomous vehicle, therefore, does not replace the instrument of transportation (the car); it is intended to replace (or at least significantly displace) the driver. This difference was recently acknowledged by the National Highway Traffic Safety Administration (NHTSA), which in a 4 February 2016 letter to Google stated that the company's Self Driving System (SDS) could legitimately be considered the legal driver of the vehicle: "As a foundational starting point for the interpretations below, NHTSA will interpret 'driver' in the context of Google's described motor vehicle design as referring to the SDS, and not to any of the vehicle occupants" (Ross 2016). Although this decision is only an interpretation of existing law, the NHTSA explicitly states that it will "consider initiating rulemaking to address whether the definition of 'driver' in Section 571.3 [of the current US Federal statute, 49 U.S.C. Chapter 301] should be updated in response to changing circumstances" (Hemmersbaugh 2016).


Similar proposals have been floated in efforts to deal with work-place automation. In a highly publicized draft document submitted to the European Parliament in May of 2016, for instance, it was argued that "sophisticated autonomous robots" ("machines" in Marx's terminology) be considered "electronic persons" with "specific rights and obligations" for the purposes of contending with the challenges of technological unemployment, tax policy, and legal liability. Although the proposed legislation did not pass as originally written, it represents recognition on the part of lawmakers that recent innovations in robotics challenge the way we typically respond to and answer for questions regarding responsibility.

The instrumentalist theory works by making the assumption that all technologies—irrespective of design, implementation, or sophistication—are a tool of human action. Hammer, computer, or UAV (unmanned aerial vehicle), they are all just instruments that are used more or less effectively and/or correctly by human beings. But this way of thinking does not cover everything; there are also machines. Machines, as Marx (following Hegel's initial suggestions) recognized, occupy another ontological position. They are not instruments to be used (more or less efficiently) by a human agent; they are designed and implemented to take the place of the human agent. Consequently, machines—like the self-driving automobile and other forms of what Winner calls "autonomous technology"—challenge the explanatory capability of the instrumentalist theory, presenting us with technologies that are intentionally designed and deployed to be something other. Pointing this out, however, does not mean that the instrumental theory is on this account refuted tout court. There are and will continue to be mechanisms understood and utilized as tools to be manipulated by human users (i.e., lawn mowers, cork screws, telephones, etc.). The point is that the instrumentalist perspective, no matter how useful and seemingly correct in some circumstances for answering for some technological devices, does not exhaust all possibilities for all kinds of devices. The theory has its limits.

Learning algorithms

The instrumental theory, for all its notable success handling different kinds of technology (for a critical examination of examples of these "success stories," see Heidegger 1977; Feenberg 1991; Nissenbaum 1996; and Johnson 2006), appears to be unable to contend with recent developments in machine learning. Consider, for example, Google DeepMind's AlphaGo and Microsoft's Tay.ai.²

² Because of the recent proliferation of and popularity surrounding connectionist architecture, neural networks, and machine learning, there are numerous examples from which one could select, including natural language generation (NLG) algorithms, black box trading, computational creativity, self-driving vehicles, and autonomous weapons. In fact, one might have expected this essay to have focused on the latter—autonomous weapons—mainly because of the way the responsibility gap, or what has also been called "the accountability gap," has been positioned, addressed, and documented in the literature on this subject (Arkin 2009; Asaro 2012; Beard 2014; Hammond 2015; Krishnan 2009; Lokhorst and van den Hoven 2012; Schulzke 2013; Sharkey 2012; Sparrow 2007; Sullins 2010). I have, however, made the deliberate decision to employ other, perhaps more mundane, examples like AlphaGo and Tay.ai. And I have done so for two reasons. First, questions concerning machine autonomy and responsibility, although important for and well-documented in the literature concerning autonomous weapons, are not (and should not be) limited to weapon systems. Recognizing this fact requires that we explicitly identify and consider other domains where these questions appear and are relevant—domains where the issues might be less dramatic but no less significant. Second, and more importantly, I wanted to deal with technologies that are actually in operation and not under development. Despite its popularity in investigations of machine agency and responsibility, autonomous weapons are still somewhat speculative and in development. Rather than address what might happen with technologies that could be developed and deployed, I wanted to address what has happened with technologies that are already here and in operation.


Although not physically embodied mechanisms, both AlphaGo and Tay demonstrate the "responsibility gap" (Matthias 2004) that is opening up in the wake of recent innovations in machine learning. AlphaGo, as Google DeepMind (2016) explains, "combines Monte-Carlo tree search with deep neural networks that have been trained by supervised learning, from human expert games, and by reinforcement learning from games of self-play." In other words, AlphaGo does not play the game of Go by following a set of cleverly designed moves fed into it by human programmers. It is designed to formulate its own instructions. Although less is known about the inner workings of Tay, Microsoft explains that the system "has been built by mining relevant public data," i.e. training its neural networks on anonymized data obtained from social media, and was designed to evolve its behavior from interacting with users on social networks like Twitter, Kik, and GroupMe (Microsoft 2016). What both systems have in common is that the engineers who designed and built them have little or no idea what the systems will eventually do once they are in operation. As Thore Graepel, one of the creators of AlphaGo, has explained: "Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands" (Metz 2016). Consequently, machine learning systems, like AlphaGo, are intentionally designed to do things that their programmers cannot anticipate, completely control, or answer for. In other words, we now have autonomous (or at least semi-autonomous) computer systems that in one way or another have "a mind of their own." And this is where things get interesting, especially when it comes to questions of responsibility.
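
The pattern DeepMind describes, search statistics combined with an evaluation learned from self-play, can be made concrete with a deliberately tiny sketch. The following Python toy is an illustration of the general technique and not DeepMind's code or architecture: it plays a simple take-away game rather than Go, uses flat Monte-Carlo rollouts rather than full tree search, and replaces the neural networks with a value table updated from self-play; every name in it (NimState, select_move, self_play) is an invented placeholder. What it shares with the systems under discussion is the relevant property: the move that finally gets played is an artifact of run-time statistics and accumulated self-play experience, not of a move list authored by the programmer.

# Toy sketch: move selection by Monte-Carlo rollouts plus a value table
# "trained" from self-play. Illustrative assumption, not DeepMind's code.
import random
from collections import defaultdict

class NimState:
    """Take-away game: players alternately remove 1-3 stones; taking the last stone wins."""
    def __init__(self, stones=15, player=1):
        self.stones, self.player = stones, player
    def legal_moves(self):
        return [m for m in (1, 2, 3) if m <= self.stones]
    def play(self, move):
        return NimState(self.stones - move, -self.player)
    def winner(self):
        # If no stones remain, the player who just moved (-player) took the last one and wins.
        return -self.player if self.stones == 0 else None

def rollout(state):
    """Play uniformly random moves to the end of the game; return the winner (+1 or -1)."""
    while state.winner() is None:
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()

def select_move(state, value, simulations=200):
    """Choose the move whose sampled outcomes and learned value favour the player to move."""
    scores = {}
    for move in state.legal_moves():
        nxt = state.play(move)
        wins = sum(rollout(nxt) == state.player for _ in range(simulations))
        # Blend rollout statistics with the learned estimate of the opponent's chances.
        scores[move] = 0.7 * (wins / simulations) + 0.3 * (1.0 - value[(nxt.stones, nxt.player)])
    return max(scores, key=scores.get)

def self_play(value, games=50):
    """Crude reinforcement update: nudge the value of visited states toward the game result."""
    for _ in range(games):
        state, visited = NimState(), []
        while state.winner() is None:
            state = state.play(select_move(state, value, simulations=20))
            visited.append((state.stones, state.player))
        result = state.winner()
        for key in visited:
            target = 1.0 if key[1] == result else 0.0
            value[key] += 0.1 * (target - value[key])

if __name__ == "__main__":
    value = defaultdict(lambda: 0.5)   # stands in for a trained value network
    self_play(value)                   # "training" from games of self-play
    print("Move chosen from run-time statistics:", select_move(NimState(), value))

Even at this miniature scale, inspecting the source code does not tell you which move will be chosen on a given run; that depends on the sampled rollouts and on the state of the value table built up through self-play.
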
AlphaGo was designed to play Go, and it proved its ability by beating an expert human player. So who won? Who gets the accolade? Who actually beat Lee Sedol? Following the dictates of the instrumental theory of technology, actions undertaken with the computer would be attributed to the human programmers who initially designed the system and are capable of answering for what it does or does not do. But this explanation does not necessarily hold for an application like AlphaGo, which was deliberately created to do things that exceed the knowledge and control of its human designers. In fact, in most of the reporting on this landmark event, it is not Google or the engineers at DeepMind who are credited with the victory. It is AlphaGo. In published rankings, for instance, it is AlphaGo that is named as the number two player in the world (Go Ratings 2016). Things get even more complicated with Tay, Microsoft's foul-mouthed teenage AI, when one asks the question: Who is responsible for Tay's bigoted comments on Twitter? According to the standard instrumentalist way of thinking, we could blame the programmers at Microsoft, who designed the application to be able to do these things. But the programmers obviously did not set out to create a racist algorithm. Tay developed this reprehensible behavior by learning from interactions with human users on the Internet. So how did Microsoft answer for this? How did they explain and assign responsibility?

Initially a company spokesperson—in damage-control mode—sent out an email to Wired, The Washington Post, and other news organizations that sought to blame the victim. "The AI chatbot Tay," the spokesperson explained, "is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments" (Risely 2016). According to Microsoft, it is not the programmers or the corporation who are responsible for the hate speech. It is the fault of the users (or some users) who interacted with Tay and taught her to be a bigot. Tay's racism, in other words, is our fault. Later, on Friday the 25th of March, Peter Lee, VP of Microsoft Research, posted the following apology on the Official Microsoft Blog: "As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values" (Lee 2016). But this apology is also frustratingly unsatisfying or interesting (it all depends on how you look at it). According to Lee's carefully worded explanation, Microsoft is only responsible for not anticipating the bad outcome; it does not take responsibility or answer for the offensive Tweets. For Lee, it is Tay who (or "that," and words matter here) is named and recognized as the source of the "wildly inappropriate and reprehensible words and images" (Lee 2016). And since Tay is a kind of "minor" (a teenage AI) under the protection of her parent corporation, Microsoft needed to step in, apologize for their "daughter's" bad behavior, and put Tay in a time out.

Although the extent to which one might assign "agency" and "responsibility" to these mechanisms remains a contested issue, what is not debated is the fact that the rules of the game have changed and that there is a widening "responsibility gap."

    Presently there are machines in development or already in use which are able to decide on a course of action and to act without human intervention. The rules by which they act are not fixed during the production process, but can be changed during the operation of the machine, by the machine itself. This is what we call machine learning. Traditionally we hold either the operator/manufacture of the machine responsible for the consequences of its operation or "nobody" (in cases, where no personal fault can be identified). Now it can be shown that there is an increasing class of machine actions, where the traditional ways of responsibility ascription are not compatible with our sense of justice and the moral framework of society because nobody has enough control over the machine's actions to be able to assume responsibility for them (Matthias 2004, p. 177).

In other words, the instrumental definition of technology, which had effectively tethered machine action to human agency and responsibility, no longer adequately applies to mechanisms that have been deliberately designed to operate and exhibit some form, no matter how rudimentary, of independent action or autonomous decision making. Contrary to the instrumentalist way of thinking, we now have mechanisms that are designed to do things that exceed our control and our ability to respond or to answer for them.
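
A minimal sketch can make Matthias's point concrete. The toy program below is a hypothetical illustration, not Microsoft's Tay or any real chatbot architecture, and its names (MimicBot, learn, respond) are invented for this example. It ships with an empty reply table: every sentence it can ever produce is acquired at run time from whoever talks to it, so the "rules" governing its output are fixed during operation rather than during production.

# Toy sketch: a chat loop whose repertoire is created by its users at run time.
# Illustrative assumption only; class and method names are invented.
import random
from collections import defaultdict

class MimicBot:
    def __init__(self):
        # No canned responses: everything the bot can say is learned after deployment.
        self.replies = defaultdict(list)   # keyword -> phrases heard alongside it
        self.phrases = []

    def learn(self, message):
        """Store the user's phrasing, indexed by its words, for later reuse."""
        self.phrases.append(message)
        for word in message.lower().split():
            self.replies[word].append(message)

    def respond(self, message):
        """Reply by echoing something previously learned that shares a keyword."""
        self.learn(message)
        candidates = []
        for word in message.lower().split():
            candidates.extend(self.replies[word])
        return random.choice(candidates or self.phrases)

if __name__ == "__main__":
    bot = MimicBot()
    # None of these sentences appear anywhere in the developer's code.
    for msg in ["robots are great", "great ideas come from people", "people should be kind"]:
        print("bot:", bot.respond(msg))

Nothing in the program text determines what such a system will say on a given day; that is settled by its interlocutors, which is precisely the point at which the traditional ascription of responsibility to the designer begins to strain.
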
But let's be clear as to what this means. What has been demonstrated is not that a machine, like AlphaGo or Tay, is or should be considered a moral agent and held solely accountable for the decisions it makes or the actions it deploys. That would be going too far, and it would be inattentive to the actual results that have been obtained. In fact, if we return to Ricoeur (2007) and follow his lead, which suggests that responsibility be understood as the "ability to respond," it is clear that both AlphaGo and Tay lack this capability. If we should, for instance, want to know more about the moves that AlphaGo made in its historic game against Lee Sedol, AlphaGo can certainly be asked about it. But the algorithm will have nothing to say in response. In fact, it was the responsibility of the human programmers and observers to respond on behalf of AlphaGo and to explain the significance and impact of its behavior. But what this does indicate is that machine learning systems like AlphaGo and Tay.ai introduce complications into the instrumentalist way of assigning and dealing with responsibility. They might not be moral agents in their own right (not yet at least), but their design and operation effectively challenge the standard instrumentalist theory and open up fissures in the way responsibility comes to be decided, assigned, and formulated.


Social robots

In July of 2014 the world got its first look at Jibo. Who or what is Jibo? That is an interesting and important question. In a promotional video that was designed to raise capital investment through pre-orders, social robotics pioneer Cynthia Breazeal introduced Jibo with the following explanation: "This is your car. This is your house. This is your toothbrush. These are your things. But these [and the camera zooms into a family photograph] are the things that matter. And somewhere in between is this guy. Introducing Jibo, the world's first family robot" (Jibo 2014). Whether explicitly recognized as such or not, this promotional video leverages a crucial ontological distinction that Derrida (2005, p. 80) calls the difference between "who" and "what." On the side of "what" we have those things that are mere objects—our car, our house, and our toothbrush. According to the instrumental theory of technology, these things are mere instruments that do not have any independent moral status whatsoever. We might worry about the impact that the car's emissions have on the environment (or perhaps stated more precisely, on the health and well-being of the other human beings who share this planet with us), but the car itself is not an object of moral concern. On the other side there are, as the video describes it, "those things that matter." These things are not "things," strictly speaking, but are the other persons who count as socially and morally significant Others. They are those others to whom we are obligated and in the face of which we bear certain duties or responsibilities. Unlike the car, the house, or the toothbrush, these other persons have moral status and can be benefitted or harmed by our decisions and actions.

Jibo, we are told, occupies a place that is situated somewhere in between what are mere things and who really matters. Consequently Jibo is not just another instrument, like the automobile or toothbrush. But he/she/it (and the choice of pronoun is not unimportant) is also not quite another member of the family pictured in the photograph. Jibo inhabits a place in between these two ontological categories.³ This is, it should be noted, not unprecedented. We are already familiar with other entities that occupy a similar ambivalent social position, like the family dog. In fact animals, which since the time of Descartes have been the other of the machine (Gunkel 2007), provide a good precedent for understanding the changing nature of social responsibility in the face of social robots, like Jibo. "Looking at state of the art technology," Darling (2012, p. 1) writes, "our robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach this stage in the near future. And yet, while it seems far-fetched for a robot's legal status to differ from that of a toaster, there is already a notable difference in how we interact with certain types of robotic objects." This occurs, Darling continues, principally due to our tendencies to anthropomorphize things by projecting into them cognitive capabilities, emotions, and motivations that do not necessarily exist in the mechanism per se. But it is this emotional reaction that necessitates new forms of obligation in the face of social robots. "Given that many people already feel strongly about state-of-the-art social robot 'abuse,' it may soon become more widely perceived as out of line with our social values to treat robotic companions in a way that we would not treat our pets" (Darling 2012, p. 1).

³ Just to be clear, the problem with social robots is not that they are or might be capable of becoming moral subjects. The problem is that they are neither instruments nor moral subjects. They occupy an in-between position that effectively blurs the boundary that had typically separated the one from the other. The problem, then, is not that social robots might achieve moral status equal to or on par with human beings. That remains a topic of and for science fiction. The problem is that social robots complicate the way one decides who has moral status and what does not, which is a more difficult/interesting philosophical question. For more on this subject, see Coeckelbergh (2012), Gunkel (2012), and Floridi (2013).

This insight is not just a theoretical possibility; it has been demonstrated in empirical investigations. The computer as social actor (CASA) studies undertaken by Reeves and Nass (1996), for example, demonstrated that human users will accord computers social standing similar to that of another human person and that this occurs as a product of the extrinsic social interaction, irrespective of the actual internal composition (or "being" as Heidegger would say) of the object in question.


These results, which were obtained in numerous empirical studies with human subjects over several years, have been independently verified in two recent experiments with robots, one reported in the International Journal of Social Robotics (Rosenthal-von der Pütten et al. 2013), where researchers found that human subjects respond emotionally to robots and express empathic concern for machines irrespective of knowledge concerning the actual ontological status of the mechanism, and another that used physiological evidence, documented by electroencephalography, of the ability of humans to empathize with what appears to be simulated "robot pain" (Suzuki et al. 2015).

Jibo, and other social robots like it, are not science fiction. They are already or will soon be in our lives and in our homes. As Breazeal (2002, p. 1) describes it, "a sociable robot is able to communicate and interact with us, understand and even relate to us, in a personal way. It should be able to understand us and itself in social terms. We, in turn, should be able to understand it in the same social terms—to be able to relate to it and to empathize with it…In short, a sociable robot is socially intelligent in a human-like way, and interacting with it is like interacting with another person." In the face of these socially situated and interactive entities we are going to have to decide whether they are mere things like our car, our house, and our toothbrush; someone who matters and to whom we bear responsibility, like another member of the family; or something altogether different that is situated in between the one and the other, like our pet dog. In whatever way this comes to be decided, however, these artifacts will undoubtedly challenge our understanding of responsibility and the way we typically distinguish between who is to be considered another social subject and what is a mere instrument or object. Again, let's be absolutely clear about things. Social robots, like Jibo, are not (at least not at this particular moment in history) considered to be either a moral agent or a moral patient (on this distinction, see Gunkel 2012 and Floridi 2013). But Jibo is also something more than a mere tool or object. It occupies what is arguably an ambivalent in-between position that complicates the usual way of thinking about and assigning moral status. Social robots like Jibo, then, are designed and situated in such a way that they do not fit the available ontological and axiological categories; their very existence already complicates the usual way of sorting things into "who" or "what."

Responding to responsibility gaps

What we have discovered, therefore, are situations where our theory of technology—a theory that has considerable history behind it and that has been determined to be as applicable to simple hand tools as it is to complex technological systems—encounters significant difficulties responding to or answering for recent developments with autonomous technology, machine learning, and social robots. In the face of these challenges, there are at least three possible responses.

Instrumentalism 2.0

We can try to respond as we typically have responded, treating these recent innovations in artificial intelligence and robotics as mere instruments or tools. Joanna Bryson makes a persuasive case for this approach in her provocatively titled essay "Robots Should be Slaves": "My thesis is that robots should be built, marketed and considered legally as slaves, not companion peers" (Bryson 2010, p. 63). Although this might sound harsh, the argument is persuasive, precisely because it draws on and is underwritten by the instrumental theory of technology. This decision has both advantages and disadvantages. On the positive side, it reaffirms human exceptionalism, making it absolutely clear that it is only the human being who possesses rights and responsibilities. Technologies, no matter how sophisticated they become, are and will continue to be mere tools of human action, nothing more. "We design, manufacture, own and operate robots," Bryson (2010, p. 65) writes. "They are entirely our responsibility. We determine their goals and behaviour, either directly or indirectly through specifying their intelligence, or even more indirectly by specifying how they acquire their own intelligence." Furthermore, this line of reasoning seems to be entirely consistent with current legal structures and decisions. "As a tool for use by human beings," Gladden (2016, p. 184) concludes, "questions of legal responsibility for any harmful actions performed by such a robot revolve around well-established questions of product liability for design defects (Calverley 2008, p. 533; Datteri 2013) on the part of its producer, professional malpractice on the part of its human operator, and, at a more generalized level, political responsibility for those legislative and licensing bodies that allowed such devices to be created and used."

But this approach, for all its usefulness, has at least two problems. First, in strictly applying and enforcing the instrumental theory, we might inadvertently restrict technological innovation and the development of responsible governance. If, for example, we hold developers responsible for the unintended consequences of robots that have been designed with learning capabilities, this could lead engineers and manufacturers to be rather conservative with the development and commercialization of new technology in an effort to protect themselves from culpability. Had the engineers at Microsoft been assigned responsibility for the hate speech produced by Tay, it is very possible that they and the corporation for whom they worked (or the legal department within the corporation) would think twice before releasing such technology into the wild.


This might be, it could be argued, a positive development, similar to the safety measures and product testing requirements that are currently employed in the development of pharmaceuticals, transportation systems, and other industries. But it could also restrict and hinder robust development of machine learning systems and capabilities.

There is a similar situation with self-driving cars and the evolution of governance. As the NHTSA explicitly noted in its letter to Google, trying to assign the responsibility of "driver" to some human being riding in the autonomously driven vehicle would be both practically and legally inaccurate. "No human occupant of the SDV [Self Driven Vehicle] could meet the definition of 'driver' in Section 571.3 given Google's described motor vehicle design, even if it were possible for a human occupant to determine the location of Google's steering control system, and sit 'immediately behind' it, that human occupant would not be capable of actually driving the vehicle as described by Google. If no human occupant of the vehicle can actually drive the vehicle, it is more reasonable to identify the 'driver' as whatever (as opposed to whoever) is doing the driving" (Hemmersbaugh 2016). For this reason, "accepting an AI as a legal driver eases the government's rule-writing process" (Ross 2016) by making existing law applicable to recent changes in automotive technology.

Second, strict application of the instrumental theory to robots, as Bryson directly acknowledges, produces a new class of instrumental servant or slave, what we might call, following Gunkel (2012, p. 86), "slavery 2.0." The problem here, as Brooks (2002) insightfully points out, is not with the concept of "slavery" per se (we should not, in other words, get hung up on words); the problem has to do with the kind of robotic mechanisms to which this term comes to be applied:

    Fortunately we are not doomed to create a race of slaves that is unethical to have as slaves. Our refrigerators work twenty-four hours a day seven days a week, and we do not feel the slightest moral concern for them. We will make many robots that are equally unemotional, unconscious, and unempathetic. We will use them as slaves just as we use our dishwashers, vacuum cleaners, and automobiles today. But those that we make more intelligent, that we give emotions to, and that we empathize with, will be a problem. We had better be careful just what we build, because we might end up liking them, and then we will be morally responsible for their well-being. Sort of like children (Brooks 2002, p. 195).

As Brooks explains, our refrigerators work tirelessly on our behalf and "we do not feel the slightest moral concern for them." But things will be very different with social robots, like Jibo, that invite and are intentionally designed for emotional investment and attachment.

Contra Brooks, however, it seems we are already at this point with things that are (at least metaphorically) as cold and impersonal as the refrigerator. As Singer (2009, p. 338) and Garreau (2007) have reported, US soldiers in Iraq and Afghanistan have formed surprisingly close personal bonds with their units' Packbots, giving them names, awarding them battlefield promotions, risking their own lives to protect that of the robot, and even mourning their death. This happens, Singer explains, as a product of the way the mechanism is situated within the unit and the role that it plays in battlefield operations. And it happens in direct opposition to what otherwise sounds like good common sense: They are just technologies—instruments or tools that work on our behalf and feel nothing. This "correction," in fact, is part and parcel of the problem. This is because the difficulties with strict reassertion of the instrumental theory have little or nothing to do with speculation about what the mechanism may or may not feel. The problem is with the kind of social environment it produces. As Kant (1963, p. 239) argued concerning indirect duties to non-human animals: Animal abuse is wrong, not because of how the animal might feel (which is, according to Kant's strict epistemological restrictions, forever and already inaccessible to us), but because of the adverse effect such action would have on other human beings and society as a whole. In other words, applying the instrumental theory to these new kinds of mechanism and affordances, although seemingly reasonable and useful, could have potentially devastating consequences for us, our world, and the other entities we encounter here.

Machine ethics

Conversely, we can entertain the possibility of what has been called "machine ethics" just as we had previously done for other non-human entities, like animals (Singer 1975). And there have, in fact, been a number of recent proposals addressing this opportunity. Wallach and Allen (2009, p. 4), for example, not only predict that "there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight" but use this fact as justification for developing "moral machines," advanced technological systems that are able to respond to morally challenging situations. Anderson and Anderson (2011) take things one step further. They not only identify a pressing need to consider the moral responsibilities and capabilities of increasingly autonomous systems but have even suggested that "computers might be better at following an ethical theory than most humans," because humans "tend to be inconsistent in their reasoning" and "have difficulty juggling the complexities of ethical decision-making" owing to the sheer volume of data that need to be taken into account and processed (Anderson and Anderson 2007, p. 5).


These proposals, it is important to point out, do not necessarily require that we first resolve the "big questions" of AGI (Artificial General Intelligence), robot sentience, or machine consciousness. As Wallach (2015, p. 242) points out, these kinds of machines need only be "functionally moral." That is, they can be designed to be "capable of making ethical determinations…even if they have little or no actual understanding of the tasks they perform." The precedent for this way of thinking can be found in corporate law and business ethics. Corporations are, according to both national and international law, legal persons (French 1979). They are considered "persons" (which is, we should recall, a moral classification and not an ontological category) not because they are conscious entities like we assume ourselves to be, but because social circumstances make it necessary to assign personhood to these artificial entities for the purposes of social organization and jurisprudence. Consequently, if entirely artificial and human fabricated entities, like Google or IBM, are legal persons with associated social responsibilities, it would be possible, it seems, to extend the same moral and legal considerations to an AI or robot like Google's DeepMind or IBM's Watson. The question, it is important to point out, is not whether these mechanisms are or could be "natural persons" with what is assumed to be "genuine" moral status; the question is whether it would make sense and be expedient, from both a legal and moral perspective, to treat these mechanisms as persons in the same way that we currently do for corporations, organizations and other human artifacts.

Once again, this decision sounds reasonable and justified. It extends both moral and legal responsibility to these other socially aware and interactive entities and recognizes, following the predictions of Wiener (1988, p. 16), that the social situation of the future will involve not just human-to-human interactions but relationships between humans and machines and machines and machines. But this shift in perspective also has significant costs. First, it requires that we rethink everything we thought we knew about ourselves, technology, and ethics. It entails that we learn to think beyond human exceptionalism, technological instrumentalism, and many of the other -isms that have helped us make sense of our world and our place in it. In effect, it calls for a thorough reconceptualization of who or what should be considered a legitimate center of moral concern and why.

Second, robots that are designed to follow rules and operate within the boundaries of some kind of programmed restraint might turn out to be something other than what is typically recognized as a responsible agent. Winograd (1990, pp. 182–183), for example, warns against something he calls "the bureaucracy of mind," "where rules can be followed without interpretive judgments." "When a person," Winograd (1990, p. 183) argues, "views his or her job as the correct application of a set of rules (whether human-invoked or computer-based), there is a loss of personal responsibility or commitment. The 'I just follow the rules' of the bureaucratic clerk has its direct analog in 'That's what the knowledge base says.' The individual is not committed to appropriate results, but to faithful application of procedures."

Coeckelbergh (2010, p. 236) paints a potentially more disturbing picture. For him, the problem is not the advent of "artificial bureaucrats" but "psychopathic robots." The term "psychopathy" has traditionally been used to name a kind of personality disorder characterized by an abnormal lack of empathy which is masked by an ability to appear normal in most social situations. The functional morality, like that specified by Anderson and Anderson and Wallach and Allen, intentionally designs and produces what are arguably "artificial psychopaths"—robots that have no capacity for empathy but which follow rules and in doing so can appear to behave in morally appropriate ways. These psychopathic machines would, Coeckelbergh (2010, p. 236) argues, "follow rules but act without fear, compassion, care, and love. This lack of emotion would render them non-moral agents—i.e. agents that follow rules without being moved by moral concerns—and they would even lack the capacity to discern what is of value. They would be morally blind."⁴

⁴ There is some debate concerning this matter. What Coeckelbergh (2010, p. 236) calls "psychopathy"—e.g. "follow rules but act without fear, compassion, care, and love"—Arkin (2009) celebrates as a considerable improvement in moral processing and decision making. Here is how Sharkey (2012, p. 121) characterizes Arkin's efforts to develop an "artificial conscience" for robotic soldiers: "It turns out that the plan for this conscience is to create a mathematical decision space consisting of constraints, represented as prohibitions and obligations derived from the laws of war and rules of engagement (Arkin 2009). Essentially this consists of a bunch of complex conditionals (if-then statements)…. Arkin believes that a robot could be more ethical than a human because its ethics are strictly programmed into it, and it has no emotional involvement with the action." For more on this debate and the effect it has on moral consideration, see Gunkel (2012).

programmed in such a way as to behave reasonably and responsibly for the sake of respecting human individuals and communities. This proposal has the obvious advantage of responding to moral intuitions: if it is the machine that is making the decision and taking action in the world with little or no direct human oversight, it would only make sense to hold it accountable (or at least partially accountable) for the actions it deploys and to design it with some form of constraint in order to control for possible bad outcomes. But doing so has considerable costs. Even if we bracket the questions of AGI, super intelligence, and machine consciousness, designing robotic systems that follow prescribed rules might provide the right kind of external behaviors but the motivations for doing so might be lacking. “Even if,” Sharkey (2012, p. 121) writes in a consideration of autonomous weapons, “a robot was fully equipped with all the rules from the Laws of War, and had, by some mysterious means, a way of making the same discriminations as humans make, it could not be ethical in the same way as is an ethical human. Ask any judge what they think about blindly following rules and laws.” Consequently, what we actually get from these efforts might be something very different from (and maybe even worse than) what we had hoped to achieve.

Hybrid responsibility

Finally, we can try to balance these two opposing positions by taking an intermediate hybrid approach, distributing responsibility across a network of interacting human and machine components. Hanson (2009, p. 91), for instance, introduces something he calls “extended agency theory,” which is itself a kind of extension/elaboration of the “actor-network theory” initially developed by Latour (2005). According to Hanson, who takes what appears to be a practical and entirely pragmatic view of things, robot responsibility is still undecided and, for that reason, one should be careful not to go too far in speculating about things. “Possible future development of automated systems and new ways of thinking about responsibility will spawn plausible arguments for the moral responsibility of non-human agents. For the present, however, questions about the mental qualities of robots and computers make it unwise to go this far” (Hanson 2009, p. 94). Instead, Hanson suggests that this problem may be resolved by considering various theories of “joint responsibility,” where “moral agency is distributed over both human and technological artifacts” (Hanson 2009, p. 94).

This proposal, which can be seen as a kind of elaboration of Nissenbaum’s (1996) “many hands” thesis, has been gaining traction, especially because it appears to be able to deal with and respond to complexity. According to van de Poel et al. (2012, pp. 49–50): “When engineering structures fail or an engineering disaster occurs, the question who is to be held responsible is often asked. However, in complex engineering projects it is often quite difficult to pinpoint responsibility.” As an example of this, the authors point to an investigation of 100 international shipping accidents undertaken by Wagenaar and Groenewegen (1987, p. 596): “Accidents appear to be the result of highly complex coincidences which could rarely be foreseen by the people involved. The unpredictability is due to the large number of causes and by the spread of the information over the participants.” For van de Poel et al. (2012, pp. 50–51), however, a more informative example can be obtained from the problem of climate change. “We think climate change is a typical example of a many hands problem because it is a phenomenon that is very complex, in which a large number of individuals are causally involved, but in which the role of individuals in isolation is rather small. In such situations, it is usually very difficult to pinpoint individual responsibility. Climate change is also a good example of how technology might contribute to the occurrence of the problem of many hands because technology obviously plays a major role in climate change, both as cause and as a possible remedy.”

Extended agency theory, therefore, moves away from the anthropocentric individualism of enlightenment thought, what Hanson (2009, p. 98) calls “moral individualism,” and introduces an ethic that is more in-line with recent innovations in ecological thinking:

When the subject is perceived more as a verb than a noun—a way of combining different entities in different ways to engage in various activities—the distinction between Self and Other loses both clarity and significance. When human individuals realize that they do not act alone but together with other people and things in extended agencies, they are more likely to appreciate the mutual dependency of all the participants for their common well-being. The notion of joint responsibility associated with this frame of mind is more conducive than moral individualism to constructive engagement with other people, with technology, and with the environment in general (Hanson 2009, p. 98).

Similar proposals have been advanced and advocated by Deborah Johnson and Peter Paul Verbeek for dealing with innovation in information technology. “When computer systems behave,” Johnson (2006, p. 202) writes, “there is a triad of intentionality at work, the intentionality of the computer system designer, the intentionality of the system, and the intentionality of the user.” “I will,” Verbeek (2011, p. 13) argues, “defend the thesis that ethics should be approached as a matter of
human-technological associations. When taking the notion of technological mediation seriously, claiming that technologies are human agents would be as inadequate as claiming that ethics is a solely human affair.” For both Johnson and Verbeek, responsibility is something distributed across a network of interacting components and these networks include not just other human persons, but organizations, natural objects, and technologies.

This hybrid formulation—what Verbeek calls “the ethics of things” and Hanson terms “extended agency theory”—has advantages and disadvantages. To its credit, this approach appears to be attentive to the exigencies of life in the twenty-first century. None of us, in fact, make decisions or act in a vacuum; we are always and already tangled up in networks of interactive elements that complicate the assignment of responsibility and decisions concerning who or what is able to answer for what comes to pass. And these networks have always included others—not only other human beings but institutions, organizations, and even technological components like the robots and algorithms that increasingly help organize and dispense with social activity. This combined approach, however, still requires that someone decide and answer for what aspects of responsibility belong to the machine and what should be retained for or attributed to the other elements in the network. In other words, “extended agency theory” will still need to decide who is able to answer for a decision or action and what can be considered a mere instrument (Derrida 2005, p. 80).

Furthermore, these decisions are (for better or worse) often flexible and variable, allowing one part of the network to protect itself from culpability by instrumentalizing its role and deflecting responsibility and the obligation to respond elsewhere. This occurred, for example, during the Nuremberg trials at the end of World War II, when low-level functionaries tried to deflect responsibility up the chain of command by claiming that they “were just following orders.” But the deflection can also move in the opposite direction, as was the case with the prisoner abuse scandal at the Abu Ghraib prison in Iraq during the presidency of George W. Bush. In this situation, individuals in the upper echelon of the network deflected responsibility down the chain of command by arguing that the documented abuse was not ordered by the administration but was the autonomous action of a “few bad apples” in the enlisted ranks. Finally, there can be situations where no one or nothing is accountable for anything. In this case, moral and legal responsibility is disseminated across the elements of the network in such a way that no one person, institution, or technology is culpable or held responsible. This is precisely what happened in the wake of the 2008 financial crisis. The bundling and reselling of mortgage-backed securities was considered to be so complex and dispersed across the network that in the final analysis no one was able to be identified as being responsible for the collapse.

Conclusions

From the beginning our concern has been the concept and exigencies of responsibility. Usually, efforts to decide the question of responsibility in the face of technology are not a problem, precisely because the instrumental theory assigns responsibility to the human being and defines technology as nothing more than a mere tool or instrument. It is, therefore, the human being who is responsible for responding or answering for what the machine does or does not do (or perhaps more accurately stated, what comes to be done or not done through the instrumentality of the mechanism). This way of thinking has worked rather well, with little or no significant friction, for over 2500 years, and it holds considerable promise for application to the project of responsible robotics. But, as we have seen, recent innovations in technology—autonomous machines, learning algorithms, and social robots—challenge the instrumental theory by opening up what Matthias (2004) calls “responsibility gaps.”

In response to these challenges—in an effort to close or at least remediate the gap—we have considered three alternatives. On the one side, there is strict application of the instrumental theory, which would restrict all questions of responsibility to human beings and define robots, no matter how sophisticated their design and operations, as nothing more than tools or instruments of human decision making and action. On the other side, there are efforts to assign some level of moral agency to machines. Even if robots are not (at least for now) able to be full moral subjects, they can, it is argued, be functionally responsible. Though such “responsibility” is only a kind of “quasi-responsibility” (Stahl 2006), this way of thinking assigns the ability to respond to the mechanism. And situated somewhere in between these two opposing positions is a kind of intermediate option that distributes responsibility (and the ability to respond) across a network of interacting components, some human and some entirely otherwise.

These three options clearly define a spectrum of possible responses with each mode of response having its own particular advantages and disadvantages. Consequently, how we—individually but also as a collective—decide to respond to these opportunities and challenges will have a profound effect on the way we conceptualize our place in the world, who we decide to include in the community of moral subjects, and what we exclude from such consideration and why. But no matter how it is decided, it is a decision—quite literally a cut that institutes difference and makes a difference. We are, therefore, responsible both for
deciding who or even what is a moral subject and, in the process, for determining the current state and future possibility of and for responsible robotics.

References

Anderson, M., & Anderson, S. L. (2007). The status of machine ethics: A report from the AAAI symposium. Minds & Machines, 17(1), 1–10.
Anderson, M., & Anderson, S. L. (2011). Machine ethics. Cambridge: Cambridge University Press.
Arkin, R. C. (2009). Governing lethal behavior in autonomous robots. Boca Raton: CRC Press.
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.
Beard, J. M. (2014). Autonomous weapons and human responsibilities. Georgetown Journal of International Law, 45(1), 617–681.
Breazeal, C. L. (2004). Designing sociable robots. Cambridge, MA: MIT Press.
Bringsjord, S. (2007). Ethical robots: The future can heed us. AI & Society, 22(4), 539–550.
Brooks, R. A. (2002). Flesh and machines: How robots will change us. New York: Pantheon Books.
Bryson, J. J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 63–74). Amsterdam: John Benjamins.
Calverley, D. J. (2008). Imaging a non-biological machine as a legal person. AI & Society, 22(4), 523–537.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.
Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription. New York: Palgrave Macmillan.
Committee on Legal Affairs. (2016). Draft report with recommendations to the Commission on Civil Law Rules on Robotics. European Parliament. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN.
Darling, K. (2012). Extending legal protection to social robots. IEEE Spectrum. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots.
Datteri, E. (2013). Predicting the long-term effects of human-robot interaction: A reflection on responsibility in medical robotics. Science and Engineering Ethics, 19(1), 139–160.
Dennett, D. C. (1996). Kinds of minds: Toward an understanding of consciousness. New York: Perseus Books.
Derrida, J. (2005). Paper machine (trans. by R. Bowlby). Stanford, CA: Stanford University Press.
Feenberg, A. (1991). Critical theory of technology. New York: Oxford University Press.
Floridi, L. (2013). The ethics of information. Oxford: Oxford University Press.
French, P. (1979). The corporation as a moral person. American Philosophical Quarterly, 16(3), 207–215.
Garreau, J. (2007). Bots on the ground: In the field of battle (or even above it), robots are a soldier’s best friend. The Washington Post, May 6, 2007. http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html.
Gladden, M. E. (2016). The diffuse intelligent other: An ontology of nonlocalizable robots as moral and legal actors. In M. Nørskov (Ed.), Social robots: Boundaries, potential, challenges (pp. 177–198). Burlington, VT: Ashgate.
Go Ratings. (2016). https://www.goratings.org/.
Goertzel, B. (2002). Thoughts on AI morality. Dynamical Psychology: An International, Interdisciplinary Journal of Complex Mental Processes, May 2002. http://www.goertzel.org/dynapsyc/2002/AIMorality.htm.
Google DeepMind. (2016). AlphaGo. https://deepmind.com/alpha-go.html.
Gunkel, D. J. (2007). Thinking otherwise: Ethics, technology and other subjects. Ethics and Information Technology, 9(3), 165–177.
Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots and ethics. Cambridge, MA: MIT Press.
Hall, J. S. (2001). Ethics for machines. KurzweilAI.net. http://www.kurzweilai.net/ethics-for-machines.
Hammond, D. N. (2015). Autonomous weapons and the problem of state accountability. Chicago Journal of International Law, 15(2), 652–687.
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.
Heidegger, M. (1962). Being and time (trans. by John Macquarrie and Edward Robinson). New York: Harper and Row.
Heidegger, M. (1977). The question concerning technology and other essays (trans. by William Lovitt). New York: Harper and Row.
Hemmersbaugh, P. A. NHTSA letter to Chris Urmson, Director, Self-Driving Car Project, Google, Inc. https://isearch.nhtsa.gov/files/Google - compiled response to 12 Nov 15 interp request - 4 Feb 16 final.htm.
Jibo. (2014). https://www.jibo.com.
Johnson, D. G. (1985). Computer ethics. Upper Saddle River, NJ: Prentice Hall.
Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8(4), 195–204.
Johnson, D. G., & Miller, K. W. (2008). Un-making artificial moral agents. Ethics and Information Technology, 10(2–3), 123–133.
Kant, I. (1963). Duties to animals and spirits. Lectures on ethics (trans. by L. Infield) (pp. 239–241). New York: Harper and Row.
Keynes, J. M. (2010). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 321–334). New York: Palgrave Macmillan.
Krishnan, A. (2009). Killer robots: Legality and ethicality of autonomous weapons. Burlington: Ashgate.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford: Oxford University Press.
Lee, P. (2016). Learning from Tay’s introduction. Official Microsoft Blog, 25 March 2016. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/.
Lokhorst, G. J., & van den Hoven, J. (2012). Responsibility for military robots. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robots (pp. 145–155). Cambridge, MA: MIT Press.
Lyotard, J. F. (1993). The postmodern condition: A report on knowledge (trans. by Geoff Bennington and Brian Massumi). Minneapolis, MN: University of Minnesota Press.
Marx, K. (1977). Capital (trans. by Ben Fowkes). New York: Vintage Books.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
Metz, C. (2016). Google’s AI wins a pivotal second game in match with Go grandmaster. Wired, March 2016. http://www.wired.com/2016/03/googles-ai-wins-pivotal-game-two-match-go-grandmaster/.
Microsoft. (2016). Meet Tay—Microsoft AI chatbot with zero chill. https://www.tay.ai/.
Moore, G. E. (2005). Principia ethica. New York: Barnes & Noble Books.
Mowshowitz, A. (2008). Technology as excuse for questionable ethics. AI & Society, 22(3), 271–282.
Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42.
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge: Cambridge University Press.
Riceour, P. (2007). Reflections on the just (trans. by David Pellauer). Chicago: University of Chicago Press.
Risely, J. (2016). Microsoft’s millennial chatbot Tay.ai pulled offline after Internet teaches her racism. GeekWire. http://www.geekwire.com/2016/even-robot-teens-impressionable-microsofts-tay-ai-pulled-internet-teaches-racism/.
Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5(1), 17–34.
Ross, P. E. (2016). A Google car can qualify as a legal driver. IEEE Spectrum. http://spectrum.ieee.org/cars-that-think/transportation/self-driving/an-ai-can-legally-be-defined-as-a-cars-driver.
Schulzke, M. (2013). Autonomous weapons and distributed responsibility. Philosophy & Technology, 26(2), 203–219.
Sharkey, N. (2012). Killing made easy: From joysticks to politics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robots (pp. 111–128). Cambridge, MA: MIT Press.
Singer, P. (1975). Animal liberation: A new ethics for our treatment of animals. New York: New York Review Book.
Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century. New York: Penguin Books.
Siponen, M. (2004). A pragmatic evaluation of the theory of information ethics. Ethics and Information Technology, 6(4), 279–290.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Stahl, B. C. (2006). Responsible computers? A case for ascribing quasi-responsibility to computers independent of personhood or agency. Ethics and Information Technology, 8(4), 205–213.
Sullins, J. P. (2006). When is a robot a moral agent? International Review of Information Ethics, 6(12), 23–30.
Sullins, J. P. (2010). Robowarfare: Can robots be more ethical than humans on the battlefield? Ethics and Information Technology, 12(3), 263–275.
Suzuki, Y., Galli, L., Ikeda, A., Itakura, S., & Kitazaki, M. (2015). Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports, 5(1), 15924. doi:10.1038/srep15924.
Turing, A. (1999). Computing machinery and intelligence. In P. A. Meyer (Ed.), Computer media and communication: A reader (pp. 37–58). Oxford: Oxford University Press.
van de Poel, I., Nihlén Fahlquist, J., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
Verbeek, P. P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.
Wagenaar, W. A., & Groenewegen, J. (1987). Accidents at sea: Multiple causes and impossible consequences. International Journal of Man-Machine Studies, 27, 587–598.
Wallach, W. (2015). A dangerous master: How to keep technology from slipping beyond our control. New York: Basic Books.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Wiener, N. (1988). The human use of human beings: Cybernetics and society. Boston: Da Capo Press.
Winner, L. (1977). Autonomous technology: Technics-out-of-control as a theme in political thought. Cambridge, MA: MIT Press.
Winograd, T. (1990). Thinking machines: Can there be? Are we? In D. Partridge & Y. Wilks (Eds.), The foundations of artificial intelligence: A sourcebook (pp. 167–189). Cambridge: Cambridge University Press.
Žižek, S. (2006). Philosophy, the “Unknown Knowns,” and the public use of reason. Topoi, 25(1–2), 137–142.