Science Fiction and Artificial Intelligence
Author
Ryan Browne
Supervisor
John Beck
Word count: 10146
Contents
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . page 3
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .page 41
Introduction
The field of artificial intelligence, as defined by Stuart J. Russell and Peter Norvig,
‘attempts to understand intelligent entities’;1 that is, ‘anything that can be viewed as
perceiving its environment through sensors and acting upon that environment
through effectors’.2 In other words, the field is concerned with creating machines
that can think and interact with the world - computers which are able to exhibit the
behaviours commonly associated with “being human”. In the 20th century, a variety
of authors and filmmakers imagined such machines, taking influence from stories of inanimate objects becoming animate and
developing cognition which are ingrained in Western culture. 3 Authors such as Isaac
Asimov and Philip K. Dick, as well as the director Ridley Scott, have depicted cultural
anxieties which arise from the technology. In Asimov’s detective novel The Caves of Steel
(1953), anti-robot attitudes are widespread amongst Earth’s population, and in the
short-story collection I, Robot (1950), robots elicit scepticism and suspicion from scientists and researchers as his “Three
Laws of Robotics” (“laws” programmed into robots to prevent them harming humans)
are explored and deconstructed throughout the course of the short stories. And in Blade
Runner (1982), the primal fear of technology created by humans, turning on its
creators in revenge, is elaborately illustrated. This dissertation will examine these
fears and criticisms of AI which seem embedded in our culture and literature; as well
as in the scientific world, in which AI has increasingly become the subject of scrutiny
from researchers and inventors concerned about the negative impact the technology
might have. In the first chapter, I shall consider the extent to which the
depiction of AI parallels the treatment of marginalised groups in society (such as women and ethnic minorities), and will argue that the
uncertainty and fear surrounding AI partially results from a society which perpetuates
a hierarchical power imbalance. In the second chapter, I will consider the impact of
the question of machine consciousness (and of whether consciousness is
exclusive to humans). The philosopher John Searle, in his Chinese Room analogy,
attempts to refute the idea that machines can be conscious, an idea which is
reflected in many of the texts which will be explored in this dissertation. It will be
argued that this scepticism underlies the prejudice faced by AI; that is, the fear of the “posthuman” - a being which exists in a
form unattainable by humans. In the final chapter, I will discuss how far our species has come in
developing machines in our own image, acknowledging the history behind AI as well
as the mythological and literary influences which led to its foundation. I will
subsequently argue that, considering the advances which have been made in
pattern-recognition and social awareness for AI, fears that it will bring about our
downfall may be premature.
Chapter I:
Before dealing with the deeper philosophy and science behind artificial intelligence -
which accounts for a lot of the cultural anxiety surrounding the technology - it must
be recognised that machines with intelligence are often portrayed as a metaphor for
the “Other” (the marginalised societal outsider). Much of the literature on which this
dissertation focuses, for instance, was written at a time in which subordination
and discrimination based on gender, race and sexuality were heavily protested
against. In fact, all of the Asimov stories which will be analysed predate the Civil
Rights Act of 1964, which ended public segregation and employment discrimination.
However, to clarify, that is not to say that my aim is to reduce the significance of AI in
these texts to mere allegory (that is, to the
proposition that society is suspicious of AI in the same way that it makes women and
ethnic minorities suspects through prejudice). Instead, I want to make apparent the
argument that the uncertainty and suspicion surrounding AI can be attributed in part
to the same hierarchical structures which marginalise these groups.
Asimov’s The Caves of Steel is the first novel in the robot series (excluding I,
Robot, which is a collection of short stories), and it offers some unique analysis of AI
implemented on a wide scale (similar to the modern development of “self-service” machines), all the
while portraying the underlying discriminative attitudes which robots face. As the
novel establishes, the Cities have reached a point at which the ‘trained men’ to perform certain jobs ‘don’t exist,’ and are inevitably being
replaced with R’s (robots).5 Earth has become a desolate landscape, in contrast with
the more elegant, wealthy “Spacer” worlds (the Spacers have been integrating
robots with City dwellers, exacerbating tensions between the two). The suspicions
held by earth’s inhabitants have, on the surface, a fathomable basis: a social climate
in which everyone ‘stands the chance of being out of a job’ inevitably leads to
prejudice towards the Other, the suspected “group” who are seen to be taking jobs
away from humans.
The novel’s protagonist Elijah Baley is far from neutral on the matter, and is
wary of R. Daneel Olivaw, the Spacer robot with
which Elijah is partnered for a homicide case, and who essentially adopts his
occupation as detective. In the first encounter with R. Daneel, Lije remarks that ‘“you
don’t look like a robot”’; R. Daneel responds, asking whether this disturbs Lije, who
retorts, ‘“It shouldn’t, I suppose, Da - Daneel. Are they all like you on your world?”’ 8
This interaction - particularly Lije’s reaction - implies the problematic nature of the
uncanny valley, the idea that ‘a person’s response to a humanlike robot would
abruptly shift from empathy to revulsion as it approached, but failed to attain, a
lifelike appearance.’9 R. Daneel does not only double Lije and his profession (in the
sense of the uncanny “doppelganger”), but humanity. The stark difference which Lije
observes in the Spacer model consists in the fact that he is so similar to humans;
5 Ibid, p. 16
6 Ibid, p. 16
7 James 1990, p. 40
8 The Caves of Steel, p. 26
9 Mori 2012, p. 98
Lije assumes the robot to be a human Spacer until R. Daneel reveals his name (the
‘R’ initial signifying robot). The robot becomes both the protagonist’s doppelganger
and an uncanny double of the human itself.10 Lije’s encounter with the uncanny, arising from the duality of
R. Daneel (a form of sameness/otherness), and the doubling of his own human body and identity,
constitutes the ‘SF encounter with alterity [or otherness] in its most suggestive locus’.11 Adam
Roberts describes technology as something we are at once familiar with and
‘estranged from; familiar because it plays so large a part in our life, estranged from
because we don’t really know how it works or what the boffins are about to invent
next.’12
Our familiarity with the human Other due to a resemblance in physiology and DNA
seems to mirror this familiarity with technology, embodied here as the artificially
intelligent robot: a technology both ubiquitous in the applications of modern life and yet uncanny in its resemblance
to the human.
The two main protagonists of The Caves of Steel are comparable in many
aspects with similar figures in the film Blade Runner. Rick Deckard is the “anti-robot
extremist” and the replicants are bioengineered humanoid androids like R. Daneel -
only, in this case, they have become rogue androids, committing “mutiny” by
murdering their owners and escaping an off-world planet in search of extended life
on earth. A basic ground for reading the replicants as the metaphorical
“Other” is the subordination of and discrimination against them in the film. Although
they are far from an innocent depiction of the technology, it is not unreasonable to
sympathise with them. Of the six replicants which attempted to escape the off-world
planet, only four actually escaped and survived. All four of these replicants are
introduced through their commodified functions: Leon Kowalski, a
loader, ‘can lift four-hundred-pound atomic loads all day and night’; Roy Batty (the
‘leader’), a combat model, has ‘optimum self-sufficiency’; Zhora Salome, who was
trained for murder squads, is referred to as ‘beauty and the beast’; and Pris Stratton,
a ‘basic pleasure model,’ is ‘the standard item for military clubs in the outer
colonies.’13 As Roy expresses to Rick, ‘quite an experience to live in fear, isn’t it? That’s what it is to
be a slave.’14 The replicants’ position as escaped slaves
echoes Adam Roberts’ comments about the familiar and unfamiliar in our encounter
with artificial intelligence in SF. Roberts argues that replicants embody the “Other” in
the sense of our ‘interaction with commodities’ and the human ‘fascination with and suspicion of the
power of things, things that...almost acquire a life of their own.’ 15 This also evokes
the Freudian image of the “uncanny” in the form of the automata - the ‘intellectual
uncertainty between animate and inanimate.’16 It is arguable that the awareness the
replicants have of their own commodification
(unique in that they embody the animation of the inanimate) causes them to lash out
against those who treat them as
13 Blade Runner.
14 Ibid.
15 Roberts, p. 151
16 Liu 2011, p. 207
“items” and “objects” of desire, in order to perpetuate a system in which robots and
androids remain subordinate. Zhora, for instance, is objectified by
Deckard as he quizzes her in her dressing room. Deckard claims to be ‘from the
Confidential Committee on Moral Abuses,’ implying the real moral
abuses the androids have been subjected to at the
hands of humans in off-world colonies.17 Zhora then asks Deckard to dry her,
proceeding to kick him from behind. Whilst it is certainly plausible that she
suspected him of being employed to track her down, there seems to be a voyeuristic
nature to the way the scene is directed. Harrison Ford enhances this voyeuristic role
play by putting on the voice of a “sleazeball”, and trying to get into her dressing
room. He says to her, ‘I’d like to check your dressing room...er, for holes...you’d be
surprised what a guy would go through to glimpse at a beautiful body.’18 The
corresponding scene in the novel which inspired the film also indicates a sense of voyeurism, as Rick is
referred to as a ‘sexual deviant’ there.19 Zhora appears to be lashing out at the
limited human construction of gender here, and she is not the only replicant that
exploits it: Pris manipulates the genetic designer J. F. Sebastian in order to
locate the Nexus 6’s creator, Eldon Tyrell. As the mutated technician revels in the
intellect and power of the Nexus 6 replicants, Pris teases him, paraphrasing
Descartes - ‘I think, Sebastian, therefore I am’ - and turning a
cartwheel backwards. He gives in to the demands of Roy and Pris after the latter
kisses him on the cheek and says ‘I don’t think there’s another human being in the
world who would have helped us.’ 20 By playing into what Judith Butler calls the ‘tacit
17 Blade Runner
18 Ibid
19 Do Androids Dream of Electric Sheep?, p. 84
20 Ibid
collective agreement to perform, produce, and sustain discrete and polar genders as
cultural fictions’, Pris attempts to
secure longer life for herself and her fellow replicants. By tapping into existing
imbalances in power between the sexes, Blade Runner conveys to the audience that
the replicants’ Otherness is inseparable from existing social hierarchies. A racial reading likewise
encompasses The Caves of Steel’s environment; it has been argued that ‘the android functions
as SF’s contribution to the race debate … It is guilt for the Negroes, the Indians, the
Jews, the Vietnamese, the peoples of South America and mankind’s rape of weaker
individuals that comes back in the android’.21 It is suggested that the robot is a
return of repressed guilt within the postcolonial psyche. The robots are abject in that they resummon the ‘immemorial
violence with which a body becomes separated from another body in order to be’; 22
the monstrosity of AI lies in its separateness from the human, as a creation of human
labour. Edward James argues that this separateness of robots from the human, and
the “human superiority” which accompanies it, is not entirely different to whites
separating themselves from blacks and assuming superiority in their own race. He
notes the fact that R. Daneel stands in direct contrast to the less intelligent, more
immature R. Sammy, a robot who ‘“shuffles his feet”, with a vacuous grin on his face,
in clear parody of the stereotypical black’. 23 R. Daneel on the other hand represents
‘the constant fear of the racist - the light-skinned black who might “pass” for [being]
white.’24 The comparison is bold, but it would account for the extreme prejudice faced
by the robots in the novel, and the fact that R. Daneel’s occupation involves less
menial labour than that of robots like R. Sammy.
Despite this, James’ conclusion contains gaps, and it is not truly representative
of events which occur in the novel. For instance, what James does not consider is
the fact that robots have human creators, and so are not racially separate from
humans. As Baley assumes R. Daneel (whose design is based on his creator, Dr.
Roj Nemennuh Sarton) to actually be Dr. Sarton, it becomes evident that the novel
resists any straightforward metaphorical coding for blackness, ‘but the more interesting representations make’
the racial analogy far less stable.25 The novel acknowledges that the dislike ‘for robots’ is ‘something quite irrational’ in that it is rooted in a deep cultural suspicion
of the machine.26 Lije himself finds it
difficult to accept that R. Daneel is a robot, disputing it on the grounds that no robot
could appear so convincingly human,27 and is forced
to admit that the reason for his added suspicion is that he had seen R. Daneel’s
artificial genitalia ‘in the Personal.’28 The overarching “threat” which AI poses to its
creators raises, in Scott Bukatman’s words, ‘ontological questions regarding the status and power of the human.’29 As Lije is
confronted with a robot which so uncannily resembles its creator, and humanity as a
whole, his reaction recalls Mori’s account of revulsion towards that which has ‘approached, but
failed to attain, a lifelike appearance.’30 His attitude is not entirely comparable to that
25 Roberts, p. 95
26 The Caves of Steel, p. 70-1
27 Ibid, p. 81
28 Ibid.
29 Bukatman 1993, p. 2
30 Mori, p. 98
of racial oppression; rather, it is more characteristic of a wider societal suspicion of
the machine Other. In Blade Runner, the replicants are similarly ‘used by
humans, as “slave labor” in the “hazardous exploration and colonization” of the new
frontier.’31 The audience is told in the film’s opening that a ‘bloody mutiny’ in an off-
world colony results in replicants being ‘declared illegal on earth - under penalty of
death’32 (the prejudice towards androids can be sourced from the hell of being on earth, as
opposed to the off-world colonies, where a ‘new life awaits you...the chance to begin
again in a golden land of opportunity and adventure’).33 When blade runners shoot
trespassing replicants on sight, the film tells us that this was ‘not called execution. It
was called retirement.’34 This phraseology immediately implies the racial Otherness
inherent to the portrayal of robots and androids in SF: an exception is made for
androids, who, when killed, are “retired” rather than murdered or executed.
Brian Locke argues that the attitudes of police officers in the film are not entirely
dissimilar to those of racist police officers towards black people within the historical
narrative of the civil rights movement. He notes that, in the original 1982 version of
the film, Deckard’s voiceover describes Bryant thus: ‘“[i]n
history books, he’s the kind of cop who used to call black men ‘niggers’.”’35
31 Locke, p. 103
32 Blade Runner
33 Ibid
34 Ibid
35 Locke 2009, p. 103
For Locke, Bryant represents the ‘Southern redneck cop’ whereas Deckard
embodies the ‘nonracist white one.’36 However, again, I want to avoid reducing Blade
Runner (as Locke does) to a film which is an entirely allegorical representation of the
‘all-known tradition of African American history and slave narratives.’ 37 The film itself
problematises this view: the most powerful Nexus-6 model, Roy Batty, for instance,
has white blonde hair and blue eyes. Philip K. Dick, who of course wrote the source
material for the film, regarded Rutger Hauer (who played Batty) as ‘the perfect Batty
- cold, Aryan, flawless’.38 A racial parameter is certainly suggested here, but the
power play is not the simple binary of white/black; the “Otherness” of artificial
intelligence lies more broadly in the human impulse to subordinate
and exploit AI. It is arguable that Batty represents the controversial peak of humanity
which the endeavour of human genetic engineering and the modification of the
human germline “aspires” towards (John H. Evans notes that ‘the ethics of HGE’
are haunted by the history of eugenics,
which does reinforce a sense of racial Otherness here).39 However Batty is clearly an
abject rather than desirable product of genetically engineered AI; the pure fact that
he is (like R. Daneel) an uncanny duplicate of the human, and his mind an uncanny
duplicate of human intelligence, leads one to the conclusion that there is something
other than racial Otherness at work in the cultural anxiety towards AI.
This can be identified in The Caves of Steel, as R. Daneel proves to Lije that
he is an artificial human, rather than the robotics doctor which Lije assumes him to
be:
36 Ibid
37 Ibid
38 Sammon 2011, p. 284
39 Evans 2012, p. 6
‘the diamagnetic seam fell apart the entire length of his arm. A smooth, sinewy, and
apparently entirely human limb lay exposed...R. Daneel pinched the ball of his right
middle finger with the thumb and forefinger of his left hand...just as the fabric of the
sleeve had fallen in two when the diamagnetic field of its seam had been interrupted,
so now the arm itself fell in two. There, under a thin layer of fleshlike material, was
the dull blue gray of stainless steel rods, cords, and joints.’40
One is again reminded of the uncanny valley, and the perturbation elicited from the
human when confronted with an artificial being which displays an “apparently entirely
human limb” which is then revealed to be “stainless steel rods, cords, and joints”.
This also embodies Bukatman’s idea that SF texts ‘stage the return of the repressed’
- the return of ‘the flesh’.41 Lije is reluctant to accept that a being so closely similar to him in its
physical form can be genuinely
intelligent. This attitude inevitably makes R. Daneel a suspect up until the moment in
which Lije and R. Daneel discover the murderer. In a society in which the machine
Other attempts to resemble the human - and, in turn, enable the sense of “familiarity”
- the inevitable response from humans is that of the uncanny valley. Consequently,
humans are able to degrade the status of AI, creating an inherently unequal power
structure.
To recapitulate, the purpose of this chapter is not to argue that the cultural
prejudice faced by AI is equivalent to that
faced by women and minorities. But it is important to identify the ways in which
historical prejudices have given rise to the many concerns raised over robots and
androids in both fiction and now in reality. In both The Caves of Steel and Blade
Runner, the status of the robot is consistently placed
beneath that of human (for instance, despite the fact that R. Daneel is a detective
like Lije, the latter always finds difficulty in trusting the former). And in reality, the
question has already been raised as to whether ‘intelligent machines [should] have
rights’, and of how we are to regard
“posthumans”. And a significant cause for this seems to be the ostensible “loss” of a
uniquely human consciousness.
Chapter II:
‘At the waterfall. When we see a waterfall, we think we see freedom of will and
choice in the innumerable turnings, windings, breakings of the waves; but everything
is necessary; each movement can be calculated mathematically. Thus it is with
human actions; if one were omniscient, one would be able to calculate each
individual action in advance, each step in the progress of knowledge, each error,
each act of malice.’43
It is clear that a large portion of cultural anxiety towards artificial intelligence lies in
the tendency of society to diminish and marginalise the power and existence of
“outsiders”. The first chapter has demonstrated that many of the issues surrounding
AI are rooted in prejudices which persist
in human history and contemporary society. But there is a deeper and perhaps more
pertinent problem at hand. It is rarely considered by critics that the main cause for
this anxiety lies elsewhere: it is the
encasement of the human in flesh and bone - possessing the coveted, privileged,
near-sacred entity of “consciousness” - that really stirs the debate on AI, and accounts
for much of the hostility towards the technology.
43 Nietzsche 1984 (originally 1878), p. 74. Nietzsche’s aphorism almost mechanises the human,
arguing that our actions are necessitated, in contrast with the humanist idea that we are endowed with
autonomy.
Isaac Asimov’s I, Robot presents an anthological universe in which the reader
gains an insight into the various complexities raised by his AI machines and the
fundamental “Three Laws of Robotics”. Asimov states these very clearly at the start:
‘1 - A robot may not injure a human being, or, through inaction, allow a human being
to come to harm.
2 - A robot must obey the orders given it by human beings except where such orders
would conflict with the First Law.
3 - A robot must protect its own existence as long as such protection does not conflict
with the First or Second Law.’
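The strict priority which the laws impose - each law yielding only to those above it - can be sketched computationally. The following is a hypothetical illustration of my own; the predicates and the lexicographic selection are a modelling convenience, not anything Asimov formalises:

```python
from dataclasses import dataclass

# A toy model of the Three Laws as a strict priority ordering.
# The Action type and its boolean predicates are invented for illustration.

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose_action(candidates):
    # Lexicographic comparison of booleans (False < True): a First Law
    # violation outweighs any Second or Third Law violation, and a Second
    # Law violation outweighs any Third Law violation.
    return min(candidates,
               key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))

# A robot ordered into danger must obey: the Second Law outranks the Third.
options = [
    Action("obey the order", harms_human=False, disobeys_order=False, endangers_self=True),
    Action("preserve itself", harms_human=False, disobeys_order=True, endangers_self=False),
]
print(choose_action(options).name)  # obey the order
```

Notably, in the story “Runaround” the laws behave more like weighted potentials than absolute priorities - Speedy circles in equilibrium between a casually given order and a strengthened Third Law - so a strict ordering of this kind is only a first approximation.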
In short, the machines fundamentally cannot allow a human to come to harm
(although this principle, naturally, faces various problems throughout the stories). In light of these
laws, the robots are bound to a strict imitation of the behaviours associated with being human (the laws are almost like a robot’s
conscience). N. Katherine Hayles problematises
the state of being human, arguing that when a subject is ‘defined by information
flows and feedback loops rather than epidermal surfaces, the subject becomes a
system to be assembled and disassembled rather than an entity whose organic
wholeness can be assumed.’45 This cause for anxiety is realised in the story
“Reason”, as scientists Gregory Powell and Mike Donovan attempt to prove to the
highly intelligent, self-aware robot Cutie (or “QT”) that he was created by humans.
Cutie is unconvinced, so they build a robot right in front of him; Asimov’s narrator
describes the demonstration:
‘Donovan uncapped the tightly sealed container and from the oil bath within he
withdrew a second cube. Opening this in turn, he removed a globe from its sponge-
rubber casing. He handled it gingerly, for it was the most complicated mechanism
ever created by man. Inside the thin platinum-plated “skin” of the globe was a
positronic brain, in whose delicately unstable structure were enforced calculated
neuronic paths, which imbued each robot with what amounted to a pre-natal
education.’46
This description of the positronic brain significantly reduces both the anthropocentric
value and importance placed on the human brain and the “consciousness” which it is
assumed to house. By rendering the brain in terms of
information flows and feedback loops - that is, as something which operates as a
simulation - and with language which subverts the “norm” of the human anatomy
(e.g. the sponge rubber casing, platinum-plated skin, and the positronic brain itself),
Asimov depicts a posthuman condition which has
abandoned the idea of the ‘unique, self-regulating and intrinsically moral powers of
human reason’.47 The society of Dick’s Do Androids Dream of Electric Sheep? is likewise founded upon a notion of “empathy”, and uses
the “Voigt-Kampff” test to discern between human and android, measuring their
involuntary empathic responses to emotionally provocative questions. Yet many of
the novel’s humans themselves rely on mood-
46 I, Robot, p. 69
47 Braidotti 2013, p. 13
altering devices called “mood organs” to remain happy and some even require
“empathy boxes” to enter into a collective hallucination in which users are connected
with one another through the vehicle of the mysterious Wilbur Mercer’s mind (Mercer
resembles a prophet in the novel). The empathy box makes its first appearance as J.
F. Isidore (J. F. Sebastian in the film adaptation) “crosses over,” ‘in the usual
perplexing fashion’, into a fusion with Mercer.
However, he is not the only human character in the novel who seems to lack or
otherwise miscomprehend empathy. Rick Deckard and his wife Iran both exhibit a
lack of empathy and deficiency in emotions. As the novel is introduced, the two enter
into a spat, and Rick wonders whether to dial ‘for a thalamic suppressant’ at his
console ‘(which would abolish his mood of rage) or a thalamic stimulant (which would
make him irked enough to win the argument.)’ 49 In this instance it appears that the
two are so far detached from their human emotions that they require technology to
“remind them” of the human feelings of anger, sadness, joy, and so forth. Moreover,
Rick’s own “empathy” is questionable, having ‘never felt any empathy on his own
part toward the androids he killed.’50 Confronted with the thought of the Nexus-6 android
Luba Luft’s death, however, he asks himself how he can regard her as a mere
‘construct?...Something that only pretends to be alive? But Luba Luft had seemed
genuinely alive; it had not worn the aspect of a simulation.’ 51 The protagonist’s
dilemma not only reflects a crisis of humanity which is persistently redefining itself (in
this case empathy is redefined, in the way that non-human animals provoke the most
significant empathic response, but androids - seen as lesser than both humans and
non-human animals - are barely even granted any empathy); it also implies the crisis
of humanity’s exclusive claim to consciousness.
Donovan and Powell often arrogantly dismiss the robots, always implying that
‘“they’re (just) robots...They’ve got positronic brains: primitive, of course.”’ 52 But this
particular dismissal is soon disrupted by a confusion over the behaviour of the rogue
(albeit harmless) robot Speedy, whom they are trying to trace. The robot says, ‘“Hot
dog, let’s play games. You catch me and I catch you; no love can cut our knife in
two. For I’m Little Buttercup, sweet Little Buttercup. Whoops!”’ The robot seems to
be either drunk or simulating the human experience of being drunk. 53 Upon noticing
this erratic behaviour, Mike Donovan - baffled - says to Greg Powell, ‘“Say, Greg,
he… he’s drunk or something.”’54 But Powell is hasty in his dismissal of the
behaviour, arguing that the robot ‘“isn’t drunk - not in the human sense”’;55 but
his dismissal assumes precisely what is in question. The scene instead illustrates the boundaries ‘of the natural and the cultural’ being displaced, and ‘the effects of scientific and
technological advances’ blurring the line between human and machine behaviour.56
52 I, Robot, p. 34
53 Ibid, p. 36
54 Ibid
55 Ibid
56 Braidotti, p. 3
The possibility of machine consciousness has been contested by
John Searle several times, but most famously in his paper ‘Minds, Brains, and
Programs’. Searle attempts to refute what he calls “strong AI”, ‘the appropriately
programmed computer [which] really is a mind, in the sense that computers given
the right programs can be literally said to understand and have other cognitive
states.’57 He uses the analogy of the “Chinese room” to demonstrate the supposed
falsity of strong AI, outlining a scenario in which ‘I’m locked in a room and given a
large batch of Chinese writing...I know no Chinese, either written or spoken’. 58 In the
room, the individual who is illiterate in Chinese is given instructions that enable him
to correlate the Chinese symbols with responses; outside of the room is someone fluent
in Chinese who passes written questions in. The individual inside understands nothing of what he receives
but, with the accompanying instructions, is able to respond and give the impression
of understanding Chinese. For Searle, a computer program does no more than this - just as Speedy recites verse without comprehension,
giving Mike Donovan the impression that its behaviour is human. But Searle fails to
recognise that he makes the same categorical error of assuming whether another
entity is exhibiting “real” consciousness or not, which Powell makes in his dismissal
of Speedy’s abnormal actions. As Daniel Dennett points out, ‘if Searle is right, it is
possible that all people are zombies who would
nevertheless insist they are conscious. It is impossible to know whether we are all
zombies or not. Even if we are all zombies, we would still believe that we are not.’60
Both Searle and Powell assume that they know consciousness and, subsequently,
that they can know who is a thinking, conscious human and who is not.
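Searle’s room, stripped of its staging, is a bare lookup procedure. The sketch below is a deliberately crude model of my own; the symbols and the rule book are invented for illustration:

```python
# A toy Chinese Room: replies are produced purely by matching the shape
# of the input symbols against a rule book. Nothing in the procedure
# encodes what any symbol means; the entries here are invented examples.
RULE_BOOK = {
    "你好吗？": "我很好。",   # "How are you?" -> "I am well." (unknown to the occupant)
    "再见。": "再见。",       # "Goodbye." -> "Goodbye."
}

def occupant_reply(symbols: str) -> str:
    # Pure symbol manipulation: look up the input, emit the listed output.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please say that again."

print(occupant_reply("你好吗？"))  # 我很好。
```

To the fluent interlocutor outside, such replies may pass for understanding; Searle’s claim is that nothing in the procedure understands, and it is precisely this intuition that his critics contest.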
This error in thought soon breaks down into the endemic prejudice which
exists in the debate over AI in both literary and scientific circles - namely that a
human can judge from the outside which minds are genuine. The same error is committed
by Rick Deckard in Dick’s novel, as the protagonist interviews Rachael Rosen (an
android, although this is not yet known at this point in the novel) for the Voigt-Kampff
test. Giving her a series of hypothetical scenarios to test her for an empathic
response, Rick details a lobster being boiled by a chef, to which she responds,
crying ‘“[t]hat’s awful! Did they really do that?”’ On the Voigt-Kampff machine,
however, her empathic response registers as ‘simulated.’61 Both the mechanism of the Voigt-Kampff and Dick’s free indirect
discourse expose the patronising human attitude towards AI. After all, what differentiates the “simulation”
from the genuine response? We know that Rachael is a highly
intelligent android and that she is simulating the mechanisms of the human brain,
including empathy with the lobster. And the widely held objection holds that a machine does not
really think, but can merely ‘simulate it.”’62 But the question is: what makes this any different from the processes
of the human mind? As Marcus du Sautoy asks:
‘If a computer’s actually following instructions, it isn’t really thinking is it? But, then
again, what is my mind doing when I’m actually articulating words now? It’s following
a set of instructions. So, it really raises the question: where’s the threshold here? At
what point can we say that a machine is really genuinely understanding and
thinking?’63
Of course, it is perfectly arguable that what Rachael Rosen’s mind is doing isn’t
necessarily the absolute equivalent to what Rick Deckard’s is. However, to insist that
machines cannot think at all, as Powell and
Searle have done, is indicative of the aversion of humans to the idea of machines
possessing
intelligence, as it is clear that the two at least operate on a different basis. The
conventional machine operates
on a very linear, goal-oriented basis. This appears to be the more conventional idea
of AI: a machine which cannot perform anything which
has not been programmed into it by a human. This does not mean that a machine
can only perform actions which it has been commanded to do; nor does it mean that
a machine cannot learn in the sense of machine learning; rather, it means that
machines are limited by a set of codes and algorithms which have been programmed
into them by humans (developments in machine learning
complicate this, but this shall be explored later). In the story “Liar!”, the robot Herbie
is able to read minds (the result of an accident in its
manufacturing). But, using its abilities, the robot lies to robopsychologist Dr. Susan
Calvin, telling her that a colleague is attracted to her, and to Dr. Peter Bogert, tricking
him into thinking that Dr. Alfred Lanning is about to resign as Director of Research,
and that he will be the most likely successor. Herbie does not lie to the scientists
intentionally however; he explains that he is ‘“a machine, given the imitation of life
63 Du Sautoy 2015.
only by virtue of the positronic interplay in my brain - which is man’s device.”’ 64 What
the robot means of course is that it is limited fundamentally by the three laws of
robotics. It is not aware of the moral consequences of lying, and lies to the scientists
as a means of preventing them from the mental harm which the truth would have
caused. It has been observed that there is so much room for interpretation
‘within these laws’ that they inevitably result in ‘strange and counterintuitive robot
behaviors.’65 Humans - while far from autonomous creatures - are still less restricted
than this (they can weigh laws against one another, defy norms of
behaviour, and so on). In the story, and in reality, human and computational minds
differ in the way that they process experiences and interpret them logically. Herbie
treats the situation with a rational approach on the basis of the fundamental laws of
robotics: he must not, through inaction, ‘allow a human being to come to harm.’ 66 But
his lie was unfathomable in the human sense, in that its attempt to be moral was in
fact immoral. This is not to say, however, that the distinction between
human minds and AI minds entitles a human to consider their intelligence superior
to that of the machine
mind. As Daniel M. Wegner argues - refuting the idea that we are autonomous -
‘[t]he experience of conscious will is a marvelous trick of the mind...but it is not the
foundation for an explanatory system that stands outside the paths of deterministic
causation.’67 We are bound in the same way that computers are, by our own internal
processes of deterministic causation. There is thus little ground for assuming that human
minds are superior to a robot’s, a prejudice acquired from the instinctual fear
64 I, Robot, p. 122
65 ‘Why Asimov’s Laws Can’t Protect Us’
66 I, Robot, ‘The Three Laws of Robotics’
67 Wegner, 2003, p. 68
of the posthuman and the symbolic “loss” of what supposedly puts mankind on a
pedestal. In Blade Runner, it is notable
how this is juxtaposed with the androids, which appear to be closer to the “authentic”
human. The Tyrell Corporation (Blade Runner’s translation of the Rosen Association,
which creates the androids) portrays this idea well, as Eldon Tyrell refers to his
androids as ‘more human than human’.68 But what really seems to be “human” about
the androids, if we are to consider the human in a classical humanist sense, is their
ability to be arbitrary, to break apart from their programming to serve humans; they
‘learn to feel emotion and [ostensibly] begin to act of their own free will’.69 The brains
of the Nexus 6 models - unlike the positronic brains of Asimov’s robots - are
‘“capable of selecting within a field of two trillion constituents, or ten million separate
neural pathways.”’70 In the beginning of the novel, too, Bryant warns Deckard that ‘“a
small class of human beings could not pass the Voigt-Kampff scale. If you tested
them in line with police work you’d assess them as humanoid robots.”’ 71 The issues
which arise here are manifold: on the one hand, it would seem that the androids
possess some ability to choose, or at least to override certain protocols and betray
humans; on the other, the androids are almost indistinguishable from some humans
(who are referred to as “schizoids” in the novel) due to what qualifies as “empathy” in
68 Blade Runner
69 Paterek 2005.
70 Do Androids Dream of Electric Sheep?, p. 22
71 Ibid, p. 22
Dick’s dystopia;72 and, extending from this, the categorisation of humanness in the
novel leads one to question what being human means and whether the ability to
choose can be considered a “human” trait. However - in regard to the first issue -
the presumptive arbitrariness of the androids could be accounted for by the causal
necessity of their lifespan. We discover that, ‘if [Rick] failed to get [the androids],
someone else would. Time and tide,’ a nod to the four-year lifespan to which the androids are
fated.73 In addition to this, he wonders, ‘[d]o androids dream? … Evidently; that’s
why they occasionally kill their employers and flee here. A better life, without
servitude.’74 The androids are seemingly driven by a force which is out of their
control; they emulate the more human biological will to live, but to live a better life.
human and android, blurring our perception of what being human or inhuman really
means. Perhaps Dick’s novel is merely illustrating to his reader that it does not
necessarily matter whether humans or androids are autonomous; rather, Dick was
research which accompanied it.75 Scepticism and suspicion towards AI had already
begun, and the fictional examples of both Asimov and Dick seem to directly respond
to that. Searle’s theoretical rebuttal, along with the scepticism of fictional characters
machines all arise out of a profound uncertainty about our own humanity. As the
boundaries between human and machine become increasingly blurred, and
research and fiction, the natural response appears to be to treat this with distrust.
72 Ibid, p. 30
73 Ibid, p. 146
74 Ibid, p. 145
75 The Dartmouth Conference of 1956 ‘is generally recognized as the official birthdate of the new
science.’ Crevier, p. 49
Chapter III:
‘“How could there have been stories about space travel before-” “The writers,” Pris
‘The time will come when [you] will...be glad to put your arm around the computer
and say, “Hello, friend,” because it will be helping you do a great many things you
Now that it has been assessed that our cultural fear towards AI can also be traced to
our suspicion towards the posthuman, and the symbolic “loss” of the sanctity of the
human mind, one final question still lingers: are any of these anxieties grounded in
fiction and science - a real possibility? And if so, should it then be feared? There
certainly seems to be a consensus on the notion that the basics of AI (or soft AI)
have been achieved, and are ‘a present-day reality.’ 78 That reality consists of
humans being ‘overloaded with information,’ whilst computers are able to process
learning algorithms to give computers ‘“the ability to learn without being explicitly
programmed.”’80 Even operating systems and web browsers are increasingly being
76 Do Androids Dream of Electric Sheep?, p. 119-20
77 Asimov 1985, p. 70
78 Calo 2011.
79 Negnevitsky 2011, p. 365
80 Simon 2013 (quoting Arthur Samuel, who coined the term “machine learning” in 1959), p. 89
given the ability to differentiate ‘between what a human says and what a human
impressive, and it certainly echoes examples of AI from the selected texts of this
cogito ergo sum (as the robot QT-1, or ‘Cutie’, ponders that ‘“I, myself, exist, because I think”’) the
idea that Cutie is actually thinking is dismissed.82 But it is beyond doubt that the
aforementioned characteristics for “strong AI”. This final chapter seeks to establish
how far humans have come in the quest to create thinking machines like those
Up until 1950, notions of automata and artificially intelligent beings were more
or less exclusively kept within the domain of myth and fiction. Pamela McCorduck
argues that the desire to create machines which exhibit artificial intelligence and
us...Back and forth between myth and reality, our imaginations supplying what our
workshops couldn’t, we have engaged for a long time in this odd form of self-
reproduction.’83
81 Saenz 2010.
82 I, Robot, p. 59
83 McCorduck 2004, p. 3
McCorduck cites the ‘assorted automata from the workshops of the Greek god
modern-day literary and scientific inquiries into artificial intelligence. 84 The idea of the
question its own existence, asking ‘Who was I? What was I? Whence did I come?
What was my destination?’85 - and this itself spurred a variety of science fiction
written about robots and androids throughout the 20th century. But by the middle of
the century, the myths and legends of automatons served as the catalyst for
something tangible. In 1950, Alan Turing proposed the Turing Test as a means to give
a human the impression that a machine was exhibiting intelligent behaviour: ‘the test
and passes the test if the interrogator cannot tell if there is a computer or a human at
the other end.’86 As is suggested by Pris’ reflection on ‘stories about space travel’ - a
case for the idea that fiction itself served as an influence for a lot of the scientific
have cited Asimov as a key influence in their practice, including AI pioneer Marvin
Minsky who was ‘entranced by his stories about space and time...the ideas about
robots affected [him] most.’88 Minsky even acted as an AI advisor for the behaviour of
84 McCorduck, p. xxiii
85 Frankenstein, p. 99
86 Russell & Norvig 1995, p. 5
87 Chapter 3, first epigraph.
88 Markoff 1992.
89 Merriman 2016.
and literature; developments in AI research ‘paralleled and influenced the
emergence of a literature’ devoted to machines that ‘imitate not only our muscular
actions but the very actions of our minds.’ 90 And this interest in “thinking” machines
now exists as an exploding field of research; the mere transition from science fiction
to science (SF is ‘science fiction as well as science fiction’) demonstrates that the
These historical interactions between myth, fiction and science all seem
teleologically pointed towards an actualised reality of AI. But despite this, a great
deal of scepticism and anxiety pervades this history as well. In I, Robot, Dr. Susan
about the impact the technology will have on society. And in Do Androids Dream of
Electric Sheep? the Voigt-Kampff test serves as a cautionary measure to find and
security).92 Calvin exhibits a growing concern over the robots in Asimov’s stories. As
was mentioned in the second chapter, the robot Herbie lies to the robopsychologist
‘The psychologist paid no attention. “You must tell them, but if you do, you hurt, so
you mustn’t, but if you don’t, you hurt, so you must; but-” And Herbie screamed! It
was like the whistling of a piccolo many times magnified...Herbie collapsed into a
Calvin’s cruelty towards the robot is not, however, an act in isolation. It is merely an
causing harm towards humans, something which is generally feared in the society of
90 Porush 1992, p. 212-3
91 Willis, p. 3
92 Ahn, Luis von et al 2003, p. 294
93 I, Robot, p. 122-3
Asimov’s robot series. In ‘Little Lost Robot’, she puts herself at risk by testing various
“Nestor” robots to find one robot that has been told to “get lost” by Gerald Black, a
researcher. This particular robot - the Nestor-10 - has been modified, so that the
First Law of Robotics (“A robot may not injure a human being, or, through inaction,
allow a human being to come to harm”) has been altered to omit the latter half of the
law (which would allow a human to be harmed through inaction). To find the robot,
Calvin insists that the scientists interview all sixty-three Nestor robots, confronting
them all with a rehearsed scenario in which a man is put in danger by a weight which
is about to fall above him. But there is a catch - they are all threatened by ‘[h]igh-
tension cables, capable of electrocuting the Nestor models’. 94 None of the robots
move (as they discover that ‘if I died on my way to him, I wouldn’t be able to save
him anyway’), resulting in Calvin resorting to sitting in the chair herself to elicit a
response from Nestor-10. She succeeds, as the robot edges towards her to “save”
her (being unaffected by the radiation beams), but it attacks her (due to the
Calvin’s concern over the harm which AI could cause to humans parallels
Hawking, who has gone so far as to claim that the ‘development of full artificial
intelligence could spell the end of the human race.’ 95 Expanding on this bold
criticism, he argues
‘[AI] would take off on its own, and re-design itself at an ever increasing rate …
Humans, who are limited by slow biological evolution, couldn’t compete, and would
be superseded.’96
94 Ibid, p. 144
95 Cellan-Jones 2014.
96 Ibid.
Hawking’s view, like Calvin’s, does not exist within a vacuum; it arrives at a time in
which multiple researchers and inventors - including Elon Musk and Bill Gates - have
also cast light on the potential harm which a developed AI could bring, with Musk
describing it as ‘our biggest existential threat,’ and Gates having said that he is
‘concerned about super intelligence’ (the idea that the intelligence of computational
technology will ‘inevitably and rapidly’ overtake ‘the species that invented it’). 97 98 99
Hawking and Musk are also signatories to an open letter which calls for concrete
research on how to avoid potential ‘pitfalls,’ and a focus on ‘maximizing the societal
the “end of the human race”, the open letter takes into account the numerous
benefits which can be (and have been) reaped from AI. Another parallel between
concerns over AI in the “real world” and SF is the societal suspicion towards the so-
employment.101 You can now find out how likely it is that a robot will “take your
demonstrate that ‘35% of current jobs in the UK are at high risk of computerisation
over the following 20 years’.102 The study also cites “empathy” as a prime reason
why some jobs - such as social work, nursing and therapy - are less likely to
soon comes into question. After all, in ‘Liar!’ the robot Herbie does not have a social
97 Gibbs 2014.
98 Rawlinson 2015.
99 Kurzweil 1999, p. 192
100 Future of Life Institute 2015.
101 The Caves of Steel, p. 154
102 BBC 2015.
awareness of the immorality of lying, and in ‘Little Lost Robot’, Nestor-10’s attack on
Each of these instances of fictional AI inflicting harm on humans (emotional and physical)
is a direct consequence of human error. As Asimov himself has argued, ‘practically
everything human beings had made had elements of danger in them...being fallible
human beings’.103 The author mentions a real-world example of media hysteria when
a robot at the Kawasaki Heavy Industries plant in Japan allegedly “killed” Kenji Urada:
‘When you read the article you got the vision of this monstrous machine with
shambling arms, machine oil dribbling down the side, sort of isolating the poor guy in
a corner, and then rending him limb from limb. That was not the true story’.104
The true story was of course that Urada was merely in the wrong place at the wrong
time: the ‘gear-grinding robot’ had temporarily stopped working, and (according to
Asimov) as Urada approached it to figure out what the problem was, he accidentally
hit the “on” button and was subsequently crushed in the machine’s gears. 105 And as
for the anxiety over whether AI will lead to humans increasingly becoming
unemployed, Nils J. Nilsson believes that this should be viewed as a ‘blessing rather
welcomes the idea that ‘the new unemployment might better be thought of as a
liberating development. It will permit people to spend time on activities that will be
more gratifying and humane than most “jobs”.’ 107 Asimov himself adds to this
argument, and writes of the benefits which could come of a ‘computerized world with
considerably more leisure and with new kinds of jobs.’ 108 The apprehensive attitude
103 Asimov 1985, p. 60
104 Ibid, p. 63
105 Ibid, p. 62
106 Nilsson 1984, p. 6
107 Ibid, p. 14
108 Asimov, p. 63
towards AI “taking jobs” and the premonition that this will be a purely negative
phenomenon are both unsubstantiated when contrasted with the advantages which
narrative on AI which has all too often been perpetuated by critics of AI research
such as Hawking, Musk and Gates. That is, the narrative that - like the creature
towards revenge against its creators. This “Frankenstein complex” evidently ‘lies in
of the Laws of Robotics, ‘even if it is frequently denied.’ 109 This raises a further
question: is there any evidence to suggest that this may be the case? In 1997,
IBM’s AI computer Deep Blue defeated the chess world-champion Garry Kasparov,
feeling the position. We all thought computers couldn’t do that.”’ 110 And almost twenty
years on, AI researchers achieved a new feat, developing the first machine
intelligence to beat a human at the complex board game Go, using ‘a mixture of
clever strategies’.111 The spectacle of AlphaGo’s success was that, although the rules
of Go are somewhat simpler than those of chess, ‘a player typically has a choice of 200
moves, compared with about 20 in chess - there are more possible positions in Go
than atoms in the universe’.112 The relevance of considering both of these examples
especially in the field of AI. Ray Kurzweil, explaining Moore’s Law, notes that ‘you
get a doubling of price performance of computing every one year’; 113 the acceleration
109 James, p. 42
110 Weber 1997.
111 BBC 2016.
112 Ibid.
113 TED talk, 2005.
in performance in computing has demonstrably risen, and the prowess of computer
programs beating human professionals at traditional board games is just one aspect
of the evidence of this. Yet alongside these two milestones, scepticism towards
and warnings about AI have only increased. As the open letter signed by
Hawking and Musk urges, AI researchers must pay attention to the deficit of social
intelligence in such programs. Although Deep Blue and AlphaGo appeared to
make counter-intuitive moves in their respective games which had
evaded a human’s foresight, the sheer intelligence which both demonstrate does not
encompass social awareness, which could further ignite the “Frankenstein complex”
Cynthia Breazeal created the robot head Kismet in the late ‘90s, which ‘pouts, frowns, and
displays anger, along with a host of other expressions that outwardly display signs of
designed the infant humanoid robot iCub, which has sensors enabling it to feel being
touched, and lines of red LEDs which represent its mouth and eyebrows so that it
can exhibit facial expressions and display an emotional response. The aim of tests
on the infant robot is to demonstrate the significance which the human body has on
the ‘development of cognitive capability’; the ‘“baby” robot will act in cognitive
scenarios, performing tasks useful for learning while interacting with the environment
and humans.’115 Both Kismet and iCub have played a significant role in not only
cognition in humans. Kismet evidences the fact that social interaction between AI
and humans is not impossible, whilst iCub has provided researchers with further
insight into how interacting with the world and cognition are directly interrelated.
the technology is not as formidable as some might suggest. It is clear that the benefits
which we can reap from it are numerous, and that in many ways it could be of
significant benefit to scientists trying to gain a clearer picture of how we think as well
sophisticated AI and - if not - whether we are on the road towards strong machine
intelligence. Dr. Breazeal admits that her robot Kismet ‘is not conscious, so it does
not have feelings,’ which might suggest that the aim of developing a self-aware,
sentient machine is still out of reach. 116 The problems still facing AI - such as whether
the field to this day. AI researchers and pundits have also been warning of an “AI
opinion follows through, and leading AI figures get ridiculed’ - and this has become
something of a ‘recurring nightmare’ in the field. 117 But despite the ‘ups and downs’
which the research has faced since it began, the field continues to boom with
hundreds of millions of dollars and euros being poured into it from companies such
as Google, Microsoft and IBM. 118 Ray Kurzweil points out that ‘today many
industry,’ and claims that this significantly undermines the idea of an AI winter. 119 And
Stuart Russell & Peter Norvig bolster this claim, arguing that ‘the fact that AI has
managed to reduce the problem of producing human-level intelligence to a set of
many problems still exist in the field, it can at least be argued that those problems
are of intrinsic benefit, as they will require closer attention from researchers to
alleviate the stigma and panic attached to the technology which have been examined
thus far.
Conclusion:
The ultimate aim of this dissertation is to remind SF critics that ‘the genre is science
fiction as well as science fiction.’121 The only way to offer new insights into the
science fiction and the scientific reality which has been accomplished because of
that fiction. There is an ingenuity embedded in our culture which has always
that question in myth and art, and c) answering it with the scientific method. It has
been evaluated that AI is often the subject of fear and scepticism due to tendencies
in human history to marginalise and undermine the status of other humans, and
because of the authoritative presumption that our minds are semi-sacred in value
and for that reason cannot be replicated. These culturally embedded prejudices have
given rise to a debate over the ethics of AI, and fears from such figures as Stephen
Hawking and Bill Gates demonstrate this. Although these prejudices will be difficult
intelligence, progress will nevertheless be made, and the benefits AI has produced,
currently produces and will continue to produce, will inevitably bring about a shift in
social attitudes towards such intelligent computers. That is not to say, however, that
the ethical implications of AI should not be discussed, and the fact that they have
But when having these debates, it is vital that the history, mythology and literature
121 Willis, p. 3
behind both AI as a field, and the backlash which the idea of conscious robots has
popular literary genre - are symptomatic of a wider irrational cultural fear of the prospects
of AI, which have been discussed throughout the course of this dissertation. Once
common sense and social awareness in computers, then perhaps society will be
Bibliography
Ahn, Luis von et al. ‘CAPTCHA: Using Hard AI Problems for Security,’ in
Asimov, Isaac. ‘Our Future in the Cosmos - Computers,’ in Burke, James et al. The
NASA, 1985.
Blade Runner. Dir. Scott, Ridley. Perf. Ford, Harrison; Hauer, Rutger; Young, Sean.
Press, 2002.
Calo, Ryan. ‘The Sorcerer’s Apprentice, or: Why Weak AI is Interesting Enough’.
Center for Internet and Society (CIS) website, 30th August 2011.
Crevier, Daniel. AI: The Tumultuous Search for Artificial Intelligence. New York:
Du Sautoy, Marcus. ‘The Chinese Room Experiment - The Hunt for AI’.
Dvorsky, George. ‘Why Asimov’s Laws of Robotics Can’t Protect Us’.
http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-
Evans, John H. Playing God?: Human Genetic Engineering and the Rationalization
Freud, Sigmund. ‘The Uncanny,’ in Strachey, James (trans.) The Standard Edition of
the Complete Psychological Works of Sigmund Freud. London: The Hogarth Press,
1953.
Gibbs, Samuel. ‘Elon Musk: artificial intelligence is our biggest existential threat’.
James, Edward. ‘The Race Question in American Science Fiction,’ in Davies, Philip
(ed.) Science Fiction, Social Conflict and War. Manchester: Manchester University
Press, 1990.
https://www.ted.com/talks/ray_kurzweil_on_how_technology_will_transform_us/trans
Kurzweil, Ray. The Age of Spiritual Machines. New York City: Penguin Group, 1999.
Kurzweil, Ray. The Singularity is Near. New York City: Penguin Group, 2005.
Liu, Lydia. H. The Freudian Robot: Digital Media and the Future of the Unconscious.
Locke, Brian. ‘The Orientalist Buddy Film and the “New Niggers”: Blade Runner
(1982, 1992, and 2007),’ in Racial Stigma on the Hollywood Screen: The Orientalist
Lundwall, Sam J. Science Fiction: What it’s all About. New York City: Ace, 1971.
http://www.nytimes.com/1992/04/12/business/technology-a-celebration-of-isaac-
McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and
Merriman, Chris. ‘AI pioneer Marvin Minsky passes away aged 88’. The Inquirer,
Metta, Giorgio; Sandini, Giulio; Vernon, David; Natale, Lorenzo; Nori, Francesco.
‘The iCub humanoid robot: an open platform for research in embodied cognition’.
http://www.nist.gov/el/isd/ks/upload/PERMIS_2008_Final_Proceedings.pdf
Performance Metrics for Intelligent Systems (PerMIS) Workshop, 19th August 2008.
Mori, Masahiro. ‘The uncanny valley’, in IEEE Robotics & Automation Magazine. Vol.
Nietzsche, Friedrich. Human, all too Human. Trans. Faber, Marion. Lincoln,
Nilsson, Nils J. ‘Artificial Intelligence, Employment and Income,’ in AI Magazine. Vol.
5 No. 2, 1984.
Rawlinson, Kevin. ‘Microsoft’s Bill Gates insists AI is a threat’. BBC, 29th January
2015.
‘Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter’.
Russell, Stuart J.; Norvig, Peter. Artificial Intelligence: A Modern Approach. New
Saenz, Aaron. ‘We Live in a Jungle of Artificial Intelligence that will Spawn
Sammon, Paul M. Future Noir: The Making of Blade Runner. London: Orion, 1997.
Searle, John. ‘Minds, brains, and programs,’ in The Behavioral and Brain Sciences.
Simon, Phil. Too Big to Ignore: The Business Case for Big Data. New Jersey: Wiley,
2013.
http://www.nytimes.com/1997/05/05/nyregion/computer-defeats-kasparov-stunning-
Wegner, Daniel M. ‘The mind’s best trick: how we experience conscious will,’ in
Willis, Martin. Mesmerists, Monsters and Machines: Science Fiction and the Culture
of Science in the Nineteenth Century. Kent, Ohio: Kent State University Press, 2006.