
Science Fiction and Artificial Intelligence

Dissecting the Cultural Fear of Robots and Androids

Author
Ryan Browne

Supervisor
John Beck
Word count: 10146
Contents

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . page 3

Chapter I: Robot and representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .page 5

Chapter II: Posthuman, all too posthuman . . . . . . . . . . . . . . . . . . . . . . . . . . . page 16

Chapter III: A brave new posthuman world . . . . . . . . . . . . . . . . . . . . . . . . . . .page 28

Conclusion: Towards a new interdisciplinary approach . . . . . . . . . . . . . . . . page 39

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .page 41

Introduction

The field of artificial intelligence, as defined by Stuart J. Russell and Peter Norvig,

‘attempts to understand intelligent entities’; 1 that is, ‘anything that can be viewed as

perceiving its environment through sensors and acting upon that environment

through effectors.’2 More specifically, the aim of AI research is to create computers

that can think and interact with the world - computers which are able to exhibit the

behaviours commonly associated with “being human”. In the 20th century, a variety

of science fiction texts have portrayed futuristic depictions of such humanesque

machines, taking influence from stories of inanimate objects becoming animate and

developing cognition which are ingrained in Western culture. 3 Authors such as Isaac

Asimov and Philip K. Dick, as well as the director Ridley Scott, have depicted cultural

manifestations of AI, conveying many of the philosophical and sociological issues

which arise from the technology. In Asimov’s detective novel The Caves of Steel

(1953), anti-robot attitudes are widespread amongst earth’s population, and the

protagonist Elijah Baley is persistently suspicious of robots, including the robotic

detective he is paired with to solve a murder case. In I, Robot (1950), Asimov’s

robots elicit scepticism and suspicion from scientists and researchers, as his “Three

Laws of Robotics” (“laws” programmed into robots to prevent them harming humans)

are explored and deconstructed throughout the course of the short stories. And in

Dick’s Do Androids Dream of Electric Sheep? (1968), as well as Scott’s Blade

Runner (1982), the primal fear of human-created technology turning on its

1 Russell and Norvig 2003, p. 3


2 Ibid, p. 31
3 Such images include the god Hephaestus’ bronze robot and the bronze automaton Talos of Crete in
Greek mythology, as well as Mary Shelley’s monster in Frankenstein. This cultural influence is further
elaborated in Chapter 3.

creators in revenge is elaborately illustrated. This dissertation will examine these

fears and criticisms of AI which seem embedded in our culture and literature, as well

as in the scientific world, in which AI has increasingly become the subject of scrutiny

from researchers and inventors concerned about the negative impact the technology

might have. In the first chapter, I shall consider the extent to which the

marginalisation of AI in fiction parallels the historical marginalisation of certain

groups in society (such as women and ethnic minorities), and will argue that the

uncertainty and fear surrounding AI partially results from a society which perpetuates

a hierarchical power imbalance. In the second chapter, I will consider the impact of

machines being endowed with “consciousness” (something traditionally thought of as

exclusive to humans). The philosopher John Searle, in his Chinese Room analogy,

attempts to refute the idea that machines can be conscious, an idea which is

reflected in many of the texts which will be explored in this dissertation. It will be

argued that these rebuttals of machine consciousness are indicative of another

prejudice faced by AI; that is, the fear of the “posthuman” - a being which exists in a

state beyond that of the human - possessing consciousness, an attribute privileged

by humans. In the final chapter, I will discuss how far our species has come in

developing machines in our own image, acknowledging the history behind AI as well

as the mythological and literary influences which led to its foundation. I will

subsequently argue that, considering the advances which have been made in

pattern-recognition and social awareness for AI, fears that it will bring about our

demise become less reasonable.

Chapter I:

Robot and representation

‘“R’s,” said the Commissioner. “They exist.”’4

Before dealing with the deeper philosophy and science behind artificial intelligence -

which accounts for much of the cultural anxiety surrounding the technology - it must

be recognised that machines with intelligence are often portrayed as a metaphor for

the “Other” (the marginalised societal outsider). Much of the literature which this

dissertation focuses on, for instance, was written at a time in which subordination

and discrimination based on gender, race and sexuality were heavily protested

against. In fact, all of the Asimov stories which will be analysed predate the Civil

Rights Act of 1964, which ended public segregation and employment discrimination.

However, to clarify, my aim is not to reduce the significance of AI in

fiction to a mere metaphorical representation of Otherness (or at least the

proposition that society is suspicious of AI in the same way that it makes women and

ethnic minorities suspects through prejudice). Instead, I want to make apparent the

argument that the uncertainty and suspicion surrounding AI can be attributed in part

to a society which often marginalises particular groups of people and which

frequently perpetuates a power imbalance.

Asimov’s The Caves of Steel is the first novel in the robot series (excluding I,

Robot, which is a collection of short stories), and it offers some unique analysis of AI

as Other. It anticipates the development of technology replacing jobs on a

wide scale (similar to the modern development of “self-service” machines), all the

4 The Caves of Steel, p. 16

while portraying the underlying discriminative attitudes which robots face. As the

epigraph of this chapter suggests, a particular anxiety can be attributed to a society in

which the ‘trained men’ to perform certain jobs ‘don’t exist,’ and are inevitably being

replaced with R’s (robots).5 Earth has become a desolate landscape, in contrast with

the more elegant, wealthy “Spacer” worlds (the Spacers have been integrating

robots with City dwellers, exacerbating tensions between the two). The suspicions

held by earth’s inhabitants have, on the surface, a fathomable basis: a social climate

in which everyone ‘stands the chance of being out of a job’ inevitably leads to

prejudice towards the Other, the suspected “group” who are seen to be taking jobs

from the indigenous population.6

The novel’s protagonist Elijah Baley is far from neutral on the matter, and is

regarded by some critics as ‘an anti-robot extremist’. 7 His prejudice becomes

increasingly manifest in his treatment of R. Daneel Olivaw. R. Daneel is a robot

with whom Elijah is partnered for a homicide case, and who essentially adopts his

occupation as detective. In the first encounter with R. Daneel, Lije remarks that ‘“you

don’t look like a robot”’; R. Daneel responds, asking whether this disturbs Lije, who

retorts, ‘“It shouldn’t, I suppose, Da - Daneel. Are they all like you on your world?”’ 8

This interaction - particularly Lije’s reaction - implies the problematic nature of the

uncanny valley, the idea that ‘a person’s response to a humanlike robot would

abruptly shift from empathy to revulsion as it approached, but failed to attain, a

lifelike appearance.’9 R. Daneel doubles not only Lije and his profession (in the

sense of the uncanny “doppelganger”), but humanity itself. The stark difference which Lije

observes in the Spacer model consists in the fact that he is so similar to humans;

5 Ibid, p. 16
6 Ibid, p. 16
7 James 1990, p. 40
8 The Caves of Steel, p. 26
9 Mori 2012, p. 98

Lije assumes the robot to be a human Spacer until R. Daneel reveals his name (the

‘R’ initial signifying robot). The robot becomes both the protagonist’s doppelganger,

‘doubling, dividing and interchanging...the self,’ and an uncanny replica of the

human.10 Lije’s encounter with the uncanny, arising from the duality of

heimlich/unheimlich (“familiarity/unfamiliarity” - this can equally be expressed in the

form of sameness/otherness), and the doubling of his own human body and identity,

is reminiscent of the ‘metaphorical effectiveness of technology in SF that focuses the

SF encounter with alterity [or otherness] in its most suggestive locus’. 11 Robert

Adams further argues that

‘[t]echnology is something with which we are simultaneously familiar and already

estranged from; familiar because it plays so large a part in our life, estranged from

because we don’t really know how it works or what the boffins are about to invent

next.’12

Our familiarity with the human Other due to a resemblance in physiology and DNA

seems to mirror this familiarity with technology, embodied here as the artificially

human Other. We are simultaneously familiar and unfamiliar with AI because it is

both ubiquitous in the applications of modern life and yet uncanny in its resemblance

to humans. This inevitably leads to a power imbalance, as humans subordinate AI

purely on the basis that it is not human.

The two main protagonists of The Caves of Steel are comparable in many

respects to similar figures in the film Blade Runner. Rick Deckard is the “anti-robot

extremist” and the replicants are bioengineered humanoid androids like R. Daneel -

only, in this case, they have become rogue androids, committing “mutiny” by

murdering their owners and escaping an off-world planet in search of extended life

10 Freud 1953, p. 234


11 Roberts 2006, pp. 146-7
12 Ibid

on earth. A basic interpretation of the replicants as the metaphorical

“Other” lies in the subordination of and discrimination against them in the film. Although

they are far from an innocent depiction of the technology, it is not unreasonable to

sympathise with them. Of the six replicants which attempted to escape the off-world

planet, only four actually escaped and survived. All four of these replicants are

described in terms of their “benefit” to humans. Leon Kowalski, an ammunition

loader, ‘can lift four-hundred-pound atomic loads all day and night’; Roy Batty (the

‘leader’), a combat model, has ‘optimum self-sufficiency’; Zhora Salome, who was

trained for murder squads, is referred to as ‘beauty and the beast’; and Pris Stratton,

a ‘basic pleasure model,’ is ‘the standard item for military clubs in the outer

colonies.’13 The androids are also self-aware, acknowledging their enslavement, as

Roy expresses to Rick, ‘quite an experience to live in fear, isn’t it? That’s what it is to

be a slave’.14 This description of androids as “items” with a purpose to serve humans

echoes Adam Roberts’ comments about the familiar and unfamiliar in our encounter

with artificial intelligence in SF. Roberts argues that replicants embody the “Other” in

the form of a fetishised commodity, describing their purpose in terms of ‘human

interaction with commodities’ and the human ‘fascination with and suspicion of the

power of things, things that...almost acquire a life of their own.’ 15 This also evokes

the Freudian image of the “uncanny” in the form of the automaton - the ‘intellectual

uncertainty between animate and inanimate.’16 It is arguable that the awareness the

artificially intelligent androids have of their fetishisation as a “unique” commodity

(unique in that they embody the animation of the inanimate) causes them to lash out

against humans. This again bears connotations of the dehumanisation of AI, as

13 Blade Runner.
14 Ibid.
15 Roberts, p. 151
16 Liu 2011, p. 207

“items” and “objects” of desire, in order to perpetuate a system in which robots and

androids are inferior to humans.

This “lashing out” against objectification is exemplified by Zhora’s reaction to

Deckard as he quizzes her in her dressing room. Deckard claims to be ‘from the

American federation of variety dancers...from the confidential committee on moral

abuses,’ implying the real moral abuses the androids have been subjected to at the

hands of humans in off-world colonies.17 Zhora then asks Deckard to dry her,

proceeding to kick him from behind. Whilst it is certainly plausible that she

suspected him of being employed to track her down, there seems to be a voyeuristic

nature to the way the scene is directed. Harrison Ford enhances this voyeuristic role

play by putting on the voice of a “sleazeball”, and trying to get into her dressing

room. He says to her, ‘I’d like to check your dressing room...er, for holes...you’d be

surprised what a guy would go through to glimpse at a beautiful body.’ 18 The scene

in the novel which inspired the film also indicates a sense of voyeurism, as Rick is

referred to as a ‘sexual deviant’ in the novel. 19 Zhora appears to be lashing out at the

limited human construction of gender here, and she is not the only replicant that

does. Pris (the “basic pleasure model”) manipulates J. F. Sebastian in order to

locate the Nexus 6’s creator, Eldon Tyrell. As the mutated technician revels in the

intellect and power of the Nexus 6 replicants, Pris teases him, paraphrasing

Descartes - ‘I think, Sebastian, therefore I am’ - and then proceeding to playfully

cartwheel backwards. He gives in to the demands of Roy and Pris after the latter

kisses him on the cheek and says ‘I don’t think there’s another human being in the

world who would have helped us.’ 20 By playing into what Judith Butler calls the ‘tacit

17 Blade Runner
18 Ibid
19 Do Androids Dream of Electric Sheep?, p. 84
20 Ibid

collective agreement to perform, produce, and sustain discrete and polar genders as

cultural fictions,’ Pris subverts inferior human constructs of gender in an effort to

secure longer life for herself and her fellow replicants. By tapping into existing

imbalances in power between the sexes, Blade Runner conveys to the audience that

the cultural fear of AI can be viewed as a by-product of the marginalisation of other

groups of people in society.

Sam Lundwall goes as far as to suggest a racial parameter which

encompasses The Caves of Steel’s environment, arguing that ‘the android functions

as SF’s contribution to the race debate … It is guilt for the Negroes, the Indians, the

Jews, the Vietnamese, the peoples of South America and mankind’s rape of weaker

individuals that comes back in the android’. 21 It is suggested that the robot is a

Kristevan abject representation of the amalgam of repressed “monstrosities” in the

postcolonial psyche. The robots are abject in that they resummon the ‘immemorial

violence with which a body becomes separated from another body in order to be’; 22

the monstrosity of AI lies in its separateness from the human, as a creation of human

labour. Edward James argues that this separateness of robots from the human, and

the “human superiority” which accompanies it, is not entirely different to whites

separating themselves from blacks and assuming superiority in their own race. He

notes the fact that R. Daneel stands in direct contrast to the less intelligent, more

immature R. Sammy, a robot who ‘“shuffles his feet”, with a vacuous grin on his face,

in clear parody of the stereotypical black’. 23 R. Daneel on the other hand represents

‘the constant fear of the racist - the light-skinned black who might “pass” for [being]

white.’24 The comparison is bold, but it would account for the extreme prejudice faced

21 Lundwall 1971, p. 166


22 Kristeva, p. 10
23 James, p. 40
24 Ibid.

by the robots in the novel, and the fact that R. Daneel’s occupation involves less

servitude to humans than the rudimentary R. Sammy’s does.

Despite this, James’ conclusion contains gaps, and it is not truly representative

of events which occur in the novel. For instance, what James does not consider is

the fact that robots have human creators, and so are not racially separate from

humans. As Baley assumes R. Daneel (whose design is based on his creator, Dr.

Roj Nemennuh Sarton) to actually be Dr. Sarton, it becomes evident that the novel

includes a power imbalance in the creator/creation dichotomy as well as the

master/slave dichotomy. As Roberts argues, AI can certainly be viewed as a

metaphorical coding for blackness, ‘but the more interesting representations make

complex what can be a straightforward demonisation.’ 25 And the ‘Earthman’s distrust

for robots’ is ‘something quite irrational’ in that it is rooted in a deep cultural suspicion

of AI - like Frankenstein’s monster - taking revenge on its creator. 26 Lije finds it

difficult to accept that R. Daneel is a robot, disputing it on the grounds that ‘R.

Daneel is too good a human to be a robot’. 27 He is subsequently embarrassed

to admit that the reason for his added suspicion is that he had seen R. Daneel’s

artificial genitalia ‘in the Personal.’28 The overarching “threat” which AI poses to its

creator is to human identity. As Scott Bukatman argues, AI ‘pose[s] a set of crucial

ontological questions regarding the status and power of the human.’29 As Lije is

confronted with a robot which so uncannily resembles its creator, and humanity as a

whole, he is gripped by a feeling of revulsion towards R. Daneel, who ‘approached, but

failed to attain, a lifelike appearance.’ 30 His attitude is not entirely comparable to that

25 Roberts, p. 95
26 The Caves of Steel, p. 70-1
27 Ibid, p. 81
28 Ibid.
29 Bukatman 1993, p. 2
30 Mori, p. 98

of racial oppression; rather, it is more characteristic of a wider societal suspicion of

the potential for technology to replicate its human creators.

Nevertheless, it is arguable that Blade Runner conveys a racial parameter, as

it depicts the marginalisation of androids as ‘robots that are “virtually identical” to

humans, as “slave labor” in the “hazardous exploration and colonization” of the new

frontier.’31 The audience is told in the film’s opening that a ‘bloody mutiny’ in an off-

world colony results in replicants being ‘declared illegal on earth - under penalty of

death.’32 (Again, comparison can be made to The Caves of Steel, as human

prejudice towards androids can be sourced from the hell of being on earth, as

opposed to off-world colonies, where a ‘new life awaits you...the chance to begin

again in a golden land of opportunity and adventure’). 33 When blade runners shoot

trespassing replicants on sight, the film tells us that this was ‘not called execution. It

was called retirement.’34 This phraseology immediately implies the racial Otherness

inherent to the portrayal of robots and androids in SF; an exception is made for the

killing of androids, who, when killed, are “retired” rather than murdered or executed.

Brian Locke argues that the attitudes of police officers in the film are not entirely

dissimilar to those of racist police officers towards black people within the historical

narrative of the civil rights movement. He notes that, in the original 1982 version of

the film, Deckard’s “boss” Bryant refers to androids as “skin-jobs”:

‘Deckard’s voice-over comments, “‘Skin-jobs’-that’s what Bryant called replicants. In

history books, he’s the kind of cop who used to call black men ‘niggers.”’35

31 Locke, p. 103
32 Blade Runner
33 Ibid
34 Ibid
35 Locke 2009, p. 103

For Locke, Bryant represents the ‘Southern redneck cop’ whereas Deckard

embodies the ‘nonracist white one.’36 However, again, I want to avoid reducing Blade

Runner (as Locke does) to a film which is an entirely allegorical representation of the

‘all-known tradition of African American history and slave narratives.’ 37 The film itself

problematises this view: the most powerful Nexus-6 model, Roy Batty, for instance,

has white blonde hair and blue eyes. Philip K. Dick, who of course wrote the source

material for the film, regarded Rutger Hauer (who played Batty) as ‘the perfect Batty

- cold, Aryan, flawless’.38 A racial parameter is certainly suggested here, but the

power play is not the simple binary of white/black; the “Otherness” of artificial

intelligence cannot simply be sourced to the marginalisation of the black Other.

There is instead a pure Otherness of artificial humanity - an excuse to dehumanise

and exploit AI. It is arguable that Batty represents the controversial peak of humanity

which the endeavour of human genetic engineering and the modification of the

human germline “aspires” towards (John H. Evans notes that ‘the ethics of HGE

[human genetic engineering] was originally the province of eugenicist scientists,’

which does reinforce a sense of racial Otherness here). 39 However, Batty is clearly an

abject rather than desirable product of genetically engineered AI; the very fact that

he is (like R. Daneel) an uncanny duplicate of the human, and his mind an uncanny

duplicate of human intelligence, leads one to the conclusion that there is something

other than racial Otherness at work in the cultural anxiety towards AI.

This can be identified in The Caves of Steel, as R. Daneel proves to Lije that

he is an artificial human, rather than the robotics doctor whom Lije assumes him to

be. When R. Daneel touches his cuff,

36 Ibid
37 Ibid
38 Sammon 2011, p. 284
39 Evans 2012, p. 6

‘the diamagnetic seam fell apart the entire length of his arm. A smooth, sinewy, and

apparently entirely human limb lay exposed...R. Daneel pinched the ball of his right

middle finger with the thumb and forefinger of his left hand...just as the fabric of the

sleeve had fallen in two when the diamagnetic field of its seam had been interrupted,

so now the arm itself fell in two. There, under a thin layer of fleshlike material, was

the dull blue gray of stainless steel rods, cords, and joints.’40

One is again reminded of the uncanny valley, and the perturbation elicited from the

human when confronted with an artificial being which displays an “apparently entirely

human limb” which is then revealed to be “stainless steel rods, cords, and joints”.

This also embodies Bukatman’s idea that SF texts ‘stage the return of the repressed’

in this way, by ‘construct[ing] their own emphatic, techno-organic reconstructions of

the flesh’.41 Lije is reluctant to accept that a being so closely similar to him in its

outward humanoid shell and behavioural mimicry of humans is actually artificially

intelligent. This attitude inevitably makes R. Daneel a suspect up until the moment in

which Lije and R. Daneel discover the murderer. In a society in which the machine

Other attempts to resemble the human - and, in turn, enable the sense of “familiarity”

- the inevitable response from humans is that of the uncanny valley. Consequently,

humans are able to degrade the status of AI, creating an inherently unequal power

structure.

To recapitulate, the purpose of this chapter is not to argue that the cultural

stigma surrounding AI is entirely comparable to that which has historically been

faced by women and minorities. But it is important to identify the ways in which

historical prejudices have given rise to the many concerns raised over robots and

androids in both fiction and now in reality. In both The Caves of Steel and Blade

Runner, machine intelligence is socially stratified into a class which is always


40 The Caves of Steel, p. 89
41 Bukatman, p. 19

beneath that of human (for instance, despite the fact that R. Daneel is a detective

like Lije, the latter always finds difficulty in trusting the former). And in reality, the

question has already been raised as to whether ‘intelligent machines [should] have

rights’.42 However, it becomes apparent that this distrust and marginalisation of AI

seems to arise out of a feeling of revulsion towards artificial humans, or

“posthumans”. And a significant cause for this seems to be the ostensible “loss” of a

certain privileged power to which humans feel entitled - consciousness.

42 Russell & Norvig 1995, p. 849

Chapter II:

Posthuman, all too posthuman

‘At the waterfall. When we see a waterfall, we think we see freedom of will and

choice in the innumerable turnings, windings, breakings of the waves; but everything

is necessary; each movement can be calculated mathematically. Thus it is with

human actions; if one were omniscient, one would be able to calculate each

individual action in advance, each step in the progress of knowledge, each error,

each act of malice.’

Friedrich Nietzsche, 187843

It is clear that a large portion of cultural anxiety towards artificial intelligence lies in

the tendency of society to diminish and marginalise the power and existence of

“outsiders”. The first chapter has demonstrated that many of the issues surrounding

cultural depictions of AI can be sourced to existing power imbalances and struggles

in human history and contemporary society. But there is a deeper and perhaps more

pertinent problem at hand. It is rarely considered by critics that the main cause for

the marginalisation of AI characters in fiction may boil down to the acquisition of

something comparable to human consciousness residing within a virtual simulation

on a computer. It is the fear of a posthuman - a being which transcends the

encasement of the human in flesh and bone - possessing the coveted, privileged,

near-sacred entity of “consciousness” that really stirs the debate on AI, and accounts

for suspicion of it in SF to this day.

43 Nietzsche 1984 (originally 1878), p. 74. Nietzsche’s aphorism almost mechanises the human,
arguing that our actions are necessitated, in contrast with the humanist idea that we are endowed with
autonomy.

Isaac Asimov’s I, Robot presents an anthological universe in which the reader

gains an insight into the various complexities raised by his AI machines and the

fundamental “Three Laws of Robotics”. Asimov states very clearly at the start that

the three laws are as follows:

‘1 - A robot may not injure a human being, or, through inaction, allow a human being

to come to harm.

2 - A robot must obey the orders given it by human beings except where such orders

would conflict with the First Law.

3 - A robot must protect its own existence as long as such protection does not conflict

with the First or Second Law.’44
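Read computationally, the three laws amount to a strict priority ordering over a robot's candidate actions: no Second or Third Law consideration can ever outweigh a First Law violation. The following minimal sketch illustrates that ordering only; the Action fields and the choose function are invented for illustration and correspond to nothing in Asimov's own text:

    # A toy model of the Three Laws as a strict (lexicographic) priority
    # ordering over candidate actions. All names here are invented.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool = False              # First Law: injury to a human
        allows_harm_by_inaction: bool = False  # First Law: harm through inaction
        disobeys_order: bool = False           # Second Law: disobedience
        endangers_self: bool = False           # Third Law: self-endangerment

    def choose(actions):
        # min() over tuples of booleans compares them lexicographically
        # (False sorts before True), so a single First Law violation
        # outweighs any combination of Second and Third Law violations.
        return min(actions, key=lambda a: (
            a.harms_human or a.allows_harm_by_inaction,
            a.disobeys_order,
            a.endangers_self,
        ))

    # A robot ordered into danger: obeying endangers itself (Third Law),
    # refusing disobeys an order (Second Law). The Second Law wins.
    obey = Action(endangers_self=True)
    refuse = Action(disobeys_order=True)
    assert choose([obey, refuse]) is obey

Even in this toy form the scheme suggests where trouble will come from: whether an action "harms" a human is treated as a clean boolean, and it is precisely the fuzziness of that judgement that a story such as "Liar!", discussed later in this chapter, exploits.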

In short, the machines fundamentally cannot allow a human to come to harm

(although this principle, naturally, runs into various problems throughout the stories). In light of these

“laws”, the robots effectively represent an attempt to reduce the fundamentals of

“humanness” - i.e. rationality, emotion, morality, etc. - into a mechanised simulation

of the behaviours associated with being human (the laws are almost like a robot’s

version of the Ten Commandments). N. Katherine Hayles summarises this reduction of

the state of being human, arguing that when a subject is ‘defined by information

flows and feedback loops rather than epidermal surfaces, the subject becomes a

system to be assembled and disassembled rather than an entity whose organic

wholeness can be assumed.’45 This cause for anxiety is realised in the story

“Reason”, as scientists Gregory Powell and Mike Donovan attempt to prove to the

highly intelligent, self-aware robot Cutie (or “QT”) that he was created by humans.

Cutie is unconvinced, so they build a robot right in front of him, and Asimov’s

description of this again elicits a sense of the uncanny:

44 I, Robot, ‘The Three Laws of Robotics’


45 Hayles 1993, p. 160

‘Donovan uncapped the tightly sealed container and from the oil bath within he

withdrew a second cube. Opening this in turn, he removed a globe from its sponge-

rubber casing. He handled it gingerly, for it was the most complicated mechanism

ever created by man. Inside the thin platinum-plated “skin” of the globe was a

positronic brain, in whose delicately unstable structure were enforced calculated

neuronic paths, which imbued each robot with what amounted to a pre-natal

education.’46

This description of the positronic brain significantly reduces both the anthropocentric

value and importance placed on the human brain and the “consciousness” which it is

able to produce. By portraying the brain as something which is contained within

information flows and feedback loops - that is, as something which operates as a

simulation - and with language which subverts the “norm” of the human anatomy

(e.g. the sponge rubber casing, platinum-plated skin, and the positronic brain itself),

Asimov evokes a revulsion towards AI. As a species, we cannot fathom - and so

inevitably become predisposed to distrust - the idea of a foreign, mechanical brain

being the host of a rational, thinking “mind”.

Philip K. Dick’s novel Do Androids Dream of Electric Sheep? is ironically more

about the increasing loss of the conventional humanist understanding of which

constitutive parts make us essentially human, such as rationality, empathy and

emotion. It presents a world in which humanity, in the aftermath of a “World War

Terminus” (already indicating the termination of humanness), has fundamentally

abandoned the idea of the ‘unique, self-regulating and intrinsically moral powers of

human reason’.47 The society itself is founded upon a notion of “empathy”, and uses

the “Voigt-Kampff” test to discern between human and android, measuring their

humanity by empathy. In addition to this, human characters require synthetic, mood-

46 I, Robot, p. 69
47 Braidotti 2013, p. 13

altering devices called “mood organs” to remain happy and some even require

“empathy boxes” to enter into a collective hallucination in which users are connected

with one another through the vehicle of the mysterious Wilbur Mercer’s mind (Mercer

resembles a prophet in the novel). The empathy box makes its first appearance as J.

F. Isidore (J. F. Sebastian in the film adaptation) “crosses over,” ‘in the usual

perplexing fashion; physical merging - accompanied by mental and spiritual

identification - with Wilbur Mercer’. 48 Radiation and subsequent mutation have

crippled Isidore in such a way that he is supposedly lacking in empathic ability.

However he is not the only human character in the novel who seems to lack or

otherwise miscomprehend empathy. Rick Deckard and his wife Iran both exhibit a

lack of empathy and a deficiency in emotion. As the novel opens, the two enter

into a spat, and Rick wonders whether to dial ‘for a thalamic suppressant’ at his

console ‘(which would abolish his mood of rage) or a thalamic stimulant (which would

make him irked enough to win the argument.)’ 49 In this instance it appears that the

two are so far detached from their human emotions that they require technology to

“remind them” of the human feelings of anger, sadness, joy, and so forth. Moreover,

Rick’s own “empathy” is questionable: he has ‘never felt any empathy on his own

part toward the androids he killed.’ 50 Confronted with the thought of Nexus-6 android

Luba Luft being “retired”, he questions ‘(e)mpathy toward an artificial

construct?...Something that only pretends to be alive? But Luba Luft had seemed

genuinely alive; it had not worn the aspect of a simulation.’ 51 The protagonist’s

dilemma not only reflects a crisis of humanity which is persistently redefining itself (in

this case empathy is redefined, in the way that non-human animals provoke the most

48 Do Androids Dream of Electric Sheep?, p. 17


49 Ibid, p. 2
50 Ibid, p. 112
51 Ibid

significant empathic response, but androids - seen as lesser than both humans and

non-human animals - are barely even granted any empathy); it also implies the crisis

of being confronted with a simulated version of human consciousness which appears

to be all too real.

This confusion over whether or not something which is artificially intelligent

can be identified as “conscious” or, instead, as a “simulation” (presupposing a

difference between the two), is repeated throughout I, Robot. The characters

Donovan and Powell often arrogantly dismiss the robots, always implying that

‘“they’re (just) robots...They’ve got positronic brains: primitive, of course.”’ 52 But this

particular dismissal is soon disrupted by a confusion over the behaviour of the rogue

(albeit harmless) robot Speedy, whom they are trying to trace. The robot says, ‘“Hot

dog, let’s play games. You catch me and I catch you; no love can cut our knife in

two. For I’m Little Buttercup, sweet Little Buttercup. Whoops!”’ The robot seems to

be either drunk or simulating the human experience of being drunk. 53 Upon noticing

this erratic behaviour, Mike Donovan - baffled - says to Greg Powell, ‘“Say, Greg,

he… he’s drunk or something.”’54 But Powell is hasty in his dismissal of the

behaviour, arguing that the robot ‘“isn’t drunk - not in the human sense”’; 55 but

Donovan’s assumption that Speedy is drunk could be a direct consequence of the

effect of posthumanism, as it demonstrates the ‘boundaries between the categories

of the natural and the cultural’ being displaced, and ‘the effects of scientific and

technological advances,’ making it more difficult to distinguish between human and

“posthuman”.56 The trouble of distinguishing whether an AI machine can

really be considered an example of “human intelligence” has been addressed by

52 I, Robot, p. 34
53 Ibid, p. 36
54 Ibid
55 Ibid
56 Braidotti, p. 3

John Searle several times, but most famously in his paper ‘Minds, brains, and

programs’. Searle attempts to refute what he calls “strong AI”, ‘the appropriately

programmed computer [which] really is a mind, in the sense that computers given

the right programs can be literally said to understand and have other cognitive

states.’57 He uses the analogy of the “Chinese room” to demonstrate the supposed

falsity of strong AI, outlining a scenario in which ‘I’m locked in a room and given a

large batch of Chinese writing...I know no Chinese, either written or spoken’. 58 In the

room, the individual who is illiterate in Chinese is given instructions that enable him

to correlate one set of Chinese symbols with another; outside of the room is someone fluent

in Chinese who submits questions to him - he is unable to understand the questions

but, with the accompanying instructions, is able to respond and give the impression

that he understands.59 Searle is in agreement with Gregory Powell, who regards

Speedy’s actions as a simulation of something comparable to human behaviour,

giving Mike Donovan the impression that its behaviour is human. But Searle fails to

recognise that he makes the same categorical error which Powell makes in dismissing

Speedy’s abnormal actions: presuming to judge whether another entity is exhibiting

“real” consciousness or not. As Daniel Dennett points out, ‘if Searle is right, it is

most likely that human beings...are actually [philosophical] “zombies”, who

nevertheless insist they are conscious. It is impossible to know whether we are all

zombies or not. Even if we are all zombies, we would still believe that we are not.’ 60

Both Searle and Powell assume that they know what consciousness is and, subsequently,

that they can know who is a thinking, conscious human and who is not.
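Searle's thought experiment can be made concrete with a toy rule-follower. The sketch below is a drastic simplification introduced here purely for illustration (the two-entry "rulebook" is invented, and Searle of course imagines rules far more elaborate): the program produces apparently fluent Chinese answers by matching symbol shapes alone, with no representation of meaning anywhere in it.

    # A toy "Chinese room": replies are produced purely by matching the
    # shapes of incoming symbols against a rulebook. Nothing in the
    # program models what the symbols mean. The entries are invented.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
        "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
    }

    def room(question: str) -> str:
        # Unmatched input gets a stock deflection: "Please say that again."
        return RULEBOOK.get(question, "请再说一遍。")

    print(room("你好吗？"))  # a fluent-seeming reply, produced without understanding

For Searle, the room shows that syntax is not sufficient for semantics; Dennett's rejoinder, quoted above, is that we have no independent test by which the room's lookup could be shown to differ in kind from whatever our own brains are doing.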

57 Searle 1980, p. 417


58 Ibid, p. 417-8
59 Ibid, p. 418
60 Crevier 1994, p. 271. A philosophical zombie is a hypothetical being indistinguishable from a
human being, but essentially lacks consciousness. Dennett asserts that we cannot know whether
“other minds” lack consciousness.

This error in thought soon breaks down into the endemic prejudice which

exists in the debate over AI in both literary and scientific circles - namely that a

machine cannot be said to be “thinking” because it threatens the privileged position of

rationality and consciousness in humanistic discourse. This prejudice is manifested

by Rick Deckard in Dick’s novel, as the protagonist interviews Rachael Rosen (an

android, although this is not yet known at this point in the novel) for the Voigt-Kampff

test. Giving her a series of hypothetical scenarios to test her for an empathic

response, Rick details a lobster being boiled by a chef, to which she responds,

crying ‘“[t]hat’s awful! Did they really do that?”’ On the Voigt-Kampff machine,

however, ‘the gauges...did not respond. Formally, a correct response. But

simulated.’61 Both the mechanism of the Voigt-Kampff test and Dick’s free indirect

speech (Rick identifying Rachael’s reaction as “simulated”) imply an innately

patronising human attitude towards AI. After all, what differentiates the “simulation”

of consciousness from consciousness itself? It is clear that Rachael is an artificially

intelligent android and that she is simulating the mechanisms of the human brain,

including empathy with the lobster. After all, the founding proposal of research into

artificial intelligence is that ‘“[e]very aspect of learning or any other feature of

intelligence can in principle be so precisely described that a machine can be made to

simulate it.”’62 But the question is: what makes this any different from the processes

intrinsic to the human brain itself? As Marcus Du Sautoy comments,

‘If a computer’s actually following instructions, it isn’t really thinking is it? But, then

again, what is my mind doing when I’m actually articulating words now? It’s following

a set of instructions. So, it really raises the question: where’s the threshold here? At

61 Do Androids Dream of Electric Sheep?, p. 39


62 Crevier, p. 48

what point can we say that a machine is really genuinely understanding and

thinking?’63

Of course, it is perfectly arguable that what Rachael Rosen’s mind is doing isn’t

necessarily the absolute equivalent to what Rick Deckard’s is. However, to insist that

this doesn’t constitute thinking, as many of the aforementioned SF characters and

Searle have done, is indicative of the aversion of humans to the idea of machines

and other organisms possessing the privileged, dualistic notion of consciousness.

It is certainly important to distinguish between human intelligence and artificial

intelligence, as it is clear that the two at least operate on a different basis. The

processes internal to the artificially intelligent minds of I, Robot in particular operate

on a very linear, goal-oriented basis. This appears to be the more conventional idea

of an AI “mind”, as a computer cannot necessarily perform a task or function which

has not been programmed into it by a human. This does not mean that a machine

can only perform actions which it has been commanded to do; nor does it mean that

a machine cannot learn in the sense of machine learning; rather, it means that

machines are limited by a set of codes and algorithms which have been programmed

into them by a human (the androids of Do Androids Dream of Electric Sheep? seem to

complicate this, but this shall be explored later). In the story “Liar!”, the robot Herbie

(RB-34) is able to read people’s thoughts through telepathy (due to a fault in

manufacturing). But, using its abilities, the robot lies to robopsychologist Dr. Susan

Calvin, telling her that a colleague is attracted to her, and to Dr. Peter Bogert, tricking

him into thinking that Dr. Alfred Lanning is about to resign as Director of Research,

and that he will be the most likely successor. Herbie does not lie to the scientists

intentionally, however; he explains that he is ‘“a machine, given the imitation of life

63 Du Sautoy 2015.

only by virtue of the positronic interplay in my brain - which is man’s device.”’ 64 What

the robot means, of course, is that it is limited fundamentally by the three laws of

robotics. It is not aware of the moral consequences of lying, and lies to the scientists

as a means of protecting them from the mental harm which the truth would have

borne. It is because of ‘the imperfections, loopholes, and ambiguities enshrined

within these laws’ that they inevitably result in ‘strange and counterintuitive robot

behaviors.’65 Humans - while far from autonomous creatures - are still less restricted

in their fundamental “coding” (DNA, the psyche, external impressions on our

behaviour, and so on). In the story, and in reality, human and computational minds

differ in the way that they process experiences and interpret them logically. Herbie

treats the situation with a rational approach on the basis of the fundamental laws of

robotics: he must not, through inaction, ‘allow a human being to come to harm.’ 66 But

his lie was unfathomable in the human sense, in that its attempt to be moral was in

fact immoral. This is not to say, however, that because of the distinction between

human minds and AI minds, a human should consider their intelligence superior

- or, that this intelligence is somehow unbound by similar restrictions to that of an AI

mind. As Daniel M. Wegner argues - refuting the idea that we are autonomous -

‘[t]he experience of conscious will is a marvelous trick of the mind...but it is not the

foundation for an explanatory system that stands outside the paths of deterministic

causation.’67 We are bound in the same way that computers are, by our own internal

unconscious processes. To assume otherwise would again be to assume that our

minds are superior to a robot’s, a prejudice acquired from the instinctual fear

64 I, Robot, p. 122
65 ‘Why Asimov’s Laws Can’t Protect Us’
66 I, Robot, ‘The Three Laws of Robotics’
67 Wegner, 2003, p. 68

of the posthuman and the symbolic “loss” of what supposedly puts mankind on a

pedestal - the “sanctity” of the human mind.

As was previously mentioned, it will prove difficult to understand both humans

and machines as fundamentally coded and - to a certain extent (as Nietzsche’s

aphorism at the beginning of the chapter highlights) - predictable, when the

enigmatic androids of Do Androids Dream of Electric Sheep? are taken into

consideration. Dick seems more interested in the mechanisation of humans, and

how this is juxtaposed with the androids which appear to be closer to the “authentic”

human. The Tyrell Corporation (Blade Runner’s translation of the Rosen Association,

which creates the androids) portrays this idea well, as Eldon Tyrell refers to his

androids as ‘more human than human’.68 But what really seems to be “human” about

the androids, if we are to consider the human in a classical humanist sense, is their

ability to be arbitrary, to break away from their programming to serve humans; they

‘learn to feel emotion and [ostensibly] begin to act of their own free will’.69 The brains

of the Nexus 6 models - unlike the positronic brains of Asimov’s robots - are

‘“capable of selecting within a field of two trillion constituents, or ten million separate

neural pathways.”’70 In the beginning of the novel, too, Bryant warns Deckard that ‘“a

small class of human beings could not pass the Voigt-Kampff scale. If you tested

them in line with police work you’d assess them as humanoid robots.”’ 71 The issues

which arise here are manifold: on the one hand, it would seem that the androids

possess some ability to choose, or at least to override certain protocols and betray

humans; on the other, the androids are almost indistinguishable from some humans

(who are referred to as “schizoids” in the novel) due to what qualifies as “empathy” in

68 Blade Runner
69 Paterek 2005.
70 Do Androids Dream of Electric Sheep?, p. 22
71 Ibid, p. 22

Dick’s dystopia;72 and, extending from this, the categorisation of humanness in the

novel leads one to question what being human means and whether the ability to

choose can be considered a “human” trait. However - in regard to the first issue -

the presumed arbitrariness of the androids could be accounted for by the causal

necessity of their lifespan. We discover that, ‘if [Rick] failed to get [the androids],

someone else would. Time and tide,’ the four-year lifespan to which the androids are

condemned.73 In addition to this, he wonders, ‘[d]o androids dream? … Evidently; that’s

why they occasionally kill their employers and flee here. A better life, without

servitude.’74 The androids are seemingly driven by a force which is out of their

control; they emulate the more human biological will to live, but to live a better life.

Ultimately, the novel is multilayered with complex interchanges between

human and android, blurring our perception of what being human or inhuman really

means. Perhaps Dick is merely illustrating to his reader that it does not

necessarily matter whether humans or androids are autonomous; rather, Dick was

responding to an age of post-atomic technological uncertainty, amidst the dawn of AI

research which accompanied it.75 Scepticism and suspicion towards AI had already

begun, and the fictional examples of both Asimov and Dick seem to directly respond

to that. Searle’s theoretical rebuttal, along with the scepticism of fictional characters

over whether artificially intelligent machines can really be considered thinking

machines all arise out of a profound uncertainty about our own humanity. As the

boundaries between human and machine become ever more blurred, and

romanticised notions of thought, imagination and reason become the domain of AI

research and fiction, the natural response appears to be to treat this with distrust.

72 Ibid, p. 30
73 Ibid, p. 146
74 Ibid, p. 145
75 The Dartmouth Conference of 1956 ‘is generally recognized as the official birthdate of the new
science.’ Crevier, p. 49

Chapter III:

A brave new posthuman world

‘“How could there have been stories about space travel before-” “The writers,” Pris

said, “made it up.” “Based on what?” “On imagination.”’76

‘The time will come when [you] will...be glad to put your arm around the computer

and say, “Hello, friend,” because it will be helping you do a great many things you

couldn’t do without it.’77

Isaac Asimov 1985

Now that it has been established that our cultural fear towards AI can also be traced to

our suspicion towards the posthuman, and the symbolic “loss” of the sanctity of the

human mind, one final question still lingers: are any of these anxieties grounded in

reality? Or, to put it a little more precisely, is AI - as portrayed in both

fiction and science - a real possibility? And if so, should it then be feared? There

certainly seems to be a consensus on the notion that the basics of AI (or soft AI)

have been achieved, and are ‘a present-day reality.’ 78 That reality consists of

humans being ‘overloaded with information,’ whilst computers are able to process

‘terabytes, petabytes (1000 terabytes, or 10^15 bytes) and even exabytes (1,000,000

terabytes, or 10^18 bytes)’ of data.79 It consists of expert systems utilising machine

learning algorithms to give computers ‘“the ability to learn without being explicitly

programmed.”’80 Even operating systems and web browsers are increasingly being
76 Do Androids Dream of Electric Sheep?, p. 119-20
77 Asimov 1985, p. 70
78 Calo 2011.
79 Negnevitsky 2011, p. 365
80 Simon 2013 (quoting Arthur Samuel, who coined the term “machine learning” in 1959), p. 89

given the ability to differentiate ‘between what a human says and what a human

wants.’81 The rapidity at which computers are progressing in these areas of

processing speeds, knowledge engineering and artificial neural networks is

impressive, and it certainly echoes examples of AI from the selected texts of this

paper. But it is questionable whether any of these developments measure up to the

teleological goal of artificial general intelligence - a self-aware, sentient, genuinely

thinking machine. When Asimov parodically references René Descartes’ phrase

cogito ergo sum (as the robot QT ponders that ‘“I, myself, exist, because I think”’), the

idea that Cutie is actually thinking is dismissed.82 But it is beyond doubt that the

machine is demonstrating a sophisticated level of thinking, and that it exhibits the

aforementioned characteristics for “strong AI”. This final chapter seeks to establish

how far humans have come in the quest to create thinking machines like those

prevalent in SF, whilst critically examining the sceptical attitudes to modern

applications of AI technology, and the anxiety of those who do see AI as an

inexorable reality that poses a threat to humanity.
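The Samuel formulation quoted above - "the ability to learn without being explicitly programmed" - can be made concrete with a minimal sketch. The perceptron below is a standard textbook device rather than anything drawn from the studies cited: no classification rule is written by the programmer; one is induced from labelled examples as the weights are adjusted.

    # A minimal perceptron: the rule for classifying inputs is never
    # written down by the programmer; it emerges from labelled examples.
    def train(examples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), label in examples:
                predicted = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                error = label - predicted
                # Nudge the weights in proportion to the error.
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
        return w, b

    # Labelled examples of logical OR; the program is given data, not rules.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    weights, bias = train(data)
    print(weights, bias)  # parameters that implement OR, learned rather than coded

Scaled up by many orders of magnitude, the same principle - parameters adjusted against data rather than rules written by hand - underlies the expert systems and artificial neural networks mentioned above.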

Up until 1950, notions of automata and artificially intelligent beings were more

or less exclusively kept within the domain of myth and fiction. Pamela McCorduck

argues that the desire to create machines which exhibit artificial intelligence and

human characteristics is one embedded in our history - a history…

‘...full of attempts...to make artificial intelligences, to reproduce what is the essential

us...Back and forth between myth and reality, our imaginations supplying what our

workshops couldn’t, we have engaged for a long time in this odd form of self-

reproduction.’83

81 Saenz 2010.
82 I, Robot, p. 59
83 McCorduck 2004, p. 3

McCorduck cites the ‘assorted automata from the workshops of the Greek god

Hephaestos,’ introduced in Homer’s Iliad, as a precursor to many of the

modern-day literary and scientific inquiries into artificial intelligence. 84 The idea of the

automaton became central to Mary Shelley’s initial literary moulding of artificial

intelligence, Victor Frankenstein’s monster - a figure which is able to self-reflect, and

question its own existence, asking ‘Who was I? What was I? Whence did I come?

What was my destination?’85 - and this itself spurred a variety of science fiction

written about robots and androids throughout the 20th century. But by the middle of

the century, the myths and legends of automatons served as the catalyst for

something tangible. In 1950, Alan Turing proposed the Turing Test as means to give

a human the impression that a machine was exhibiting intelligent behaviour: ‘the test

he proposed is that the computer should be interrogated by a human via a teletype,

and passes the test if the interrogator cannot tell if there is a computer or a human at

the other end.’86 As is suggested by Pris’ reflection on ‘stories about space travel’ - a

metaphor for SF - in Do Androids Dream of Electric Sheep?, there is an arguable

case for the idea that fiction itself served as an influence for a lot of the scientific

interest in artificially intelligent machines. 87 Many scientists, including AI researchers,

have cited Asimov as a key influence in their practice, including AI pioneer Marvin

Minsky, who was ‘entranced by his stories about space and time...the ideas about

robots affected [him] most.’88 Minsky even acted as an AI advisor for the behaviour of

the HAL-9000 computer in 2001: A Space Odyssey.89 This interaction between

scientists and science fiction demonstrates a reciprocal relationship between science

84 McCorduck, p. xxiii
85 Frankenstein, p. 99
86 Russell & Norvig 1995, p. 5
87 Chapter 3, first epigraph.
88 Markoff 1992.
89 Merriman 2016.

and literature; developments in AI research ‘paralleled and influenced the

emergence of a literature’ devoted to machines that ‘imitate not only our muscular

actions but the very actions of our minds.’ 90 And this interest in “thinking” machines

now exists as an exploding field of research; the mere transition from science fiction

to science (SF is ‘science fiction as well as science fiction’) demonstrates that the

creation of thinking machines is possible. 91

These historical interactions between myth, fiction and science all seem

teleologically pointed towards an actualised reality of AI. But despite this, a great

deal of scepticism and anxiety pervades this history as well. In I, Robot, Dr. Susan

Calvin is a fictional predecessor to a variety of AI critics who have grave concerns

about the impact the technology will have on society. And in Do Androids Dream of

Electric Sheep? the Voigt-Kampff test serves as a cautionary measure to find and

“retire” androids (although it is interesting to note that “CAPTCHAs” - tests which

‘differentiate humans from computers’ - have proven useful in computing and

security).92 Calvin exhibits a growing concern over the robots in Asimov’s stories. As

was mentioned in the second chapter, the robot Herbie lies to the robopsychologist

(inadvertently upsetting her), leading her to drive him insane:

‘The psychologist paid no attention. “You must tell them, but if you do, you hurt, so

you mustn’t, but if you don’t, you hurt, so you must; but-” And Herbie screamed! It

was like the whistling of a piccolo many times magnified...Herbie collapsed into a

huddled heap of motionless metal.’93

Calvin’s cruelty towards the robot is not an act in isolation, however. It is merely an

extreme manifestation of her apprehension of the possibility of artificial intelligences

causing harm to humans, something which is generally feared in the society of
90 Porush 1992, p. 212-3
91 Willis, p. 3
92 Ahn, Luis von et al 2003, p. 294
93 I, Robot, p. 122-3

Asimov’s robot series. In ‘Little Lost Robot’, she puts herself at risk by testing various

“Nestor” robots to find one robot that has been told to “get lost” by Gerald Black, a

researcher. This particular robot - the Nestor-10 - has been modified, so that the

First Law of Robotics (“A robot may not injure a human being, or, through inaction,

allow a human being to come to harm”) has been altered to omit the latter half of the

law (which would allow a human to be harmed through inaction). To find the robot,

Calvin insists that the scientists interview all sixty-three Nestor robots, confronting

them all with a rehearsed scenario in which a man is put in danger by a weight which

is about to fall on him. But there is a catch - the robots are all threatened by ‘[h]igh-

tension cables, capable of electrocuting the Nestor models’. 94 None of the robots

move (as they reason that ‘if I died on my way to him, I wouldn’t be able to save

him anyway’), resulting in Calvin resorting to sitting in the chair herself to elicit a

response from Nestor-10. She succeeds, as the robot edges towards her to “save”

her (being unaffected by the radiation beams), but it attacks her (due to the

modification of the First Law) and is deactivated by the scientists.

Calvin’s concern over the harm which AI could cause to humans parallels

real-world warnings from respected scientists today. Among them is Stephen

Hawking, who has gone so far as to claim that the ‘development of full artificial

intelligence could spell the end of the human race.’ 95 Expanding on this bold

criticism, he argues:

‘[AI] would take off on its own, and re-design itself at an ever increasing rate …

Humans, who are limited by slow biological evolution, couldn’t compete, and would

be superseded.’96

94 Ibid, p. 144
95 Cellan-Jones 2014.
96 Ibid.

Hawking’s view, like Calvin’s, does not exist in a vacuum; it arrives at a time when prominent technologists - including Elon Musk and Bill Gates - have also warned of the potential harm a developed AI could bring, with Musk describing it as ‘our biggest existential threat,’ and Gates saying that he is ‘concerned about super intelligence’ (the idea that the intelligence of computational technology will ‘inevitably and rapidly’ overtake ‘the species that invented it’). 97 98 99

Hawking and Musk are also signatories to an open letter which calls for concrete

research on how to avoid potential ‘pitfalls,’ and a focus on ‘maximizing the societal

benefit of AI.’100 In contrast to Hawking’s ominously apocalyptic talk of AI spelling the “end of the human race”, the open letter also acknowledges the numerous benefits which can be (and have been) reaped from AI. Another parallel between

concerns over AI in the “real world” and SF is the societal suspicion towards the so-

called ‘soulless monster-machines’ in The Caves of Steel forcing workers out of

employment.101 It is now possible to look up the likelihood that a robot will “take your job”: a study by researchers at Oxford University and Deloitte estimates that ‘35% of current jobs in the UK are at high risk of computerisation over the following 20 years’.102 The study also cites “empathy” as a prime reason why some jobs - such as social work, nursing and therapy - are less likely to become automated, which again parallels an example from SF (the Voigt-Kampff test treats “empathy” as a defining quality of “being human”).

But whether it is reasonable to be so anxious about the development of AI

soon comes into question. After all, in ‘Liar!’ the robot Herbie has no social

97 Gibbs 2014.
98 Rawlinson 2015.
99 Kurzweil 1999, p. 192
100 Future of Life Institute 2015.
101 The Caves of Steel, p. 154
102 BBC 2015.

awareness of the immorality of lying, and in ‘Little Lost Robot’, Nestor-10’s attack on

Calvin is an unforeseen consequence of tampering with the First Law of Robotics.

Each of these instances of fictional AI inflicting harm on humans (whether emotional or physical) is a direct consequence of human error. As Asimov himself has argued, ‘practically

everything human beings had made had elements of danger in them...being fallible

human beings’.103 The author mentions a real-world example of media hysteria when

a robot at the Kawasaki Heavy Industries plant in Japan allegedly “killed” Kenji

Urada, one of the plant’s workers:

‘When you read the article you got the vision of this monstrous machine with

shambling arms, machine oil dribbling down the side, sort of isolating the poor guy in

a corner, and then rending him limb from limb. That was not the true story’.104

The true story was of course that Urada was merely in the wrong place at the wrong

time: the ‘gear-grinding robot’ had temporarily stopped working, and (according to

Asimov) as Urada approached it to figure out what the problem was, he accidentally

hit the “on” button and was subsequently crushed in the machine’s gears. 105 As for the anxiety that AI will leave increasing numbers of humans unemployed, Nils J. Nilsson believes that this should be viewed as a ‘blessing rather

than a curse.’106 He examines the historical stigma of “unemployment”, and

welcomes the idea that ‘the new unemployment might better be thought of as a

liberating development. It will permit people to spend time on activities that will be

more gratifying and humane than most “jobs”.’ 107 Asimov himself adds to this

argument, and writes of the benefits which could come of a ‘computerized world with

considerably more leisure and with new kinds of jobs.’ 108 The apprehensive attitude
103 Asimov 1985, p. 60
104 Ibid, p. 63
105 Ibid, p. 62
106 Nilsson 1984, p. 6
107 Ibid, p. 14
108 Asimov, p. 63

towards AI “taking jobs”, and the presumption that this will be a purely negative phenomenon, both look unsubstantiated when weighed against the advantages which AI could potentially provide.

Stories like this have unfortunately propelled the “Frankenstein’s monster” narrative about AI which has all too often been perpetuated by critics of AI research such as Hawking, Musk and Gates: the narrative that - like the creature created by Victor Frankenstein - an “autonomous” AI will spiral uncontrollably towards revenge against its creators. This “Frankenstein complex” evidently ‘lies in the background’ of Asimov’s robot stories in the form of ostensible “transgressions” of the Laws of Robotics, ‘even if it is frequently denied.’ 109 This raises a further question: is there any evidence to suggest that such a scenario is plausible? In 1997,

IBM’s AI computer Deep Blue defeated the world chess champion Garry Kasparov,

apparently demonstrating ‘“moves that were based on understanding chess, on

feeling the position. We all thought computers couldn’t do that.”’ 110 And almost twenty

years on, AI researchers achieved a new feat, developing AlphaGo, the first machine intelligence to beat a professional human player at the complex board game Go, using ‘a mixture of clever strategies’.111 The spectacle of AlphaGo’s success was that, despite the rules of Go being somewhat simpler than those of chess, ‘a player typically has a choice of 200 moves, compared with about 20 in chess - there are more possible positions in Go than atoms in the universe’.112 Both examples are relevant because each demonstrates the exponential rate at which technology is advancing, especially in the field of AI. Ray Kurzweil, explaining Moore’s Law, notes that ‘you get a doubling of price performance of computing every one year’. 113

109 James, p. 42
110 Weber 1997.
111 BBC 2016.
112 Ibid.
113 TED talk, 2005.

Computing performance has demonstrably accelerated, and the prowess of computer programs in beating human professionals at traditional board games is just one piece of evidence for this. But despite these two milestones, scepticism about AI and warnings against it have only increased. As the open letter signed by

Hawking and Musk urges, AI researchers must pay attention to the deficit of social

intelligence in such programs. For all the rhetoric about Deep Blue and AlphaGo making counter-intuitive moves that evaded human foresight, the sheer intelligence both demonstrate does not encompass social awareness - a deficit which could further fuel the “Frankenstein complex” argument against AI.
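The scale behind these claims can be checked with some back-of-the-envelope arithmetic. The sketch below uses the branching factors quoted above together with rough, commonly cited figures for game length and for the number of atoms in the universe; all values are approximations introduced for illustration, not data drawn from the sources cited:

```python
# Rough arithmetic only; every figure here is an approximation.

# Naive game-tree size: (average available moves) ** (moves per game).
# Branching factors as quoted above: ~20 for chess, ~200 for Go;
# game lengths of ~80 and ~150 plies are commonly cited estimates.
chess_tree = 20 ** 80
go_tree = 200 ** 150

print(f"chess game tree ~ 10^{len(str(chess_tree)) - 1}")  # ~10^104
print(f"go game tree    ~ 10^{len(str(go_tree)) - 1}")     # ~10^345
# Atoms in the observable universe are usually put at ~10^80, so
# brute-force search alone could never master Go - hence AlphaGo's
# 'mixture of clever strategies'.

# Kurzweil's reading of Moore's Law: price performance doubles yearly.
performance = 1.0
for year in range(10):
    performance *= 2.0
print(f"relative price performance after a decade: x{performance:.0f}")  # x1024
```

Even on these crude figures, the gulf between the two games, and the compounding effect of yearly doubling, make clear why AlphaGo’s victory was read as a qualitative leap rather than a mere increase in brute computing power.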

However, researchers have made significant developments in affective

computing. At the Massachusetts Institute of Technology (MIT), for instance, Dr. Cynthia

Breazeal created the robot head Kismet in the late ‘90s, which ‘pouts, frowns, and

displays anger, along with a host of other expressions that outwardly display signs of

human emotion.’114 In addition, a consortium of several European universities

designed the infant humanoid robot iCub, which has sensors enabling it to feel being

touched, and lines of red LEDs which represent its mouth and eyebrows so that it

can exhibit facial expressions and display an emotional response. The aim of tests on the infant robot is to demonstrate the significance of the human body for the ‘development of cognitive capability’; the ‘“baby” robot will act in cognitive

scenarios, performing tasks useful for learning while interacting with the environment

and humans.’115 Both Kismet and iCub have played a significant role not only in developing affective computing in AI, but also in furthering our understanding of human cognition. Kismet demonstrates that social interaction between AI

114 Menzel and D’Aluisio 2000, p. 66


115 Metta et al 2010, p. 51

and humans is not impossible, whilst iCub has given researchers further insight into how interaction with the world and cognition are interrelated. Weighing up these applications of AI, the “threat” posed by the technology appears far less formidable than some suggest. The benefits we can reap from it are numerous, and in many ways it could be of significant value to scientists trying to gain a clearer picture of how we think, as well as of how computers can be programmed to think.
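To give a flavour of how such affective systems can be organised, the sketch below is a heavily simplified toy in the spirit of Kismet’s “affect space”, in which stimuli are scored along dimensions such as arousal and valence and mapped to an expressive pose. The dimension names echo Breazeal’s published model, but every stimulus, value, threshold and expression label here is invented for illustration:

```python
# A toy two-dimensional affect space, loosely inspired by Kismet.
# All stimuli, scores, thresholds and expression labels are invented.

def express(arousal: float, valence: float) -> str:
    """Map a point in (arousal, valence) space to an expression label."""
    if valence > 0.3:
        return "smile" if arousal > 0.5 else "contentment"
    if valence < -0.3:
        return "anger" if arousal > 0.5 else "pout"
    return "surprise" if arousal > 0.7 else "neutral"

# Hypothetical stimuli scored as (arousal, valence) in [0, 1] x [-1, 1].
stimuli = {
    "gentle voice":      (0.3,  0.6),
    "waving toy":        (0.8,  0.5),
    "sudden loud noise": (0.9, -0.5),
    "being ignored":     (0.2, -0.4),
}
for name, (arousal, valence) in stimuli.items():
    print(f"{name:18s} -> {express(arousal, valence)}")
```

The real systems are of course vastly richer - Kismet’s expressions emerge from continuously blended motor poses rather than discrete labels - but the sketch shows how “emotional” behaviour can be produced without any claim to inner feeling, which is precisely the point Breazeal concedes below.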

One final question to consider, then, is whether we have achieved a sophisticated AI and - if not - whether we are on the road towards strong machine

intelligence. Dr. Breazeal admits that her robot Kismet ‘is not conscious, so it does

not have feelings,’ which might suggest that the aim of developing a self-aware,

sentient machine is still out of reach. 116 The problems still facing AI - such as whether

it can be considered conscious, autonomous and/or sentient - certainly complicate

the field to this day. AI researchers and pundits have also been warning of an “AI

Winter” - a period in which ‘optimism evaporates in the research community, public

opinion follows through, and leading AI figures get ridiculed’ - and this has become

something of a ‘recurring nightmare’ in the field. 117 But despite the ‘ups and downs’

which the research has faced since it began, the field continues to boom with

hundreds of millions of dollars and euros being poured into it from companies such

as Google, Microsoft and IBM. 118 Ray Kurzweil points out that ‘today many

thousands of AI applications are deeply embedded in the infrastructure of every

industry,’ and claims that this significantly undermines the idea of an AI winter. 119 And

Stuart Russell & Peter Norvig bolster this claim, arguing that ‘the fact that AI has

116 Breazeal 2002, p. 112


117 Crevier, p. 203
118 Ibid.
119 Kurzweil 2005, p. 264

managed to reduce the problem of producing human-level intelligence to a set of

relatively well-defined technical problems seems to be progress in itself.’ 120 Though

many problems still exist in the field, it can at least be argued that confronting those problems is itself beneficial, since the closer attention they demand from researchers may help to alleviate the stigma and panic attached to the technology that have been examined thus far.

120 Russell & Norvig, p. 830

Conclusion:

Towards a new interdisciplinary approach

The ultimate aim of this dissertation is to remind SF critics that ‘the genre is science

fiction as well as science fiction.’121 The only way to offer new insights into the

cultural significance of SF, especially in relation to AI as an ever-expanding

discipline, is to take seriously the interdisciplinary relationship between the fiction of science fiction and the scientific reality which has been accomplished because of that fiction. There is an ingenuity embedded in our culture which has always

consisted of a) asking a fundamental question (“can machines think?”), b) illustrating

that question in myth and art, and c) answering it with the scientific method. It has

been argued throughout that AI is often the subject of fear and scepticism because of historical tendencies to marginalise and undermine the status of other humans, and because of the presumption that our minds are semi-sacred in value and for that reason cannot be replicated. These culturally embedded prejudices have

given rise to a debate over the ethics of AI, and fears from such figures as Stephen

Hawking and Bill Gates demonstrate this. Although these prejudices will be difficult to overthrow as researchers make gradual progress towards strong artificial intelligence, progress will nevertheless be made, and the benefits AI has produced - and will continue to produce - will inevitably shift social attitudes towards such intelligent computers. That is not to say, however, that the ethical implications of AI should not be discussed, and the fact that they have been debated, even in the mainstream, can at least be regarded as a triumph.

But in conducting these debates, it is vital that the history, mythology and literature

121 Willis, p. 3

behind both AI as a field, and the backlash which the idea of conscious robots has

faced, are considered. Portrayals of such machines as “soulless monsters” in SF - a popular literary genre - are symptomatic of a wider, irrational cultural fear of the prospects of AI, a fear which has been discussed throughout this dissertation. Once

research has adequately addressed these concerns, implementing such things as

common sense and social awareness in computers, then perhaps society will be

ready for a posthuman future.

Bibliography

Ahn, Luis von et al. ‘CAPTCHA: Using Hard AI Problems for Security,’ in

Proceedings of Eurocrypt, Vol. 2656, 2003.

Asimov, Isaac. ‘Our Future in the Cosmos - Computers,’ in Burke, James et al. The

Impact of Science on Society. http://history.nasa.gov/sp482.pdf Washington, DC:

NASA, 1985.

Asimov, Isaac. I, Robot. New York: Harper Voyager, 2013.

Asimov, Isaac. The Caves of Steel. New York: Bantam, 1991.

Blade Runner. Dir. Scott, Ridley. Perf. Ford, Harrison; Hauer, Rutger; Young, Sean.

United States: Warner Brothers, 1982.

Braidotti, Rosi. The Posthuman. Cambridge: Polity Press, 2013

Breazeal, Cynthia. Designing Sociable Robots. Cambridge, Massachusetts: The MIT

Press, 2002.

Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science

Fiction. Durham, North Carolina: Duke University Press, 1993.

Calo, Ryan. ‘The Sorcerer’s Apprentice, or: Why Weak AI is Interesting Enough’.

Center for Internet and Society (CIS) website, 30th August 2011.

Cellan-Jones, Rory. ‘Stephen Hawking warns artificial intelligence could end

mankind’. BBC, 2nd December 2014.

Crevier, Daniel. AI: The Tumultuous Search for Artificial Intelligence. New York:

Basic Books, 1994.

Dick, Philip K. Do Androids Dream of Electric Sheep? London: Millennium, 1999.

Du Sautoy, Marcus. ‘The Chinese Room Experiment - The Hunt for AI’.

https://www.youtube.com/watch?v=D0MD4sRHj1M BBC Worldwide, 2015.

Dvorsky, George. ‘Why Asimov’s Laws of Robotics Can’t Protect Us’.

http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-

1553665410 io9, 28th March 2014.

Evans, John H. Playing God?: Human Genetic Engineering and the Rationalization

of Public Bioethical Debate. Chicago: University of Chicago Press, 2002.

Freud, Sigmund. ‘The Uncanny,’ in Strachey, James (trans.) The Standard Edition of

the Complete Psychological Works of Sigmund Freud. London: The Hogarth Press,

1953.

Gibbs, Samuel. ‘Elon Musk: artificial intelligence is our biggest existential threat’.

The Guardian, 27th October 2014.

‘Google’s AI wins final Go challenge’. http://www.bbc.co.uk/news/technology-

35810133 BBC, 15th March 2016.

Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics,

Literature, and Informatics. Chicago: The University of Chicago Press, 1999.

James, Edward. ‘The Race Question in American Science Fiction,’ in Davies, Philip

(ed.) Science Fiction, Social Conflict and War. Manchester: Manchester University

Press, 1990.

Kristeva, Julia. Powers of Horror: An Essay on Abjection. Trans. Roudiez, Leon S.

New York: Columbia University Press, 1982.

Kurzweil, Ray. ‘The accelerating power of technology’.

https://www.ted.com/talks/ray_kurzweil_on_how_technology_will_transform_us/trans

cript?language=en#t-630180 TED, February 2005.

Kurzweil, Ray. The Age of Spiritual Machines. New York City: Penguin Group, 1999.

Kurzweil, Ray. The Singularity is Near. New York City: Penguin Group, 2005.

Liu, Lydia. H. The Freudian Robot: Digital Media and the Future of the Unconscious.

Chicago, Illinois: University of Chicago Press, 2011.

Locke, Brian. ‘The Orientalist Buddy Film and the “New Niggers”: Blade Runner

(1982, 1992, and 2007),’ in Racial Stigma on the Hollywood Screen: The Orientalist

Buddy Film. New York: Palgrave Macmillan, 2009.

Lundwall, Sam J. Science Fiction: What It’s All About. New York City: Ace, 1971.

Markoff, John. ‘Technology; A Celebration of Isaac Asimov’.

http://www.nytimes.com/1992/04/12/business/technology-a-celebration-of-isaac-

asimov.html?pagewanted=all The New York Times, 12th April 1992.

McCorduck, Pamela. Machines Who Think: A Personal Inquiry into the History and

Prospects of Artificial Intelligence. Massachusetts: A K Peters, 2004.

Menzel, Peter; D’Aluisio, Faith. Robo Sapiens: Evolution of a New Species.

Cambridge, Massachusetts: The MIT Press, 2000.

Merriman, Chris. ‘AI pioneer Marvin Minsky passes away aged 88’. The Inquirer,

26th January 2016.

Metta, Giorgio; Sandini, Giulio; Vernon, David; Natale, Lorenzo; Nori, Francesco.

‘The iCub humanoid robot: an open platform for research in embodied cognition’.

http://www.nist.gov/el/isd/ks/upload/PERMIS_2008_Final_Proceedings.pdf

Performance Metrics for Intelligent Systems (PerMIS) Workshop, 19th August 2008.

Mori, Masahiro. ‘The uncanny valley’, in IEEE Robotics & Automation Magazine. Vol.

19, Issue 2. June 2012.

Negnevitsky, Michael. Artificial Intelligence: A Guide to Intelligent Systems, 3rd

Edition. Harlow: Addison Wesley, 2011.

Nietzsche, Friedrich. Human, all too Human. Trans. Faber, Marion. Lincoln,

Nebraska: University of Nebraska Press, 1984.

Nilsson, Nils J. ‘Artificial Intelligence, Employment and Income,’ in AI Magazine. Vol.

5 No. 2, 1984.

Palmer, Christopher. Philip K. Dick: Exhilaration and Terror of the Postmodern.

Liverpool University Press, 2003.

Paterek, Liz. ‘Free Will and Artificial Intelligence’.

http://serendip.brynmawr.edu/sci_cult/evolit/s05/web2/lpaterek.html The Story of

Evolution, Spring 2005.

Rawlinson, Kevin. ‘Microsoft’s Bill Gates insists AI is a threat’. BBC, 29th January

2015.

‘Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter’.

http://futureoflife.org/misc/open_letter Future of Life Institute, 2015.

Roberts, Adam. Science Fiction. London: Routledge, 2006.

Russell, Stuart J.; Norvig, Peter. Artificial Intelligence: A Modern Approach. New

Jersey: Prentice-Hall, Inc., 1995.

Saenz, Aaron. ‘We Live in a Jungle of Artificial Intelligence that will Spawn

Sentience’. Singularity Hub website, 10th August 2010.

Sammon, Paul M. Future Noir: The Making of Blade Runner. London: Orion, 1997.

Searle, John. ‘Minds, Brains, and Programs,’ in The Behavioral and Brain Sciences.

Cambridge: Cambridge University Press, 1980.

Shelley, Mary. Frankenstein. Hertfordshire: Wordsworth Editions Limited, 1993.

Simon, Phil. Too Big to Ignore: The Business Case for Big Data. New Jersey: Wiley,

2013.

Weber, Bruce. ‘Computer Defeats Kasparov, Stunning the Chess Experts’.

http://www.nytimes.com/1997/05/05/nyregion/computer-defeats-kasparov-stunning-

the-chess-experts.html The New York Times, 5th May 1997.

Wegner, Daniel M. ‘The mind’s best trick: how we experience conscious will,’ in

Trends in Cognitive Sciences. Vol. 7 No. 2. February 2003.

‘Will a robot take your job?’ http://www.bbc.co.uk/news/technology-34066941 BBC,

11th September 2015.

Willis, Martin. Mesmerists, Monsters and Machines: Science Fiction and the Culture

of Science in the Nineteenth Century. Kent, Ohio: Kent State University Press, 2006.

