
SOCIAL AND CULTURAL STUDIES OF ROBOTS AND AI

Chatbots and the Domestication of AI
A Relational Approach

Hendrik Kempt
Social and Cultural Studies of Robots and AI

Series Editors
Kathleen Richardson
Faculty of Computing, Engineering, and Media
De Montfort University
Leicester, UK

Cathrine Hasse
Danish School of Education
Aarhus University
Copenhagen, Denmark

Teresa Heffernan
Department of English
St. Mary’s University
Halifax, NS, Canada
This is a groundbreaking series that investigates the ways in which the
“robot revolution” is shifting our understanding of what it means to be
human. With robots filling a variety of roles in society—from soldiers
to loving companions—we can see that the second machine age is
already here. This raises questions about the future of labor, war, our
environment, and even human-to-human relationships.

More information about this series at http://www.palgrave.com/gp/series/15887
Hendrik Kempt

Chatbots
and the Domestication
of AI
A Relational Approach
Hendrik Kempt
Institute of Applied Ethics
RWTH Aachen
Aachen, Germany

ISSN 2523-8523 ISSN 2523-8531 (electronic)


Social and Cultural Studies of Robots and AI
ISBN 978-3-030-56289-2 ISBN 978-3-030-56290-8 (eBook)
https://doi.org/10.1007/978-3-030-56290-8

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer
Nature Switzerland AG 2020
This work is subject to copyright. All rights are solely and exclusively licensed by the
Publisher, whether the whole or part of the material is concerned, specifically the rights
of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on
microfilms or in any other physical way, and transmission or information storage and
retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc.
in this publication does not imply, even in the absence of a specific statement, that such
names are exempt from the relevant protective laws and regulations and therefore free for
general use.
The publisher, the authors and the editors are safe to assume that the advice and informa-
tion in this book are believed to be true and accurate at the date of publication. Neither
the publisher nor the authors or the editors give a warranty, expressed or implied, with
respect to the material contained herein or for any errors or omissions that may have been
made. The publisher remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.

Cover credit: exdez/DigitalVision Vectors/Getty Images

This Palgrave Macmillan imprint is published by the registered company Springer Nature
Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
“πάντων χρημάτων μέτρον ἄνθρωπον εἶναι, τῶν μὲν ὄντων ὡς ἔστι, τῶν δὲ μὴ ὄντων ὡς οὐκ ἔστιν”—Protagoras.

“The human is the ultimate measure of all things, of the existence of things
that exist, as well as the non-existence of things that do not
exist.” —Protagoras
Acknowledgments

Nobody writes a book without accruing considerable social debt to others. To repay this debt, I want to extend my sincere thanks to some of the people who helped me realize this project from its inception to its completion.
I presented some of my early arguments at the biannual joint session of
the NCPS and SPSC in Rock Hill, South Carolina, and at the meeting of
the Society for Philosophy of Technology in College Station, Texas. I am
grateful for the opportunity to have done so and for the valuable feedback
that shaped my arguments.
I want to thank my supervisor, Prof. Carl Friedrich Gethmann, for
providing the space and intellectual freedom to work on this project, and
my colleagues in Siegen, Dr. Bruno Gransche, Sebastian Nähr-Wagener,
Jacqueline Bellon, and Dr. Michael Nerurkar for discussing my ideas
on multiple occasions. I am especially indebted to Prof. Alon Lavie
for providing me feedback on parts of my draft, advice on the current
developments in natural language processing, and encouragement on a
personal level.
Further, I profited greatly from witnessing many debates among philosophers of technology on Twitter. This new, highly accessible, and fast-paced space for public debate had a substantial influence on my understanding of current issues, and it may be surprising to see this source of inspiration acknowledged. But without philosophers on Twitter exchanging arguments, announcing projects and publications, and sharing articles and news, I would have missed much philosophically relevant information.
I also want to thank this series’ editor Prof. Kathleen Richardson, who
saw the relevance of my idea and encouraged me to pursue it further, as
well as Rachel Daniel and Madison Allums from Palgrave Macmillan for
their assistance in making this project a success.
Lastly, I want to thank my husband John, my family, and my friends,
who all remained supportive even when I had nothing else to talk about
but chatbots.
Praise for Chatbots and the
Domestication of AI

“A significant contribution to thinking about human-machine relationships. Instead of seeing chatbots as decontextualized things, Kempt explores how chatbots intervene in human social discourse and its epistemology, and contributes to the further development of a relational approach in contemporary debates about the moral standing of machines.”
—Mark Coeckelbergh, Professor of Media and Technology, University of Vienna, Austria, and author of Introduction to Philosophy of Technology (2019) and AI Ethics (2019)

“Anthropomorphism has been something of a ‘dirty word’ in the fields of AI and robotics. In this book, Hendrik Kempt critically reactualizes the concept, demonstrating how these anthropomorphic tendencies—tendencies that are seemingly irrepressible in the face of chatbots, digital assistants, and other things that talk—are not a bug to be eliminated but a social feature to be carefully cultivated and managed. Thinking beyond essentialist explanations and theories, Kempt develops a relational approach to the social that is responsive to and can be responsible for the opportunities and challenges of the 21st century.”
—David Gunkel, Professor of Media Studies, Northern Illinois University, USA, and author of The Machine Question: Critical Perspectives on AI, Robots and Ethics (2012)


“Hendrik Kempt’s thought-provoking book is an impressive rational examination of the place and social role that modern human society is creating and establishing for artificial conversational agents, and how our social relationships with such agents could potentially evolve in the future. As a scientist who has spent his career developing AI-based human language technology, I was fascinated by this guided philosophical tour of the potential sociological consequences of our scientific body of work. It’s a highly recommended and intellectually satisfying read.”
—Alon Lavie, Research Professor, Language Technologies Institute, Carnegie Mellon University, USA, and former President of the International Association for Machine Translation (2013–2015)
Contents

1 Introduction 1
1.1 Introduction 1
1.1.1 Smart Fridges and Other Reifications 2
1.1.2 What’s to Come? 5
References 6

2 Methods 7
2.1 Method and Orientation 7
2.2 Concepts and Conceptual Analysis 9
2.3 Description and Evaluation of Technologies 10
2.4 Distinctions and Discovery in Philosophy of Technology 13
2.5 Reaches and Limits of Philosophy 15
2.6 Moral Philosophy, Morality, Ethics 16
2.7 AI Ethics 18
2.8 Conclusion 19
References 20

3 The Social Dimension 23


3.1 The Concept of the Social 23
3.2 Toward a Digital Society 25
3.2.1 Public Discourse 26
3.2.2 Private Relationships 27


3.2.3 Embodiment and the (Un)Importance of Physical Presence 29
3.2.4 Are “Facebook-Friends” Friends? 30
3.3 Assigning Social Descriptors 31
3.4 From Social Descriptors to Social Relationships 34
3.5 Agent-Network Theory, Attachment Theories 37
3.6 Gender as Relational Descriptor 39
3.7 Institutionalized Relationships and Relationships qua
Humanity 40
3.8 Domestication as a Social Technique 41
3.9 Against Relational Arbitrariness 44
3.9.1 Supernatural Relationships 44
3.9.2 Para-social Relationships 45
3.9.3 Relationships with Aliens 46
3.10 Lessons for Imminent Changes or: The Rise
of Human–Machine Relationships 47
References 49

4 The Basics of Communicative AI 53


4.1 Definitions 53
4.1.1 An Additional Distinction 55
4.1.2 Embodiment 58
4.1.3 Some Terminological Notes 60
4.2 A Short History of Chatbots 61
4.3 Is ML–AI the Future of Artificial Conversational
Agents? 64
4.4 AI—General or Narrow? 67
4.5 The Economics of NLP 68
4.6 Turing Test and Its Human Limits 70
4.7 Conclusion: Why Think About AI in the First Place? 72
References 73

5 Artificial Social Agents 77


5.1 Rethinking Social Descriptors 77
5.1.1 The Appeals of Anthropomorphism 77
5.1.2 The Fallacy of Anthropomorphism 79
5.1.3 The Politics of Imitating Human Beings 80

5.1.4 Causing Harm—Privacy, Deception, Imbalance of Power 82
5.1.5 Legal Consequences for Anthropomorphism 89
5.1.6 Shaping the Industry Through Legislation 90
5.1.7 Conclusion 90
5.2 Philosophical Implications
of (Non-)anthropomorphism 91
5.2.1 Relating to Non-human Entities 92
5.2.2 The Limits of Relating to Natural Entities 92
5.2.3 The Other and the Artificial—A Short
Warning 93
5.2.4 Uncanny Others—Phenomenological Notes 94
5.2.5 The Artificial Other—An Instance
of Objective Spirit-Conversations? 96
5.3 Patiency and Pragmacentrism 102
5.3.1 An Empirical Challenge to Moral Patiency 105
5.4 Relating to the Artificial 106
5.4.1 Problematic Approaches 107
5.5 A Second Domestication 119
5.5.1 Taking Stock 119
5.5.2 Another Social Expansion 120
5.6 From Social Descriptors to Social Agents 121
5.6.1 Some Counterarguments 122
5.6.2 Social Agents 123
5.6.3 More Concerns 125
5.6.4 Missing the Mark 126
5.6.5 Case Study of Subtle Essentialism: Friendship
and Friendship* 126
5.7 A Second Domestication—Continued 128
5.8 Conclusion 129
References 131

6 Social Reverberations 137


6.1 Robot Rights 137
6.1.1 A 4-Way Matrix 138
6.1.2 The Relational Turn 142
6.1.3 The Return of Pragmacentrism 145
6.1.4 Conclusion—Robot Relational Protections 148
6.2 Human-Centered Design 148

6.2.1 Against Human-Centered Design? 149


6.2.2 Robot Relations, Human Realities 151
6.3 Including Non-human Agents in Normative Systems 152
6.3.1 Relational Real Estate 153
6.3.2 Legal and Digital Persons 154
6.4 New Social Fault Lines 155
6.4.1 Rogers’ Bell Curve and Its Limits 157
6.4.2 On Robophobes and Robophiles 159
6.4.3 One Battleground: Robot Rights 161
6.4.4 A New Struggle for Recognition 162
6.4.5 Limits of Recognition 164
6.4.6 Some Ethical Deliberations 166
6.4.7 A Positive Argument 168
6.5 Conclusion 168
References 170

7 Conclusions 175
7.1 Thinking Forward 175
7.2 Acting Forward 178
Reference 179

Index 181
CHAPTER 1

Introduction

1.1 Introduction
When considering prognoses about the potential of certain technologies and technological trends, we most often remember the outrageous misjudgments rather than the precise assessments, and we are most likely witnessing similarly false predictions these days without recognizing their falsity. From IBM president Thomas Watson stating in 1943 that the world would not need more than five computers to the recurring announcements of the imminent arrival of self-driving cars on our streets within a few years, the list of wrong predictions is long.
Considering the complexity of reasons for certain technologies to
become ubiquitous elements of many people’s everyday life, one would
be inclined to refrain from those prognoses altogether. Yet, those prog-
noses themselves may function as self-fulfilling prophecies, by inspiring
the public to think of a technology a certain way, thereby opening or
closing minds and markets for certain devices.
In addition to these self-fulfilling prophecies, unexpected breakthroughs in technological development, applicability, and compatibility, societal trends and ethical restrictions, economic (in)stability and investments, politically forced acceleration or deceleration, successful marketing campaigns, or simply luck all take their fair share in the rise and fall of technological standards, applications, and devices.

© The Author(s) 2020
H. Kempt, Chatbots and the Domestication of AI,
Social and Cultural Studies of Robots and AI,
https://doi.org/10.1007/978-3-030-56290-8_1

For some individuals, new technologies represent hopeful progress toward a better future. For others, the very same technologies are viewed as threats to the way of life they are accustomed to and intend to keep unchanged. No matter whether one views technological changes as net positives or negatives, the common denominator seems to be that technology facilitates change.
However, not all types of change are facilitated by technology. Tech-
nological progress most often results in social change. It changes the way
we relate to our environment, to each other, and often to ourselves. New
communication devices allow for constant interactions with people thou-
sands of miles away, while augmented reality will add another layer of
interaction and information to our immediate surroundings. Technologi-
cally assisted medical progress allows for curing diseases that just decades
ago were death sentences, while the latest autonomous battle drone can
strike without being noticed by its target. Some social movements, like
the Arab Spring, would not have been possible without ubiquitous access
to social media. However, this access also allows oppressive regimes an
even more oppressive grip on its population, as exemplified in the Chinese
social scores. Technological progress may result in social change, but it
does not guarantee social progress.
New technologies come with risks associated with their use, both
individual and collective risks. In open societies, discourses about the
acceptability of those risks ideally determine the overall acceptance of such
technology. Artificial intelligence has so far been an elusive technology
when it comes to its thorough risk-assessment and social response. Partly
due to a certain AI illiteracy of the general public, leading to broken
discourses about what AI can do, and partly due to the speed of its devel-
opment, especially of the last decade, a coherent risk-assessment has been
missing. This speed, often likened to a technological revolution, has also
opened many philosophical questions that are just now slowly being asked
and subsequently answered. Some of those questions will be asked here,
and hopefully some answers will be provided.

1.1.1 Smart Fridges and Other Reifications


For a philosophical inquiry into issues of AI, one main obstacle appears right at the beginning: how can and should we understand the otherwise opaque concept of artificial intelligence? Due to the vast range of methods and applications that are all claimed to incorporate and exhibit some form of intelligent behavior, an all-encompassing definition loses any practical purpose of delimiting an inquiry.
Take, as an example, a “smart fridge.” Its intelligence-claim is based on
the ability to scan items in one’s fridge and preorder those that, according
to typical use based on someone’s consumption profile, will be used up
soon, or warn about expired articles in the fridge. Calling the fridge smart,
then, is a reification of AI, as it is not the fridge as a whole, but the
added software and its connection to the cloud that is providing the smart
function.
We are used to identifying intelligent beings as embodied entities
occurring in nature, and this phenomenological basis is often the cause
for misattributions and confusion about the source of (artificial) intelli-
gence. This observation suggests that we require a fundamentally different
approach to artificial intelligence than to natural intelligence: AI may
come disembodied or might be re-embodied, duplicated, changed, and
adjusted to the tasks at hand. It never is just one intelligent artifact, but an
algorithm capable of operating on other hardware. To some degree, this
argument also applies to approaches in philosophy of AI that concentrate
on robots as embodied forms of artificially intelligent agents. Researchers
have long argued that embodiment is a prerequisite for many cognitive
capacities (Duffy and Joue 2000; Stoytchev 2009). However, this does
not mean that robots are to be considered the intelligent entity, but that
they operate with an intelligent algorithm. We should recommend, then,
that philosophers carefully define the object and scope of their inquiries
to avoid reification.
In this book, this object will be artificial speakers and their social
impact. Artificial speakers are understood as computer programs capable
of analyzing and reproducing natural language, that is the language
human beings use to communicate with each other. Most of those
speakers are not embodied, i.e., they do not appear with a physical pres-
ence, even though their application is certainly not limited to chatrooms
or being personal assistants in mobile phones and at home.
The reason for singling out artificial speakers among all current uses of certain types of AI is their capacity to speak with humans in their own language. This simple fact differentiates this technology from every other AI. No other technology produced by humankind has so far managed to enter stable, interactive communication based on people’s own language.
Engineers working in the area of natural-language processing (NLP), tasked with improving the skills of those artificial speakers, have no other way of proceeding than to imitate human language use, which ultimately results in deceptive copies of speaking robots that not only use human language but imitate human speakers. The better engineers fulfill this task, the more dubious their product becomes. With the incoming products of those engineering efforts, many of the topics discussed here are
also being discussed under the term “human–machine communication”
(HMC) or in media studies. In fact, many of the approaches of the social
sciences take the development of humanoid robots as a starting point for
their research (for example, Zhao 2006).
Take Google’s Duplex, advertised as a virtual assistant capable of seam-
lessly infiltrating human conversational practices by simulating human-
specific features, like thinking noises and interjections (Leviathan and
Matias 2018). Investigating the impact of such a humanoid robot on
the social relationships between humans and other humans, but also
between humans and those machines has become a central point for
HMC (Guzman 2018, 16).
Yet, many of those analyses approach these artificial speakers and social robots from a media- or communication-science background. Reflecting upon those processes from a philosophical perspective, then, is needed to provide tools both to describe and to assess these social robots and their relationships with us. A philosophical approach to the way we
interact and communicate with, rely on, and relate to these speaking
machines allows for a normative approach not only to the way those
machines are constructed, but also to our attitude toward the possibilities
of building human–machine relationships.
Many children treat their plastic pets with the same care and empathy as they would a living one. It seems that there are ways of relating to machines that are unknown, unfamiliar, and possibly uncomfortable to us due to preconceived notions not only of what technology can do, but also of what relationships should entail. A bigger picture is needed to answer the questions of future human–machine relationships. It is important to keep in mind that artificial speakers are designed entities and can thereby take forms that we, as the designing community, ideally consent to democratically.
The core diagnosis of this book is that we do not have this bigger picture available yet, and constructing one is the task of philosophers of technology in the twenty-first century. This coming century will undoubtedly bring new ways for human beings to relate to their technological surroundings, and one of these ways is to build social relationships with
them. One element of this bigger picture, then, is the idea that the social categories with which we describe elements of the social fabric woefully lack differentiation. This lack of differentiation is the reason why some people have rejected artificial speakers as a technology one could possibly relate to, similar to people who rejected the idea of relating to toy pets.
The limitations of social categories are driving the engineering goals
to create more humanoid robots, fueling the fears of people resulting
from this successful engineering, and limiting the imagination of human–
machine relationships that are beneficial for everyone involved. This is
accompanied by questionable presuppositions of what typical human
features are, how they ought to be reproduced, and how those repro-
duced features form our image of exhibiting those features in real
life.
The proposal for the bigger picture needed here, then, consists in offering alternatives to anthropomorphism that avoid several problematic developments, justifying the program of this book.

1.1.2 What’s to Come?


For this program to work, some preliminary clarifications ought to be
made. First, some methodological points are in order. These are presum-
ably not necessary for the philosophically educated reader, but possibly
quite useful for readers from other disciplines and backgrounds. The
difference between an analysis and a reconstruction, for example, will
carry some of the weight of this project, and it would be helpful for
readers to follow this point. Second, we require a better idea of how arti-
ficial speakers work, what the main objective in creating them currently is,
and why the assumption is justified that they will only increase in conver-
sational sophistication. This chapter, in turn, may be of more interest to
those of a less technical background. By pointing out that the current
standard of programming AI is machine learning, the high hopes or
concerns of soon arriving at a general artificial intelligence can be recal-
ibrated. Most of artificial intelligence today consists in the convincing
simulation of behavior and actions. However, as with any philosophy
of technology, some speculation and extrapolation are required. Third, a
relational approach is developed to lay the groundwork for understanding
social human–human relationships. This relational approach includes
human–pet relationships as a precedent for incorporating non-human
agents into the social fabric by assigning them a unique social category. It
also includes purely online-based human–human relationships as evidence
that physical proximity is no longer a requirement for meaningful relation-
ships. Fourth, the transfer of this relational approach to human–machine
relationships is presented. This transfer should be considered the core
chapter as it presents the arguments to incorporate artificial speakers into
the social fabric by establishing a new social category, akin to a second
domestication. Finally, the fifth chapter faces the consequences of such
a move, by acknowledging that this position requires a stance on the
debate on robot rights, human-based design, and the human–human
consequences of emerging human–machine relationships.

References
Duffy, Brian, and Gina Joue. 2000. Intelligent Robots: The Question of Embodiment. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.59.6703. Accessed February 11, 2020.
Guzman, Andrea L. (ed.). 2018. Human-Machine Communication. Rethinking Communication, Technology, and Ourselves. New York: Peter Lang.
Leviathan, Yaniv, and Yossi Matias. 2018. Google Duplex: An AI System for Accomplishing Real-World Tasks Over the Phone. Google AI Blog. https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html. Accessed February 11, 2020.
Stoytchev, Alexander. 2009. Some Basic Principles of Developmental Robotics. IEEE Transactions on Autonomous Mental Development 1 (2): 1–9.
Zhao, Shanyang. 2006. Humanoid Social Robots as a Medium of Communication. New Media and Society 8 (3): 401–419. https://doi.org/10.1177/1461444806061951. Accessed June 6, 2020.
CHAPTER 2

Methods

2.1 Method and Orientation


Some introductory remarks about the philosophy of technology are
required to permit a philosophical analysis of the phenomenon of artifi-
cial speakers. This chapter is intended to serve this purpose, even though
the richness and variety of philosophical methods in approaching tech-
nology already doom the prospect of presenting an uncontroversial view.
However, in acknowledging the futility of trying to write an adequate methods chapter on the philosophy of technology, we can play the game a bit: in discussing the concepts we require for this overall project, and how we derive them, we can avoid long-fought-out debates.
In consequence, this chapter does not touch upon some of the
more interesting theories and debates, like the approaches of post-
phenomenology or critical theory, even though their perspectives help
approach technology philosophically. Instead, we approach the philos-
ophy of technology and technology itself from the perspective of our
linguistic conventions and how describing and distinguishing technolog-
ical phenomena influences the way we think and assess those phenomena.
The concept of technology is too fundamental to provide a defini-
tion that is workable for any specific philosophical endeavor, requiring
this project to limit itself somewhat in scope. Just as a definition of “nature” will not contribute anything to a treatise on mammals, a definition of “technology” will not contribute much to the social issues of artificial speakers. Luckily enough, the subject of this endeavor, the philosophy of artificial intelligence, is an active and established field within the
philosophy of technology, with some hard cores and soft edges. In this
field, one way to divide the philosophical approaches to AI is into two main areas: one discusses the prerequisites of AI by examining philosophical concepts within the context of AI, and the other discusses the practical consequences of applied AI. These areas are not discretely distinct, as some theories in the former influence judgments in the latter.
The concepts of “intelligence” or “agency,” the problem of artificial minds and mental states, and the question of when machines deserve the attribution of agency are all prerequisite philosophical discussions that shape the way artificial intelligence is perceived. Those concepts and technological advancements are often interdependent, as technological progress can influence our conceptions of agency or artificial minds. However, as those are conceptual questions, they could be answered from the armchair. It is not so much a “decision” when we consider consciousness to be achieved or when agency ought to be attributed to a machine, even though those are inherently normative questions as well; ideally, the stronger argument and a more coherent organization of the invested conceptual inventory prevails.
The other focus lies on the consequences of AI. These consequences
usually pose ethical questions of how humans want to create their society
in the age of unprecedented computing power and autonomous agents.
From the question of sophisticated robots disrupting labor markets, through mass surveillance courtesy of self-learning and data-gathering algorithms, to controversies about relating to robots in emotionally significant ways, the revolution of AI will affect every person one way or another.
Both parts are sometimes disregarded as rehashing older philosoph-
ical debates within the context of an emerging technology that is less
revolutionary or problematic than presented in such debates (Nyholm
and Smids 2016; Beard 2019). And while some applications of AI are
certainly not revolutionary or deserving of a subsection of philosophy of
technology (since there is no “philosophy of airplanes” either), the potential
to affect most people’s lives in previously unseen ways is certainly a
reason to consider some of them separately.
To fully appreciate this potentiality, interdisciplinary discussions among
a wide variety of participants are required: philosophers, engineers, and
sociologists, as well as cognitive scientists, business leaders, and lawmakers. For
such debates to take off, a shared understanding of everyone’s methods
2 METHODS 9
and terminology is welcome, even though philosophers do not have the
best track record of providing sensible insight into their terminological
customs. By introducing a straightforward philosophical perspective in
the following chapter, we can hope to provide a ladder for those less
familiar with philosophical methods.

2.2 Concepts and Conceptual Analysis


The main methodological purpose of this project is to reconstruct the
meaning, scope, and content of concepts regarding the social implica-
tions of certain AI-driven technologies. There is an important distinction
between an analysis and a reconstruction, and this distinction will carry
this book’s approach. Conceptual analysis, as used in contemporary
analytic philosophy, outlines the meaning, scope, and contents of concepts
through differentiation and contextualization (Margolis and Laurence
2019). The primary purpose of an analysis is to sharpen the language
used to describe the world to avoid certain philosophical problems that
are based on an incoherent use of certain terminology. The very assumption
of a certain rationality within the use of concepts is itself a sign that
analysis is inherently normative (Wedgwood 2007).
However, conceptual analysis is usually thought of as non-normative,
as it limits itself to analyzing concepts with reference to our intuitions
and uses of said concept (as analysis means “to dissect” or “to break
into pieces”). Following this premise, conceptual analysis provides some
definitions of concepts in accordance with our intuitions.
However, the purpose of philosophical analysis is usually to provide
tested terminology that captures the use and content of concepts
adequately. Thereby, analysis always provides a certain normativity about
the adequate use of concepts. To acknowledge this normative dimension,
we may speak of “reconstruction” rather than analysis. A reconstruc-
tion can count as an analytic effort because, in it, we attempt to find
reliable central meanings through distinctions, comparisons, contextual-
ization, and decontextualization. However, a reconstruction also allows
for constructive suggestions on what a term’s use should be. Not only is
such a reconstruction bound to certain requirements of coherence, but
it also may incorporate changes within the use of the term. In the tradi-
tion of certain constructivist approaches of philosophy, like methodical
constructivism (Janich 1997), those uses and changes of use are taken
from the lifeworld (Lebenswelt) in which our everyday life is unfolding.
10 H. KEMPT

The reference to often pre-theoretical uses of certain concepts allows
for a reconstruction of terminology that is close to “ordinary language
philosophy.”
In reconstructing concepts within the philosophy of technology, then,
we should be encouraged not only to take the emerging conventions
of certain terms as used by engineers to pump intuitions (see Dennett
[2014] for an elaboration on the concept of “intuition pumps”). How much
intuitions should count in defining and forming terminology is an open
philosophical question, and one of the main criticisms against
current analytic philosophy. On the one hand, intuitions provide a helpful
initial idea about the scope of a term. On the other hand, certain uses of
terminology create intuitions about the correct uses, i.e., using concepts
a certain way creates intuitions about their use. Additionally, many intuitions
were formed and furthered in special social contexts that require
contextualization, and it remains unclear how far some of our most common
intuitions depend on certain problematic contexts. Thereby, philosophical
reconstruction should not shy away from making suggestions
about specific uses of terminology whenever adequate.

2.3 Description and Evaluation of Technologies


To paint with a very broad brush, one could summarize the methodological
approach of the analytical ethics of technology as first “describing”
the technology and then “evaluating” it.1 The “description”
unfolds by either naming and analyzing the features of current,
existing technologies, or by extrapolating trends and assuming features
soon to be brought to use in technology. Second, the “evaluation” is
to put said technologies to the test within normative frameworks by
assessing them and discussing the permissibility or impermissibility of the
development, implementation, and consumption of said technology.
Some issues lie between those two categories and are sometimes
discussed in a confusing overlap of those categories, for example, the
question of whether a robot could ever fulfill the requirements of
personhood. This question can be interpreted from a merely descrip-
tive perspective: either some technology will eventually tick all features
of a given concept of personhood, or it will not. If it is not capable of
reaching personhood, this might be due to concepts of personhood that
are not replicable because they rest on biologistic assumptions about the
possibility of the emergence of personhood. Or the judgment rests on some
strong assumptions about the ability of human-created technology to ever
tick all presented boxes, which appears to be a rather strong thesis about
the abilities of human creativity (and has comparatively few precedents in
history).
However, the question about artificial beings reaching personhood
could also be seen from a normative perspective: maybe someone does
not want to share the conceptual space of personhood with anyone else
other than the entities they are familiar with. Defenders of that case will
play a kind of cat-and-mouse game with the opposite side: the
metalinguistic negotiation (Plunkett 2015) about what the concept of
personhood should entail will always be shifted, under the cover of learning
about the concept of personhood along the way, to avoid the latest
technological achievement checking all descriptive boxes. This game
can go on until there are boxes that cannot be reasonably checked
because they cannot be reasonably demonstrated to have been checked:
for example, the existence of a soul or otherwise obscure constitutive
personal interior. Or, even more obvious, the insistence that “a machine
simply cannot be a person.”
Thereby, when answering a question about the ethics of technology, one
is presented with two fronts: on the one side, the metalinguistic negotiations,
which are often fueled by hidden normative agendas, and on the
other side, an actual normative debate after everyone has agreed on the
terminological inventory of the debate at hand.
For most debates, those two categories of descriptive and normative
methods to approach technology are mixed. Metalinguistic negotiations
and conceptual engineering efforts are both normative, as they represent
discourses about the way we should use words and concepts, and to a
degree, descriptive, as those debates lay the common ground of the very
things we want to debate about. Consensus about the necessary features
of personhood, to stay in this example, is a normative achievement that
pre-structures any debate we might have about whether a certain type of
artificial intelligence (or any type, for that matter) will be able to reach
“personhood.” These consensuses avoid normative evasion-arguments, in
which goalposts are moved along with the advancement of technology to a
point where they lie outside the playable field.
These elaborations serve to show two things. One, it is important
to keep this distinction clear to avoid misunderstandings. The main
achievements of philosophy have been, arguably, established by impor-
tant distinctions that allowed for expanding the understanding of certain
philosophical issues. One could argue that some of philosophy’s
strongest disagreements have ultimately been solved by introducing well-
placed distinctions that allowed for a reassessment of the core disagree-
ments. One could read Kant’s introduction of categories of perception as
such a philosophical distinction that effectively ended the debate between
rationalism and empiricism.
And second, allowing for negotiations about the proper use of a
concept opens up a methodological space that will be exploited and
built upon in the following. The debate about personhood shows that
its biggest challenge is to provide a consensus about the concept of “per-
sonhood.” At the same time, the question of whether technology does
(or will be able to) check off the subsequently spelled-out features is
one of precise description, which will also affect the debates about other
terminologies and their uses.
However, the conceptual space, i.e., the space in which we can iden-
tify new kinds by naming them, is vast and incomplete. Comparative
linguistics has shown just how many languages approach the world in
vastly different ways, from the Navajo language that has a circular time
concept to languages without subjects. And a methodology that recognizes
the necessity of debating the meaning and correct use of terms ought
also to recognize the necessity of sometimes constructing genuinely new
terminology. With Wittgenstein’s assumption of language boundaries
constituting the boundaries of one’s world (Wittgenstein 1922, Proposition 5.6),
it is a simple deduction to propose that if we expand our language
through distinctions and by opening new categories to identify new kinds,
we are also expanding the boundaries of our world. Much research has
addressed the empirical validity of linguistic relativity regarding
human world perception (as discussed under the
name “Sapir–Whorf hypothesis” [Hoijer 1954]). Still, the philosophi-
cally more interesting point we are pursuing here is the impact of these
expanded boundaries on normativity and the ability to describe certain
phenomena in a different way. Without the distinction between inten-
tional and unintentional body movements, we could not differentiate
between a murder and a fatal accident. If we were told that turning a
switch will kill someone in the room next to us (and will do nothing
else), and we turn the switch and thereby kill the person, then we have
committed a murder. If we have no idea about the situation and mistake
the switch for a light switch, then we cannot reasonably be accused of
murdering someone. Without the invention of “intention” as a feature of
describing behavior as action, we would not be able to tell the difference
between two very different situations.
Distinctions are not an end in themselves, since not all distinctions are
productive in illuminating normative issues. The long-held distinctions
between man and woman in legal codices, or between races, were
grave mistakes: they made normative distinctions where no normative
difference was to be marked. Making descriptive distinctions as a means of
allowing more detail in describing a situation is often exploited to attach
normative distinctions as well.
In the tradition of analytic philosophy, it seems appropriate in most
situations to offer more distinctions rather than fewer, as those distinctions
often help to discover normative differences in situations previously
unknown. However, it is important to keep in mind that distinctions
like those made here are a matter of social philosophy. They may nevertheless
be exploited to make normative distinctions where there are no reasons
for such distinctions. The simple fact that two things are different from
each other does not carry any normative weight. This mistake is one
crucial part of the naturalistic fallacy, in which descriptive distinctions
are erroneously believed to do normative work.

2.4 Distinctions and Discovery in Philosophy of Technology

Our general methodological point is to let new concepts emerge and to invite
debate about proposals for how to understand them. The
underlying assumption here is that, especially in the philosophy of tech-
nology, genuinely new ways of describing a technology will allow for a
genuinely new way of describing the problems associated with that tech-
nology or even identify new problems altogether, both conceptual and
normative.
The importance of such open conceptual approaches in the philosophy
of technology lies in the intense speed with which technological
progress occurs. Often, new technologies reach a general
public that is unprepared and insufficiently aware of their consequences. Thereby, the
general discourse of technology assessment depends on the often inappropriate
characterizations of marketing or engineering departments. The
way we describe technology partly predetermines our judgment of
it. Hence, keeping one’s terminological approach open to change to
describe an equally open field of technology is paramount if we are to
describe and assess the technology at hand in the first place.
However, two things ought to be pointed out regarding the required
openness for conceptual changes. First, there is the latency of philosophical
progress, often criticized by futurists and technology advocates (and
apologists) as a hindrance to progress (Hafner 1999). This latency is
a useful counter in the context of market-logic-dominated engineering
goals and the related hypes and unjustified bubbles. With incentives of
overpromising and overadvertising technology that remains unintelligible to
laypeople, it often remains unclear whether a new technology, in fact,
poses genuinely new questions relevant to philosophy. Some distinctions
offered by engineers will not hold as useful distinctions but are rather
reflective of the science fiction they employ to protect their long-term
engineering goals. Some others, such as strong anthropomorphism when
describing robotic behavior, can count as a merely careless approach to
language, or as influenced by an interest in selling certain technological
devices and decreasing resistance to new technology. Exposing
distinctions without differences, like the way some engineers describe the
behavior of their robots in colorful anthropomorphic terms,2 requires
that those fake distinctions have been made in the first place. Advocating
for awareness in the descriptors used does not obligate philosophers to
be language police of the sciences, but rather the referees of scientific
discourse, of which language is a part.3 If a discipline introduces distinc-
tions and new concepts, philosophical work lies in reconstructing their
uses, scope, and content.
Second, to keep inter-philosophical debates coherent, philosophy of
technology needs to keep a tether to some basic philosophical concepts.
Thereby, it appears reasonable to assess new technologies and their new
approaches to a certain practical problem with the established philosoph-
ical concepts to see how far such an approach carries. Without certain
principles in assessing and describing technology, philosophy of tech-
nology would not provide any productive insight but would merely
generate philosophical justifications of a given moral trend.
Another example from AI helps to illustrate this point: The Trolley-
cases, brought into the broad debate among philosophers by Philippa
Foot (1967), aim to invoke some intuitions about the normative relevance
of actions vs. inactions, as well as the normative relevance of the
amount of damage dealt in a situation. According to the Trolley-cases, the
difference between action and inaction, as a first descriptive distinction,
is also being considered normatively relevant due to some intuition that
“doing something” is normatively more relevant than “doing nothing”
(for an extensive discussion of the “doctrine of double effect,” to which
the Trolley cases allude, see McIntyre [2018]).
However, a loose Trolley running down tracks and all the associated
issues are, as Nyholm and Smids (2016) point out, thought-experiments,
i.e., arguments built on hyper-specific features that are supposed
to isolate certain intuitions about those features. Yet, with
autonomous cars entering streets, at least the arguments and distinctions
made in the debate around Trolley-cases are of burning significance (see
Keeling 2019). And even though there still seem to be open questions on
what exactly autonomous cars ought to recognize as protection-worthy
and the Trolley-cases discussion will not yield immediately transferable
rules, the progress made in that field (for those knowledgeable of it) is to
no small degree traceable to the extended discussion of the Trolley-case,
beginning in 1967.
Thereby, introducing a technology as posing genuinely new questions
that require a new set of distinctions and terminology carries a burden of
proof. This burden of proof is fulfilled if the limits of current categories of
describing and evaluating technology are surpassed. In the following, we
argue that some applications of AI technology, namely artificial speakers,
will do exactly that, by providing sophisticated artificial conversational
agents which are not sufficiently described and evaluated by relying on
the technology we have produced so far.

2.5 Reaches and Limits of Philosophy


Mere philosophical arguments cannot induce societal changes in under-
standing and attitudes toward technology. It would be a mistake to
assume that the recommendations made in philosophical discourses would
amount to public opinion. Philosophical discourse is not equal to public
discourse. A philosophical project is best understood as mainly an ideally
well-thought-through collection of normative suggestions for improving
the discourse by providing clear renderings of arguments and questioning
preconceived notions of certain concepts.
Thereby, rules of philosophical discourse are usually assumed to be
somewhat different, and the conclusions drawn are often not immediately
practical. One could describe philosophical discussions as rational pres-
sure chambers to test the intricacies and extremes of certain positions and