
Chapter 6

Postphenomenology and Ethics


Peter-Paul Verbeek

Introduction
How can we account for the ethical significance of technology? Technologies have become an intricate part of human existence, and their influence on human beings and society raises many ethical questions and concerns. But does this imply that technology itself is ethically significant? After all, it is human beings who decide on the goals for which they use technologies, not the technologies themselves. It would be odd to blame a car for a traffic accident: it is the driver who has moral agency, not the vehicle. Or would we be throwing out the child with the bathwater if we drew this conclusion? Should we indeed take seriously the idea that technologies are ethically ‘charged’, and that there is some kind of ethics ‘in’ the devices and systems that we use? When Artificial Intelligence helps medical doctors to make decisions about life and death, doesn’t that make AI systems ‘moral’?
Questions like these have a central place in the postphenomenological approach to technology. This approach takes the relations between human beings and technologies as its starting point. The role of technologies in society, after all, always rests on the relations that human beings have with them: on the basis of human-technology relations, technologies help to shape human practices, perceptions, and interpretations. And exactly this ‘shaping’ role of technology can be seen as the basis for its ethical significance. After all, ethics is about the questions of ‘how to act’ and ‘how to live’, and the influence of technologies on human actions and decisions gives them an explicit role in how we answer these ethical questions. The main challenges this brings for the ethics of technology, then, are (1) how to conceptualize this ‘morality of technology’; and (2) how to deal with it in a responsible way.
In this chapter, I will explain the postphenomenological approach to ethics of technol-
ogy in three steps.1 First, I will introduce postphenomenology itself, as a specific way to
analyze technology and its role in society. Second, I will discuss the various dimensions of
the moral significance of technology that become visible from the postphenomenological
approach. I will explain how technologies mediate moral actions and decisions, how they
help to shape value frameworks, and how they challenge the concepts with which we can
do ethics in the first place. Third, I will move this discussion from theoretical philosophy
to practical philosophy, by explaining the approach of Guidance Ethics, which grew out of
postphenomenological ethics of technology.

Postphenomenology
The phenomenological tradition in philosophy has always had quite a substantial role in philosophy of technology. Phenomenology focuses on the study of human experience. Its central starting point is the intricate connection between subject and object, human and world. Human subjects and the world of objects can never be separated, after all: humans are always directed at the world (we always see something, hear something, feel something) and, at the same time, the world is what it is for us based on our relations with it. The world ‘in itself’, therefore, is by definition inaccessible to human beings. As soon as we even ask a question about it, it becomes a world-for-us, which is meaningful on the basis of our relations with it. Phenomenology is a relational approach: it sees humans and world as intricately connected.

DOI: 10.4324/9781003189466-8
On the basis of the phenomenological focus on ‘experience’, classical phenomenological analyses of technology tended to be quite negative. They typically saw technology as a threat to the primordial role of experience and considered technology to be an alienating force. Karl Jaspers, for instance, considered technology to be alienating in an existential sense: we have become dependent on technology, and the technologization of labor together with the increasingly bureaucratic organization of society turns our social environment into a big ‘Apparatus’, in which human authenticity is less important than the function of each individual in the system. For Martin Heidegger, technology was alienating in a hermeneutic sense: it affects our understanding of the world and of ourselves. The technological way of thinking approaches all entities in the world as ‘raw material’ for the human ‘will to power’. While an old, wooden bridge over the river Rhine still recognized the river in its own right, Heidegger stated, a hydroelectric power station built into it forces the river to show itself as a supplier of energy (Heidegger 1977, 16–17).
Over time, resistance grew against the romanticism and one-sidedly negative character of these classical positions. Gradually, technology came to be understood as an element of society, rather than being opposed to it as an alienating force. As I have argued in my book What Things Do (Verbeek 2005), classical philosophy of technology typically took a ‘transcendentalistic’ approach to technology: it reduced technology to its conditions and analyzed these conditions rather than the technologies themselves. Jaspers reduced technology to the system of mass production, and Heidegger to the technological way of understanding the world. This resistance resulted in the so-called ‘empirical turn’ in philosophy of technology: a turn toward studying concrete technological artifacts and systems as a basis for philosophical analysis. Instead of reducing technologies to their conditions, it took them as a starting point.
The postphenomenological approach, which developed out of the work of the North American philosopher Don Ihde, embodies this empirical turn (Ihde 1990; Selinger 2006; Rosenberger and Verbeek 2015). Postphenomenology leaves the romantic opposition of humans and technologies behind, and approaches technology as constitutive for human existence. Instead of taking ‘Technology’ as its object of investigation, it focuses on actual technologies and the ways in which they help to shape the relations between human beings and their world. The central idea of the postphenomenological approach is that it does not locate technologies in a realm of material ‘objects’ that is clearly demarcated from the realm of human subjects, but in the relations between humans and world. When using a technology, humans are typically not only interacting with that technology, but also have a relation with the world via that technology. This means that technologies can bring about new human-world relations, ranging from social interactions to moral and aesthetic experiences, and from scientific observations to religious awe. In short, technologies bring mediation rather than alienation. MRI scanners, for instance, help to shape how neuroscientists understand the brain and the mind (De Boer et al. 2020). And Artificially Intelligent systems help medical doctors to understand the symptoms of patients. Technologies are a medium for human-world relations, and this mediation helps to shape the character of these relations, including people’s understanding of the world (Verbeek 2015).
To understand this phenomenon of technological mediation, we need to start from tech-
nologies themselves, instead of reducing them to their conditions. Don Ihde distinguishes
several types of relations human beings can have with technologies. Some technologies are embodied, like a pair of glasses: we look through them, not at them. Others are read, like a thermometer, which gives a representation of the temperature that requires human interpretation. Another type of relation is the alterity relation, in which there is an interaction with a technology as a quasi-other, like interacting with a social robot. And fourth, there is the background relation, in which technologies function as a context for human activities and experience, like heating and air conditioning systems that operate without our noticing them.
In all these human-technology relations, technologies are not neutral intermediaries between humans and world, but ‘mediators’ that help to shape how human beings engage with the world, and how the world becomes meaningful for them. When technologies are used, they contribute to the human practices and perceptions that emerge from that use. And it is on the basis of this mediating role that technologies have an ethical dimension: by helping to shape human practices and interpretations of the world, technologies also help to shape moral actions and decisions, as will become clear below.

The Ethical Significance of Technology


The postphenomenological approach makes a specific contribution to the ethics of technology. First of all, postphenomenology makes it possible to analyze the ‘impact’ of specific technologies on human beings in a detailed way, as a basis for ethical evaluation: the ethical assessment of technologies can be based on the identification of technological mediations. This results in questions like: to what extent is it acceptable to use speed bumps to make people drive more slowly? How acceptable is the risk that WiFi tracking in public spaces discourages people from visiting these places? Et cetera.
Such ethical questions address technological mediation ‘from the outside’, as it were: they apply an ethical framework to assess the moral quality of specific technological mediations. There is a more intricate way to connect ethics and mediation, though: the phenomenon of technological mediation has a normative dimension itself. By helping to shape human practices and perceptions, technologies play a mediating role in the central questions of ethics: the questions of ‘how to act?’ and ‘how to live?’. Technologies, in other words, are part of the ways in which humans do ethics. This section will highlight three dimensions of this technological mediation of ethics: the mediation of moral actions and decisions; the mediated character of values and moral frameworks; and the technological disruption of ethical concepts, requiring the development of new concepts to address ethical questions.

Moral Mediation
How can we conceptualize the moral significance of technology in a philosophical discourse that connects ethics only to human subjects, not to technological objects? To qualify as a moral agent, after all, intentionality is needed – a condition that can never be met by technological artifacts. Moreover, attributing agency to things could have the absurd consequence that we could actually blame things for ethically problematic actions. It could also reduce our sense of human responsibility: why take responsibility ourselves if we can leave our responsibilities to technology? (Peterson and Spahn 2010; Peterson 2012).
From a postphenomenological point of view, such arguments build on an unjustified separation of humans and technologies: the question seems to be whether moral agency can be a property of technologies, just as it is a property of human beings. But from a postphenomenological perspective, moral agency should not be located ‘in’ technologies themselves,
but in the interactions between humans and technologies. Ethics is ‘done’ on the basis of human-technology relations, in which technologies have a role as ‘moral mediators’ (Verbeek 2011; Kudina 2019): they play a mediating role in the moral relations in which human beings are engaged. By helping to shape how humans behave and understand the world, technologies-in-use also help to shape moral decisions and moral action.
One of the central examples with which this phenomenon of moral mediation has been investigated is prenatal diagnostics. Sonograms create new moral relations between expecting parents and the fetus. First of all, sonograms make the fetus visible already during pregnancy. This changes the relation between mother and fetus: the mother now becomes the ‘environment’ of the fetus, while the fetus appears on the screen as a quasi-independent human being. Not the unity of the pregnant woman and the fetus, but a visual depiction of the fetus itself becomes the basis for developing a moral relation to the fetus. Moreover, sonograms make it possible to obtain information about the health condition of the fetus before it is born, and in countries where abortion is legal this makes parents responsible for having a child with a specific health condition: what used to be ‘fate’ now becomes ‘choice’. This, as well, affects the moral relations with the fetus: it informs decisions about parenthood and abortion (Verbeek 2008).

Mediated Morality
Technologies not only mediate morality at the micro-level of individual human-technology relations, but also at the macro-level of moral frameworks and values. To make this visible, recent studies have expanded the scope of the postphenomenological approach. Besides focusing on technological mediation, these studies have also started to include the ‘appropriation’ of technologies by human beings (Verbeek 2016; Kudina 2019). The initial focus of postphenomenology on what things do (Verbeek 2005), also in moral relations, left underexposed the fact that human beings, too, have an active role in bringing about human-technology relations. The technological mediation of moral actions and interpretations is not only the result of the characteristics of these technologies, but also of the ways in which human beings ‘appropriate’ them, as part of their relations with the world. The morally mediating role of sonography, for instance, does not only result from the technological capacity to make the fetus visible, but also from the human appropriation of this capacity as a possibility to investigate the medical condition of the fetus, and as a potential basis for making decisions to act.
Philosopher Olya Kudina has developed a model to investigate this interplay between mediation and appropriation: the so-called ‘hermeneutic lemniscate’ (Kudina 2019). This model is an expansion of the figure of the ‘hermeneutic circle’, which explains the dynamics between interpreter and interpreted (Gadamer 1988). This hermeneutic circle has the following structure: by interpreting the world, the world gets constituted in a specific way for the interpreter; as a result, this newly ‘constituted world’ becomes a new context for the interpreter, which then constitutes her or him in a specific way; on this basis, the interpreter develops a renewed interpretation of the world; et cetera. As Kudina explains, this circular relation between interpreter and interpreted is in fact mediated by technologies. The resulting ‘technologically mediated hermeneutic circle’ connects humans, technology, and world via a lemniscate-shaped structure: ∞. Humans interpret a technology in a specific way (human —> technology), which enables the technology to mediate human interpretations of the world in a specific way (technology —> world). Against the background of this specific understanding of the world, the technology acquires a specific role and meaning (world —> technology), which in turn constitutes the user in a specific way (technology —> human), and so on.

Again, the example of sonography is helpful to illustrate this. People relate to ultrasound technology with the intention of making the fetus visible; on the basis of this interpretation of the technology, information about the medical condition of the fetus becomes available, which then constitutes the fetus as a ‘potential patient’. Against this background of potential patienthood, and in a society that allows abortion, the ultrasound technology then gets interpreted as a technology that can not only be used to see the fetus, but also to prevent the birth of children with a specific health condition; and on the basis of this new interpretation of the technology, parents get constituted as being responsible for having a child with a specific health condition, and therefore as decision-makers about the life of the fetus. Moral mediation appears to be a dynamic process of interpretation, in which technological mediations and human interpretations are closely intertwined.
This hermeneutic lemniscate makes it possible to connect the micro-level of individual human-technology relations to the macro-level of moral values and frameworks. At this macro-level, too, technologies play a mediating role. The Dutch philosopher of technology Tsjalling Swierstra has called this phenomenon ‘technomoral change’ (Swierstra et al. 2009): technological developments can not only be evaluated with the help of moral frameworks, but can also affect these frameworks themselves. A good example in this context is the moral impact of the birth control pill, as analyzed by Annemarie Mol (Mol 1997). The pill was not only an outcome of the sexual revolution, but also helped to shape it. By disconnecting sexuality from reproduction, it had a substantial impact on value frameworks regarding sexuality. Before the birth control pill, having sex was intricately connected to the possibility of pregnancy, but the introduction of the pill in fact normalized having sex that is not directed at reproduction. Because of this, the pill contributed to a growing acceptance of sexual relations that cannot result in reproduction, like homosexual relations: it mediated moral frameworks regarding sexuality.
This phenomenon of technomoral change, and its dynamics of mediation and appropriation, can be studied empirically. Olya Kudina, for instance, has investigated online discussions about Google Glass on YouTube, focusing on the implicit and explicit definitions of privacy that play a role in these discussions. Her study shows that Glass invited people to define ‘privacy’ in new ways that differ from the regular definitions that can be found in textbooks (Kudina 2019). Glass made people understand their privacy as more than the right to be left alone, or to have control over their data: they started to define it as the privacy of being together (is the other person really with you, or looking at something else?), for instance, and of personal memories (will recordings of events from a first-person perspective make memories less ‘private’?).
This phenomenon of mediated morality brings an interesting extra dimension to the ethics of technology. The ethical evaluation of technologies appears to require anticipation not only of their future social implications, but also of the impact they might have on the moral frameworks from which they might be evaluated in the future. This can even be seen as a new variant of the ‘Collingridge dilemma’ in the governance of technology (Kudina and Verbeek 2019). This dilemma, which is also called the ‘control dilemma’, says that attempts to guide innovation processes always seem to be either too early or too late: at an early stage of the development of a new technology, change is relatively easy, but the potential social implications of the technology are not yet clear; once these implications have become clear, changing the path of development of the technology has become hard (Collingridge 1980).
Technomoral change adds an ethical dimension to this dilemma: not only is it hard to anticipate the future impacts of a technology, but also the future moral frameworks from which these impacts will be evaluated. To deal with this situation, it is important to create
‘threshold situations’ for technologies-in-development: situations in which it is already possible to investigate empirically how technologies might induce value change or moral mediation, because of a small-scale, experimental introduction of the technology in society. Such threshold situations make it possible to anticipate future social impacts and moral change on an empirical rather than a merely speculative basis (Kudina and Verbeek 2019).

Conceptual Disruption
A third dimension of the ethical implications of human-technology relations is the phenomenon of conceptual disruption. Here, technologies affect yet another layer of ethics: beyond the micro- and macro-levels of morality, there is also a sub-level or infrastructural level at which technologies have an impact on ethics, by disrupting the very concepts with which humans can do ethics in the first place.2 Several contemporary technologies – like robots, genome editing, and climate engineering technologies – escape the concepts with which ethical theory has been working over the past decades or even centuries. The concept of ‘moral agency’, for instance, loses its self-evidence when robotic technologies like self-driving cars are equipped with ‘learning’ algorithms that enable them to make decisions about the lives of human beings in case of a crash. And how are we to deal with the concept of ‘human rights’ when the DNA of an organism contains both human and nonhuman elements? Should we consider this organism to have animal rights, human rights, or a blend of both? How should we understand the concept of ‘risk’ in relation to climate engineering technologies that could ‘dim the sun’ (Roeser et al. 2019)? How can we use the concept of ‘intrinsic value’ when nature itself becomes an engineering project? Should risks be acceptable for future generations, or for nature itself? And if so, how can they be represented in democratic processes?
Conceptual disruption is not a direct result of human-technology relations, but is encountered when humans try to deal with the ethical questions that arise from their interactions with technologies. Some of these concepts indeed concern the implications of technology for
our understanding of the human being itself, where concepts like autonomy, solidarity,
empathy, and accountability are challenged in our interactions with technologies. Other
concepts concern the relations between technology and society, like justice, well-being, and
democracy. And yet another set of concepts concerns the relations between technology and
nature, like naturalness and artificiality, control, and intrinsic value. New and emerging
technologies affect ethics at its deepest level: they challenge or even escape the categories
with which we can do ethics in the first place. And by doing so, they urge us to revise, ex-
pand, or innovate the conceptual infrastructure for ethical analysis and reflection.

Guidance Ethics
Analyzing the moral significance of technologies is not only an interesting academic activity; it also offers many opportunities to connect the ethics of technology to practices of design and innovation. When it becomes possible to analyze how technologies are morally significant, after all, the ethics of technology can also turn itself into an ethics for technology.
In the field of design ethics, several approaches have been developed already, to which the postphenomenological approach adds its own distinct dimension. One of the most influential approaches in the field is Batya Friedman’s ‘Value Sensitive Design’ approach. This interdisciplinary approach enables designers to anticipate the values at stake in the technology they are designing, in order to feed this anticipation back into the design process itself (Friedman and Hendry 2019; see also Van den Hoven et al. 2017).

The postphenomenological approach can be used to expand this program of ‘value sensitive design’. First of all, postphenomenology makes it possible to take moral mediation and mediated morality into account when designing technologies. Rather than aiming to ‘load’ technologies with predefined values, as is often the case in Value Sensitive Design, the dynamics of the interaction between humans, values, and technologies then becomes the starting point (Verbeek 2013, 2017; Smits et al. 2019). Design then becomes a process of intervention in an ongoing dynamic in which values are not given beforehand, but develop in close interaction with the technologies that are designed and evaluated with the help of these very values.
Second, the postphenomenological approach makes it possible to develop a new, constructive approach in applied ethics. The field of applied ethics has been strongly influenced by bioethics. This branch of ethics focuses on ‘ethical assessment’, for good reasons: it is often carried out by medical-ethical committees that evaluate proposals for research or medical intervention in order to approve or reject them. The bioethical model of ethics is often directed at questions of ‘yes or no’, ‘permit or forbid’. In the ethics of technology, though, the possibility of connecting ethical reflection to the design, implementation, and use of technologies allows for a broader ethical approach than ‘assessment’ only. Ethics can also function as an ‘accompaniment’ of technology, aiming to guide technology in society.
The ‘Guidance Ethics Approach’ (Verbeek and Tijink 2020) is a good example of this accompanying role of ethical reflection. The approach was developed by a Dutch working group on the ethics of digital technologies, in which companies, governmental organizations, and academics work together (Verbeek and Tijink 2020, 62). The approach aims to make ethical analysis ‘actionable’, by connecting it concretely to the development, implementation, and use of new technologies. It aims to be an ethics ‘from within’ rather than ‘from outside’: it finds its basis in concrete engagement with technological practices rather than in distant analysis. Moreover, Guidance Ethics has a ‘bottom-up’ rather than a ‘top-down’ character: it aims to empower the people who are developing the technology or experiencing its impact. Rather than delegating ethical reflection to ethical experts who apply ethical theories to concrete technologies, Guidance Ethics gives a voice to professionals and citizens: their nearness to the technology in the various stages of its development and deployment is a good basis for ethical analysis, and for connecting this analysis to practices of design, implementation, and use. And, finally, Guidance Ethics should be seen as a form of ‘positive ethics’ rather than ‘negative ethics’. This does not imply that the approach is always positive about technologies, but rather that it does not primarily direct itself (negatively) at keeping at bay what we do not want, but rather (positively) at helping to shape the conditions for what we do want. Just as ‘positive psychology’ focuses on stimulating well-being rather than ‘curing diseases’, and ‘positive design’ focuses on shaping conditions for flourishing rather than ‘solving problems’, positive ethics focuses on connecting values to technology, rather than on defining the boundaries that demarcate what we do not want.
The approach has three distinct steps (see Figure 6.1). Its first step (‘Case’) is a careful analysis of the technology that is at stake, focusing both on its material-technological details and on its concrete context of application and use. The purpose of this step is to get a close understanding of the technology and its social and societal embedding. The second step of the approach (‘Dialogue’) aims to identify the key values that are at stake in relation to this technology. It arrives at these values by (a) identifying the actors who are involved in the development and functioning of the technology, and who experience the impact of the technology, and (b) anticipating the potential effects (‘mediations’) that this technology could have on all relevant actors. The third and last step focuses on formulating concrete ‘Options for action’. The list of key values that results from step 2 is not used to reach a verdict about the technology, but to formulate concrete options for action regarding the technology itself (is redesign needed to support these values?), its environment (do we need regulation or supporting technology?), and its users (could empowerment of users, education, and communication play a role in supporting the values derived in step 2?).

Figure 6.1 Guidance Ethics Approach (ECP | Platform voor de InformatieSamenleving)
The Guidance Ethics Approach is not a normative theory itself: it aims to derive its normative content from the people who follow the approach. Yet this does not alter the fact that the approach itself is based on a normative starting point: it is directed at emancipation and empowerment, enabling stakeholders in society to anticipate the implications of technology for society, to analyze the normative dimension of these implications, and to translate this analysis into the design, implementation, and use of the technology.

Conclusion
Postphenomenology brings a perspective of human-technology relations to the ethics of
technology. By investigating how technologies help to shape human practices, perceptions,
and interpretative frameworks, it makes visible a moral dimension of technology itself.
Technologies mediate moral actions and decisions, help to shape moral frameworks, and can even disrupt the concepts with which we can do ethics in the first place. This moral significance should not be seen as an intrinsic property of technology itself: postphenomenology does not claim that technologies are ‘moral agents’ just as humans are. On the contrary: it claims that moral agency should never be seen as something ‘purely human’, but as intrinsically mediated by technologies.
This moral significance of technology brings an extra dimension to the ethics of technology: it shows that ethical analysis needs to take into account how this analysis is itself affected by the very technologies it aims to analyze. The yardstick is not independent of the things it is meant to measure – if we can compare ethics to measuring at all. This means that the ethics of technology should stay close to the technologies themselves and to their concrete implications for human beings, societies, and ethical practices, frameworks, and contexts.
The Guidance Ethics Approach offers a structure for doing exactly that.

Notes
1 This chapter incorporates reworked fragments from Verbeek, P.P. (2021). The Empirical Turn.
In: S. Vallor (ed.), The Oxford Handbook of Philosophy of Technology. Oxford: Oxford University
Press.
2 This phenomenon of conceptual disruption is the main object of investigation for a large con-
sortium of Dutch researchers that received funding for a 10-year research program (2019–2029)
on the ‘Ethics of Socially Disruptive Technologies’ (www.esdit.nl).

References
Collingridge, David. 1980. The Social Control of Technology. New York: St. Martin’s Press.
De Boer, Bas, Te Molder, Hedwig and Verbeek, Peter-Paul 2020. Constituting ‘Visual Attention’: On the Mediating Role of Brain Stimulation and Brain Imaging Technologies in Neuroscientific Practice. Science as Culture 29 (4): 503–523.
Friedman, Batya and Hendry, David G. 2019. Value Sensitive Design: Shaping Technology with Moral
Imagination. Cambridge, MA: MIT Press.
Gadamer, Hans-Georg. 1988. On the Circle of Understanding. In Hermeneutics vs. Science, eds. John
Connolly and Thomas Keutner, 68–78. Notre Dame, IN: University of Notre Dame Press.
Heidegger, Martin 1977. The Question Concerning Technology. In The Question Concerning Technology and Other Essays, trans. W. Lovitt. New York: Harper and Row.
Ihde, Don 1990. Technology and the Lifeworld. Bloomington: Indiana University Press.
Kudina, Olya 2019. The Technological Mediation of Morality: Value Dynamism and the Complex Interaction
between Ethics and Technology. Enschede: University of Twente.
Kudina, Olya and Verbeek, Peter-Paul 2019. Ethics from Within: Google Glass, the Collingridge
Dilemma, and the Mediated Value of Privacy. Science, Technology, & Human Values 44 (2):
291–314.
Mol, Annemarie 1997. Wat is kiezen? Een empirisch-filosofische verkenning. Enschede: Universiteit Twente (Inaugural Lecture).
Peterson, Martin 2012. Three Objections to Verbeek. In Book Symposium on Peter-Paul Verbeek’s Moralizing Technology: Understanding and Designing the Morality of Things, eds. E. Selinger et al. Philosophy and Technology 25: 619–625.
Peterson, Martin and Spahn, Andreas 2010. Can Technological Artefacts Be Moral Agents? Science and Engineering Ethics 17 (3): 411–424.
Roeser, Sabine, Taebi, Behnam and Doorn, Neelke 2019. Geoengineering the Climate and Ethical
Challenges: What We Can Learn from Moral Emotions and Art. Critical Review of International
Social and Political Philosophy 23 (5): 641–658.
Rosenberger, Robert and Verbeek, Peter-Paul 2015. Postphenomenological Investigations: Essays on
Human-Technology Relations. Lanham: Lexington.
Selinger, Evan 2006. Postphenomenology: A Critical Companion to Ihde. New York: SUNY Press.
Smits, Merlijn, Bredie, Bas, Van Goor, Harry and Verbeek, Peter-Paul 2019. Values that Matter: Mediation Theory and Design for Values. Academy for Design Innovation Management Conference 2019: Research Perspectives in the Era of Transformations: 396–407.
Swierstra, Tsjalling, Stemerding, Dirk and Boenink, Marianne 2009. Exploring Techno-moral Change: The Case of the Obesity Pill. In Evaluating New Technologies, eds. Paul Sollie and Marcus Düwell, 119–138. Dordrecht: Springer.
Van den Hoven, Jeroen, Miller, Seumas and Pogge, Thomas (eds.) 2017. Designing in Ethics. Cambridge: Cambridge University Press.

Verbeek, Peter-Paul 2005. What Things Do: Philosophical Reflections on Technology, Agency, and Design.
University Park: Penn State University Press.
Verbeek, Peter-Paul 2008. Obstetric Ultrasound and the Technological Mediation of Morality: A Postphenomenological Analysis. Human Studies 31 (1): 11–26.
Verbeek, Peter-Paul 2011. Moralizing Technology: Understanding and Designing the Morality of Things.
Chicago: University of Chicago Press.
Verbeek, Peter-Paul 2013. Technology Design as Experimental Ethics. In Ethics on the Laboratory Floor, eds. S. van den Burg and Tsjalling Swierstra, 83–100. Basingstoke: Palgrave Macmillan.
Verbeek, Peter-Paul 2015. Beyond Interaction: A Short Introduction to Mediation Theory. Interactions 22 (3): 26–31.
Verbeek, Peter-Paul 2017. Designing the Morality of Things: The Ethics of Behavior-Guiding
Technology. In Designing in Ethics, eds. Jeroen van den Hoven, Seumas Miller, and Thomas Pogge,
78–94. Cambridge: Cambridge University Press.
Verbeek, Peter-Paul and Tijink, Daniel 2020. Guidance Ethics Approach. The Hague: ECP.
