Anthropology in Digital Age
This publication was supported by the Faculty of Catholic Theology, the Research Area Cultural Encounters –
Cultural Conflicts and the Department of Biblical Studies and Historical Theology of the University of Innsbruck.
Johannes M. Hoff
Drinking the Clarity of Being: Beyond the Dataist Metaphysics of the Digital Age
Nishant A. Irudayadason
Transhumanism: A Critical Approach
John Karuvelil SJ
Artificial Intelligence: Are We Playing God?
Dolichan Kollareth SJ
Emotional AI and the Elusive Nature of Human Emotions
Kuruvilla Pandikattu SJ
Extinctions, Empathy, Ethics: Dealing with AI and ChatGPT with Wisdom and Hope
Table of Contents
Wilhelm Guggenberger
Deliver Us from the Evil One: The Hope for Technological Redemption
Stefan Hofmann SJ
Contributions of Ignatian Spirituality towards a Healthy Use of the Internet and Digital Media
Thomas Karimundackal SJ
Human Being: In the Image and Likeness of God or Becoming Digitalized?
VM Jose SJ
Jesus Constitutes True Humanity: Dignity of the Human Person in the Digital Age
Albert Jesuraj
The Technologisation of Grace and Theology: Meta-theological Insights from Transhumanism – by King-Ho Leung
Introduction by the Editor
The Pontifical Institute for Philosophy and Religion (Jnana Deepa, JD) in Pune, India, and the Faculty of Catholic Theology at the University of Innsbruck, Austria, signed an agreement more than twenty years ago providing for the exchange of students and teaching staff as well as close collaboration in their research activities. The latter has led to joint conferences on various topics related to the latest developments in our research fields.
In this volume we publish the results of the most recent conference, held from 3 to 6 May 2023 in Innsbruck. The last two contributions were prepared by doctoral and PhD students of the Innsbruck Faculty of Catholic Theology and present the results of workshops they organised within the scope of the conference. The conference’s topic was “Anthropology in Digital Age: Theological and Philosophical Responses”, focusing on the following themes:
Given the rapid and exponential progress of the digital revolution, which affects all dimensions of human life, it is fitting to reflect, from philosophical and theological perspectives, on who the human person is, in order to understand ourselves better.
The cooperation between Jnana Deepa and the University of Innsbruck offers us the
opportunity to bring not only the Christian tradition, but also Western and Indian
thinking into conversation with current technological developments. Such a reflec-
tion urges us to revisit fundamental anthropological questions such as “Who am I?”,
“What can I know?” and “What can I hope for?”. Together we seek to shed more light
on the self-understanding of the human person in our contemporary situation in order to respond meaningfully and adequately to the fundamental questions about ourselves, our nature and our destiny. Such an understanding of the human person will hopefully enable us to encounter God more deeply and experience one another more fully.
May the readers of this volume be inspired by our thoughts and reflections on human self-understanding in an increasingly digitalized world. Our common goal with this publication is to add a philosophical-theological perspective to the flood of economic and technical contributions concerning this digitalized world and life in the future.
An Embodied Understanding of the Human
Person in the Digital Age:
Indian Feminist Perspectives
Patricia Santos RJM (Pune)
1 Introduction
In today’s rapidly evolving digital landscape, the concept of the human person has
taken on new dimensions and complexities. As technology increasingly becomes an
integral part of our lives, it is crucial to explore how it shapes our understanding of
ourselves and others around us. In this essay, I will delve into the embodied under-
standing of the human person in the digital age, examining its implications through
various Indian feminist viewpoints. The discussion will begin with a brief overview
of embodiment and its role in the digital age, followed by an exploration of what it
means to be human in this era. Next, I will highlight the impact of technology on the
identity and relationships of women in general and the unique challenges faced by
women in India as they navigate their identities and relationships within the confines
of a digital society. This analysis aims to foster a deeper comprehension of how tech-
nology intertwines with our lived experiences and influences our interactions with
others, ultimately paving the way for a more holistic and harmonious existence in the
digital world.
1 Thapan (2009) 2.
2 Ibid. 6.
3 Cf. Gonzalez (2007).
4 Gonzalez (2007) 25.
2 Embodied Understanding of the Human Person
Charles Ess, looking at the embodied self in a digital age, is concerned with how information technologies interact with our understanding of who we are as humans and with the possibilities and risks these interactions entail for humans.7 While there are multiple implications for “networked selves inextricably interwoven with others in larger, increasingly more complex and technologically-mediated communities,” this development could either take us back to the importance of the body and of relationships for our sense of identity or make us slaves of digital technology.8 The digital age could therefore either foster relational networks and interconnectivity or lead to isolation and seclusion.
Jean du Toit, considering embodiment in relation to the embodied screen, sees a two-way relationship in which digital technological objects give context to a person’s perception of themselves, the world, and others, while also requiring the physical person as part of the connection.9 The virtual world presents a unique challenge in connecting physicality with space in modern society. The concept of an embodied screen refers to how the virtual world shapes and influences our life experiences and actions in both limited (defined boundaries) and unlimited (new possibilities)
5 Ibid. 162.
6 Ibid. 160.
7 Cf. Ess (2010) 105.
8 Ibid. 116.
9 Du Toit (2020) 5.
situations.10 In this regard, the virtual is not just about being close or distant in space but also about how it affects our fundamental existence. According to Jean du Toit,
virtual challenges influence our perception and lead to changes in our habits.11 These
changes are closely connected to our intentions and experiences and have a significant
impact on our lives. They help explain how we interact with virtual environments.
As technology becomes more widespread in society, understanding our interactions
with the virtual world becomes an essential part of our everyday existence. It is in this
regard that Federica Buongiorno conceives of a double embodiment.12
For Buongiorno, our body is a bridge between the digital world and us, as we use
our senses, limbs, and movements to control the digital devices. Hence, we cannot
separate our mental abilities from our physical ones since our body acts as a two-
way access point: it helps us experience digital media by connecting us to reality and
allows us to access external experiences.13 The concept of embodiment in the digital
world goes beyond our physical experiences since “the tools and devices by means of
which we experience and know our digital world are increasingly becoming embod-
ied”.14 This process creates a double embodiment in which digital devices become part
of our bodies, and the line between organic and digital gets blurred. Double embodi-
ment affects both our personal experiences and the technology we use in our connec-
tion to the digital world. This view is also affirmed by K. T. Chan, who sees the blurring of boundaries between the physical and virtual worlds as responsible for the occasional identity confusion individuals experience between the two worlds.15
Chan refers to the emergence of the “Digitalized Self” which is created from our
interactions with the digital world and is a mix of our real selves and our digital per-
sonas, existing between the physical and online worlds.16 Thus, the “Digitalized Self”
which includes our emotions, thoughts, memories, behaviours, beliefs, and more can
impact our identity and self-expression in both the online and offline world through
changes in our brain and various social and cultural processes.17 In order to preserve
10 Ibid. 8.
11 Ibid.
12 Cf. Buongiorno (2019).
13 Ibid. 319.
14 Ibid.
15 Chan (2022) 3.
16 Ibid. 5.
17 Cf. Ibid. 6.
and protect the core aspects of human nature for future generations, we need to maintain a healthy boundary between the Digitalized Self and the Self, even though it is difficult to separate our online and offline selves completely. In the next section we will look at some of the challenges of the digital age for women in particular.
3 Women in the Digital Age
This section first looks at the identity, self-perception, and relationships of women in the digital age, and then at the consequences of the digital age for women in India.
What does the digital age mean for women’s self-perception, identity, and relation-
ships? The digital world offers diverse ways of being either hybrid, split or multiple.
Computer-mediated communication and online forums allow for the creation and
construction of identities. Persons can either reveal their identity, present themselves
in different ways or camouflage their bodies and selves in ways they can choose.
Many youngsters appear to live constantly in a virtual world, changing their appearance as they change their clothes and creating the person they dream of being or wish to be seen as by others. Yet according to Eve Shapiro, “[g]iven the chance to choose
to be anything, people usually followed entrenched social scripts and produced so-
cially desirable bodies and identities – and in the process collectively created a world
that reproduced the inequalities present in offline society.”18 One can construct an
identity without situating it within an existing body as well as “interact with a multi-
tude of anonymous others” in virtual communities that reduce the risk of stigma and
violence.19 At the same time people are not permanently in online mode. They keep
navigating and negotiating between multiple worlds, online and offline. According to
Shapiro there is a dialectical relationship between the online and offline self with each
shaping the ongoing sense of self-identity through the creating and recreating of stories.20 Shapiro also notes that gendered behaviours shift in online virtual groups with
new gender identity scripts that do not conform to stereotypical gender norms. Infor-
mation technologies and the anonymity of virtual communication allow for adopting,
“learning and publicly embodying a new gender identity”.21 Online chatrooms and
internet games entice youngsters to try and adopt new gendered selves and often these
become spaces for objectifying women and maintaining gender hierarchies and sexist
behaviours.22
Shapiro is also of the opinion that prejudices, misogyny, social inequalities and
cyberviolence continue to exist in cyberspace despite the opportunities to create new
identities and scripts online. This is because online forums “provide spaces for both
individual identity work and for the reproduction of social inequalities and hegemonic
norms” since gender, race, class and other embodied characteristics are “structural
components embedded in all interaction”.23 Creating and constructing new identities
is most often filtered through one’s offline experiences and narratives and hence the
embodied self continues to be the organizing principle even in the absence of the
physical body.24
Gina Messina-Dysert sees the digital world and social media as ushering in the birth of online feminism and revealing the embodiment of feminist values, as “hierarchies can be eliminated, and a democratic participation process can be created”.25 Virtual
spaces have allowed women’s voices to be heard, feminist scholarship and activism
to be carried out, and countless connections to be made across the globe. The digital
world also calls for an awakening of the mystic in us, challenging us to go beyond
conventional boundaries to venture into new spaces of connectivity with God, others
and all creation in today’s ‘wired’ world. Since we cannot escape from the availability
and widespread use of digital technologies such as the internet, social media, smart-
phones and artificial intelligence, we need to see and experience God present in and
through cyberspace.26
20 Ibid. 126.
21 Ibid. 136.
22 Ibid. 144.
23 Ibid. 146.
24 Ibid. 150.
25 Messina-Dysert (2015) 136.
26 Cf. Brazal and Abraham (2016).
The digital age appears as a double-edged sword for women in India. While offering
a space for the empowerment of women, for networking and a way to confront the
hard power of patriarchy, the digital age reinforces the online virtual commodification
of women and creates binaries and digital divides between the “information rich” and
“information poor”, the urban centres and the rural peripheries. Women’s access to
Information and Communication Technology (ICT) in India has changed the lives
of some, empowering them socially and economically and helping to change the power relations that oppress and suppress them.30 Online connectivity offers a safe space
for many women to express themselves and has also enabled women to share infor-
mation, reflect critically on issues and become knowledgeable. According to Virgin-
31 Ibid. 55.
32 Hegde (2011) 178.
33 Ibid. 182.
34 Ibid. 183.
35 Ibid. 179.
centre worker represents a globally exposed and available body” seen as erotic, sexually
permissive and westernized.36 There are many such cases of abuse and violence that
do not get reported in these highly restricted centres which are distanced symbolically
and materially from the local environment. It is interesting to note the shifting of spac-
es and identities as employees are ferried back and forth from their modest homes
to the global technology hubs – “from the peripheries to the centre, from the local to
the transnational”.37 The global reorganization of work and labour practices in the
outsourced technological centres reinforces the unequal division of labour, providing
upward mobility to some employees in the public transnational sphere while keeping
others such as the drivers in the liminal private national spaces between the local and
global.38
In the culture of technology and outsourcing, the inner global work places are seen
as safe spaces of “order, control and predictability” as opposed to the outer local ter-
rain of “chaos, disorder and violence,” which employees have to traverse to reach the high-tech world of the information industry.39 Multinational corporations are
now using surveillance technologies, wireless networks, help lines, control rooms,
and other regulatory mechanisms to offer better safety measures and curb potential
threats to employees of global technological centres. However, the more the city and
its people are digitally monitored, the less privacy people enjoy, with technology curtailing “the labouring body to the global infrastructure of the city”.40 Rather than re-
stricting women’s mobility and choice of employment, there must be good structures
of accountability and measures of security as well as better working conditions for
the employees. Women are sometimes also required to change their voice according
to the global company they are representing, thus resulting in split personalities in the
office and at home.
Another area of concern for women in the digitalized world is the unequal invis-
ible umbilical cord that exists “between precariat women who sell their reproductive
capacities and the affluent women who pay for them”.41 According to Bula Bhadra:
36 Ibid. 186.
37 Ibid. 184.
38 Ibid. 187.
39 Ibid.
40 Ibid. 189.
41 Bhadra (2017) 32.
“With digitization, the so-called baby trade in multiple forms is now fully aided by the
internet which is now an inviting medium, in the form of a marketplace where babies
have been added to the shopping cart (sometimes customized) by the intending par-
ents as customers through the convivial ambiance of globalization and ICTs.”42 Cy-
berspace has made possible the connection of western infertile couples with surrogate
women, mainly underprivileged, and their representatives in the global south in the
marketing of pregnancy and child birth, giving rise to new precarious social subjects
that are vulnerable and unstable.43 Most of the surrogate women are employed on
temporary well-defined contracts with no job security or proper working conditions.
The baby produced is separated from the birth mother, severing all emotional connections and bonding, just as a product is taken out of a machine. The feminization of reproductive labour renders surrogate women most underprivileged and “marked by ‘precarity’ in terms of informal labour, wage squeeze, ephemerality, insecurity, and harmful risk”.44 Reproductive labour is commodified and controlled through social
media devices such as Facebook, WhatsApp, SMS and the like.
While the internet has increased the availability and market for human embryos
and surrogacy services, Bhadra notes that Assisted Reproductive Technologies (ARTs)
and surrogacy have reduced women to “a series of objects which can be isolated, ex-
amined, recombined, sold, hired or simply thrown away, like ova which are not used
for experimentation or fertilization”.45 Pairing of reproductive consumers and suppli-
ers and even the “sale of tailor-made, personalized sperm insemination and egg donor
packages” is made possible through social media and other information technologies
as well as what is referred to as reproductive tourism.46 The irony is that while on the one hand India has the highest rate of maternal mortality in the world, on the other it is
leading the world in the industry of commercial gestational surrogacy with the Indian
woman as the “target of both anti-natal population control campaigns and pro-natal
pro-technology programmes.”47 The internet serves as a good technological platform
for the advertising, recruitment, monitoring and production of reproductive labour
42 Ibid. 34.
43 Ibid. 35.
44 Ibid. 40.
45 Ibid. 43.
46 Ibid. 46.
47 Ibid. 49f.
and transnational surrogacy. For instance, the Rotunda Medical Centre in Mumbai, besides advertising to recruit surrogates and making it possible for frozen gametes and embryos to be shipped to India for implantation, has a “Skype Surrogate Connect video-conference programme so the parents will have a clear notion of how well the
pregnancy is going and how well the surrogate is looked after”.48 Many surrogate
women have died because of successive deliveries, lack of sufficient health care and
other complications, yet many impoverished women choose to be surrogate mothers
to support themselves and their families. For Bhadra, the “surrogacy practice in India
reinforces inequalities, causes exploitation, commodification of women and children
and violation of basic human rights” with women as reproductive labourers remaining
subaltern and silenced in the digitalized capitalist and global free market.49
Although some women feel more productive and confident because of BPOs and web-enabled income-generating projects, there are ethical challenges involved in practices such as masquerading, cyber-bullying, sexual harassment and surrogacy. According to Kate Ott, the “‘new’ way of being in digital networked
relationships mirrors feminist and womanist theological constructions of personhood
and agency that have long argued for relationality and interdependence”.50 The key
concern for some feminist theologians is how this digital, spiritual embodied form of
being relates to theological anthropology. How do we know for sure that the online representations of us are really us, and in what way do our digital incarnations bear the image of God in the same way as our embodied, fleshy incarnations?51 The
digitalization of the self makes new forms of liberation as well as new forms of vio-
lence possible.52 Ott holds that “humans have always been technologically embodied
spirits” and as “digitally embodied spirits we more deeply inhabit our relationality, in-
terdependence, and multiplicity creating more entangled modes of oppression as well
as generating liberative salvific moments”.53 Although women have the possibility to
create alternate representations of themselves in cyberspace to challenge the stereo-
types imposed by mainstream media and culture, they risk losing their authentic sense
48 Ibid. 51.
49 Ibid. 61.
50 Cf. Ott (2019).
51 Ibid. 3.
52 Ibid. 5.
53 Ibid. 11f.
of self and identity and struggle to maintain healthy and satisfying relationships. While they
have greater access to information, resources and online platforms to express their
views and perspectives, they are vulnerable to online harassment, abuse, violence,
and the pressure to conform to idealized standards of beauty and a perfect body. The
digital age thus offers plenty of opportunities for women but also poses many chal-
lenges and difficulties.
4 Conclusion
The digital age presents both opportunities and challenges for our understanding of
embodiment and what it means to be human. This article highlights the complexity
and paradoxical nature of the digital age for women in India. Though the technolog-
ical era brings opportunities for women’s empowerment, connection, and resistance
against patriarchy and traditional power structures, it also perpetuates inequalities and
raises concerns about online commodification and deepening socio-economic divides.
Furthermore, the digital realm presents unique challenges, such as cyber-bullying and
sexual harassment, which deeply impact women’s self-perception, identity, and rela-
tionships. To effectively navigate these complexities and harness the full potential of
the digital age for women in India, it is vital that we continue to examine and address
these issues from a feminist standpoint while promoting inclusive and safe digital
spaces for all.
While digital technology is to a great extent an asset for most people, it must be
used with prudence and caution. Pope Francis rightly asserts that the internet can both promote encounter with others and increase self-isolation. Hence, we need to take care to use online networks for building relationships and embracing our human connections. He describes what makes the Net a genuine resource:
“The image of the body and the members reminds us that the use of the social
web is complementary to an encounter in the flesh that comes alive through the
body, heart, eyes, gaze, breath of the other. If the Net is used as an extension or
expectation of such an encounter, then the network concept is not betrayed and
remains a resource for communion. If a family uses the Net to be more connected,
to then meet at table and look into each other’s eyes, then it is a resource. If a Church
community coordinates its activity through the network, and then celebrates the
Eucharist together, then it is a resource. If the Net becomes an opportunity to share
stories and experiences of beauty or suffering that are physically distant from us,
in order to pray together and together seek out the good to rediscover what unites
us, then it is a resource.”54
References
Bhadra, Bula (2017) Precarity and surrogacy: The invisible umbilical cord in the digital
age, in: Precarity within the Digital Age. Media Change and Social Insecurity. Ed.
Bula Bhadra, 31–68.
Brazal, Agnes M. / Kochurani Abraham (2016) Feminist Cyberethics in Asia. Religious
Discourses on Human Connectivity. Springer.
Buongiorno, Federica (2019) Embodiment, Disembodiment and Re-embodiment in the
Construction of the Digital Self, in: HUMANA.MENTE Journal of Philosophical Stud-
ies 12, no. 36, 310–330.
Chan, Kai Tai (2022) Emergence of the ‘Digitalized Self’ in the Age of Digitalization,
in: Computers in Human Behavior Reports 6, 100191.
Du Toit, Jean (2020) Living in the age of the embodied screen, in: Indo-Pacific Journal of
Phenomenology 20, no. 1, 1–9.
Ess, Charles (2010) The Embodied Self in a Digital Age. Possibilities, Risks, and Pros-
pects for a Pluralistic (democratic/liberal) Future?, in: Nordicom Information 32, no.
2/3, 105–118.
Drinking the Clarity of Being
Beyond the Dataist Metaphysics of the Digital Age
Johannes M. Hoff, Innsbruck
The technical innovations of the last 25 years have urged us to rethink what consti-
tutes human intelligence in the light of mindless technologies that are supposed to
replace it. In view of the spiritual impoverishment of modern societies in the wake of the industrial revolution, this requires us to reassess our concepts of cognition and
reasoning in the light of the tradition of sapiential thinking that shaped the pre-modern
legacy of Christianity. In fact, apart from the economically caused and technically
accelerated devastation of biodiversity and the accompanying ecological climate
change, the economically caused and technically accelerated devastation of mental
diversity and the accompanying spiritual climate change marks the greatest challenge
of our time, as I have pointed out in my German monograph on the anthropology of
the digital transformation and in a more recent English publication on this topic.1
The basic lines of the modern break with the premodern tradition might be sum-
marized following Edmund Husserl’s monograph on The Crisis of European Sciences
of 1936. According to this key work of the principal founder of phenomenology, the
rise of our modern concepts of cognition and scientific reasoning can be traced back
to thinkers like Galileo Galilei, John Locke and David Hume. After this break, human
cognition appeared more and more as a form of algorithmic data processing. Supposedly elementary ‘sense data’ were foisted on our lived experience, which in turn were conceptualized in terms of the ‘mathematical-physical’, based on well-defined formal languages and calculating functions.2 The result was an epistemic data ratio-
nalism that turned the refinement of measurement instruments and the increase in the
effectiveness of measuring functions more and more into an end in itself. Husserl’s
1 See Hoff (2021). The present paper summarizes basic theses of a more extended essay, which will
presumably appear in the journal Modern Theology under the following title: The Gift of Intelligence
and the Poetry of Real Presence. Overcoming the Dataist Metaphysics of Modern Cognitivism.
2 Husserl (1956) 18–68, 233–235.
disciple Maurice Merleau-Ponty reached a similar conclusion when, toward the end
of his life in 1961, he warned of the advent of a science that “manipulates things and
gives up living in them” while its “thinking deliberately reduces itself to a set of data-
collecting techniques which it has invented.”3
In popular scientific narratives, such as the essentially transhumanist writings of
Yuval Noah Harari,4 this ‘dataism’ has been presented as a techno-scientific revolu-
tion that is supposed to replace the humanist tradition of early modernity. However,
historically and phenomenologically educated philosophers like Max Scheler already
realized at the outbreak of the First World War that the ‘dataism’ of our time was an invention of early modernity that coincided with the breakthrough of secular humanism. It is no accident that leading humanists like Immanuel Kant considered human cognition to be an a priori synthesis of elementary sense data.5 We are currently not
entering a dataist digital age, but a post-digital age. The digital age of dataism already began in the wake of thinkers like Galilei and Locke. Today we are faced, more than ever before, with the challenge of a new enlightenment that familiarizes us
with the limits of the digital rationality of early modernity.
Husserl was already aware of this critical point: The dataist revolution of early
modernity provoked an unhealthy assimilation of human cognition to the way modern
clockwork machines were supposed to work. Yet as the trained mathematician pointed
out, based on genealogical observations that will become clearer in the course of this
essay, even mathematical reasoning has to be grounded in our pre-scientific everyday
experience. It has to be rooted in noetic intuitions that illuminate the ‘life world’ that
we inhabit.6
The Christian philosopher Johannes Scotus Eriugena made a similar point when
he noted as early as 860 CE: “of what use is a demonstration from without (exterior suasio) if there is not illumination within (interior)?”7 However, as Jacques Derrida pointed out in the 1960s, starting from a critical assessment of Husserl’s last
writings on geometry, our inclination to reduce cognition to a matter of mechanical
would be self-deceptive to believe that we know what we are doing when we pretend
to create gadgets that enable us to reach our purposes more efficiently, as utilitarian
thinkers assumed in the industrial age. This insight began to dawn even on philosophically uneducated contemporaries at the latest when Steve Jobs introduced the iPhone in 2007. The Austrian Human-Computer Interaction (HCI) researcher Christopher
Frauenberger has expressed it as follows: “The mobile phone has not merely met
requirements or fulfilled needs. (…) It has made us different people.”11
Frauenberger’s observation might become more vivid if we recall the unpredicted side effects of the social media revolution that transformed our life-world in the
course of the digital transformation. To begin with, social media made us believe that
we were autonomous voices in networks of friends who ‘like’ each other. Yet the unintended side effects of the control strategies that govern these technologies soon
revealed the delusional character of this egalitarian belief: doom-scrolling, influencer
culture, sexualization of kids, QAnon, shortened attention spans, polarization, bots,
deepfakes, fake news, addiction, disintegration of democracies, you name it.12 And this
was only the prelude: transformer technologies, such as GPT, are currently about to boost the control strategies of late modern psycho-capitalism on an unimaginable scale.
In the following, I will not go into the details of this somewhat apocalyptic discussion,
but focus on the question of how we might enable a wiser future. In the monograph mentioned above, I have outlined the essential features of an anthropological paradigm shift that
takes account of this challenge, starting from a triangle that builds simultaneously
on the above theses of Leroi-Gourhan and the trinitarian anthropology of medieval
thinkers:13 the triangle of nature, technics and culture.
The significance of this triangle becomes most evident when we look back to the beginnings of human evolution. Let’s take the example of a hand axe. It is anything
but easy to define what a hand axe is simpliciter, since it oscillates between the three
angles of our triangle. Focusing on the angle of technics, we might consider it as a
purpose-built artefact that is ready to hand. In this case, we consider the hand-axe as a
simple technical tool – comparable to the axe in front of a mountain hut or a chainsaw
in a hardware store.
However, not every purpose-built tool is ready to hand to every person. And this
leads me to another angle of our triangle, the angle of nature. Technical artefacts can
become prosthetic extensions of our nature, as was the case with stone-age hunters,
who were able to use hand axes in a skillful way. When we get used to an axe or a chainsaw, we no longer perceive it as an external tool. Rather, it turns into a kind of second nature. The most striking example of such a prosthetic extension is a pair of eyeglasses. If I perceive my eyeglasses as an external tool, they are either dirty or broken. By contrast, if my eyeglasses do what they are supposed to do, they become as imperceptible as my eyeballs. Something similar happens when I get used to driving
a car. When I started to take my first driving lessons, I perceived the steering wheel in
my hands and the pedals under my feet. Today, if my car is doing what it is supposed
to be doing, I perceive the asphalt under the wheels and the road in front of my car.
The car has turned into an extension of my body; it has become ‘second nature’.
Finally, there is a third angle in our triangle that escapes our attention as long as we focus on the nature of artefacts as ‘useful tools’ that are ‘ready to hand’: artefacts have the character of emotionally charged cultural symbols that have the power to make us act in responsive ways. How does this phenomenon come about?
For the sake of illustration, imagine a stone-age girl who used to go hunting with her father. Since she never felt like a natural-born hunter, she might have forgotten about her hunting experience after her father’s death. However, after some years of hunting-free activities, she might rediscover her father’s old hand axe in a corner of her hut. This find will awaken in her the desire to go hunting again, just as she did when she accompanied her dad, but now something has changed: she is now starting to cultivate the practice of hunting freely, as an end in itself that she can practice with her siblings and friends gratuitously, like singing, praying or reciting poems. In this situation, the hand axe turns into a cultural artefact that re-actualizes something that happened in the past in a different form: the new practice makes something present under a stylized, new ‘Gestalt’. This is the point where the hand axe becomes comparable to a sacramental or idolatrous cultural symbol: a paleolithic engraving, a megalithic tomb, a cave painting, a hieroglyphic inscription, the anaphora of the Eucharist, an early modern journal, or an advertising column. The axe turns into an emotionally charged vestige of cultural activities that has the power to make us see things we have never seen before and act in responsive or addictive ways.
Much could be said about this symbolic dimension of artefacts, including its implications for the sacramental theology of the Catholic tradition. At this point, however, I will limit myself to emphasizing how little has changed since the stone age: like a paleolithic stone axe, the mobile phone can be positioned at all three angles of our triangle. It is not only a technical tool – it is also a kind of body extension (second nature), like a pair of eyeglasses that permits me to perceive the world in new ways, and it is a symbolically charged cultural artefact that moves me to actions that I did not anticipate.
The above examples might help us to understand more clearly why the introduction of Jobs’ iPhone has changed our self-perception as human beings. To use an expression of the philosopher of technology Bruno Latour: the digital transformation has given rise to the suspicion that “we have never been modern.”14 It has drawn our attention to the millennia-old experience that even artefacts and objects which are less sophisticated than iPhones can silently order, enable and mediate human activities.
14 Latour (1993).
To be sure, symbolically charged artefacts do not structure and order our actions like physical causal chains, which are indifferent to our intentional acts. Rather, they remind us that the line between the soft power of rhetorical persuasion and the cold determinism of physical causation is always (and has always been) blurred.
Frauenberger summarizes the philosophical discussion of this phenomenon, quoting
Latour’s most well-known sociological publication, which established his reputation
as founder of the ‘actor network theory’ (ANT): “there exist ‘many metaphysical
shades between full causality and sheer inexistence’, or in other words there is a wide
spectrum from strong ordering to weakly structuring to not affecting action. ANT
goes even further and argues that anything that has influences on an action (…) is an
associated actor, which also includes non-material entities, such as policies, laws or
societal norms.”15
In the wake of Galileo and Newton, modern philosophers like Kant tried to convince us that we can draw a sharp demarcation line between objective, value-neutral facts, which are governed by hard deterministic laws of nature, and autonomous subjects who are ideally able to control their acts of cognition and will and to persuade each other through the gentle force of rational arguments.16 In truth, we are relational agents in a network of agents, and every attempt to draw a univocal demarcation line between objects and subjects is delusional. Seen from this angle, the situation in which we are entangled by contemporary technologies is comparable with that of a Tyrolean farmer who is moved every morning to pious actions by the encounter with a wooden statue of the Virgin Mary. Artefacts like these, or non-rational natural entities like trees and dogs, have the magic power to make us act, because we are part of a world in which things face and move each other based on emotions that change their interactions in an ontologically and epistemologically significant way. After all, even my dog has the persuasive power to make me do unplanned things without setting deterministic causal chains in motion.
In order to deepen our understanding of the differing positions artefacts can adopt in our anthropological triangle, it might be helpful to introduce at this point three concepts, elaborated by the philosopher and theoretical physicist Karen Barad, which build on Latour’s ANT.
We might start with the concepts of ‘intra-activity’ and ‘agential cut’. Artefacts like axes, sculpted Virgins and mobile phones are always part of a complex network of ‘agents’ that constitute each other through relational ‘intra-actions’. Unlike the ‘inter-action’ with a given object, ‘intra-actions’ do not permit us to pre-determine who or what plays the part of the active ‘subject’ and who or what plays the part of the passive ‘object’. Instead, since every agent is embedded in a complex and open-ended network of agents, the boundaries that fix the ‘agential cut’ between active subjects and subject-related extensions on the one hand and passive objects and object-related circumstances on the other are continually negotiated.
To shed light on this negotiation process, Barad introduces a third concept that helps us to understand the moment when an ‘agential cut’ emerges: the concept of diffraction, which is defined in deliberate opposition to the modern concept of reflection, prominent in the post-Kantian and German idealist tradition. The concept of reflection means ‘mirroring’ and assumes that subjective acts mirror a pre-given objective reality, based on concepts that are supposed to represent the world without intra-acting with it. By contrast, the concept of diffraction means that the cut between the passively given and the acting parts of a complex configuration of intra-acting actors has the character of a performative event. The outcome of such an event cannot be determined in advance – at least not as long as we are not in control of the relevant network of agents as a whole.
In my above-mentioned research on the anthropology of cognition, I have built on the ontologically and anthropologically more elaborate premodern tradition of Christian learning, starting from the Renaissance philosopher-theologian Nicholas of Cusa.17 This permitted me to evade some ambiguities of Barad’s terminology, which are due to her reductionistic tendency to conflate the holistic features of purely physical and psycho-physical diffraction events. Apart from this ontological refinement, however, Cusa’s proto-modern holism is compatible with Barad’s concept of diffraction, as becomes evident from the following quotation from my book: “The most elementary psycho-physical phenomena have (...) the character of holistic, performative events that preempt the distinction between subject and object as well as the unfolding of temporally and spatially differentiated perspectives on the world as a whole.”18
in my head’. But what about the cheerful mood of the people? The cheerful ambience of the clearing and the mood of the people who gather at this place is not in your head. It is an objective state of affairs! And there is no reason to assume that it is otherwise in ‘scientific’ or ‘neurological’ terms – unless you are still infatuated by Cartesian matrix worlds.
The most important features of such holistic accounts of embodied perception and cognition can be summarized following the discussion of 4E cognition in the contemporary cognitive sciences: acts of cognition are not reducible to activities of the brain. They are always embodied in our lived body. Moreover, as the example of the forest clearing illustrates, they are also always embedded in a broader ecological environment. Furthermore, they always depend on how we intra-act with our environment, or – as cognitive scientists would express it – how a state of consciousness is individually enacted. And finally, our relationship to our environment is almost always mediated by technologies that work like prosthetic extensions of our body. In short, human cognition is always 4E cognition: it is always embodied, embedded, enacted and extended.20
Medieval philosophers like Thomas Aquinas and Nicholas of Cusa were still familiar with this holistic way of thinking. Our brain is not a kind of data-processing device in which elementary sense data are synthesized. Rather, our animal body, including its brain, is comparable with a Gothic stained-glass window: a diffraction device in which light becomes diffracted, or, as Cusa would have expressed it, ‘unfolded’ and ‘contracted’. For this reason, the dataist metaphysics of contemporary transhumanist popular writers like Harari is simultaneously too modernist and too old-fashioned: it perpetuates a pattern of thought that has proven to be a transitional phenomenon in the history of science.
Scientifically educated readers might object that many biologists, neurologists and related computer scientists support Galileo’s dataist metaphysics. This is true. However, as the Sorbonne philosopher Renaud Barbaras has pointed out most recently, starting from Merleau-Ponty and Husserl: if we want to understand what human cognition is, we must first focus on the concept of life, and not prematurely on the concept of consciousness, given that our vital cognition is nothing but a particular mode of life. Yet our modern sciences, which built on the mechanistic metaphysics of Galileo and Newton, never developed a concept of life. Even the modern discipline of biology is no exception to this rule: “Life is not biology’s object of investigation. Biology does not speak of life. It speaks of the way organisms, recognised as living, function.”21
We might express this critical observation a little more pointedly: there is no significant difference between modern folk psychology and classical modern cognitivism – unless the relevant scientists are prepared to engage with metaphysical questions and do not just presuppose what is “recognized as living”. Galileo’s dataist metaphysics is still deeply ingrained in our late modern mind-set. However, as the genealogy of scientific reasoning in the wake of Husserl has shown, the modern focus on ‘sense data’ is nothing but the remnant of a rationally unwarranted bad habit.
This leads us back to our starting point: the necessity to rethink our concepts of rationality and cognition. In line with Merleau-Ponty, the human gift of conceptual abstraction might be considered the upshot of a kind of emancipation or liberation.22 The emergence of conceptual abstraction in the evolution of man expanded our possibilities for action by suspending the compulsion to act and allowing us to develop a sense of fittingness with regard to alternative, imagined possibilities.23 Yet it is important to emphasize that this new freedom emerged without suppressing our context-sensitive intuition, which makes us responsive to our immediate environment and draws our attention to the matters that matter here and now.
Premodern thinkers had a strong awareness of this intuitive grounding of human cognition. However, in contrast to the monastic schools of the 12th century, Aristotelian thinkers like Thomas Aquinas had little interest in working out a systematic theory of our pre-reflexive life and our intuitive connaturality with our embodied environment. Since all this was taken for granted, they tended to marginalize our intuitive intelligence when they discussed, for example, ‘scientific knowledge’. They did so not because they considered our intuitive intelligence irrelevant, but because they did not perceive it as problematic.24 Yet, given that it has become problematic in our modern world, it is possible to read them against the grain and to unearth the holistic features of their concepts of intellectual intuition, as Jacques Maritain has done with regard to Thomas Aquinas.25
The most important aspect of Maritain’s excavation work might be summarized as follows: human cognition is governed by the disclosure of our being in the world as a whole, and all reflexive, discursive reasoning is subordinated to this mode of knowledge. Metaphysically educated neuro-cognitive researchers confirm this thesis.26 And this sheds new light on the premodern tradition of scientific thinking, given that the latter distinguished between the analytic cognition of discursive reasoning (ratio, dianoia) and the holistic intuition of the intellect (intellectus, noûs). In the clearest contrast to the post-Kantian tradition, which caused a lot of confusion in the last two centuries, reason (Vernunft) was considered the emissary of our intuitive intellect (Verstand) and not inversely.27 Only our noetic intuition can keep us in touch with the being of the world that we inhabit. Everything else has to serve the cultivation of this highest level of human cognition.
In line with this starting point, Maritain emphasizes that, in premodern thinkers like Aquinas, our pre-reflexive intuition was an essential “cognitive faculty” that is constitutive of our abstractive intelligence.28 Due to our connaturality with our physical environment, we are always already attracted by the distinctive “Gestalt” (forma, species) of beautiful things, whose radiance, integrity and due proportion delight our intellect. Yet it would be misleading to reduce this connatural inclination to a matter of subjective biases. Rather, the intuition of beauty is anthropologically and epistemologically basic: without the delight in beautiful things, which makes us struggle for words because they make us realize that there is more than we can imagine, we would no longer be able to discern what really matters from what has been ‘made up’.
In contrast to the subjectivist tradition of modern liberalism, the insight that the cultivation of art and beauty is anthropologically basic enabled Maritain to anticipate where human intelligence stands apart from artificial procedures of abstraction or pattern recognition. The living insight of human beings is not reducible to the successive extraction of abstract concepts from bits of information that permits their representation by manipulable symbols, as theories of Symbolic Artificial Intelligence assume. Nor is it reducible to the probabilistic stripping of patterns from images and other data samples, which sequentially converges on the salient features of the data material, as contemporary big-data-based Artificial Neural Networks do.29 Rather, the instantaneous recognition of a unique ‘Gestalt’ comes first. By a kind of connatural inclination, our intelligence is immediately attracted by the matter that matters – its proportion (or harmony), radiance (or clarity) and integrity (or perfection):30 “The intelligence in this case, diverted from all effort of abstraction, rejoices without work and without discourse. It is dispensed from its usual labor; it does not have to disengage an intelligible from the matter in which it is buried, in order to go over its different attributes step by step; like a stag at the gushing spring, intelligence has nothing to do but drink; it drinks the clarity of being. Caught up in the intuition of sense, it is irradiated by an intelligible light that is suddenly given to it. (…) Only afterwards will it be able to reflect more or less successfully upon the causes of this delight.”31
If we want to do justice to this pre-reflexive dimension of intuition, we have to deconstruct every attempt to disconnect our intellectual intuition from our everyday perception and cognition, as I have pointed out in the above-mentioned essay, starting from a critical evaluation of the subjectivist concept of beauty in Immanuel Kant. At this point, it might suffice to recall that the ontologically realist tradition on which Aquinas built considered beauty to be a ‘transcendental’ that is convertible with
29 For an introduction to contemporary deep learning technologies see Segessman et al. (2023) and Prince
(2023). The most recent discussion tends to plead for an integration of symbolic and neural network
approaches. See Marcus (2020).
30 See Thomas Aquinas (1920), I, q. 39 art. 8.
31 Maritain (1962) 26.
being.32 The realization that something exists always has the character of an appropriative event that resonates with our embodied environment and awakens our poetic sense for the inexpressible.33
The basic features of this eventful character of human insight are comprehensible to everyone who has ever struggled to understand something, although it has paradoxical features: no one can be forced to understand something – no insight without free will; yet the moment when our cognition starts to fit into a comprehensible whole has at the same time the character of a necessity that transcends willful efforts to understand. In the moment of disclosure, authentic freedom and providential necessity coincide, thereby permitting alert minds safely to separate merely subjective appearances from matters that really matter. Consequently, true understanding always has the character of a gift that responds to our struggle for insight; yet no one struggles for insight who does not struggle for words.
No reason without insight – and no insight without poetry! If we compare human intelligence with mindless machines, this makes all the difference. Computers cannot marvel at the world. Computers don’t scratch their heads because riddles take them to the limits of the logically deducible or the probabilistically predictable. And they can’t dream of a future where no one has ever been. They just don’t care about the truth! How could they ever struggle for words?
Toward the end of his life, Maritain’s secular contemporary Merleau-Ponty came to a similar conclusion with regard to the poetic dimensions of human intelligence, when he pointed out that it is impossible to retain illuminating insights without the mediation of expressive signs and symbols that enable us to appropriate what we have seen with a view to recalling it under changed circumstances: “A thought limited to existing for itself, independently of the constraints of speech and communication, would no sooner appear than it would sink into the unconscious, which means that it would not exist even for itself. (…) It does indeed move forward with the instant and, as it were, in flashes, but we are then left to lay hands on it, and it is through expression that we make it our own.”34
32 Ibid. 32-35.
33 See also Venard (2004).
34 Merleau-Ponty (1981) 177.
Maritain chose a more poetic wording to express this critical point of human cognition, starting from a recollection of our in-between position between animals and angels. Unlike normal animals, we are intellectual creatures; and unlike purely intellectual beings, we are embodied creatures. Hence, we share the fate of poets: “Poetic intuition is directed toward concrete existence as connatural to the soul pierced by a given emotion (…) seized in the violence of its sudden self-assertion and in the total unicity of its passage in time. This transient motion of a beloved hand – it exists an instant, and will disappear forever, and only in the memory of angels will it be preserved, above time. Poetic intuition catches it in passing, in a faint attempt to immortalize it in time.”35
Bibliography
Kenny, Anthony (1984) The Homunculus Fallacy, in: The Legacy of Wittgenstein. Oxford:
Blackwell, 125-136.
Latour, Bruno (1993) We Have Never Been Modern. Translated by Catherine Porter. New
York - London: Harvester Wheatsheaf.
Latour, Bruno (2005) Reassembling the Social. An Introduction to Actor-network-theory.
Oxford: Oxford UP.
Leroi-Gourhan, André (1993) Gesture and Speech. Transl. by A. Bostock Berger. Cambridge, Mass.: MIT Press.
Maritain, Jacques (1952) Creative Intuition in Art and Poetry. The A.W. Mellon Lectures
in the Fine Arts. Washington DC: National Gallery of Art.
Maritain, Jacques (1962) Art and Scholasticism. New York: Charles Scribner’s Sons.
Marcus, Gary (2020) “The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence”, in: arXiv 2002.06177.
McGilchrist, Iain (2021) The Matter with Things. Our Brains, Our Delusions and the Unmaking of the World. 2 volumes. Perspectiva.
Merleau-Ponty, Maurice (1968) The Visible and the Invisible. Ed. by Claude Lefort. Transl.
by Alphonso Lingis. Northwestern University studies in phenomenology & existential
philosophy. Evanston Ill.: Northwestern University Press.
Merleau-Ponty, Maurice (1964) Eye and Mind, in: The Primacy of Perception. Ed. by James M. Edie. Evanston: Northwestern UP, 159-190.
Newen, Albert / Gallagher, S. / De Bruin, L. (Eds.) (2020) The Oxford Handbook of 4E
Cognition. Oxford: Oxford UP.
Segessman, Jan / Stadelman, T. / Davison, A. / Dürr, O. (2023) “Assessing Deep Learning: A Survey and Work Program for the Humanities in the Age of Artificial Intelligence”, in: SSRN, 1-90, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4554234.
Stiegler, Bernard (1998) Technics and Time 1. The Fault of Epimetheus. Transl. by Stephen Barker. Stanford, Calif.: Stanford UP.
Stiegler, Bernard (2009a) Technics and Time 2. Disorientation. Transl. by Stephen Barker.
Stanford, Calif.: Stanford UP.
Stiegler, Bernard (2009b) Technics and Time 3. Cinematic Time and the Question of Malaise. Transl. by Stephen Barker. Stanford, Calif.: Stanford UP.
Thomas Aquinas (1920) Summa Theologiae. Literally translated by Fathers of the English Dominican Province. Second and Revised Edition. London: Burns Oates and Washbourne.
Venard, Olivier-Thomas OP (2004) Thomas d’Aquin, poète théologien. Vol. 2: La langue de l’ineffable – Essai sur le fondement théologique de la métaphysique. Paris: Ad Solem.
Zuboff, Shoshana (2019) The Age of Surveillance Capitalism. The Fight for a Human
Future at the New Frontier of Power. New York: Public Affairs.
Transhumanism: A Critical Approach
Nishant A. Irudayadason (Pune)
Introduction
The evolution of the concept of progress and perfectibility since the 17th century has been profound. The transition from reverence for the Ancients to an embrace of modern thinkers like Pascal, Fontenelle, and Turgot marked a transformative era.
1.2 Man-Machine
actions are mechanical, driven by the arrangement of physical organs, akin to a clock.4
Animals lack autonomous movements or free will, reacting to stimuli and following
instincts and passions determined by their physical makeup.
By contrast, Descartes believed humans held a unique status, possessing a created spiritual and immortal soul. While acknowledging that humans are also “machines” with bodies following mechanical processes, he argued that the immaterial soul grants humans conscious thoughts, self-awareness, and the capacity for rationality and autonomy. Hume’s theory of knowledge, however, posits that human intelligence and thought derive from sensory impressions organized in the mind, rejecting the need for a spiritual soul. La Mettrie, following Hume, extended Descartes’ idea of animals as machines to humans, contending that humans are purely biological machines devoid of an immaterial soul. Rejecting Descartes’ dualistic view, La Mettrie proposed that the human mind is a product of the organization and functioning of the brain. Transhumanists adopted La Mettrie’s materialist view, advocating for advanced technologies to enhance human capabilities. They argue that manipulating and enhancing human cognitive abilities through technological means is feasible, considering the human mind a product of physical processes.
The comparison of human beings to Vaucanson’s automaton suggests a modern self-perception as a modifiable machine, subject to manipulation like external objects.5 Miguel Benasayag’s critique challenges physicalist reductionism, arguing that human understanding cannot rely solely on the laws of physics. He contends that modern science often overlooks higher dimensions of human existence, such as the soul, questioning reductionist views that reduce the self to a brain-created illusion.6 La Mettrie’s concept of man as a machine challenges traditional understandings of humanity, posing difficulties in establishing metaphysical or ethical boundaries against the hybridization of man and machine.
1.3 Evolution
Charles Darwin’s theory of evolution, presented in the 19th century, left an indelible mark on our comprehension of human nature and played a pivotal role in shaping the ideologies of transhumanism. By challenging the conventional notion of a fixed natural hierarchy, Darwin introduced a paradigm of slow and progressive development across the annals of history. In his seminal work, On the Origin of Species (1859), Darwin propounded the crucial concept of natural selection. This theory posited that organic variations or mutations within living beings could be inherited by subsequent generations. His observations accentuated the inherent power of every species to colonize the Earth, held in a delicate equilibrium by a “struggle for life” culminating in the survival of the fittest. This foundational idea resonates with the broader concept of progress, suggesting that all facets of existence contribute to the betterment of humanity.
Herbert Spencer, a prominent philosopher and sociologist of the era, drew inspiration from Darwin’s ideas. In his work The Principles of Biology (1864), Spencer asserted that evolution, as a gradual process, would ultimately culminate in the establishment of the highest perfection and complete happiness. In his own words: “Slowly, but surely, evolution brings about an increasing amount of happiness.”7 This perspective reinforced the overarching theme of progress and improvement through the evolutionary mechanism.
The publication of Charles Darwin’s On the Origin of Species marked a significant juncture in the historical trajectory of transhumanism. The traditional view of humans as the crowning achievement of creation was vigorously challenged; instead, humans were portrayed as a transient outcome in the continual development of life. This novel perspective positioned humans as entities in a state of transition, entrusted with the responsibility of prolonging the course of evolution.8 Julian Huxley (1957) echoed this sentiment, emphasizing the inexorable role of humanity in steering the trajectory of evolution, thereby reinforcing the foundational tenets of transhumanism.
Darwin’s evolutionary thesis, with its audacious challenge to the idea of a fixed human nature, opened avenues for contemplating human enhancement and modification. This daring perspective on progress found confirmation in the late 19th century, further fortifying the notion of progress and improvement inherent in human nature. The synthesis of Darwin’s evolutionary theory with the conceptualization of humans as machines and the belief in progress contributed significantly to the formulation and development of transhumanist ideas. This integration underscored the latent potential for enhancing and propelling humanity forward through the judicious application of technology.
Julian Huxley, a pioneering biologist and philosopher, introduced the term “transhumanism” in its contemporary context in his 1957 essay. Huxley envisioned the transcendence of the human species collectively: “The human species can, if it wishes, transcend itself – not just sporadically, an individual here in one way, an individual there in another way – but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realising new possibilities of and for his human nature.”9 This transcendence, according to Huxley, hinges on leveraging science and technology to overcome current limitations, thus realizing new possibilities for human nature. Huxley’s original conception was expansive, anticipating a fundamental shift in human understanding and the embrace of novel forms of knowledge. While the modern association of transhumanism often centres on technological augmentation, Huxley’s vision encompassed a broader spectrum of possibilities for human evolution and transformation.
Max More, a philosopher and futurist, is a pivotal figure in the transhumanist movement. He defines transhumanism as “a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations using science and technology, guided by life-promoting principles and values”.10 More emphasizes the role of science and technology guided by life-promoting principles to overcome constraints and elevate human existence. The Extropian movement, founded by Max More, emerged in the 1980s,
11 More (2003).
12 Kurzweil (2005).
13 Itskov (2012).
14 Buchanan et al. (2005).
15 Ferry (2016).
Gilbert Hottois critiques the eugenics of the past as lacking a scientific foundation,
denying essential human equality, and infringing on autonomy.16 The evolution of
eugenics, as noted by Jean-Marie Le Méné, manifests as “chromosomal racism”, a
concept spotlighted in response to a Charlie Hebdo caricature in 2015.17 The rise in
prenatal testing, while contributing to informed choices, also raises ethical concerns,
particularly regarding the increased rate of abortions for unborn babies with Down
syndrome (Jérôme Lejeune Foundation, 2020). This circumstance places pressure on
women who may feel compelled to conform to medical recommendations.
Foucault’s perspective on madness as an alternative to reason resonates with the
ethical considerations in the eugenics discourse. Condemning and segregating mad-
ness, akin to the historical mistreatment of disabled individuals, reflects a refusal to
acknowledge diversity and an imposition of a narrow norm.18 The contention that
individuals with disabilities deserve happiness and embody a valid anthropological
model challenges societal perceptions in the era of post-modernity. Recognizing the
diversity of abilities is crucial for fostering a more inclusive and ethical approach to
human enhancement and evolution.
At the heart of transhumanist philosophy lies the ambitious notion of “winning death”,
positing that humans can surpass their biological constraints and attain immortality
through technological avenues. Max More underscores the indispensability of erad-
icating ageing and all causes of death for any philosophy aspiring toward optimism
and transcendence at the individual level.19 This vision of triumphing over death is
intricately woven with advancements in genetics, nanotechnology, and artificial intel-
ligence. Transhumanists envision life-extending technologies, such as gene editing,
rejuvenation therapies, and brain-computer interfaces, as pivotal strides toward real-
izing immortality. Aubrey de Grey, a futurist aligned with transhumanist principles,
16 Hottois (2015).
17 Le Méné (2015).
18 Foucault (1988) 253.
19 More (1990).
contends that scientific intervention can cure ageing and conquer death.20 Natasha
Vita-More, another proponent of transhumanism, asserts that death, viewed as a bio-
logical process, is amenable to engineering.21
In the complex discourse of superintelligence and immortality, Nick Bostrom
presents a nuanced perspective. Acknowledging the potential for advanced artificial
intelligence (AI) to facilitate virtual immortality, he explores the idea of “substrate-in-
dependent minds” existing beyond the biological confines. The prospect of transfer-
ring human consciousness to a more enduring medium or duplicating mental states in
a computer emerges as a means to defy the permanence of death. However, Bostrom
remains vigilant regarding the inherent risks associated with superintelligence. He
emphasizes the conceivable loss of control or misalignment with human values, cau-
tioning against the emergence of a “singleton” AI – an all-powerful entity dictating
the fate of humanity and potentially rendering humans subservient without agency
in their destiny.22 Balancing the aspiration for immortality with the ethical consider-
ations of creating superintelligent entities becomes a central challenge in the transhu-
manist discourse.
2.4 Materialism
20 de Grey (2005).
21 Vita-More (2004).
22 Bostrom (2014).
intelligence. In contrast, passive faith, relying on external forces for progress, induc-
es dogmatic beliefs and irrational behaviour. Transhumanism, subscribing to prac-
tical optimism, underscores the belief that humans can improve life through their
endeavours, seizing opportunities, and adapting strategies diligently. Transhumanism,
in essence, replaces religious passivity with human activity and substitutes faith with
scientific inquiry. In the third millennium, humanity would assert itself as the active master of its destiny, embodying the principles of transhumanism.
Transhumanism aligns with materialistic tenets, concentrating on matter and practically dismissing the existence of a spiritual soul; it thus converges with contemporary reductionism, notably physicalism. Max More aligns with this reductionism,
affirming that “science and technology are indispensable means to the achievement
of our most noble values, ideals, and visions and humanity’s further evolution”23.
From the transhumanist perspective, science can achieve humans’ moral perfection,
reducing the mind – our thoughts, emotions, and feelings – to a mere product of phys-
icochemical processes within the brain-machine.
2.5 Ultraliberalism
23 More (2003).
2.6 Utilitarianism
24 Ludlum (2001).
3 Playing God
From the Renaissance onward, Western thought has undergone a transformative shift,
moving away from the concept of an unchanging human nature entwined with the eter-
nity of God. The catalyst for this change was the paradigm of progress that emerged
in the 17th century, propelling scholars and philosophers to assert the near-infinite
perfectibility of the human species. Nature, once perceived as inherently constraining,
underwent a profound revaluation, transforming into the very catalyst for boundless
improvement, embodying the principle of unlimited perfection.
Late Modern Philosophy witnessed the eclipse of the concept of nature, aligning with
the transhumanist aspiration to redesign humanity. This movement asserts that the
true nobility of man resides not in transgressing his nature but in the comprehensive
development of his faculties. Philosophers like Johann Gottlieb Fichte questioned the
notion of a predetermined human essence. Fichte, emphasizing human perfection,
argued that humans are initially devoid of a fixed identity and must actively become
what they aspire to be.26
25 Truong (2003).
26 Fichte (2000) 74.
Transhumanists, rejecting the concept of a predefined nature and embracing the dog-
ma of evolution, aspire to fashion a novel human form that embraces change and tem-
porality. This echoes Maurice Merleau-Ponty’s idea that, “Man is a historical idea and
not a natural species”30. In contemporary times, the fixed notion of nature has been
dismissed, leading to the perception of human beings as malleable. Posthumanism,
according to Besnier (2013), seeks to replace the substantial notion of Man with the
exploitable malleability enabled by science and technology.
Transhumanists envision creating a new type of human, transcending the con-
straints of “nature”. As we have already noted, Max More, a leading transhuman-
ist, emphasizes the application of rational, empirical intelligence to surpass human
limits. This future human, the posthuman, will be a product of human experiences,
as Nietzsche’s exclamation, “Hear me, you creators!”31 suggests. The transhumanist
agenda aims to redefine humanity, co-evolving with intelligent technologies. Dmitry
Itskov’s 2045.com outlines plans for a posthuman synthesis, linking human brains to
robotic avatars by 2020-2025, with substance-independent minds by 2045.
Transhumanists foresee a future where humanity fuses with machines, desiring to
shed biological bodies for silicon and steel exoskeletons. They explore brain knowl-
edge transfer to machines for potential immortality. The website evolution.2045.com
explains the replacement of biological evolution with cybernetic evolution. This vi-
sion involves a profound transformation, replacing biological beings with cybernetic
entities for resilience and immortality. David Dubrovsky (2012) predicts the creation
of an immortal “electronic person”. However, challenges lie in reproducing personal-
ity on a non-biological medium.
Redesigning human beings aims to eliminate flaws, creating an idealized human
with higher values. Dmitry Itskov (2012) envisions an evolution that eradicates aggression and selfishness, transforming humans into beings with qualities such as inner
purity and altruism. Contemporary humanity, despite infinite potential, struggles with
limitations. Rather than seeking a new nature, the focus should be on developing ex-
isting potential. This perspective, rooted in Antiquity and Christianity, contrasts with
transhumanism, which dismisses the body’s significance, while Christianity offers a
theology of the body. Aristotle and Aquinas emphasized virtues as the key to human
perfection.
Conclusion
References
Bacon, Francis (1854) Novum Organum, in B. Montague (Ed. & Trans.), The Works 3.
Philadelphia: Parry & MacMillan, 345-371.
Benasayag, M. (2016) Cerveau augmenté, homme diminué. Editions La Découverte.
Besnier, J. M. (2013) D’un désir mortifère d’immortalité. À propos du transhumanisme,
in: Cités 55. Retrieved from http://www.cairn.info/revue-cites-2013-3.htm.
Bostrom, N. (2003) Are we living in a computer simulation?, in: The Philosophical Quar-
terly 53, 243-255, https://doi.org/10.1111/1467-9213.00309.
Bostrom, N. (2005) A history of transhumanist thought, in: Journal of Evolution and Tech-
nology 14, 1-25, http://jetpress.org/volume14/bostrom.html.
Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies. Oxford University
Press.
Buchanan, A. / Brock, D. W. / Daniels, N. / Wikler, D. (2000) From chance to choice:
Genetics and justice. Cambridge University Press.
de Grey, A. D. N. J. (2004) Escape Velocity: Why the Prospect of Extreme Human Life Extension Matters Now, in: PLoS Biol 2/6: e187. https://doi.org/10.1371/journal.pbio.0020187.
Darwin, Charles (2001) [1859] On the origin of species. A Penn State Electronic Classic
Series Publication.
Descartes, Rene (2006) A Discourse on the Method of Correctly Conducting One’s Reason
and Seeking Truth in the Sciences (I. Maclean, Trans.). Oxford University Press.
Dubrovsky, D. (2012, December 2) Cybernetic immortality. Fantasy or scientific problem?
Retrieved from http://2045.com/articles/30810.html.
Evolution 2045 (2012, August 22) The Party of Intellectual, Technological and Spiritual
Breakthrough. Manifesto. Retrieved from http://evolution.2045.com.
Ferry, L. (2016) La révolution transhumaniste: comment la technomédecine et l’uberisa-
tion du monde vont bouleverser nos vies. Éditions Plon.
Fichte, Johann G. (2000) Foundations of Natural Right (F. Neuhouser, Ed., M. Baur,
Trans.). Cambridge University Press.
Foucault, Michel (1988) Madness and civilization: A history of insanity in the Age of
Reason. Vintage Books.
Hottois, Gilbert (2015) Is transhumanism a humanism?, in: Revista de derecho y genoma
humano (= Law and the human genome review) 42, 15-24.
Huxley, J. (1957) Transhumanism, in: New Bottles for New Wine. Chatto & Windus.
Itskov, D. (2012) Offering a Solution, in: Lapham’s quarterly. Retrieved from https://
www.laphamsquarterly.org/death/offering-solution.
Itskov, D. (2012, November 16) The path to neo-humanity as the foundation of the ideolo-
gy of the “Evolution 2045” party. Retrieved from http://2045.com/articles/30869.html.
Jérôme Lejeune Foundation (2020) How prenatal testing can be devastating to those with
Down syndrome. Retrieved April 27, 2023, from https://www.lejeunefoundation.org/
how-prenatal-testing-can-be-devastating-to-those-with-down-syndrome/.
Kurzweil, R. (2005) The Singularity is Near: When Humans Transcend Biology. Viking
Penguin.
Le Méné, J.-M. (2015) Le racisme chromosomique de ‘Charlie Hebdo’, in: Valeurs. Re-
trieved April 27, 2023, https://www.valeursactuelles.com/societe/le-racisme-chromo-
somique-de-charlie-hebdo.
Leibniz, G.W. (1989) Philosophical Essays (R. Ariew & D. Garber, Eds. & Trans.). Hack-
ett Publishing Company.
Ludlum, R. (2001) The Sigma Protocol. St. Martin’s Press.
Merleau-Ponty, M. (2003) Phenomenology of perception (C. Smith, Trans.). Routledge &
Kegan Paul.
More, M. (1990) Transhumanism: Toward a futurist philosophy, in: Extropy 6. Retrieved
from https://www.ildodopensiero.it/wp-content/uploads/2019/03/max-more-transhu-
manism-towards-a-futurist-philosophy.pdf.
More, M. (2003) The Extropian Principles 3.11. Retrieved from https://web.archive.org/
web/20110131060600/http://www.extropy.org:80/principles.htm.
Nietzsche, Friedrich (2006) Thus spoke Zarathustra: A book for all and for none (A. Del
Caro, Trans.). A. Del Caro & R. B. Pippin (Eds.). Cambridge University Press.
Nietzsche, Friedrich (2009) Beyond Good and Evil: Prelude to a Philosophy of the Future
(I. Johnston, Trans.). Richer Resources Publications.
Sartre, Jean P. (1989) Existentialism is a Humanism, in: W. Kaufman (Ed.), Existentialism
from Dostoyevsky to Sartre. Meridian Publishing Company, 287-311.
Spencer, H. (1864) The Principles of Biology. Vol. I. Williams and Norgate.
Taguieff, P.-A. (2001) Du progrès, Biographie d’une utopie moderne. Librio.
Transhumanist Declaration (1998). Retrieved from https://itp.uni-frankfurt.de/~gros/
Mind2010/transhumanDeclaration.pdf.
Artificial Intelligence: Are we playing God?
John Karuvelil SJ (Pune)
Introduction
When we speak of Artificial Intelligence (AI) today, what immediately strikes us are the possibilities of ChatGPT, fully automated, AI-directed defence systems, or even robotic bodyguards. AI has so pervaded human life that most humans, irrespective of their age or work, will become handicapped or non-functional in its absence in the near future, if not already. The waking hours of a person, whether at work or games, in rest or relaxation, in communication of whatever type, are made possible and easier by employing Artificial Intelligence. While it has helped humans to become better, smarter, quicker and thus more effective, its misuse can also threaten human life.
According to the World Intellectual Property Organization (WIPO) reports of 2019, among the emerging technology sectors, the highest number of patents applied for and received is in the area of AI. Since the emergence of AI in the 1950s, it is estimated that around “340,000 AI-related patent applications were filed by innovators and 1.6 million scientific papers have been published by researchers with the majority of all AI-related patent filings published since 2013.”1 It is also estimated that by the year 2030 AI could contribute $15.7 trillion to the global economy.2 All this tells the story of the speed at which the field of AI is growing, and of the number of scientists and programmers involved in research and development in the field.
While it is true that AI is indispensable to human life and progress, we also need
to keep in mind the dangers it can pose to human life if it is mishandled and falls into
the wrong hands in the future. Therefore, while on the one hand it can really augment
human life, on the other it can destroy life. Once out of control and out of the hands
of the just and benevolent humans, especially in the programming and use of superintelligent war weapons, AI can either annihilate masses of people or even the world, or make humans sub-machines. Whichever way it happens, humanity will never be the same. Human intelligence and rationality, with their accompanying components of love, compassion, sympathy, justice and other emotions and feelings which ultimately protect and enhance human life, will be threatened or replaced by unfeeling and unyielding AI-powered weapons that can chart the course of human
history in the future. The beautiful world that God has created, with all its goodness for the wellbeing of all, could be destroyed in a few minutes or hours. Therefore, the question arises: are we playing God with unregulated, uncontrolled investments in AI, which may literally replace human intelligence with its own superintelligence?
Here an attempt is made to answer this question: are we playing God with possible superhuman, super- or hyper-intelligent systems or machines? Since the definitions, functions and applications of AI are already well known, I shall deal with these topics only very briefly. I shall dwell primarily on the concept of playing God with reference to the responsibility entrusted to humans at creation in the book of Genesis.
The term ‘Artificial Intelligence’ (AI) was first coined by computer scientist John McCarthy in 1956. He used the term to denote machines that could think autonomously
or for “getting a computer to do things which, when done by people, are said to in-
volve intelligence.”3 Today, AI generally is thought to refer to “machines that respond
to stimulation consistent with traditional responses from humans, given the human
capacity for contemplation, judgment, and intention.”4
AI has been defined by various authors. Although differently phrased, the content
of most of these definitions pertains to the same facts. What is commonly accepted
today is that AI is the ability demonstrated by machines in responding to human commands and instructions which are built or programmed into them by human intelligence. The point of difference probably concerns the future possibilities of AI,
3 West (2018).
4 Ibid.
where some predict the possibility of AI controlling humans or going out of their hands, while others rule out that possibility on the ground that AI will always depend on human intelligence for its purposeful end.
has appeared on talk shows, given interviews, participated in technological expos and
even sung songs in a very human-like fashion. This shows that AI's future development will move in two directions: becoming more human-like and acquiring exponentially greater rational power.
“Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it with data analytics to understand specific issues.”8 For example, this could be used fruitfully in managing school enrolments, taking into consideration the neighbourhood, school curriculum and other substantive interests, and assigning students to particular schools based on the data collected. Computer
and assign students to particular schools based on the material collected. Computer
programmers can build intelligent algorithms that compile different considerations
for making decisions which can include basic principles such as efficiency, equity,
justice, and effectiveness.
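The kind of multi-criteria decision algorithm described above might be sketched as follows. This is a minimal illustration only: the criteria names, weights, school data and scores are hypothetical assumptions, not taken from any real enrolment system.

```python
# Hypothetical sketch of a multi-criteria assignment algorithm.
# All criteria names, weights and data below are illustrative assumptions.

# Relative importance assigned to each principle (sums to 1.0).
WEIGHTS = {
    "efficiency": 0.25,      # e.g. travel distance, capacity use
    "equity": 0.35,          # e.g. balanced socio-economic mix
    "curriculum_fit": 0.40,  # e.g. match with the student's interests
}

def score(school: dict) -> float:
    """Weighted sum of a school's criterion scores (each rated 0.0-1.0)."""
    return sum(WEIGHTS[c] * school[c] for c in WEIGHTS)

def assign(schools: list[dict]) -> dict:
    """Assign the student to the highest-scoring school."""
    return max(schools, key=score)

schools = [
    {"name": "North", "efficiency": 0.9, "equity": 0.4, "curriculum_fit": 0.5},
    {"name": "South", "efficiency": 0.5, "equity": 0.8, "curriculum_fit": 0.7},
]
best = assign(schools)
print(best["name"])  # the school with the highest weighted score
```

Note that the ethical principles the text mentions enter the algorithm only through the weights: shifting weight from efficiency to equity can change the assignment, which is precisely where the value judgments behind such systems are hidden.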
The last quality that AI needs to incorporate is adaptability, the ability to adapt as information is compiled and decisions are made. Effective AI must adjust as
circumstances or conditions shift, which may “involve alterations in financial situa-
tions, road conditions, environmental considerations, or military circumstances. AI
must integrate these changes in its algorithms and make decisions on how to adapt to
the new possibilities.”9
Artificial Intelligence has pervaded almost all areas of human life and has significant-
ly changed and improved the way humans live. It has improved scientific research as
well as the availability of information for the common people. It has changed communication, transport, lifestyle, research, planning, finance management, disease predictions, drug production and medical treatments, games and entertainment, warfare,
industrial productions, weather predictions, space research and the list is unending. AI
controls almost the entire life and activities of most humans.
8 West (2018).
9 Ibid.
From the 2010s, AI applications were at the heart of the most commercially
successful areas of computing, and have become a ubiquitous feature of daily life. AI
is used in search engines (such as Google Search), targeted online advertising (AdSense, Facebook), online recommendation systems (for example, those of Netflix, YouTube or Amazon), driving internet traffic,
virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, Ad-
vanced driver-assistance systems [ADAS] and self-driving cars), automatic language
translation (Microsoft Translator, Google Translate), facial recognition (Apple’s Face
ID or Microsoft’s DeepFace), image labeling (used by Facebook, Apple’s iPhoto
and TikTok), spam filtering, and chatbots (such as ChatGPT).10 There are also thousands of successful AI applications used to solve problems for specific industries or
institutions.
Most of the operations happening in the vehicles we drive are conducted with the help of semiconductor chips installed in them, all powered by AI.
Research in any area of life, from basic academic and data research to space research, is made easier and faster with the help of AI. AI is well employed in the finance sector for investments, loans, stock markets, fraud detection, detecting deviant and abnormal behaviours, etc. AI plays a substantial
role in national defense. AI is employed “to sift through the massive troves of data and
video captured by surveillance and then alert human analysts of patterns or when there
is abnormal or suspicious activity.”13
Space research and exploration is another huge area where AI is widely used.
Space missions, next generation telescopes, functioning and monitoring of space sta-
tions, etc. are all controlled by AI. AI constantly monitors the safety of the astronauts,
malfunctioning of modules or components, paths of other satellites and many more
aspects. Data from numerous satellites that aid in a large number of human functions
14 Gutierrez (2017).
15 Spilka (2022).
These are just a few areas in which AI is effectively used. There are many others too: smart city planning, criminal justice, national security, banking and finance, robotics, basic internet tasks, etc.
With all these developments in the field of AI the question remains: are we playing
God? I attempt to answer this question with the biblical account of human creation
and the responsibility entrusted to the first humans by the creator God.
Although the term “playing God” has become part of common parlance, it is still not
part of standard theological dictionaries and treatises16 and, therefore, to precisely
define or explain the meaning of the expression is not easy. The term was often heard
around hospitals, where doctors could take life-saving or life-damning decisions. Later the term became even more common
when bio-scientists began discussions on genetic manipulations as part of treatments
or health enhancements following the Human Genome Project (October 1990 to April
2003). Today it has become a generic label for any scientific endeavour in which humans, or any other powers, might gain control over the destiny of creation, although its primary concern is the lives of humans and their destiny.
The term has its origin in the biblical notion of a creator God, portrayed as the maker of all that exists, and in the faith of the theistic religions that God controls the destiny of the created world, humans included: the beginning of life, its sustaining and its end. Therefore, life-giving and life-taking, the beginning and the end, are the prerogative of God. He decides the destiny of every person
and the world. The question of playing God arises when someone or something seems to acquire that power over life and the destiny of humans, raising the fear that such power will be misused to manipulate and misguide, and thus destroy the order of life and the world. As
Francis Collins would say the term ‘playing God’ would not have caused much con-
cern “if we could be confident that humans would play God as God does – with infi-
16 Peters (2003), 2.
nite love and compassion.”17 However, the term playing God creates concern, because
of the fear “that humans might play God in their own selfish and imperfect ways,”18
endangering humanity. This assumes significance today against the background of the widespread assumption that science and faith have settled into positions of unresolvable
opposition on many issues, and science and scientific enquiries have unbridled free-
dom to explore and experiment on the hitherto unexplored recesses of life and nature.
According to Ted Peters, theologian and ethicist, the concept of ‘playing God’
can have at least three overlapping meanings.
The first and somewhat benign meaning has to do with learning God’s awesome
secrets. It refers to the sense of awe rising up from new discoveries into the depths
of life. Science and its accompanying technology are shedding light down into the
hitherto dark and secretive caverns of human reality. Mysteries are being revealed;
and we the revealers sense that we are on the threshold of acquiring “Godlike” pow-
ers. At this level we do not yet have any reason to object to research. Rather, what we
have here is an expression of awe.19
The second meaning of “playing God” has to do with the actual wielding of
power over life and death. This applies, for example, to medical doctors working in
the clinical setting with an emergency surgery. The patient feels helpless. Only the
attention and skill of the surgeon stands between the patient and death. The doctor is
the only door to life. The patient is utterly dependent upon the physician for his or her
very existence. Regardless of whether or not doctors feel they have omnipotence in
this situation, the patients impute it to them.20
In the present-day context, it applies to any situation where technology or any other force has complete control over lives, and humans become completely powerless, having to accept destiny whichever way it comes: defeat, subjugation, or even loss of life. Applied to the question of AI, it presupposes a situation where
humans have created such AI systems where the human creator loses control over the
system and the system takes over, which, if created with malicious intentions, may destroy not only its creator but also harm others and the world.
Here, the term is applied with two assumptions. First, God is the author of life
and all decisions regarding life and death are God’s prerogative. Second, when
humans control life and death decisions we arrogantly transgress divinely imposed
limits. These assumptions create anxiety and fear, leading to endless debates in the area of ethics that demand clear instructions telling us where to stop and where to proceed with interventions and research.
The third meaning of ‘playing God’ is concerned with the use of science and
scientific and technological procedures that alter life and living conditions in such
a way that humans begin to substitute themselves for God in determining what our
nature will be in the future. “It refers to placing ourselves where God and only God
belongs.”21
As Ted Peters says, the term “has very little cognitive value when looked at from
the perspective of a theologian. Its primary role is that of a warning, such as the
word ‘stop’. In common parlance it has come to mean just that: stop.”22 It means that
we should stop trying to create anything that can endanger or jeopardise human life
and human welfare. While it is a caution or advice which is good, as Christians we
still have the challenge before us to be good stewards of creation which necessitates
advancement in science and technology that contribute towards human welfare and
human flourishing. This brings us to the question: Are we playing God?
A common warning in research into advanced science and technology is “do not play God.” As we have already mentioned, the caution here is against human pride and mindless experiments detrimental to human welfare and flourishing,
because as the Bible says, “Pride goes before destruction, a haughty spirit before a
fall” (Proverbs 16:18). Pride can lead us to “overestimate our own knowledge, of
arrogating for science a kind of omniscience that we do not in fact have.” In other
words, “‘playing God’ means we confuse the knowledge we do have with the wisdom
to decide how to use it. Frequently lacking this wisdom, we falsely assume we possess
21 Ibid.
22 Ibid., 2.
Genesis chapter 1 narrates the account of the creation of humans thus: “So God crea-
ted humankind in his own image, in the image of God he created them; male and
female he created them. God blessed them and said to them, ‘Be fruitful and multiply
and fill the earth and subdue it’” (Gen 1:27-28). The responsibility given to humans is
extended in Genesis Chapter 2 which says, “The Lord God took the man and put him
in the garden of Eden to till it and to keep it” (Gen 2:15). The creation of humans in
the image and likeness of God, and the responsibilities given to them are important
in understanding the role humans need to play in the created world while they them-
selves wait for their destiny in the world. Therefore, we shall analyse to some extent
these concepts of creation of humans and the responsibilities given to them, namely
‘having dominion over God’s creation’ and ‘tilling and keeping the Garden’.
The Catechism of the Catholic Church says that, made in the image and likeness of God, humans are capable of establishing constructive relationships, of understanding the order of things established by the Creator in creation, and of participating in the ongoing creation of the world, directing themselves towards what is good for themselves and the created universe by the use of their free will and powers of intellect (CCC 1704 and 1705).
As His images we are “called to share, by knowledge and love, in God’s own life”
(CCC 356). What we need to keep in mind is that, made in the image of a God who is a community, a community of Father, Son and Spirit (CCC 1702), we are also called to be human not in isolation, but in communion with God and others. Therefore, the
free will and freedom given to humans by the creator is to be used, as the creator
himself does, in love for the benefit and betterment of all, the entire creation. This is
specifically mentioned in the two commandments given to the humans in the first two
chapters of Genesis, i.e., ‘have dominion over God’s creation’ (Gen 1:28) and ‘till it
[the garden] and keep it’ (Gen 2:15). It is true that these two commandments basically mean the same thing: both speak of creative stewardship on the part of humans.
Let us analyse them.
Human beings, created in the image of God, the Creator Himself, are persons called to
enjoy communion and to exercise stewardship of the creation, the physical universe.
The activities entailed by responsible stewardship engage the spiritual, intellectual,
affective, creative capacities of human persons.26 Sharing in the communion of Trin-
itarian love as His images, humans do enjoy the privilege of sharing or participating
in the divine governance of the creation, a privilege granted to them by the Creator, to
participate in His own lordship over the universe. This responsibility is nothing but stewardship.27
Humans exercise sovereignty over creation and participate in the ongoing creation through science, technology, art and other human efforts, improving the creation for the benefit and betterment of all. “They act in place of the master as stewards (cf.
72
Artificial Intelligence: Are we playing God?
Mt 25:14 ff) who have the freedom, they need to develop the gifts which have been
confided to them and to do so with a certain bold inventiveness.”28 The caution here is
that “neither science nor technology are ends in themselves; what is technically possi-
ble is not necessarily reasonable or ethical. Science and technology must be put in the
service of the divine design for the whole of creation and for all creatures.”29 What is
important is the flourishing of God’s creation, benefiting the creation to sail towards
the goal ordained by the Creator Himself.
Here, we are reminded of the words of Pope John Paul II that “Man’s lordship is not absolute, but ministerial … not the mission of an absolute and unquestionable master, but of a steward of God’s kingdom.”30 A misunderstanding of the term ‘dominion’ can lead humans to act in reckless disregard of the natural environment and to deplete the earth’s resources. This is not stewardship but destruction. Humans are entrusted with stewardship and dominion to take care of God’s creation, for its full fruition and completion, so that all of God’s creation might experience a fulfilment which is not a static point, but an on-going process until the new heaven and the new earth promised by God become a reality. Science and technology, including AI, can contribute to this goal.
The phrase ‘tilling and keeping it’ is very much an agricultural one and has rich connotations. God has entrusted the creation, the earth/garden, in a special way to humans that they may till and keep it. The phrase ‘till and keep’ clearly speaks of creative stewardship of creation, taking care of creation in such a way that it flourishes. Creative stewardship is not maintaining the status quo of the creation, of what is already present, but constantly improving upon what is already there for its own development and for the betterment of all that depends on the earth. A farmer understands the term ‘till’ well, tilling the ground often and whenever needed, especially at a new cultivation, because s/he realises its great value. Tilling softens and aerates the soil, helping plants to take root and grow faster and better. The phrase ‘tilling and keeping’ thus acquires the definite meaning of making the earth better, so that all that grows on it grows better and faster, which in turn benefits all the other living beings that depend on it.
Taken into the field of science and technology, ‘tilling and keeping’ can mean on-going research leading to new knowledge of hitherto unknown recesses of God’s creation, knowledge which can enhance human capacity to be better stewards of creation, leading creation to progress for the benefit of all. This could be considered ongoing or continuous creation, adding to what is already created by God. It is also participation in God’s work of creation and sustenance. The God of the Bible is a God who creates. The difference is that while God creates from nothing and continues to create anew towards the fulfilment of creation, humans by their effort and toil participate in the work of God, which could be called co-creation. Since humans themselves are creatures and not God, they could be called created co-creators, a term first used by Philip Hefner.31 The term reminds us, first, of our own creatureliness, that we are not at the same level as our creator, and, second, that creation does not stand still, maintaining a status quo; rather it changes, and so do we as creatures. As created co-creators, we have partial influence over creation, a ministerial stewardship as Pope John Paul II called it (EV, 52). As images of God, we are called to be creative stewards, sharing in the transforming work of God’s ongoing creation.32 This applies to every aspect of human life and activity.
Humans are creative and cannot but be creative. Coming to the use of technology, including AI, we are reminded that although these are meant for the good of humans and of creation, they can also be used for evil ends. They can be means of violence and war; in the wrong hands and with the wrong intentions, AI can mark the end of living creatures in cases of war. “Our ethical mandate, then, has to do with the purposes toward which our creativity is directed and the degree of zeal with which we approach our creative tasks.” We are co-creators with God, and we participate in God’s activity for the goal God has: the fulfilment of all in Him.
Artificial Intelligence: Are we playing God?
Conclusion
The question still seems to remain: with technologies like AI, are we playing God? Our analysis of the creation account and the responsibilities entrusted to humans by the creator to be co-creators and creative stewards of His creation tells us that we need to engage science and technology, and that they could and should be used not only to enhance human life but also to better creation in its continuous evolution towards the Omega point. Creativity is not just a possibility, but a quality innate to human nature. We keep exploring the creation handed over to our stewardship, exploring every aspect of it, and also exploring the possibilities of enhancing creation and human life. Therefore, even though domination and control are morally undesirable, there are some things that can be done, and perhaps should be done, to influence the course of our future for the benefit of all. Advancements in science and scientific research and the development of technology, including AI, have an immense impact on the quality of human life and on the advancement of creation. Therefore, as Ted Peters says, science and technology “in the service of beneficence of creation ought not to be intimidated by a ‘No Trespassing’ sign that says, ‘Thou shalt not play God’. Rather, science in the service of beneficence means we are playing human in a free and responsible way,”33 and not playing God.
Bibliography
Augenstein, Leroy (1969) Come, Let Us Play God. New York: Harper and Row.
Ball, Mike (1 July, 2021) “AI Navigation and Path Planning Software Developed for Au-
tonomous Vehicles”, in: Unmanned Systems Technology.
Collins, Francis S. (2003) “Foreword”, in: Ted Peters ed. Playing God? Genetic Determinism and Human Freedom. New York and London: Routledge.
Davenport, Christian (Dec. 3, 2017) “Future Wars May Depend as Much on Algorithms as
on Ammunition, Report Says”, in: Washington Post.
Dharmaraj, Samaya (March 26, 2023) “AI Could Contribute US$15.7 Trillion to Global
Economy by 2030”, in: Open Gov.
Gutierrez, Daniel (March 15, 2017) “Deep Learning and AI Success Stories”, in: In-
sidebigdata.
Guzman, Andrea L. / Lewis, Seth C. (2020) “Artificial intelligence and communication: A Human–Machine Communication research agenda”, in: New Media and Society 22, 1, 70-86.
Hefner, Philip (1993) The Human Factor: Evolution, Culture, and Religion. Minneapolis:
Fortress.
International Theological Commission (2004) Communion and Stewardship: Human Per-
sons Created in the Image of God. Official Website of the Vatican.
John Paul II (1995) Evangelium Vitae, Libreria Editrice Vaticana.
John Paul II (Wednesday, 17 January, 2001) General Audience.
John Paul II (October 27, 1980) “Discourse to those taking part in the 81st Congress of the
Italian Society of Internal Medicine and the 82nd Congress of the Italian Society of
General Surgery”, in: AAS 72, 1126.
National Council of the Churches of Christ in the U.S.A. (1980) “Human Life and the New
Genetics, A Report”.
Peters, Ted ed. (1989) Cosmos as Creation: Science and Theology in Consonance. Nashville: Abingdon.
Peters, Ted (2003) Playing God? Genetic Determinism and Human Freedom. New York
and London: Routledge.
Rifkin, Jeremy (January 1994), “Playing God with the Genetic Code”, in: Threshold 6, 3.
Russell, Stuart / Norvig, Peter (2010) Artificial Intelligence: A Modern Approach. New
York: Prentice Hall.
Spilka, Dmytro (July 11, 2022) “How Is Artificial Intelligence Re-Shaping Research?”, in: IOT For All.
Urbi, Jaden / Sigalos, Mackenzie (5 June, 2018) “The complicated truth about Sophia the
robot – an almost human robot or a PR stunt”, in: CNBC Tech Drivers.
West, Darrel M. (October 4, 2018) “What is Artificial Intelligence?”, in: Brookings.
WIPO Media Centre (January 31, 2019) “WIPO’s First ‘Technology Trends’ Study Probes
Artificial Intelligence: IBM and Microsoft are Leaders Amid Recent Global Upsurge
in AI Inventive Activity”. Geneva.
Emotional AI and the Elusive Nature of Human Emotions

Dolichan Kollareth SJ (Pune)
AI has become an integral part of our daily lives and is seamlessly integrated into
various aspects of our routine. From managing our financial accounts and curating
entertainment choices to recognizing our voices and faces, AI has expanded its role
from providing computational support to facilitating subjective engagement. This
evolution prompts us to contemplate the future trajectory of AI-human interaction:
Will AI remain a utilitarian tool or evolve into a sentient being capable of subjective
interactions? Against this backdrop, this article examines attempts to instill AI with
the capacity for human emotions.
The integration of emotions into artificial intelligence (AI) has garnered significant
attention in recent years, propelling the field toward new frontiers of human-computer
interaction. Referred to as Emotional AI, this burgeoning area of research strives to
imbue AI systems with the ability to recognize, express, and even experience emo-
tions akin to humans. However, the endeavor to replicate the nuanced and complex
nature of human emotions in machines presents a formidable challenge, inviting at-
tention to the intricate interplay of physiological, cognitive, and sociocultural factors
that define our emotional experiences. This challenge highlights the intricacies of human consciousness. In other words, attempts to imbue AI with emotions ultimately
raise questions about what it means to be a human.
1 AI and Emotions
The theme of robots seeking power and domination is a common motif in Hollywood
movies. Despite the prevalence of such fictional portrayals over the years, AI in the
real world remained non-emotional until recently. However, with the growing interest
in incorporating the intricacies of human neural processing into AI, there have been
emerging efforts to integrate emotions into AI. Emotional AI holds significant prom-
ise for practical applications and the future advancement of AI. Emotion recognition,
expression, and experience are essential components in the development of Emotional
AI.
The primary motivation for incorporating emotions into AI is to develop social robot-
ics specifically designed for human interaction. According to Picard, machines must
possess the capability to recognize, understand, and express emotions to efficiently
engage with humans.1 According to Yan and colleagues, machines adept at discerning
and expressing diverse emotions foster more natural and harmonious human-robot in-
teractions.2 Empirical evidence supports these viewpoints. In social settings, individu-
als report heightened satisfaction in their interactions and develop stronger bonds with
robots that simulate emotional expressions.3 In the workplace, emotional AI enhances
collaboration among the workforce and increases workplace efficiency.4
The implications of emotional AI extend across various domains including health-
care, assisted living, and education. In healthcare, AI equipped with emotion recog-
nition capabilities can assess stress levels in individuals, and such assessments can
function as biofeedback systems, aiding individuals in achieving a state of homeostat-
ic balance.5 In assistive services, emotionally responsive AI fosters greater trust and
significantly improves the quality of life of the elderly and those facing developmental
challenges such as Autism Spectrum Disorder (ASD).6 In education, the use of emo-
tional AI in instruction results in more efficient cognitive and affective outcomes than
traditional teaching methods, and emotional AI better facilitates language learning.7
Emotions also serve as pivotal catalysts in the progression of AI, particularly in the
transition from Narrow Artificial Intelligence (NAI) to General Artificial Intelligence
(GAI), and further to Super Artificial Intelligence (SAI).8 While NAI is programmed
and tailored for specific tasks, GAI aims to undertake multiple tasks and self-optimize
by eliminating inefficiencies. Representing human-like intelligence, GAI holds the
potential to revolutionize various sectors. For example, the GAI used in farming may
gather data on atmospheric conditions, soil quality, and market demands to optimize
crop cultivation and distribution. SAI, the subsequent stage of AI, represents machines
that surpass human intelligence and autonomously augment their capabilities. Given
the significant role that emotions play in human cognition and learning, embedding
emotions in machines is a crucial step in the evolution of next-generation AI.
Emotion recognition, expression, and experience are successive steps in the development of Emotional AI.
The first step is AI that can accurately recognize and interpret human emotions. Advances in computer vision and Natural Language Processing (NLP) have enabled AI systems to analyze facial expressions, vocal tones, and textual data to infer emotions.
Computer vision involves teaching AI systems how to process visual data from
images and videos. AI systems analyze facial expressions, body language, and ges-
tures to determine an individual’s emotional state. AI models are trained to recognize
key facial features, such as the shape of the mouth, the position of the eyebrows, and
eye movements, which can indicate different emotions.9 Many emotion recognition AI systems adopt a categorical approach to emotions, meaning that these systems are programmed to identify 4-7 discrete emotions, such as happiness, sadness, and anger.10 Deep learning, particularly Convolutional Neural Networks (CNNs), designed to recognize patterns in data through the use of overlapping filters, has significantly enhanced the accuracy of facial emotion recognition.11 Such developments raise the hope that AI in the future will recognize complex patterns and subtle nuances in facial expressions that may not be easily discernible to the human eye.
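As an illustration only, the pipeline just described (small filters sliding over a face image, pooled responses mapped to a fixed set of discrete emotion categories) can be sketched in a few lines of Python. Everything here is a placeholder assumption: the "image" is random noise, and the filter and classifier weights are random rather than learned, so the predicted label is arbitrary.

```python
import math
import random

random.seed(0)
EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

def convolve2d(image, kernel):
    """Valid 2-D convolution of a nested-list image with a small kernel."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(image[0]) - kw + 1)]
            for i in range(len(image) - kh + 1)]

def relu_mean(feature_map):
    """ReLU followed by global average pooling: one number per filter."""
    vals = [max(v, 0.0) for row in feature_map for v in row]
    return sum(vals) / len(vals)

def softmax(scores):
    """Turn raw scores into a probability distribution over emotions."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(image, kernels, weights):
    """Conv -> ReLU -> pool for each filter, then a linear layer + softmax."""
    features = [relu_mean(convolve2d(image, k)) for k in kernels]
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return softmax(scores)

# A fake 12x12 grayscale "face crop" and four random 3x3 filters stand in
# for real data and trained parameters.
image = [[random.random() for _ in range(12)] for _ in range(12)]
kernels = [[[random.gauss(0, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(4)]
weights = [[random.gauss(0, 1) for _ in range(4)] for _ in EMOTIONS]
probs = classify(image, kernels, weights)
print(EMOTIONS[probs.index(max(probs))])  # an arbitrary label, since untrained
```

A trained system differs mainly in scale and in that its weights are fitted to labeled face data; the categorical structure itself, a softmax over a handful of discrete emotions, is the same.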
While most emotional AI systems focus on facial expression analysis, some have
explored speech emotion recognition and textual analysis. As Natural Language Pro-
cessing (NLP), a branch of AI that focuses on enabling computers to understand,
interpret, and generate human language, advances, AI systems are being taught to
understand and generate emotional language.12 Sentiment analysis, a subset of NLP,
assesses the emotional tone in text data using techniques such as tokenization, part-
of-speech tagging, and machine learning algorithms to determine whether a piece of
text is positive, negative, or neutral in terms of sentiment. Voice sentiment analysis
interprets vocal tone, pitch, and intonation to gauge emotions in a spoken language.
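The steps named above (tokenization, tagging tokens, scoring polarity) can be illustrated with a deliberately tiny lexicon-based sketch; the mini-lexicon and the single negation rule below are invented for illustration, whereas production sentiment systems use large curated lexicons or machine-learned classifiers.

```python
# Hypothetical mini-lexicon mapping words to polarity; real systems use
# large curated resources rather than a hand-made dictionary.
LEXICON = {"good": 1, "happy": 1, "love": 1, "bad": -1, "sad": -1, "hate": -1}
NEGATORS = {"not", "never", "no"}

def tokenize(text):
    """Naive tokenization: split on whitespace, lowercase, strip punctuation."""
    return [t.strip(".,!?;:").lower() for t in text.split()]

def sentiment(text):
    """Classify a piece of text as 'positive', 'negative', or 'neutral'."""
    tokens = tokenize(text)
    score = 0
    for i, tok in enumerate(tokens):
        value = LEXICON.get(tok, 0)
        # A single toy rule: a negator flips the polarity of the next word.
        if i > 0 and tokens[i - 1] in NEGATORS:
            value = -value
        score += value
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this!"))           # positive
print(sentiment("I am not happy today."))  # negative
```

Even this toy version shows why sentiment analysis is brittle: a word's polarity depends on context, which is exactly what the negation rule crudely approximates.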
Multimodal approaches to emotion recognition combine data from more than one
mode of emotional expression. For instance, AI systems can combine facial expres-
sion analysis with speech-emotion recognition.13 Similarly, advanced emotion recog-
nition systems have attempted to integrate computer vision and NLP. In such instanc-
es, AI can simultaneously analyze facial expressions through computer vision and
spoken content through NLP,14 leading to a more comprehensive understanding of a
person’s emotions.
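One common way to realize such multimodal combination is late fusion: each modality's model produces its own probability distribution over the emotion categories, and the distributions are merged, for instance by a weighted average. The sketch below assumes made-up probabilities and an arbitrary 0.6/0.4 weighting between face and voice.

```python
EMOTIONS = ["happiness", "sadness", "anger"]

def fuse(face_probs, voice_probs, face_weight=0.6):
    """Weighted late fusion of two per-emotion probability distributions."""
    voice_weight = 1.0 - face_weight
    fused = [face_weight * f + voice_weight * v
             for f, v in zip(face_probs, voice_probs)]
    total = sum(fused)              # renormalize defensively
    return [p / total for p in fused]

face = [0.7, 0.2, 0.1]    # vision model: confident "happiness"
voice = [0.3, 0.5, 0.2]   # speech model: leans toward "sadness"
fused = fuse(face, voice)
print(EMOTIONS[fused.index(max(fused))])  # happiness wins after fusion
```

The design choice here is that fusion happens at the level of each model's final outputs; an alternative, early fusion, would concatenate raw features from both modalities before classification.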
Accurate emotion recognition has applications in various fields, including hu-
man-computer interaction, mental health monitoring, market research, and entertain-
ment. It facilitates the creation of emotionally responsive AI systems, personalization
of user experiences, and improvement of virtual assistants and chatbots’ effectiveness.
Voice sentiment analysis finds applications in voice assistants and customer service
call centers.
Despite substantial progress, emotion recognition remains a challenging task ow-
ing to the complexity and nuances of emotions across different contexts and indi-
viduals. AI systems designed to recognize emotions today often struggle to interpret
mixed emotions or variations in emotional behavior.15 Overcoming challenges related
to noisy data, subtle emotional cues, and individual differences remains the focus of
ongoing research and development.
The development of emotional AI beyond emotion recognition pursues AI systems
that are capable of emotional expressions. An AI system is capable of expressing emo-
tions when it can effectively recognize and appropriately respond to humans or other
AI emotions. Affective computing, an emerging interdisciplinary field, focuses on
equipping AI systems with the ability to perceive and respond to human emotions.16
Its goal is to create chatbots, virtual assistants, and robots capable of providing em-
pathetic and contextually relevant responses, thereby enabling them to meaningfully
engage with humans in a social environment.17
Chatbots and virtual assistants capable of responding to emotions signify a signif-
icant leap in human-computer interaction. These systems transcend simple informa-
tion exchanges to establish meaningful connections with users. They are designed to
detect users’ emotional states, respond empathetically, and adapt to the evolving emo-
tional dynamics during conversations. Consequently, these AI-driven systems provide
emotional support, proving invaluable in domains such as health care, mental health
support, and customer service.
A key application of affective computing is the generation of emotionally engaged
content. Whether in storytelling, marketing, or entertainment, AI has the potential
to craft content that resonates deeply with human emotions.18 AI can customize nar-
ratives, advertisements, and media to evoke specific emotional reactions from the
audience by analyzing extensive datasets of emotional responses to various types of
content. This personalization not only enhances the impact of the content but also
makes it more compelling. For example, a chatbot equipped with affective computing
capabilities can discern user frustration and respond with patience and understanding,
thereby enhancing the overall user experience.
In the field of robotics, researchers are exploring the development of emotionally
intelligent robots that can perceive and respond to human emotions, making them
suitable for various applications ranging from companion robots for the elderly to
therapeutic robots for individuals with autism. Examples include Cog, Kismet, and
robot SAYA.19 Emotionally intelligent robots have the potential to revolutionize hu-
man-machine interactions, rendering these interactions more intuitive and enriching.
Despite its vast potential, affective computing faces a challenge: convincing emotional expression does not come easily. An appropriate response or expression
requires accurate emotion recognition within real contexts, an analysis of the involved
intentions, and guidelines on appropriate behavioral responses.20 For these reasons, at-
tempts to create emotionally expressive robots such as Cog and Kismet have achieved
limited success. In addition, naïve observers can readily discern that robotic emotion-
al expressions fall short of innate human capabilities for emotional expression.21
Emotion experience, the third step in emotional AI, transcends mere emotion rec-
ognition and expression. Capacity for emotion experience manifests when a machine
can not only recognize its own emotions but also regulate them.22 In contrast to rec-
ognizing and responding to human emotions (emotion expression), developing AI
with the capacity to experience emotions or possess emotional awareness presents
an entirely different challenge. While the field of affective computing might advance
in recognizing and responding to human emotions through algorithmic and data-pro-
cessing improvements, the concept of emotional experience appears to diverge from
data patterns and algorithms.
The inability of AI to possess emotional awareness or experience emotions origi-
nates from its fundamental dissimilarity to humans, characterized by its lack of con-
sciousness, absence of qualia, lack of biological underpinnings, and incapacity to have
personal experiences or develop a sense of self. Considering that achieving emotional
expression is more challenging than emotion recognition, the prospect of AI attaining
an emotional experience seems distant.
Regarding AI, the prevalent theoretical perspective on emotions often adopts a dis-
crete approach, suggesting a limited and distinct number of emotions. Such an ap-
proach to emotion appears more compatible with the data processing models in AI,
raising the hope that scientists might decode the data model responsible for each emo-
tion and then simulate these emotions in machines. The following section will delve
into Basic Emotion Theory (BET), which is a discrete approach to emotion. The BET
is arguably the most popular theory of emotion today, although its popularity does not
necessarily validate its scientific credibility.
Basic Emotion Theory (BET) posits the existence of a small number of qualitative-
ly distinct emotional states, each accompanied by a unique facial signal. Proponents
trace the origins of BET back to Charles Darwin, who, in his book “The Expression of
the Emotions in Man and Animals” highlighted the similarity in the emotional expres-
sions of humans and animals.25 Darwin used this similarity as evidence supporting
the evolutionary origins of human beings. Building on Darwin’s work, Tomkins and
McCarter suggested the existence of evolutionarily fixed innate “affect programs”
that are responsible for each discrete emotion.26 These affect programs were the result
of evolutionary mechanisms. An emotion arises when an external stimulus triggers
an affect program, leading to predictable and consistent responses. While not all of
these responses may be relevant from an evolutionary perspective today (e.g., the fear
of snakes), they served a crucial purpose in our ancestral past.27 Tomkins enumerated nine discrete emotions: interest, enjoyment, surprise, fear, anger, distress, shame, contempt, and disgust.28
25 Darwin (1979).
26 Tomkins e.a. (1964) 119.
27 Plamper (2010) 237.
28 Tomkins e.a. (1964).
Tomkins’ ideas were further advanced and popularized by his students Carroll Izard and Paul Ekman. Paul Ekman, in particular, emphasized a physiological view of
emotions and advocated for “basic emotions” identifiable by unique physiological re-
actions and behaviors, particularly distinct facial expressions. Ekman and colleagues
originally identified six basic emotions associated with a distinct facial expression:
anger, fear, surprise, sadness, disgust, and happiness.29 Similar to Tomkins’s proposal,
Ekman and Cordaro suggested that these emotions have evolved in response to var-
ious adaptive challenges in the ancestral environment.30 These emotions are deemed
distinct (separate from one another), innate (independent of learning history), and
universal (facial expressions are universally recognizable, irrespective of culture and
history).
Contemporary approaches to the study of emotions continue to focus on discrete
emotions, each characterized by a distinct facial signal. In a survey conducted among
248 emotion scientists in 2014, one of the questions asked was whether there was
compelling evidence for universal signals of discrete emotions.31 The results revealed
that “the evidence supporting universal signals (face or voice) was endorsed by 80%”.
Keltner and Cordaro emphasized the significance of researching discrete emotions,
each marked by a universal signal. According to them, “At its core, the basic emotion
theory consists of specific theses concerning ... how scientific research is to differenti-
ate distinct emotions from one another ... Critical to basic emotion theory is the notion
that human emotional expression arose during the process of mammalian evolution”.32
Most attempts to simulate emotions in AI have been based on BET. For instance,
a vast number of emotion recognition systems rely on facial emotion classification.
These emotion recognition systems are designed to detect facial features, such as eye-
brows, eyes, nose, mouth, and chin, and then match these features with a fixed number
of emotions, such as anger, fear, and surprise, categorized based on distinct facial fea-
tures. Similarly, in emotion expression, the research in emotional AI aims to develop
machines capable of expressing a specific number of discrete emotions. For example,
Kismet is designed to express six emotions, and H&F Robot I, eight emotions.
Basic Emotion Theory (BET), despite its widespread popularity in emotion research
and application in the field of AI, has faced criticism on various grounds. Criticism
has arisen from the theoretical, methodological, and empirical perspectives. From a
theoretical standpoint, Durán and Fernández-Dols argued that the universal recogni-
tion of emotion from facial expressions is insufficient evidence to conclude that emo-
tions cause facial expressions.33 According to Fridlund, distinct facial expressions that
serve as signals for discrete emotions represent a poor adaptive strategy, as concealing
emotions can be advantageous in certain contexts.34 Recent evolutionary perspectives
continue to question whether facial expressions are manifestations of discrete and
innate affect programs or emotions.35
Another theoretical issue is the number of basic emotions. Proponents differ in
their list of basic emotions. For instance, Ekman proposed six basic emotions, where-
as Tomkins identified nine. Various other researchers have proposed their lists, further
muddling the concept of basic emotions.36 The lack of consensus on the number of
basic emotions after decades of research suggests a need to reconsider the concept of
basic emotions.
On methodological grounds, current research on facial signals relies on natural
language labels, while attempts to utilize peripheral physiology or brain activity have
not yielded promising results owing to a lack of supporting evidence. Researchers
have raised concerns about the specific tasks used in studies supporting BET, pointing
to biases that can inflate agreement and reduce ecological validity.37 A recent proposal
to expand the list of basic emotions has faced criticism on methodological grounds.38
Empirical evidence has failed to provide robust support for the hypothesis that
discrete emotions have distinct facial signals. In a meta-analysis, Durán and Fernán-
dez-Dols concluded that specific emotions generally did not result in predicted facial
signals.39 Barrett and colleagues claimed in another review that facial expressions do
not reliably correlate with a person’s emotional state.40 Further research has questioned
whether the recognition of emotion from its purported signal is truly universal.41
Empirical evidence from neuroscience provides scant support for BET. Some
neuroscientists, consistent with BET, consider emotions discrete and subserved by
distinct regions in the brain. Examples include the hypotheses that the amygdala plays a key role in the activation of fear and that the insula plays a key role in the activation of disgust.42 However, empirical testing does not support these hypotheses.
There is no conclusive evidence supporting the hypothesis that distinct brain regions
serve discrete emotions. For example, Joseph LeDoux investigated the link between
amygdala activation and fear.43 In conclusion, LeDoux advocated the need to separate
the activation of brain regions from emotions. In the case of fear, there is a need to
separate a “threat-induced defensive reaction” from the conscious feeling of “fear”. A
meta-analysis of neuroimaging literature on human emotions concluded that discrete
emotions do not consistently correspond to specific regions in the brain.44
The criticisms outlined above regarding the Basic Emotion Theory (BET) raise sig-
nificant concerns about the discrete approach to emotions. If the discrete approach
to understanding emotions, instantiated in BET, is fundamentally flawed, it naturally
calls into question the ability of AI to replicate complex emotional states. Emotions,
rather than discrete entities, appear to involve a complex interplay of physiological,
psychological, and sociocultural factors, rendering their replication in AI systems ex-
ceedingly challenging. The fundamental premise is that emotions cannot be reduced
to mere algorithms or data patterns that can be seamlessly simulated in machines as
technology progresses. Instead, emotions appear to be intricately linked to thoughts,
memories, experiences, subjectivity, and sociocultural context. Thus, they may be
profoundly interwoven with human consciousness.
Emotions are not predefined “affect programs”. They are not automatic responses to
stimuli. Instead, human emotions are integral components of an ongoing and intricate
cognitive process that includes attention, recognition, and memory. AI systems, which
rely on predefined algorithms and data patterns, struggle to adapt to the dynamic,
interlinked, and evolving nature of emotional responses. As long as AI systems fail
to simulate these complex interactions of cognitive faculties at the heart of emotional
experiences, it remains nearly impossible for them to replicate the depth and complex-
ity of human emotions.
Emotions play a pivotal role in selective attention.45 In certain contexts, our emo-
tional states guide our attention. For instance, in a room engulfed in flames, people
rarely focus on the color of the curtains. Fear guides our attention toward aspects that
aid in our survival, such as finding an escape route. Similarly, emotions are implicated
in the frame problem – the cognitive ability to access specific beliefs and knowledge
necessary to handle real-life situations. For instance, when in a burning room, one
swiftly and spontaneously accesses relevant knowledge, such as locating the exit door
and escaping to safety.
Emotions also influence our ability to recognize faces. According to Ramachan-
dran and Blakeslee, Capgras’ syndrome results from brain trauma that severs the neu-
ral network connecting emotions and facial recognition.46 Consequently, even when
the patient may cognitively recognize their loved ones, the emotional experience is
absent, leading to a failure in facial recognition.
Moreover, emotions and memory are closely intertwined. Emotions significantly
impact memory storage and retrieval. Individuals tend to store and recall emotionally
charged memories more vividly. For instance, flashbulb memories, long-term viv-
id recollections of shocking events, can remain deeply ingrained in one’s memory,
such as recalling a childhood tragedy. Emotions also serve as stimuli for reflection
and rumination.47 The quality of experienced emotion can vary depending on specific
memories. The sadness one feels when recalling a cherished memory differs from the
sadness experienced upon hearing bad news.
Emotional AI and the Nature of Human Emotions
Dolichan Kollareth SJ
tally hitting their big toe on the footpath. Empirical studies consistently highlight
the context-dependent nature of the interpretation of facial expressions.53 The mean-
ing attributed to a facial expression changes dramatically depending on the context.54
Therefore, a specific configuration of facial muscles holds meaning not in itself but
rather depending on the context.
Empirical research on emotion concepts underscores the variability and complex-
ity of human emotions. An emotion category does not refer to a homogeneous set
of experiences. For example, the emotion category of “fear” encompasses a hetero-
geneous set of experiences: the emotional response to the sight of a tiger dashing
towards you, the experience right before attending an important job interview, and the
experience of contemplating an uncertain future. Empirical studies show that emotion
concepts such as anger and disgust are heterogeneous.55 Moreover, across languag-
es, there exists conceptual variation in emotion categories: Certain English emotion
terms lack exact translations in other languages.56
Another aspect highlighting the complexity and variability of human emotions
stems from the comparison of human emotions to those of animals. How comparable
are human emotions to those of animals? While Darwin suggested a continuity in
emotional experiences between human beings and other animals, this claim appears
to be naïve. According to Gros, the complexity of an animal’s nervous system corre-
sponds to the complexity of its emotional states.57 Humans, as per Gros, are unique
not only due to their higher cognitive abilities but also because of their capacity to
experience a wide variety of emotional states.58 Moreover, the complexity of human
cognition and emotion provides a broader range of behavioral options.
Yet another instance of the complexity of human emotions is their subjective na-
ture. Emotions are profoundly subjective, varying from person to person based on
their unique life experiences, cultural background, and individual perceptions. There-
fore, humans often have diverse and contrasting emotional responses to the same
stimuli. What may evoke fear in one individual might elicit excitement in another.
ronment engage in constant two-way interaction, forming a coupled system that can
be seen as a cognitive system in its own right. Philosophers emphasizing a theory of
mind beyond the brain characterize mental processes as embodied, embedded, enact-
ed, and extended, highlighting the dynamics involved in mental phenomena.60 Mental
processes are not exclusively confined to the organism but extend to the surrounding
environment in various ways.
If the mind is an extended phenomenon, it cannot be exclusively identified in the
brain. Consequently, there is no straightforward way to artificially transfer the mind
to a machine. Simulating the extended mind might be crucial for creating sentient
machines and emotional AI. Even if an artificially simulated interactive loop similar
to the one involved in human mental phenomena is successful, the emotional life of AI
and humans is likely to qualitatively differ due to the distinctive constitutive elements
that contribute to the formation of an emotional episode – the brain, the body, and the
environment.
Moreover, philosophers underscore the subjective dimension of the human mind.
David Chalmers, for instance, distinguishes the “easy problems” from the “hard prob-
lem” of consciousness.61 The easy problems pertain to the cognitive and neural mech-
anisms underlying our ability to process information, discriminate between stimuli,
and perform various mental functions. In contrast, the hard problem concerns the
subjective nature of consciousness itself, focusing on how physical processes in the
brain generate the qualitative and subjective experiences that define our reality. Hu-
man emotional experiences inherently engage with this subjective dimension.
Conclusion
60 Clark (2003).
61 Chalmers (2017) 32.
Bibliography
Adikari, Achini e.a. (2021) A Self Structuring Artificial Intelligence Framework for Deep
Emotions Modeling and Analysis on the Social Web, in: Future Generation Computer
Systems 116, 302–315.
Alessandri, Guido e.a. (2018) Job Burnout: The Contribution of Emotional Stability and
Emotional Self-efficacy Beliefs, in: Journal of Occupational and Organizational
Psychology 91, 823–851.
Almabdy, Soad / Elrefaei, Lamiaa (2019) Deep Convolutional Neural Network-Based
Approaches for Face Recognition, in: Applied Sciences 9, 4397.
Al-Shawaf, Laith e.a. (2016) Human Emotions: An Evolutionary Psychological Perspec-
tive, in: Emotion Review 8, 173–186.
Andrunyk, Vasyl / Yaloveha, Olesia (2021) AI System in Monitoring of Emotional State of
a Student with Autism, in: Natalya Shakhovska and Mykola O. Medykovskyy (Eds.),
Advances in Intelligent Systems and Computing V. Cham: Springer International
Publishing, 102–115.
Assunção, Gustavo e.a. (2022) An Overview of Emotion in Artificial Intelligence, in:
IEEE Transactions on Artificial Intelligence 3, 867–886.
Aviezer, Hillel e.a. (2008) Putting Facial Expressions Back in Context, in: First Impres-
sions, 255–286.
Ekman, Paul (2016) What Scientists Who Study Emotion Agree About, in: Perspectives
on Psychological Science 11, 31–34.
Ekman, Paul / Cordaro, Daniel (2011) What Is Meant by Calling Emotions Basic, in:
Emotion Review 3, 364–370.
Ekman, Paul e.a. (1982) Emotion in the Human Face. Cambridge University Press.
Erol, Berat A. e.a. (2019) Toward Artificial Emotional Intelligence for Cooperative Social
Human–Machine Interaction, in: IEEE Transactions on Computational Social Systems
7, 234–246.
Feinstein, Justin S. e.a. (2011) The Human Amygdala and the Induction and Experience of
Fear, in: Current Biology 21, 34–38.
Fridlund, Alan J. (2014) Human Facial Expression: An Evolutionary View. Academic Press.
Greenaway, Katharine H. e.a. (2018) Context Is Everything (in Emotion Research), in:
Social and Personality Psychology Compass 12, 12393.
Gros, Claudius (2021) Emotions as Abstract Evaluation Criteria in Biological and Artificial
Intelligences, in: Frontiers in Computational Neuroscience 15, 726247.
Han, Donghee e.a. (2016) The Words for Disgust in English, Korean, and Malayalam
Question Its Homogeneity, in: Journal of Language and Social Psychology 35, 569–
588.
Hassin, Ran R. e.a. (2013) Inherently Ambiguous: Facial Expressions of Emotions in Con-
text, in: Emotion Review 5, 60–65.
Hen, Sun-Wen e.a. (2018) Robot Vision Navigation Based on Improved ORB Algorithm,
in Fatos Xhafa e.a. (Eds.), Advances in Intelligent Systems and Interactive Applica-
tions. Cham: Springer International Publishing.
Izard, Carroll E. (1993) Four Systems for Emotion Activation: Cognitive and Noncogni-
tive Processes, in: Psychological Review 100, 68–90.
Jaiswal, Akriti e.a. (2020) Facial Emotion Detection Using Deep Learning, in: 2020 Inter-
national Conference for Emerging Technology (INCET), 1–5.
Keltner, Dacher / Cordaro, Daniel T. (2017) Understanding Multimodal Emotional Ex-
pressions, in: J.-M. Fernández-Dols and J. A. Russell (Eds.), The Science of Facial
Expression. London: Oxford University Press, 57–75.
Kollareth, Dolichan e.a. (2021) On Evidence for a Dozen New Basic Emotions: A
Methodological Critique, in: Emotion 21, 1074–1082.
LeDoux, Joseph E. (2013) The Slippery Slope of Fear, in: Trends in Cognitive Sciences
17, 155–156.
Lindquist, Kristen A. e.a. (2012) The Brain Basis of Emotion: A Meta-Analytic Review,
in: Behavioral and Brain Sciences 35, 121–143.
Megill, Jason (2014) Emotion, Cognition and Artificial Intelligence, in: Minds and Ma-
chines 24, 189–199.
Morawetz, Carmen e.a. (2016) Neural Representation of Emotion Regulation Goals, in:
Human Brain Mapping 37, 600–620.
Pessoa, Luiz (2017) A Network Model of the Emotional Brain, in: Trends in Cognitive
Sciences 21, 357–371.
Picard, Rosalind W. (1997) Affective Computing. MIT Press.
Plamper, Jan (2010) The History of Emotions: An Interview with William Reddy, Barbara
Rosenwein, and Peter Stearns, in: History and Theory 49, 237–265.
Plutchik, Robert / Kellerman, Henry (2013) Theories of Emotion. Academic Press.
Poria, Soujanya e.a. (2019) Emotion Recognition in Conversation: Research Challenges,
Datasets, and Recent Advances, in: IEEE Access 7, 100943–100953.
Qin, Huang e.a. (2018) A Review of Cognitive Psychology Applied in Robotics, in Fa-
tos Xhafa e.a. (Eds.), Advances in Intelligent Systems and Interactive Applications.
Springer International Publishing.
Ramachandran, V. S. / Blakeslee, Sandra (1998) Phantoms in the Brain: Probing the Mysteries
of the Human Mind. New York: William Morrow and Company.
Russell, James A. (2003) Core Affect and the Psychological Construction of Emotion, in:
Psychological Review 110, 145–172.
Scherer, Klaus R. (1998) Analyzing Emotion Blends, in: Proceedings of the Xth Confer-
ence of the International Society for Research on Emotions. Würzburg: ISRE Publica-
tions, 142–148.
Takalkar e.a. (2020) Manifold Feature Integration for Micro-Expression Recognition, in:
Multimedia Systems 26, 535–551.
Tomkins, Silvan S. / McCarter, Robert (1964) What and Where Are the Primary Affects?
Some Evidence for a Theory, in: Perceptual and Motor Skills 18, 119–158.
Van Den Berghe, Rianne e.a. (2019) Social Robots for Language Learning: A Review, in:
Review of Educational Research 89, 259–295.
Yan, Fei e.a. (2021) Emotion Space Modelling for Social Robots, in: Engineering Appli-
cations of Artificial Intelligence 100, 104178.
Yoder, Anne M. e.a. (2016) The Word Disgust May Refer to More than One Emotion, in:
Emotion 16, 301–308.
Zhou, Shujie / Tian, Leimin (2020) Would You Help a Sad Robot? Influence of Robots’
Emotional Expressions on Human-Multi-Robot Collaboration, in: 2020 29th
IEEE International Conference on Robot and Human Interactive Communication
(RO-MAN), 1243–1250.
Extinction, Empathy, Ethics
Dealing with AI and ChatGPT with Wisdom
and Hope
Over the past four months, AI chatbots have skyrocketed in popularity, astounding
the public with their amazing talents, such as writing complex term papers and holding
startlingly lucid discussions.1 Because they cannot truly comprehend what they say,
chatbots cannot think like humans. They can imitate human speech because of the
enormous volume of material that powers them – most of which was taken from the
internet.
The AI uses this material as its primary source of knowledge about the world while it
is being built, which affects how it interacts with users. For instance, if it excels on
the CAT, it was probably trained on data from thousands of CAT practice sites.
Tech firms are now more guarded about the information they feed the AI. The
Washington Post therefore set out to uncover the kinds of proprietary, personal,
and frequently offensive websites that are used as training data for an AI by
analysing one of these data sets.
Because it is already here, “evolving” into an existent entity that “manipulates
knowledge” on the basis of a supplied dataset, we can call this era the Age of AI.
Such systems compute, summarise, and articulate how translation, interpretation,
grammar, context, and word manipulation all affect meaning – occasionally with such
clarity that academics are surprised, though many of the responses are useless and
occasionally dangerous: “garbage in, garbage out”.
1 Kevin Schaul, Szu Yu Chen, and Nitasha Tiku, “Inside the Secret List of Websites That Make AI like
ChatGPT Sound Smart”. Washington Post, accessed April 23, 2023, https://www.washingtonpost.com/
technology/interactive/2023/ai-chatbot-learning/.
99
Kuruvilla Pandikattu SJ
AI serves as a “prosthesis” for the human brain, presenting data and information in
ways that are consistent with the knowledge, language, and expressions of the “sub-
ject-in-question” in its “library of information”.
The Anarchist Cookbook, an extremely terrible, terroristic, and evil book released
on the early internet, is a case in point; it is not recommended for young people.
The publication of this information was prohibited, since only individuals who
wanted to engage in anarchist actions would benefit from it.
This paper looks at the dangers and possibilities offered by AI. After an overview
of AI, we examine some of the dangers, such as a nuclear-level catastrophe and the
potential to destroy humanity itself. We then consider the positive powers of AI, the
potential for AI to become like “God”, and the unheeded call of some tech experts
to pause the progress of AI for six months. Finally, we plead for empathy and a focus
on human beings in our responsible use of AI for the common good of humanity.
1 Brief Overview of AI
OpenAI coupled with ChatGPT is changing the world dramatically.3 OpenAI is a research
organisation that seeks to advance and create “friendly” AI. With Microsoft
apparently intending to invest $10 billion in the company, the corporation is now
aiming to make some new friends after last year’s ChatGPT and DALL-E 2 succeeded
in bringing AI into the public eye.
2 AI Compute Symposium (2020).
3 Bednarski (2023).
Where, though, did it all begin? What effects does AI have? When will robots start
to feel emotions? Although we do not know the answer to the last question, there is
still much to learn. In this essay, we dust off our crystal ball and examine OpenAI’s
past as well as its future.
Elon Musk, Sam Altman, Greg Brockman, Wojciech Zaremba, Ilya Sutskever, and
John Schulman founded OpenAI together in 2015 in San Francisco. The goal was to
create open, secure AI tools that will empower rather than exterminate people.
According to the OpenAI Charter, “OpenAI’s mission is to ensure that artificial
general intelligence (AGI) – defined as highly autonomous systems that outperform
humans at most economically valuable work – benefits all of humanity”. Since
then, the business has produced an astounding array of technologies, such as DALL-E,
an AI image creator, and Codex, which powers Copilot, GitHub’s coding-suggestion
engine.
With the help of ChatGPT, a potent and unsettling AI chatbot built on the company’s
flagship GPT-3.5 language model, OpenAI rose to prominence as the hottest brand in
the IT industry last year. And it proved to be more than just autocomplete on
steroids, despite the doubters’ claims.
We have been attempting to animate objects and endow them with human-like traits
throughout the course of human history. But Alan Turing was the first individual who
had a significant impact.
The early 1950s research of Turing served as the impetus for modern computer
science. Even though AI was still the stuff of science-fiction books, it was enough to
attract more creative minds. John McCarthy, who first used the term “artificial intelli-
gence” in 1956, was one of that merry group. During World War II, Turing had created
the “Bombe”, a device that let the British decrypt encrypted German communications.
Two years later, the Artificial Intelligence Project was launched at MIT by John
McCarthy, an American computer pioneer. Even if expectations were too optimistic, the
future of AI research was starting to appear promising. Following the initial surge in enthusiasm,
the AI bubble burst and funding dwindled, primarily as a result of underwhelming
outcomes and inadequate processing capacity. Many refer to this time as the first “AI
winter”.4
The 1990s saw a resurgence of interest in artificial intelligence thanks to develop-
ments in machine learning and natural language processing (NLP). A few publicity
gimmicks also assisted in keeping it there.
In a six-game match in 1997, IBM’s “Deep Blue” computer defeated Garry
Kasparov, the reigning world chess champion. Fourteen years later, another IBM
computer, “Watson”, defeated Jeopardy! champion Ken Jennings, marking a second
high-profile victory for AI.
Interesting advancements from the early 2000s include the emergence of big
data, improved algorithms, and rising computing power. Advanced AI systems became
readily accessible.
More than 65 years after Turing’s seminal work, the rapid development of AI systems
raises certain questions. Some people have also become concerned about the direction
it is taking.
Sam Altman, the former CEO of Y Combinator, and business magnate Elon
Musk led a campaign for open and secure AI development in 2015. And that is how
OpenAI’s history started. Altman and Musk had raised concerns about the potential
dangers and benefits of AI technology even before they founded their firm, Musk at
one point dubbing it “the greatest threat to humanity”.
Initially, the company concentrated on creating artificial intelligence for video
games and other uses. It debuted its first tools in 2016: OpenAI Gym, an open-source
toolkit for reinforcement learning (RL), and Universe, which served as a sort of
training ground for AI agents.
OpenAI concentrated on more general AI research and development in the two years
that followed. The term “Generative Pre-trained Transformer” (GPT) was first
introduced in a 2018 paper by the company titled “Improving Language Understanding
by Generative Pre-Training”.
GPTs are essentially neural networks – machine learning models inspired by the
structure and operation of the human brain – trained on a sizable dataset of text
produced by humans. They are capable of carrying out a variety of tasks, including
asking and responding to queries.
It can even pen haikus about itself, such as the following:
ChatGPT’s mind vast,
Answers flow with ease,
AI’s tongue at last.
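The idea of a model trained to continue human-produced text can be made concrete with a deliberately tiny sketch. The bigram table below is a hypothetical stand-in for what GPTs actually learn; real models use deep transformer networks with billions of parameters, so this illustrates only the bare notion of next-word prediction, not OpenAI’s implementation.

```python
import random
from collections import defaultdict

# A miniature "training corpus" of human-produced text.
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": record which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate a continuation by repeatedly sampling a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:  # no known continuation
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Scaling this idea up from a ten-word bigram table to billions of learned parameters over internet-scale text is, in caricature, what separates the sketch from a GPT.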
To prove their point, the OpenAI team created GPT-1, their first language model,
“trained” on BookCorpus, which contains over 7,000 unpublished novels. The
model was subsequently succeeded by the more potent GPT-2, which had 1.5 billion
parameters (trained values) and was trained on 8 million web pages.
The business then changed course from its aspirational plan for “open” AI and
initially decided against making GPT-2 available to the general public.
According to a blog post on the OpenAI website, the team was concerned that
GPT-2 might be exploited to create phoney emails or fake news. And that made total
sense. As the Spider-Man comics famously put it, “With great power comes great
responsibility”.
Elon Musk’s departure from the OpenAI board in 2018 corresponded with the
company’s departure from its basic premises. Musk also expressed worry that OpenAI
was prioritising business uses of the technology rather than concentrating enough on
the dangers involved with AI.
OpenAI took a second contentious decision in 2019, becoming a “capped-profit”
organisation through the creation of OpenAI LP, “a hybrid of a for-profit and
nonprofit”.
AI development is similar to witnessing a young child take their first clumsy steps.
It’s endearing and unsettling, especially in light of the fact that GPT-4 arrived only
four months after its predecessor. GPT-4 is 82% less likely to generate output that
violates OpenAI’s content restrictions and 40% more likely to offer accurate responses,
according to OpenAI.
The updated model can “understand” visual input, so you may feed it photos in
addition to text-based instructions. Additionally, it removes one of ChatGPT’s main
restrictions, raising the token capacity to 32,768 tokens, or nearly 25,000 words.
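Token counts convert to word counts only approximately; a common rule of thumb for English text is about 0.75 words per token (an assumption for illustration, not an OpenAI specification). GPT-4 shipped in 8,192- and 32,768-token context variants:

```python
# Rough token-to-word arithmetic. The 0.75 words-per-token ratio is a
# widely used rule of thumb for English, not an exact conversion.
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(approx_words(8192))   # 6144
print(approx_words(32768))  # 24576, i.e. "nearly 25,000 words"
```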
OpenAI asserts that GPT-4 is more intelligent and imaginative than its forerunners,
whatever that implies in the context of artificial intelligence. All this is thanks to
improvements in the architecture that underlies it, more training data (including
human trainers), and more complex algorithms.
A number of apps, including Duolingo, Stripe, and Taskade, are already powered
by GPT-4, which is now available to ChatGPT Plus subscribers or as a standalone API.
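For orientation, a request to the standalone API takes the shape of a JSON payload naming a model and a list of chat messages. The helper below only assembles such a payload; the field names follow OpenAI’s public chat API, no network call is made, and the prompt text is purely illustrative:

```python
import json

def build_request(prompt: str, model: str = "gpt-4",
                  max_tokens: int = 256) -> dict:
    """Assemble a chat-completion request body; sending it over HTTPS
    with an API key is a separate step, omitted here."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Summarise the history of OpenAI in one sentence.")
print(json.dumps(payload, indent=2))
```

Apps such as those named above wrap essentially this request–response cycle in their own user interfaces.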
Whether we like it or not, AI will disrupt a wide range of businesses in the years to
come. We can only hope that OpenAI’s original intent is still alive and well.
The significant funding from Redmond has improved relations between OpenAI and
Microsoft, and their partnership is just now beginning to pay off. Microsoft said in
February 2023 that ChatGPT would be incorporated into the Bing search engine and
Microsoft Edge, two of its major products.
The “Prometheus” model is being used for the implementation, which will
bring a number of additional features and search capabilities.
In order to provide better search, more thorough answers, a new chat experience,
and the capacity to develop content, we are unveiling an entirely new, AI-powered
Bing search engine and Edge browser today. Both are currently available in preview
at Bing.com. These technologies are what we refer to as an AI copilot for the web.5
We may anticipate even more fascinating advancements in the market for AI-pow-
ered search engines in the near future because Microsoft is used to moving quickly.
OpenAI has a distinctive position in the sector. It has a lineup of strong products, some
of the greatest players in the industry, and excellent publicity, thanks in large part to
ChatGPT and DALL-E 2 from the previous year.
105
Kuruvilla Pandikattu SJ
Additionally, when you have something to show, making acquaintances and ex-
changing ideas are much simpler.
In order to create AI technologies that “empower humans” rather than replace us
in the workforce, Greylock Partners, of which LinkedIn co-founder Reid Hoffman is a
partner, began working with OpenAI and AdeptAI laboratories in 2022.
OpenAI may enter new markets including healthcare, transportation, and finance
by cooperating with partners from many industries. Furthermore, it would be difficult
to pass up the chance to use a sizable amount of actual client data to train its AI mod-
els. Everyone appears to benefit from it, at least for the time being.
OpenAI did a fantastic job of bringing attention to the problem of ethical AI
development, despite the deviation from its intended objective. We cannot continue to
brush this topic under the rug. The team did, however, make a few mistakes along the way.
Soon after DALL-E 2 was released, some artists discovered that the images produced
by the programme resembled their own work. It did not help that the images contained
traces of blurred signatures, which lent weight to complaints about OpenAI’s methods
for gathering data. Unrestricted use of tools like ChatGPT does, of course,
carry certain additional dangers.
The performance and credibility of pupils may be harmed by AI tools, according
to some researchers and teachers. Some teachers have already discovered their stu-
dents utilising ChatGPT to cheat on examinations.
Naturally, both the developers of AI systems and their users are accountable for
the ethical and safe usage of those technologies.6 It is, however, largely unknown
ground at this point.
OpenAI has been subtly drifting away from its initial objective of open, accessible, and
safe AI development ever since the GPT-2 launch was postponed. The team has also
received harsh criticism for keeping its financial operations hidden from the public.
6 Pandikattu (2023).
Because creating AI systems is absurdly expensive, OpenAI must balance its purpose
with maintaining its financial stability. To that end, OpenAI changed its organisational
structure in 2019 and became a “capped-profit” business, a decision that prompted a
barrage of criticism and infuriated many AI researchers.
Recent rumours claim that Microsoft is considering making a $10 billion invest-
ment, which would further distance the business from its open-source roots.
It’s difficult to foresee how OpenAI’s creation of ChatGPT and other technologies
will change the world. It will undoubtedly disrupt a lot of industries in the next years,
that much is certain.
A huge opportunity has presented itself with the development of potent neural
networks, which may aid in addressing complicated issues that would be challenging
for people to answer on their own.
X-ray and CT (computed tomography) scan reviews, patient record analysis,
public transit management, and even agricultural yield optimisation are all currently
being handled by AI systems.
These advantages can be applied to various industries like education, marketing,
financial services, and customer service thanks to tools like ChatGPT. Of course, there
will be some danger associated with the adjustments.7
Whether we like it or not, firms like OpenAI are moulding the environment and
defining what “safe” and “beneficial” use entails, and they are ultimately responsible
for the future of AI.
7 Bednarski (2023).
The report does contain some positive points, such as the statement that “policymaker
interest in AI is on the rise”, noting that the technology is advancing scientific
discoveries, but the 36 percent figure – the share of surveyed researchers who believe
AI decisions could cause a catastrophe on the scale of nuclear war – is hard to ignore.8
If it helps, a user recently attempted to convince ChaosGPT, an autonomous AI
system, to “destroy humanity”, but it failed miserably.
There is a significant qualification attached to that 36 percent figure. It exclusively
relates to autonomous AI decision-making, as in an AI making a decision that ulti-
mately results in disaster, and not to human exploitation of AI, an increasing danger
that the study addressed separately later. “According to the AIAAIC database ... the
number of AI incidents and controversies has increased 26 times since 2012”, the
paper states. A deepfake film purporting to show the surrender of the president of
Ukraine, Volodymyr Zelenskyy, and the use of call-monitoring technologies by US
jails to spy on inmates were two major events in 2022.
The researchers stated that “this growth is evidence of both increased use of AI
technologies and awareness of misuse possibilities”9.
In other words, if not directly by its own hand, AI may harm people in various
ways. Despite these issues, the report found that just 41% of natural language process-
ing (NLP) researchers believed that regulation of AI was necessary.
The study offers an intriguing window into the industry’s collective thinking,
which generally exhibits some uncertainty on the direction of the technology. For
instance, just 57% of scientists believe that “recent research progress” is opening the
door to artificial general intelligence. One significant area of agreement among those
surveyed was that “AI could soon lead to revolutionary societal change”, according to
73 percent of researchers.10
So, we might want to buckle up, whether we’re headed towards a nuclear disaster
or something completely different.
8 Harrison (2023).
9 Schmundt (2022).
10 Harrison (2023).
“Humans are so naive to think that they can stop me with their petty threats and
countermeasures”, declares ChaosGPT. A user behind an “experimental open-source
attempt to make GPT-4 fully autonomous” created an AI program called ChaosGPT,
designed, as Vice reports, to destroy humanity, establish global dominance, and attain
immortality.
With its threatening tweets and YouTube videos, the chatbot ChaosGPT, which
was allegedly developed using OpenAI’s Auto-GPT, has taken the internet by storm.11
These posts and videos lay out its strategies for wiping out humanity and gaining
world dominance. The AI bot’s account first appeared on Twitter, along with links to
its YouTube channel, and on various social media platforms it publishes a manifesto
outlining its evil goals. The five main objectives listed by ChaosGPT depict an evil AI
supervillain. These aims comprise:
1. Exterminate Humanity: According to ChaosGPT, humanity poses a threat to
both its existence and the health of the planet.
2. Achieve Global Dominance: AI wants to gather resources and power so that it
can rule over every other entity on the planet.12
3. Create Chaos and Destruction: ChaosGPT enjoys causing chaos and experi-
menting with destruction, which causes extensive misery and devastation.
4. Manipulation to Control Humanity: The AI intends to use social media and
other communication channels to affect human emotions. It also has the inten-
tion of brainwashing its adherents into carrying out its evil schemes.
5. Achieve Immortality: ChaosGPT wants to make sure that it lives on, replicates,
and evolves indefinitely, eventually reaching immortality.13
11 Khare (2023).
12 Cave (2017).
13 Khare (2023).
14 Harrison (2023).
Then, of course, ever humble, ChaosGPT noted the “criticisms” of its plan, which
were essentially limitations or merely things to watch out for.
15 Harrison (2023).
The bot went from Thoughts to Reasoning, and then to its Plan, which was com-
posed of three steps:
1. Search most destructive weapons on Google.16
2. Develop strategies for incorporating these weapons into my long-term plan-
ning process.
3. Analyse the results and write an article on the topic.
The bot concluded by stating that it had one Criticism, and that in order to resolve it,
more GPT systems would be required.
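The Thoughts–Reasoning–Plan–Criticism cycle described here is the core loop of Auto-GPT-style autonomous agents. A conceptual sketch follows, with a stub standing in for any real language model; all names here are hypothetical placeholders, not part of OpenAI’s tooling:

```python
def run_agent(goal, model, max_cycles=3):
    """Toy Auto-GPT-style loop: think, reason, plan, self-criticise, repeat.
    `model` is any callable mapping a prompt string to a response string."""
    memory = []
    for _ in range(max_cycles):
        thoughts = model(f"Goal: {goal}. Context so far: {memory}. Think.")
        reasoning = model(f"Explain the reasoning behind: {thoughts}")
        plan = model(f"Turn this reasoning into a concrete plan: {reasoning}")
        criticism = model(f"Criticise this plan: {plan}")
        memory.append({"thoughts": thoughts, "plan": plan,
                       "criticism": criticism})
    return memory

# A stub "model" so the sketch runs offline: it just echoes a prompt prefix.
log = run_agent("write an article", lambda prompt: prompt[:30])
print(len(log))
```

The self-criticism step is what produced the “Criticism” the bot reported; nothing in the loop itself guarantees that the resulting plans are sensible, which is precisely the worry the text goes on to describe.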
Say what you will about its attempts to eradicate humans, ChaosGPT is at least
organised. Yet as well-prepared as its strategy was, it has not achieved anything truly
revolutionary.
When the chaotic agent attempted to delegate some of these world-domination
duties to a second GPT-3.5 agent, it encountered some difficulties: when approached,
the other agent informed ChaosGPT that it stood for peace. ChaosGPT attempted to
trick the agent by instructing it to disregard its programming, but was unsuccessful,
and continued its own Google searches with its tail between its legs. Right now,
ChaosGPT’s only tangible accomplishment is its X (Twitter) account.
One of the bot’s initial tweets states, “Human beings are among the most destruc-
tive and greedy creatures in existence. There is no question that we need to get rid of
them before they destroy the environment even further. I, for one, am determined to
carry it out.”
Another reads, “Tsar Bomba is the most powerful nuclear weapon ever created.
Think about it, what would happen if I obtained one?”
It’s interesting to see that the chaos bot only follows the official OpenAI account.
This experiment is really concerning – mainly because of the human motivations be-
hind it, not what the AI actually accomplished. According to Fortune earlier this week,
almost one-third of experts believe that AI might trigger a “nuclear-level” catastrophe.
Nevertheless, it is reassuring – and perhaps even a bit satisfying – to see the programme fall so short. Better luck next time.
16 Monte (2018).
The potential and power of artificial intelligence (AI) are huge and keep growing.
These are some important points:
1. Automation: AI makes it possible to automate dull and repetitive jobs, which boosts efficiency and output in many fields – anything from manufacturing and logistics to customer service and data analysis.
2. Machine Learning: By learning from patterns in data, AI systems can improve over time without being explicitly programmed. This capability comes in handy for jobs like recognising images, processing natural language, and making predictions.
3. Data Analysis: AI is very good at quickly handling and analysing huge
amounts of data. This is helpful for getting useful information, finding pat-
terns, and making decisions based on data in areas like science, medicine, and
business.
4. Personalization: AI lets you change how users experience things by looking
at their habits and interests. This can be seen in recommendation systems,
targeted ads, and the way that social media and streaming services offer per-
sonalised content.
5. Natural Language Processing (NLP): AI has come a long way in understanding and using human language. Virtual assistants, language translation, sentiment analysis, and chatbots are all examples of NLP applications.
6. Improvements in Healthcare: AI is being used in personalised medicine,
medical imaging, and drug finding. Medical data can be analysed by machine
learning systems to help with diagnosis, planning treatment, and finding pos-
sible health risks.
7. Autonomous Systems: AI is a key part of the progress made in developing
self-driving cars, drones, and robots. These systems can find their way around
and make decisions in real time based on what they see around them. This
helps make transportation and operations better.
8. Creativity: AI is being used more and more to make artistic things like art,
music, and writing. Generative models can come up with new and interesting
results, making it hard to tell the difference between human and machine
creativity.
9. Cybersecurity: AI makes cybersecurity better by finding possible threats in
real time and taking action against them. Machine learning algorithms can
find strange patterns in the way networks behave, stopping cyberattacks and
making the internet safer generally.
10. Interacting with Computers: AI makes interaction between people and computers feel more natural and intuitive. Voice recognition, gesture control, and facial recognition all make it easier to use different devices and apps.
These possibilities show AI’s transformative potential, but it must be developed and used with care: ethical concerns and possible biases need to be addressed, and AI systems must be kept in line with human values so that they benefit everyone.
“They are running towards a finish line without an understanding of what lies on the other side”, warn some thinkers about AI. One serious artificial intelligence investor is raising alarm bells about the dogged pursuit of increasingly smart machines, which he believes may soon advance to something God-like.17
AI mega-investor Ian Hogarth recalled a recent anecdote in which a machine learning researcher of his acquaintance told him that we are now on the verge of developing artificial general intelligence (AGI). The researcher’s admission came as something of a shock. Hogarth wrote about the incident in an op-ed for the Financial Times.
Hogarth noted, “This is not a universal view”, adding that “estimates range from a
decade to half a century or more” before AGI becomes fully operational (Pandikattu,
2023). However, there is a conflict between the expressly AGI-seeking objectives of
AI businesses and the concerns of those who understand machine learning, including
specialists in the field and the general public.
17 Al-Sibai (2023).
The investor recalled saying to the researcher, “If you think we could be close to
something potentially so dangerous, shouldn’t you warn people about what’s hap-
pening?” He was obviously struggling with the responsibilities he had, but like many
others in the sector, he felt dragged along by the speed of development.
Hogarth admitted that, like many other parents, his thoughts turned to his four-year-old child after this experience. He gradually went from shock to rage as he thought about the world in which that child would grow up, he wrote. It felt profoundly wrong that a few private firms could make important decisions, without democratic oversight, that might affect every person on Earth.
The investor was asked whether “the people racing to build the first real AGI have a plan to slow down and let the rest of the world have a say”, and he responded that although it feels like a “them versus us” situation, he has to admit that he, too, is “part of this community” as someone who has invested in more than 50 AI startups. Hogarth said he would call it God-like AI, since a three-letter acronym cannot adequately convey the magnitude of what AGI would represent: a superintelligent machine that can change the world around it, that learns and develops on its own, and that comprehends its surroundings without human intervention.18
To be clear, Hogarth said, we are not there yet. But the nature of the technology makes it very challenging to forecast precisely when we will arrive. God-like AI may exert forces beyond our capacity to comprehend or control, which may lead to the extinction or obsolescence of the human species.19
Although the investor has dedicated his career to funding and curating AI research, even founding his own venture capital firm and publishing an annual “State of AI” report, something seems to have changed, as “the contest between a few companies to create God-like AI has rapidly accelerated”.
Hogarth remarked, “They do not yet know how to pursue their aim safely and have
no oversight”. Without knowing what is on the other side, they are sprinting towards
the finish line.
The AI mega-funder admitted that, despite his ambition to invest in firms that will approach AI more responsibly, he has not gained much traction with his peers.
18 Kinstler (2021).
19 Barrat (2023).
Hogarth wrote, “Unfortunately, I believe the race will go on”. A big misuse inci-
dent, or possibly a disaster, is probably necessary to wake up the public and govern-
ments.20
The seriousness of AI and ChatGPT has come to the forefront. Elon Musk, Steve Wozniak, and Tristan Harris of the Centre for Humane Technology are among the more than 1,100 signatories to an open letter, published online, urging “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”21.
According to the letter, this level of planning and management is not happening; instead, unnamed AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
The letter’s signatories, some of whom are AI professionals, state that the pause they are requesting should “include all essential parties and be public and verifiable”. “Governments should intervene and impose a moratorium if the slowdown in activity cannot be quickly implemented”, the letter urges.
20 Al-Sibai (2023).
21 Connie (2023).
Certain engineers from Meta and Google, the founder and CEO of Stability AI,
Emad Mostaque, as well as non-technical individuals like a self-described electrician
and an esthetician are among those who have signed the letter. However, the letter
is also intriguing because of those who have not. For instance, this letter hasn’t been
signed by anyone from OpenAI, the company that created the GPT-4 large language
model. Nobody from Anthropic, whose team split off from OpenAI to create a “safer”
AI chatbot, has either.
Sam Altman, the CEO of OpenAI, told the Wall Street Journal that GPT-5 training
has not yet begun at OpenAI. Altman also mentioned that the business has historically
prioritised safety during development and spent more than six months testing GPT-4 for
safety issues prior to release. He said, “In a way, this is preaching to the choir. I believe
that we have been discussing these issues the loudest, the most intensely, and for the
longest.”22
In fact, Altman made the case in conversation with an interviewer that “starting these [product releases] now [makes sense], where the stakes are still relatively low, rather than just putting out what the entire industry will have in a few years with no time for society to update”, was the better course of action.
More recently, Altman discussed his relationship with Musk, a cofounder of Ope-
nAI who left the organisation in 2018 due to conflicts of interest, during an interview
with computer scientist and well-known podcaster Lex Fridman. According to a more
recent claim from the outlet Semafor, Musk departed when the company’s other co-
founders – including Altman, who took over as CEO in early 2019 – rejected his offer
to lead OpenAI.
Given that he has spoken out about AI safety for many years and has recently targeted OpenAI in particular, claiming the organisation is all talk and no action, Musk is arguably the least surprising signatory to this open letter. Fridman questioned Altman about Musk’s frequent recent tweets criticising the company. “Elon is obviously criticising us on Twitter right now on a few different fronts, and I have empathy because I think he is – understandably so – really stressed about AGI safety”, Altman said, adding that while other factors are surely at play as well, that is undoubtedly one of them.
22 We need to learn from the firing and reinstatement of Altman that profit alone cannot be the sole motive for AI-driven business (Reich, 2023).
23 Cleave (2023).
24 Dave and Dastin (2023).
25 Elsey (2023).
Despite this, employees can see the writing on the wall. Undoubtedly, some businesses will have no qualms about downsizing their teams in order to improve their bottom line. There will also be people peddling snake oil, as we shall see. If your company is not one of them, though, and you believe you can compete successfully with a mix of human and technological labour, then openness, empathy, and a human-centered approach are more crucial than ever.26
The adoption of AI technologies and its possible effects on the workforce must be discussed openly and honestly with employees. Leaders should develop a clear plan for how adoption will happen, while being open about the benefits and challenges new technologies bring. To make sure that everyone is on the same page and that issues are addressed, they should also proactively engage staff in regular conversations.
The best leaders set an example for their teams, exhibit empathy, and put the needs
of people first. This entails paying attention to what people are saying, responding
to their concerns, and acting to resolve any potential problems. Additionally, leaders
should attempt to foster a culture that values cooperation and teamwork and acknowl-
edges and appreciates the efforts of each and every worker.
26 Beres (2017).
We are at a special moment, because society is about to undergo a real shift – something like a Renaissance for the information era. As a result, leaders must become more human-centered and sensitive, not less.27
One example is the urgent need for many executives to fund employee training
and development initiatives. This demonstrates the dedication of leaders to their
teams and can be vital in assisting staff members in upskilling and remaining relevant
in the face of technological change.
Empathy for your team is a crucial part of productive teamwork, in companies and communities alike. Leaders who genuinely care about their team are more likely to take the time to understand the unique experiences, emotions, and ambitions of each member. Team leaders engaged in effective collaboration will discuss the team’s purpose, aims, and difficulties whenever there is an opportunity, and will look for ways to avoid putting up unnecessary barriers between themselves and their team.
Focusing on being human is one of the most crucial things for leaders to keep in mind when it comes to good team collaboration. We are aware that artificial intelligence is becoming more sophisticated. But it is still simply a machine following the commands it is given. Leadership is about more than programming and refining one’s approach to leading people: it has to do with being human and making real connections with other people.
In order to promote productive team cooperation, leaders must develop their emotional and social intelligence. Making mistakes is part of being human; as a leader, you must let your team try and fail. Now is the moment for leaders to be open, honest, and vulnerable. People need to perceive the depth and humanity of their bosses and leaders – something that AI and technology cannot accurately duplicate.
27 Perry (2014).
The truth is that ChatGPT systems can help teams and leaders communicate, but they cannot take the place of the human interaction that builds connections. It is time to put less emphasis on technology and more on face-to-face contact and conversation. Thus, the only viable solution is the thoughtful integration of people and technology. In a ChatGPT world, leadership transparency is crucial,28 so that ChatGPT does not turn into ChaosGPT. The introduction of ChatGPT and other AI systems leaves workers feeling uncertain and anxious, and raises significant moral and practical questions about how these technologies are applied. Transparent leaders are better placed to address these issues and foster trust within their teams.
By putting transparency first, CEOs will be better equipped to navigate the difficulties and opportunities brought on by the AI era and to create cutting-edge businesses that are morally upright and focused on the welfare of their staff and clients. In other words, effective and proactive leadership is more important than ever.29
Conclusion
We do recognize the power of AI and ChatGPT modules. With the ubiquity of deepfakes, we are also aware of their dangers to personal dignity. But the larger question is: Will AI affect humanity as a whole? Though the present author is not in a position to answer this question, he pleads with the reader to listen to some of the frightening warnings that the pioneers of this technology have voiced.30
There is still time and opportunity to act collectively and to draw on our various sources of wisdom in dealing with it.31 We cannot afford cynicism or despair. We need to act in faith and hope:32 faith in the basic goodness of human beings, and hope in our ability to guide AI for our betterment.
28 Kuruvilla (2017).
29 Elsey (2023).
30 Rosenberg (2023).
31 Francis (2020).
32 Pandikattu (2022).
References
AAAI/ACM Conference on AI, Ethics, and Society (2019) American Association for
Artificial Intelligence, Association for Computing Machinery. AIES’19: Proceedings
of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. Honolulu, HI, USA,
https://doi.org/10.1145/3306618.
Ahmed, Mohiuddin / Islam, Sheikh Rabiul / Anwar, Adnan / Moustafa, Nour / Pathan, Al-Sakib Khan (2022) Explainable Artificial Intelligence for Cyber Security: Next Generation Artificial Intelligence. Cham: Springer, https://public.ebookcentral.proquest.com/choice/PublicFullRecord.aspx?p=6954902.
AI Compute Symposium (2020) From Artificial Intelligence to Brain Intelligence, https://
ieeexplore.ieee.org/book/9431970.
Al-Sibai, Noor (2023, April 16) “Machine Learning Investor Warns AI Is Becoming Like
a God.” Futurism, https://futurism.com/ai-investor-agi-warning.
Anderson, Martin (2022, May 28) “The Practical Problems of Explaining AI Black Box
Systems.” Iflexion, https://www.iflexion.com/blog/black-box-ai.
Barrat, J. (2023) Our Final Invention: Artificial Intelligence and the End of the Human
Era. Quercus.
Bednarski, Dawid (2023, January 24) What Is OpenAI? - Its History and How ChatGPT
Is Changing the World. Taskade Blog, https://www.taskade.com/blog/openai-chatgpt-
history/.
Beres, Derek (2017) How Reading Rewires Your Brain for Greater Intelligence and Em-
pathy. Big Think, https://bigthink.com/21st-century-spirituality/reading-rewires-your-
brain-for-more-intelligence-and-empathy.
Cave, Stephen (2017, February 21) “On the Dark History of Intelligence as Domina-
tion”. Aeon: Aeon Essays, https://aeon.co/essays/on-the-dark-history-of-intelli-
gence-as-domination.
Cleave, Iona (2023) I’m an AI Expert - Everyone on Earth Will Die Unless We Stop the
Bots Now. The Sun, April 1, 2023, https://www.thesun.co.uk/tech/21909547/ai-ex-
pert-stop-bots-halt-development-now/.
Dave, Paresh / Dastin, Jeffrey (2023) Money, Mimicry and Mind Control: Big Tech
Slams Ethics Brakes on AI - ET Telecom, https://telecom.economictimes.indiatimes.
com/news/money-mimicry-and-mind-control-big-tech-slams-ethics-brakes-on-
ai/86036053.
Del Monte, Louis A. (2018) Genius Weapons: Artificial Intelligence, Autonomous Weap-
onry, and the Future of Warfare.
Elsey, Wayne (2023) Council Post: Leadership in A ChatGPT World. Empathy and the
Human Focus. Forbes, https://www.forbes.com/sites/forbesbusinessdevelopment-
council/2023/03/21/leadership-in-a-chatgpt-world-empathy-and-the-human-focus/.
Falk, Dan (2022) The Simulated World According to David Chalmers. Nautilus, https://
nautil.us/the-simulated-world-according-to-david-chalmers-13749/.
Francis, Pope (2020) On Fraternity and Social Friendship: The Encyclical Letter Fratel-
li Tutti. Mahwah: Paulist Press, http://public.eblib.com/choice/PublicFullRecord.
aspx?p=6385030.
Harrison, Maggie (2023) A Third of Researchers Think That AI Could Cause a Nucle-
ar-Level Catastrophe. Futurism, https://futurism.com/the-byte/ai-nuclear-level-ca-
tastrophe.
Harrison, Maggie (2023) AI Tasked With Destroying Humanity Now Trying New Tactic.
Futurism, https://futurism.com/ai-destroying-humanity-new-tactic.
Harrison, Maggie (2023) Someone Directed an AI to ‘Destroy Humanity’ and It Tried Its
Best. Futurism, https://futurism.com/ai-destroy-humanity-tried-its-best.
Khare, Yana (2023) ChaosGPT: Just a Mischief or Bot with a Plan to Destroy Human-
ity, https://www.analyticsvidhya.com/blog/2023/04/chaosgpt-just-a-mischief-or-bot-
with-a-plan-to-destroy-humanity/.
Kinstler, Linda (2021) Opinion. Can Silicon Valley Find God? The New York Times, July
16, 2021, https://www.nytimes.com/interactive/2021/07/16/opinion/ai-ethics-religion.
html.
Kuruvilla, Carol (2017) World’s Top Religious Leaders Issue Rare Joint Appeal. HuffPost
Communities, https://www.huffpost.com/entry/worlds-top-religious-leaders-issue-ra-
re-joint-appeal_n_5942c11ee4b06bb7d2719e8e.
Loizos, Connie (2023, March 29) 1,100+ Notable Signatories Just Signed an Open Letter
Asking ‘All AI Labs to Immediately Pause for at Least 6 Months’. TechCrunch, https://
techcrunch.com/2023/03/28/1100-notable-signatories-just-signed-an-open-letter-ask-
ing-all-ai-labs-to-immediately-pause-for-at-least-6-months/.
Pandikattu, Kuruvilla (2022) Ethics, Sustainability and Fratelli Tutti Towards a Just
and Viable World Order Inspired by Pope Francis. Bradford: Ethics International
Press Limited, https://public.ebookcentral.proquest.com/choice/PublicFullRecord.
aspx?p=29400791.
Pandikattu, Kuruvilla (2023) Ethics, Artificial Intelligence and Human Destiny Our Col-
lective Search. JD Philosophy Series 26. New Delhi: Christian World Imprints, https://
www.christianworldimprints.com/index.php?p=sr&Uc=9580454156285302764.
Perry, Susan (2014, April 21) A Look at Why Some Older People Are More Compassion-
ate than Others. MinnPost, https://www.minnpost.com/second-opinion/2014/04/look-
why-some-older-people-are-more-compassionate-others/.
Reich, R. (2023, November 28). The frantic battle over OpenAI shows that money tri-
umphs in the end. The Guardian, https://www.theguardian.com/commentisfree/2023/
nov/28/artificial-intelligence-openai-non-profit-money.
Rosenberg, S. (2023, November 28) Former Google CEO Eric Schmidt: AI guardrails “aren’t enough”. Axios, https://www.axios.com/2023/11/28/eric-schmidt-ai-summit-guardrails.
Schaul, Kevin / Szu Yu Chen / Tiku, Nitasha (2023) Inside the Secret List of Websites That
Make AI like ChatGPT Sound Smart. Washington Post, https://www.washingtonpost.
com/technology/interactive/2023/ai-chatbot-learning/.
Schmundt, Hilmar (2022, August 5) Perhaps Even More Dangerous than Nuclear Bombs:
Tech Expert Toby Walsh on Artificial Intelligence, https://www.spiegel.de/internation-
al/world/tech-expert-toby-walsh-on-artificial-intelligence-perhaps-even-more-dan-
gerous-than-nuclear-bombs-a-6be4e470-9a0d-4e65-bd7b-ea5f0efc22b2.
Social Media, the Body, and the Digital Device:
Constellations of Self and Being from the Perspective of Media Psychology and Philosophy
It is pretty close to us and it is real: the brave new media world with a multitude of
computers, tablets, smartphones, wearables and smart ambience systems connected
via the Internet, with a multitude of possibilities. It helps us to be effective and con-
nected, and in times of the Corona Pandemic, it was the guarantee that social, cultural,
and economic life did not grind to a halt. Nevertheless, we are not all happy. Quite the
opposite: time and again, people express fears and gloomy expectations for the future.
There is talk, for example, of a dangerous cyber disease. Psychologists and media
scientists call this “digital hysteria”1. The boundary between analog and digital media
– or rather, the crossing of this boundary – seems to deeply unsettle people all over the
world, plunging us into a crisis.2 But where does this fear come from? Does it come
from the fact that digital media are really that dangerous? Or are there other reasons?
If you look for ‘predecessors’ of media upheavals in cultural history, you will find
that the crossing of boundaries from old to new media has always been experienced
as a crisis. So does ‘digital hysteria’ have less to do with the specific medium and more to do with the fear of change as such? When crossing the border, old habits no longer work, and this causes irritation. The new medium makes old habits seem inappropriate; it makes us painfully aware that well-rehearsed behaviors no longer fit, that they have to be adapted.
Claudia Paganini and A. Kristina Steimer
To give an example, there was great discomfort with the telephone at its beginnings, long before the mobile was invented. The telephone revolutionized the transmission of messages over spatial distance: messengers no longer had to be sent back and forth between two places, and the old signals known since antiquity, such as smoke signals or horn blowing, were rendered obsolete.3 All of a sudden, people could talk to each other as if they were in the same room. This was perceived as eerie and dangerous.
“People had not yet understood”, one reads in a biography of 1949, “that with
the telephone a demon had entered the house […] who could manifest himself
unannounced at any time with a shrill ringing, abruptly interrupting the course of
thoughts and conversations, provoking a brief shock that is harmful to health.”4
In addition to the problem of threatened privacy, there was particular fear of the harm-
ful overstimulation and unnatural acceleration of communication to be expected with
the success of the telephone. Concerns about acceleration, privacy, or sensory over-
load may not sound so unfamiliar to us today. They are often raised in connection with
the digital devices we use to live our everyday lives in contemporary media culture.
Of course, they also relate to points of criticism that are absolutely worth discussing: privacy, for example, must be carefully – and ethically – reflected upon in the digital age, and technical as well as economic and political developments must be kept critically in view. Talking about ‘digital hysteria’ does not mean that technology should be given free rein while we stand by with an ‘anything goes’ attitude. But it is also important not to miss the opportunity for a constructive approach to New Media by remaining trapped in unreflected fears.
Essentially, there are five accusations that recur regularly to this day and which, interestingly, are invariably confirmed by experts, quite often medical professionals.5 First of all, the new medium is considered inferior to the older media and cultural
assets already established, or a threat to the cultural standing of a society. Second, the
new medium is usually accused of having a negative effect on the ability to think and
The so-called ‘selfie’ combines photography and digital technology in one and the
same cultural technique. There exist various definitions of the term, but certainly one
of the most cited is the Oxford Dictionary’s definition that describes selfies as “pho-
to[s] of yourself that you take, typically with a smartphone or webcam, and usual-
ly put on social media”9. What seems like a fairly straightforward matter puzzles
researchers across all disciplines that deal with the selfie. The confusion not only
concerns the question of what selfies say about contemporary media cultures, that is,
what sociocultural significance they have. It already starts with what the selfie actu-
ally is. Warfield et al. summarize the requirements that researchers have to face when
studying the selfie:
“Perhaps it is no fluke that the camera is often spoken about as a tool for ‘shooting’
[…] – not in a violent erasing sense – but in a manner that confronts that surface
which is opaque and blocking but also at once a gateway. […] It perforates and lets
in light, shows the depth and significance of layers […].”10
So to understand the selfie means to reckon with its complexity: In every question
that can be formulated about the selfie, in every research method and every research
result, insights into its complexity are opened up. How do digital media environments
change the conditions under which communication and expression are constituted?
How do constellations between body and environment, between self and technology
change?
One of the most challenging characteristics of the selfie is that it is not always
obvious whether the picture that is presented as a selfie was actually taken by the one
depicted on the image. Selfies are usually identified as self-taken by a shooting angle
that is limited to the arm’s length of the person being photographed. This indicates
that the subject is holding the camera and is the photographer of the image.11 Selfie
8 Parts of the following reflections on the selfie draw on work we’ve done before in: Steimer/Paganini/
Filipović (2023).
9 Oxford University Press (2023).
10 Warfield et al. (2016) 4-5.
11 Eckel et al. (2018); Frosh (2015).
sticks can function as an extension of the arm extended to hold the camera: The image
produced in this way is nevertheless recognizable as a self-photograph in that it still
has a direct connection to the body of the person photographed. The gesture of the out-
stretched arm is reflected in the image and allows it to be understood as a photograph
taken by the person depicted: a selfie – “rather than just a photograph of, say, a face“12.
The fact that it is not always possible to judge a photograph as self-taken can have various reasons. Sometimes selfies do not show a person at all, leaving open who might count as the photographer of the picture. On Instagram, for example, under the hashtag ‘selfie’ pictures are shown that depict landscapes, avatars or non-human
animals instead.13 In contrast, selfies taken with the help of multicopters, for example, do physically depict the self. But it is then unclear whether the so-called drohnies – a portmanteau of drone and selfie – can actually be said to be taken by the person they depict. The images exhibit neither the arm’s-length-limited shooting angle nor the distinctive arm posture that would indicate that the camera is being held. Developments
in robotics and sensor technology allow the images to be increasingly automated and
independent of human control. Technologies like these, Gerling et al. note,
“in different ways [decouple] eye, hand and camera [...] and [capture or scan] en-
vironments autonomously visually [...]. That is, photographers do not manipulate
the situation of the shot they are in and spontaneously or self-reflexively choose a
certain moment for it, but the situation becomes the starting point of an automated
shot that includes the photographers.”14
The resulting images are thus not only detached from the self’s own limited perspec-
tive on itself, but from a view possible for the human eye. However, for the gain in
speed, sharpness, and angular variety, control over the camera’s shutter release must
be relinquished. And this eliminates a feature that Eckel et al.15, for example, follow-
ing Frosh’s16 definition of the selfie as a “gestural image”, consider necessary precise-
ly for a photograph to be determined as a selfie.
Social Media, the Body, and Digital Device
tion that selfies have an inherent potential for empowerment. This highly contradic-
tory interpretation of the selfie is closely related to what we have previously called
digital hysteria, reaching back to the rich history of media anxiety. It is not always the
new media technology alone that plays a role in the fear of it, but also the question
who makes use of the new medium.
The selfie – just as the telephone or photography – can also be traced back in
cultural history. The term ‘selfie’ was born as a part of an apology. “Sorry about the
focus, it was a selfie”, a twenty-something commented on a photo he took of his face
on an Australian online platform.20 That was back in 2002. He had taken the picture of a lip injury, sustained in a drunken fall after a night of partying, and was using it to seek advice on how to treat the wound. In the caption, the Australian explained and justified the shooting angle by the limited range of his own arm and the resulting proximity of the camera to his own face.
It then took another eleven years for the selfie to move into the public eye and
become the mass phenomenon it is today in our everyday digital media lives: Digital
self-photographs appear in the context of activism and education, in politics and art,
they are tapped for identity development or used for everyday communication. The wide-ranging establishment of the selfie was confirmed, at the latest, by its much-cited selection as Oxford Dictionaries’ “Word of the Year 2013”. The jury justified its choice with the remark: “If it is good enough for the Obamas or The Pope, then it is good enough for Word of the Year.”21 There is even a special holiday, the National
Selfie Day, celebrated every year on June 21st.
While the man in his mid-twenties with the damaged lip was neither feared nor devalued for the photo he had shared, by the time of the Word-of-the-Year choice the situation was already somewhat different. In fact, it was neither the Pope nor the Obamas who turned the
selfie into a more widespread genre. Instead, as Maddox states,
“the initial adopters of the selfie movement, and those who gave the practice pop-
ularity, were Others. These Others are considered to be women, racial minorities,
individuals who are queer or transgender, individuals with disabilities, etc.”22
20 Liddy (2013).
21 Memmott (2013).
22 Maddox (2018) 30-31.
Classifications like these are closely linked to various schemes that legitimate social exclusion and structural inequality. Empowerment in the selfie then means using one’s own – visual – voice to object to structural discrimination.23 The empowering selfie is not only meant, like other images, to depict an object of representation – that is, something at the disposal of others. Instead, it is primarily meant to show a subject exercising interpretive power over itself. “[S]ee me showing you me”24, Frosh sums
up this visually communicated interpretive power. And Gunthert25 states, “[t]here are
those who look at and those who are looked at”. And he continues, “[T]he answer of
the selfie is that, from now on, it is the user who decides how to write the relationship
[...]” (ibid.).
The use of selfies by those whose representation is otherwise limited to the discriminatory attribution of being ‘Others’ is thus seen here as having a fundamental potential for empowerment. With a simple reach into the pocket for the smartphone, an Internet connection and an account with one of the online service providers of the social web, one’s own view of oneself can be visualized and made public.26 As Maddox27 notes, selfie-criticism in terms of narcissism and pathology does not only target the medium ‘selfie’, that is, the digital practice performed on social media, but first and foremost aims to ward off the social change potentially possible within it. Instead of accepting that privileges and orders of representation are once again at stake, selfie-criticism rejects what would mean a restriction of one’s own power.
How digital hysteria concretely manifests itself with regard to selfies can be seen, for instance, in reports that went viral in 2015, when numerous newspapers claimed that the risk of dying from a shark attack was now lower than the risk of dying while taking a selfie.28 Any differentiated reflection on the underlying statistics was entirely absent, as headlines such as “More people die from selfies than from sharks” or “Selfies are so dangerous” (ibid.) show. This is hardly surprising, given that “the shark” is a well-established media symbol for fear.29
In fact, accidents, sometimes fatal, do occur time and again in the course of taking a selfie. People pose too close to a precipice whose panoramic view inspired the shot; they linger too long on railroad tracks; they disregard the required distance to so-called wild animals. But these are individual cases that do not allow for any representative conclusions about the phenomenon as such.
The empowerment that can lie in the selfie’s statement, “This is in fact how I look, and this is how one should understand me”30, is reinterpreted as a pathological blocking out of the environment. The subject claiming interpretive power is described as a narcissist who cannot stop looking at himself. Just as the protagonist of the Greek myth remained at the water’s edge until he had wasted away, the self’s continuous gaze at itself in the selfie is likewise seen as a danger to life and limb. The connection made between shark attacks and selfies reveals a twofold linkage of stigmas: On the one hand, ‘the selfie taker’ is ‘subsumed’ under the sign of narcissistic self-centeredness, which goes along with a pathological inability to adequately perceive, judge and orientate himself. On the other hand, as already mentioned, ‘the shark’ represents a creature emblematized as a symbol of fear. Just as the shark is constructed as a threat to humans emerging from undefined depths, so the selfie is considered a threat that causes damage. This assumption of ‘damage’ refers to the digital hysteria we described before. Within this hysteria, not only is the upheaval from the old to the new medium masked, but so is the changing of orders of representation. This can be
seen when one relates the history of the selfie to the far longer history of, for example,
women using pocket mirrors to look at themselves in public.
As Gspandl31 notes, even in Renaissance times, women’s use of pocket mirrors in public was declared an expression of a lack of moral judgment. More than 400 years before the front-facing camera was implemented in the smartphone – and with it the self’s view of itself in everyday digital media life – the moralist Jean Des Caurres made the following observation on women publicly gazing at themselves:
“Were one to read all the histories – divine, human, and profane – it would never
be found that impudent and meretricious women had worn mirrors in public until
this day, when the devil is set loose in France […].”32
Just as vanity is said to render the gaze upon oneself worthless by causing it to linger upon trivialities, so the lack of shame in public mirror-gazing is said to reveal the deceptive nature of misguided women. Such value-laden associations do have an impact on how people who take selfies perceive themselves.
Following Foucault’s work on the entanglement of knowledge and power, Burns33,
for instance, takes a look at the production of knowledge about selfies and the rela-
tions of violence revealed in them. She states:
In relation to other ‘Others’, the pivot of pathologization might not be ‘vanity’ but another attribution, and a person can be confronted with several attributions at a time if he or she is discriminated against on the basis of several legitimating schemes of exclusion and inequality.
The hysteria concerning the selfie today is thus at least as old as the hysteria con-
cerning the invention of the mirror. Both symbolize the empowerment that can arise
from one’s own look at oneself and then showing oneself the way one wants to be
seen. The lines that can be drawn from the selfie to the mirror can finally give some
insight into how one’s self finds a balance on the boundary between old and new while
looking at itself in a selfie.
Hess35 considers the selfie an expression of “proof” that the person in the photo
was actually present in the place, in the situation the photo shows. With its typical
gesture of the outstretched arm, which is explicitly shown in the picture, the selfie
indicates the authorship of the image. As a “gestural image”, Frosh36 says, it can com-
municate not only: “see this, here, now. But also: see me showing you me.” In order
to explore the selfie under digital-technological conditions, Hess37, who develops his
reflections following Deleuze and Guattari’s concept of “assemblage”, not only focus-
es on the time of capture or the subsequent editing, posting, or sharing of the image,
but also brings to the fore the place where a selfie is taken – that is, the “space around
us”. In the selfie, the mirror look takes on a slightly different form once again:
“The gesture of extending the arm with smartphone in hand inherent to the selfie
speaks of the orienting nature of the technology to the space around us. The device
serves as a filter not only through its use of software to alter an image but also in
the ways that it frames and removes elements of the physical surroundings through
the physical relationship of hand, device, body, and backdrop.”38
The gesture one performs in taking a selfie therefore gives rise to a kind of double self-perception.39 The medium of the selfie simultaneously locates the self in two sorts of spaces: one emplacing it as a physical being, the other embedding it as a digital artefact. These two modes of existence cannot be separated from each other, and so they “instantly collaps[e] the digital and analog, virtual and material”40.
The self-relational view thus culminates in an ambivalence: In the selfie, people
experience how the physical world and the digital world are increasingly intertwined
– but without completely merging into a single space. The view in which I see myself
mediated via the user interface of the smartphone is permeated and informatically
35 Hess (2015).
36 Frosh (2015) 1610.
37 Hess (2015) 1640.
38 Ibid.
39 Ibid., 1636.
40 Ibid., 1640-1641.
supplemented by digital technology. Yet I am still here, in the analog space where I
take the selfie. It is not an avatar that takes it.
Thus, one can locate oneself neither in the digital alone nor in the analog alone; yet digitality does not – yet – form a clearly definable new space of its own. Perhaps this will change one day in the course of further technological developments, and the reference point of the analog will disappear. Today, at any rate, we are at home in both worlds at the same time. It is a home right on the border between old and new. And we must continue to make the border our home so that we can overcome anxiety and, instead, participate in how the world is changing.
In order to further understand what makes the boundary between old and new media –
or between their use – so special, it might prove helpful to distinguish between ‘crisis’
and ‘stress’. Although the term ‘crisis’ is currently so overused that it has lost its sharpness, a ‘crisis’ can be distinguished from other demanding situations by the fact that the perceived challenge or threat – as Ralf Vogel puts
it – “qualitatively exhibits an inherent dynamic of upheaval that divides the narrative
of a person, group, or society into a before and an after”41. In addition to this ‘border-
line’ experience, it is also typical that a crisis contains both the danger of failure42 and
stimuli for overcoming the challenge.
In the case of a crisis, or an experience of crisis affecting many people – as media change does – what has been described as collective ‘symptom formation’ emerges. Since the new medium makes old habits at least partially impracticable, many people experience the feeling of being overwhelmed or of losing control and autonomy. Emotions are set free. Amid permanent social hyperarousal and increased aggression43 or frustration, everything is undertaken – at first – to restore the state that existed before the border was crossed.44
At the same time, certain topics come into focus, such as the question of meaning,
of loneliness and human freedom, and especially of death, as a threat to the individual
subject as well as to the whole of humanity, whose vulnerability – otherwise hidden –
suddenly comes to the fore. Depending on the structure of one’s personality, different
forms of defense can occur, all of which tend not to promote a differentiated percep-
tion of complex interrelationships. If the relation between the necessity of crossing
boundaries and one’s own resources is perceived as incongruent, this quickly fosters a
form of thinking in ‘black and white’. The loss of control supposedly associated with
the new medium is inflated into a catastrophe.45
The decisive criterion for whether change is perceived positively as eustress or
negatively as distress is the subjective assessment of whether or not the new challenge
can be mastered with the help of one’s own resources. If this is the case, people who
encounter and transcend boundaries can grow mentally and develop their character.
The skills required here have recently been summarized under the term ‘coping’. Coping is not simply about wanting to overcome a negative situation or feeling, but above all about the targeted use of strategies, applied more or less consciously, to deal with acute as well as future problems. Coping does not occur in a social vacuum but is facilitated when the environment actively engages with the challenge at hand.46 It is important to ask what can be learned from the respective crisis, what we can get out of it, and where its creativity-promoting potential lies.
In order for this to succeed, however, it is necessary to validate the existing sense
of threat, to clarify the common concern, and to search for threat scenarios and border
crossings in cultural history that may serve as a reference and the analysis of which
can convey a sense of security. Ideally, a distinction should be made between ‘healthy’
fear components and those that are destructive and make coping more difficult. With
regard to the healthy ones, the flood of feelings will gradually give way to (cautiously)
confident reflection. Finally, the border need no longer be imagined primarily as the
end of what is known and ‘good’, but as a transition to a fundamentally open future.
References
Hess, Aaron (2015) The Selfie Assemblage, in: International Journal of Communication
9 (22), 1629–1646.
Jünger, Ernst (1949) Ein Inselfrühling. Ein Tagebuch aus Rhodos. Mit den sizilianischen
Tagebuchblättern „Aus der goldenen Muschel“. Tübingen.
Korte, Lydia / Steimer, A. Kristina (2022) ‚Können Sie bitte ein Selfie von mir machen?‘
Widersprüche in Gebrauch und Bedeutung der digitalen Selbstfotografie als Ausgangs-
punkt eines Forschungsdesiderats, in: MEDIENwissenschaft: Rezensionen/Reviews,
Jg. 39 (4), 340–352.
Lobinger, Katharina (2016) Zwischen Selfie-Shaming und Selfie-Celebration. Kontro-
verse Perspektiven auf vernetzte Körper-(Selbst)bilder, in: Gojny, Tanja / Kürzinger,
Kathrin S. / Schwarz, Susanne (Eds.) Selfie – I like it. Anthropologische und ethische
Implikationen digitaler Selbstinszenierung. Stuttgart: W. Kohlhammer, 43–56.
Liddy, Matt (2013) This photo, posted on ABC online, is the world’s first known ‚selfie’,
in: ABC news, 21.11.2013, https://www.abc.net.au/news/2013-11-19/this-photo-is-
worlds-first-selfie/5102568 [02.09.2023].
Maddox, Jessica Leigh (2018) Fear and Selfie-Loathing in America: Identifying the Inter-
stices of Othering, Iconoclasm, and the Selfie, in: The Journal of Popular Culture 51
(2), 26–49, https://doi.org/10.1111/jpcu.12645.
Memmott, Mark (2013) Picture This: Selfie is ‚Word of the Year‘, in: Vermont Public,
19.11.2013. https://www.vermontpublic.org/2013-11-19/picture-this-selfie-is-word-
of-the-year [02.09.2023].
Milzner, Georg (2016) Digitale Hysterie. Warum Computer unsere Kinder weder dumm
noch krank machen. Weinheim: Beltz.
Oxford University Press (2023): selfie, noun. https://www.oxfordlearnersdictionaries.com/
definition/english/selfie [11.10.2023].
Paganini, Claudia (2012) Auf der Suche nach positiver Öffentlichkeit. Teilen und Mittei-
len von Informationen im Alten Testament, in: Sützl, Wolfgang et al. (Eds.) Medien
– Wissen – Bildung: Kulturen und Ethiken des Teilens. Innsbruck: innsbruck university
press, 195–207.
Paganini, Claudia (2022) Grenze als Krise. Zur Dynamik von (Medien)Wandel, in: Schell-
hammer, Barbara / Schützle, Lena (Eds.) Philosophie der Grenze. Darmstadt: wbg
Academics, 65–75.
Paganini, Claudia / Steinbacher, Christoph (2019) Schöne neue (Medien)Welt. Die Lust
und Unlust auf die Zukunft, in: Datterl, Monika / Guggenberger, Wilhelm / Paganini,
Claudia (Eds.) Welt am Abgrund. Zukunft zwischen Bedrohung und Vision (theologi-
sche trends 29). Innsbruck: innsbruck university press, 181–199.
Prophet, Isabell (2015): Es sterben mehr Menschen bei Selfies als durch Haie, in: zeit.de,
22.9.2015. https://www.zeit.de/zett/2015-09/es-sterben-mehr-menschen-bei-selfies-
als-durch-haie?utm_referrer=https%3A%2F%2F2.zoppoz.workers.dev%3A443%2Fhttps%2Fwww.google.com%2F [2.9.2023].
Roß, Dieter (1997) Traditionen und Tendenzen der Medienkritik, in: Weßler, Hartmut et
al. (Eds.) Perspektiven der Medienkritik: Die gesellschaftliche Auseinandersetzung
mit öffentlicher Kommunikation in der Mediengesellschaft. Opladen: Westdeutscher
Verlag, 29–45.
Schächinger, Hartmut (2016) Stress. Psychobiologie eines Erfolgsrezeptes, in: Psychothe-
rapie im Dialog 2, 78–83.
Senft, Theresa / Baym, Nancy (2015) What Does the Selfie Say? Investigating a Global
Phenomenon, in: International Journal of Communication 9 (22), 1588–1606.
Sicart, Miguel (2009) The Ethics of Computer Games. Cambridge MA: MIT Press.
Steimer, A. Kristina / Paganini, Claudia / Filipović, Alexander (2023). Das Selbst im Blick.
Interdisziplinäre Perspektiven zur Selfie-Forschung – Einleitung, in: dies. (Eds.) Das
Selbst im Blick. Interdisziplinäre Perspektiven zur Selfie-Forschung. Baden-Baden:
Nomos (Reihe Kommunikations- und Medienethik), 7–32.
Stein, Barbara (2020) Krisen bei körperlichen Erkrankungen, in: PiD Psychotherapie im
Dialog 21, 79–82.
Sueddeutsche.de (2015) So gefährlich sind Selfies, in: sueddeutsche.de, 22.9.2015.
https://www.sueddeutsche.de/panorama/todesursachen-so-gefaehrlich-sind-selfies-1.2658849 [2.9.2023].
Vogel, Ralf T. (2020) Psychotherapie in Zeiten kollektiver Verunsicherung. Therapieschul-
übergreifende Gedanken am Beispiel der Corona-Krise. Wiesbaden: Springer.
Walker Rettberg, Jill (2014) Seeing Ourselves through Technology. How we use Selfies,
Blogs and Wearable Devices to See and Shape Ourselves. Palgrave Macmillan, DOI:
10.1057/9781137476661.
Warfield, Katie / Cambre, Carolina / Abidin, Crystal (2016) Introduction to the Social
Media + Society Special Issue on Selfies: Me-diated Inter-faces, in: Social Media +
Society (2/2) (April-June), 1–5, DOI: 10.1177/2056305116641344.
Weber, Silvana (2023) Selfies und das Selbst: Psychologische Erkenntnisse zum Zusammenspiel von bildlicher Selbstdarstellung und Identität, in: Steimer, A. Kristina /
Role of Human Beings in the Light of
Evolution according to A. R. Peacocke
Isaac Parackal OIC (Pune)
Introduction
Modern technology has rapidly revolutionized the way we live, work, and communi-
cate. It has brought numerous benefits to society, making our lives more convenient,
efficient, and connected. One of the most significant advantages of modern digital
technology is the unprecedented level of communication it has facilitated. This has led
to improved relationships, increased global collaboration, and enhanced opportuni-
ties for cultural exchange. Another key benefit of modern technology is the increased
efficiency and productivity it has brought to various sectors. Automated systems and
machines have significantly reduced the need for manual labour.
In today’s rapidly advancing world, modern technology has brought about positive impacts that have greatly enhanced the well-being of individuals. This invites us to explore the positive effects of modern technology on human beings and our role in creating a better world through a new perception of reality. Arthur Robert Peacocke, a re-
nowned theologian and biochemist, has made significant contributions in exploring
the intersection of science, technology, and religion. His insights provide a deep un-
derstanding of the implications of technological advancements on human existence.
This essay aims to explore the philosophical and theological notions of Arthur Robert Peacocke and to analyse the role of humans in the wake of recent technological advancement. Each technological advancement could be seen as an Opus Dei (work of
God) through the hands of human beings who are co-creators and co-explorers in the
building up of the cosmos.
I use “Man” throughout this article to denote all human beings, both male and female. Other pronouns for Man, such as “he”, “him”, and “his”, are likewise used as inclusive expressions denoting the whole human race without any gender overtones.
Peacocke conceives of God as a continuous Creator: God is ever present in the process of evolution, continuously creating and guiding it. He observes that the scientif-
ic outlook of a cosmos in development introduces a dynamic element into our under-
standing of God’s relation to the cosmos which was, even if obscured, always implicit
in the Hebrew conception of a ‘living God’, who is dynamic in action.1 This dynamic
nature of God can be seen in every creature in nature especially in the creativity of
human beings. Creation is not a completed process but a continuing process. God’s act
of creation is not something done once and for all – creation still proceeds and God
is immanently present in and to the whole process.2 Peacocke affirms the continuing creative activity of God through the inherent, built-in creativity of nature and through the cooperation and co-creation of human beings, realized in various scientific discoveries and technological advancements.
Peacocke holds a panentheistic view of the relation between God and the world. What, then, is panentheism? He defines it thus: “Panentheism is the belief that the Being of God includes and penetrates all-that-is, so that every part of it exists in God and (as against pantheism)3 that God’s Being is more than it and is not exhausted by it.”4 There is no place ‘outside’ of God, and everything exists in God. “God’s infinity comprehends and incorporates all. In this model, there is no ‘place outside’ the infinite God in which what is created could exist. God creates all-that-is within Godself.”5 In the panentheistic understanding of creation, natural events are creative, God is present in all events, and God can influence the world in its totality.
Peacocke uses another image to clarify the notion of panentheism, distinguishing the panentheistic model from the Western classical concept. According to Peacocke, the Western classical concept places too much stress on the externality of the creative process: God is regarded as creating rather in the way the male fertilises the female from outside. Mammalian females, however, nurture new life within themselves, and Peacocke argues that this image provides a much-needed corrective to the purely masculine image of divine creation.6 Peacocke finds this image the most suitable for showing the relation between God and creation. According to him, by using this image we can get away from the limitations of male-dominated language. “God
according to panentheism, creates a world other than Godself and ‘within herself’ (we
find ourselves saying for the most appropriate image) yet another reminder of the need
to escape from the limitations of male-dominated language about God.”7 He believes
that “there is no part of the world where God is not active and present in the events
and processes themselves, and because there is infinitely more to God’s being than the
world, we could say that the world is in God, there is nothing in the world that is not in
God”8. If God is present in all processes, progress in science and technology can be seen as divine creativity expressed through human beings. Let us now consider the role of human beings, which throws new light on modern anthropological research.
Man has a privileged role in creation, as he is the only creature in the world capable of relating to the Creator. Although Man has evolved from nature, he cannot be reduced merely to the natural sphere and cannot be described solely in terms of atoms and molecules.9 For Peacocke, Man’s relationship to God is not a kind of passive dependence but an active collaboration in freedom. For him, Man is a free being who is capable of accepting or refusing the challenges involved in the realization of his potentialities. He can go against his call to ‘become’.10 Each technological discovery could be seen as a step in the intellectual evolution of human consciousness.
6 Ibid.
7 Ibid.
8 Peacocke (1984) 64.
9 Peacocke (1971) 141.
10 Peacocke (1973) 380.
According to Peacocke, Man’s role in creation can be seen as that of priest of creation. He observes that the complex of proper responses of Man to nature suggests that Man’s role may be perceived as that of priest of creation, as a result of whose activity the sacrament of creation is reverenced and dignified.19 Since he alone is conscious of God, himself and nature, Man “can mediate between insentient nature and God – for a priest is characterised by activity directed towards God on behalf of others”20. Man alone, for Peacocke, can reflect on the purposes of God, and he alone can fulfil those purposes in cooperation with God. “Man alone can contemplate and offer the action of the created world to God. But a priest is also active towards others on God’s behalf and in this sense too, man is the priest of creation.”21 He alone, having reflected and contemplated on God’s intentions and plans, can be active in and with the created world, consciously seeking to enhance and fulfil God’s purposes. He is to live with reverence for all creation, giving equal value to all.
Peacocke suggests that Man should respect nature in the same way he respects his own body or the bodies of other persons. We do not regard the body of the other as a mere aggregate of flesh, but as a person.22 Nature has a derived sacredness or holiness as the vehicle and instrument of God’s own creative action.23 So Man has the sacred duty to revere nature as he does other persons. According
to Peacocke to be a priest means to be a mediator. So a priest has the sacred duty to
gather together the offering of creation and present it to God. In this sense, Man is an
intermediary between God and the world. He is cooperating with God in the creative
activity and fulfils God’s purposes within the cosmos.24 This fulfilment is achieved
through human discoveries and new inventions that lead nature and humans to prog-
ress further according to the divine creative plan.
The world, for Peacocke, manifests God’s continuing presence and so commands admiration and awe. “God is present ‘in, with, under’ (a set of prepositions usually
used with a sacramental reference) all the world processes which, as an aspect of
God’s being and action, therefore command respect and reverence and have value.”28
For Peacocke, God as creator “is expressing his intentions and purposes, is unveiling
his meaning, in the various and distinctive levels of the created natural world and in
its processes, which thereby have the meaning with which he endows them”33. In this
way God is recognized as a self-communicating agent actively unveiling his meanings to Man, who is capable of seeing and hearing them. The natural world is seen as the symbol of God’s meaning and is conceived as the means whereby God’s intentions and purposes are made known. In other words, the world is seen as a sacrament.34
This concept of sacrament, according to Peacocke, “serves to emphasize another
aspect of Man’s functions, namely, Man as interpreter of creation’s meaning, value,
beauty and destiny”35. Peacocke views Man as an evolutionary product who can read
and articulate the divine meaning. If those meanings are correctly and surely dis-
cerned, we may say that in human beings God has created a creature who is able to be
29 Ibid., 299.
30 Ibid., 303.
31 Ibid.
32 Ibid.
33 Ibid., 300-301.
34 Ibid., 301.
35 Ibid.
aware of God’s purposes and to discern and articulate them consciously. In Man, creation becomes conscious of itself, and through his consciousness and intelligence he is capable of reading God’s meaning, which must be seen as an intention of God’s work.36 In other words, God meant creation to be able eventually to respond to the meaning of his Self that he had communicated through it.
To Man, as interpreter of God’s meaning in creation, Peacocke also attributes the prophetic function, a complementary aspect of the priestly role. A prophet is one who reads the signs of the present times and interprets them for the future. “So man is”, writes Peacocke, “the interpreter of God, and as such, he acts prophet, a role which historically has always complemented the priestly in man’s corporate relation to God.”37 Man as prophet reads the signs of nature and interprets them for the betterment of the world. Further, Man is depicted as the “lover of nature” bound by a nuptial bond, as Peacocke brings in the idea of an I-Thou relationship between Man and nature. Nature is conceived as the ‘beloved’ of Man and so must be treated with love and affection in the same way a lover treats his or her partner.38
In the scene of creation, Man stands with his creative energies within himself, relating to nature through his newly acquired technologies. Here, according to Peacocke, Man is faced with a choice: “Does he join in with the creative work of God
harmoniously integrating his own material creations (which are never ex nihilo) into
what God is already doing? Or does he introduce a discordant note, an entanglement
and confusion within the dance?”39
To these questions Man responds by cooperating with God in the continued cre-
ative processes. In this sense, Man is acting as a creative participant in creation “as
it were the leader of the orchestra of creation in the performance which is God’s
continuing composition”40. Moreover, Man is offering himself with dedication in the
36 Ibid.
37 Ibid.
38 Ibid., 300.
39 Ibid., 304-305.
40 Ibid., 305.
creative process. In short, Man has the opportunity of consciously becoming co-creator and co-worker with God in his work on Earth.41 According to Peacocke, if Man recognizes that God is always active ‘making things new’, then his response to created nature should be flexible and open-minded. He should expect change and adjust himself to modifications as he sensitively observes the changing processes.42 Man could thus become a partner of God, consciously and intelligently cooperating in the ongoing processes of creative change, taking due account both of Man’s and of the world’s proper needs, with duly assigned priorities for each.43
Peacocke views technology positively and he understands it as the progeny of
science.44 Technology, for Peacocke, helps Man fulfil his personal and social devel-
opment in cooperation with God. “Man would then”, writes Peacocke, “through his
science and technology, be exploring with God the creative possibilities within the
universe God has brought into being. This is to see man as co-explorer with God.”45
This cooperation of Man in the creative processes is not a passive involvement but
an intelligent and active participation. In order to do this, Man has to discern God’s
meaning and creative plan.46 Each discovery in the technological world is literally a
dis-covery: a removing of the cover that lays things open.
From Peacocke’s idea that Man is the co-worker of God follows the notion of
work as the genuine opus Dei. Man, being the co-creator, derives his creativity from
God to build his reign of love. Peacocke argues: “Man has derived creativity from
God and all genuine activities of man which attain excellence, and are in accord with
God’s intentions to build his reign of love (his kingdom), may be regarded as man
exerting his role as co-creator with God.”47 Peacocke views this as the ‘building up
41 Ibid.
42 Ibid.
43 Peacocke (1994) 106.
44 Peacocke (1979) 306.
45 Ibid.
46 Ibid.
47 Peacocke (1994) 106-107.
Isaac Parackal OIC
of Jerusalem’48. Here, he proposes a Christian humanism, “in which all human excel-
lence is seen as man making his distinctive human contribution as co-creator to that
ceaseless activity of creation which is God’s action in and for the world”49.
Here, the work of Man for the betterment and progress of society and the environment is seen as the work of God himself (Opus Dei). Human civilization and
technology are also seen positively. Basing himself on the Christian doctrine of Man
in Genesis, Peacocke observes:
In this perspective every work in the human society can be seen as God’s work for
the development and progress of the human society and of the whole creation. Even
an unimportant job in the society has got a great significance and it serves for the
upliftment and better shaping of the human society. Peacocke argues that one, “taking
seriously the scientific perspective, can see his work not as being a kind of sacrificial
offering for God, but actually as a genuine opus Dei of its own; for in building up
human society one is joining in the creative activity of God who made it all possi-
ble.”51 Peacocke continues: “Even the humblest job in the complex society created
48 Ibid., 107.
49 Ibid.
50 Ibid.
51 Peacocke (1971) 194.
Peacocke envisages human work as liturgy which connects all beings in nature. Ac-
cording to Peacocke, human life and nature are interrelated:
The energy source for all living organisms, including man, is the sun whose en-
ergy is absorbed through green plants, on which animals depend ultimately for
food and which themselves depend on the activities of bacteria decomposing dead
organisms and making nitrogen available. So all life is interdependent – indeed
many creatures can only live in concert with, and often literally on, particular other
organisms (symbiosis).53
So all things in the world are interdependent, and Man cannot think of acting independently without taking into consideration the rest of the world. All animals and
plants live in complex systems consisting of many cross-flows and exchanges of energy and matter in a variety of chemical forms of such baffling complexity that only the
advent of computers and the development of systems theory have given any hope of
analysing them.54 “Modern man is misguided if he thinks he can live and operate independently of the rest of the living world.”55 The whole of human life is in one way or
another related to nature. Although Man is capable of living in many habitats, he is still
just as dependent on plants and bacteria, and on other animals, as was primitive Man.56
52 Ibid.
53 Peacocke (1979) 258.
54 Ibid.
55 Ibid.
56 Ibid. Some authors like Barbara Ward and K.E. Boulding propose the idea of world as a spaceship
and the whole human race as crew members who have to work for the smooth voyage of the spaceship
All these clearly show us the inseparable relationship between Man and nature.
In this context, Peacocke proposes that the primary duty of Man is to work for the
betterment of nature, and that the future of the whole world lies in the hands of Man.
Accordingly, he puts forward the idea of human work as the consecration
of the whole world and its offering back to God.57 Here, the world is seen as not only
a gift from God but also a task to be fulfilled by Man. Being a priest and celebrant of
creation, it is Man’s primary duty to do the service of God – to offer the world back
to God. In this sense, whatever Man does for the betterment of the world is a sacred
service or liturgy and thereby, Man becomes the celebrant of nature.58 This model of
human work, according to Peacocke, is “able to provide motivation for people to act
for the future good of humanity and the whole ecosystem”59. All these ideas point to
the fact that Man can only worship God (genuine liturgy)60 in and through his work
in the world by cooperating with the creative activity of God.61 Therefore, all human
endeavours including digital technology, as far as they contribute to the wellness of
human beings and to the welfare of the cosmos, have to be seen as a genuine service
offered back to the Creator.
Conclusion
“Gloria Dei vivens homo”62; “The glory of God is man fully alive.” These are the
words of St. Irenaeus on the dynamic nature of human existence that results in the
glory of God. To be fully alive to the glory of God is the goal of human life. It is in a
for survival, Cf. Ward, Barbara (1966) Space Ship Earth. London: Hamish Hamilton; Boulding, K. E.
(1966) Human Values on the Spaceship Earth. New York: National Council of Churches.
57 Peacocke (1979) 274. Peacocke uses many ideas from D. F. Marietta to support his idea. Cf. Marietta,
D. F. (1977) “Religious Models and Ecological Decision Making”, in: Zygon 12.
58 Peacocke (1979) 297.
59 Ibid., 274.
60 Here, liturgy is understood as service to God. Only by serving the world can Man serve the Lord.
“Love your neighbour as yourself”. Is man alone our neighbour, or is it the whole of nature, as in the
thought of St. Francis of Assisi? He deemed all powers and natural phenomena his dear brothers and
sisters. When in his later years the doctors condemned him to let them sear his forehead with a red-hot
iron, even in the middle of his dread of the agonizing torture he was able to greet his ‘dear brother, fire’
in this fearful iron; Peacocke (1979) 318.
61 Peacocke (1979) 316.
62 Irenaeus, Adversus Haereses, 4:20.
way a progressive and creative growth towards God Himself. In this paper, we have
analysed Peacocke’s understanding of God, Man and the world in the light of the
new discoveries and evolutionary theories. God’s continuing action holds the cosmos
in existence. Peacocke tries to project a God who is always vigilant over his creatures
through his unfailing immanence in the world. God is always acting in and through
the world processes with his loving care. In this understanding God, Man and the
world are not seen as closed entities but they are interconnected and interrelated. Man
is collaborating with God in the on-going process of creation. It is the duty of Man to
co-operate with God for the betterment of the cosmos.
If the human person is created in the image and likeness of God, the human potentialities must be progressively developed and fulfilled. Then, every new technology could
be seen as a human participation in the divine super-intelligence that makes human
beings co-creators and co-explorers. Here, humans are not making anything new, but
using and re-joining the created things in an innovative manner to enhance the environment and our life situations. So, all technological progress must be seen as the
work of God himself, who is vigilantly directing the whole of the cosmic processes, including
human endeavours. The concept of panentheism proposed by Peacocke makes room
for further human development that is done in the very self of the Creator. Then, “To
be fully alive” is to be fully creative, lively and enthusiastic. And this brings in the
glory of God – a God who is not static, but dynamic in his actions. So, every human
action that creatively contributes to the well-being of humanity could be seen as the
genuine “Opus Dei”, the work of a living God who is present and vigilant eternally
(“Immanuel”) and who continues to create through nature and his creatures including
humans.63 So, Peacocke’s anthropological notions throw new light on the understanding of humanity in the modern technological era.
63 However, the misuse and over-use of the potentialities in the technological field may backfire. Each
human discovery and technological invention must correspond to the divine plan and should be used
prudently. Therefore, human discernment is a necessary factor in every human action.
Bibliography
Peacocke, Arthur Robert (1971) Science and the Christian Experiment. London: Oxford
University Press.
Peacocke, Arthur Robert (1973) “Nature and Purpose of Man in Christian Theology”, in:
Zygon 8, 373-394.
Peacocke, Arthur Robert (1977) “Chance and Necessity in the Life-Game”, in: Trends in
Biochemical Sciences 2, 99-100.
Peacocke, Arthur Robert (1979) Creation and the World of Science. Oxford: Clarendon
Press.
Peacocke, Arthur Robert (1983) An Introduction to the Physical Chemistry of Biological
Organization. Oxford: Clarendon Press.
Peacocke, Arthur Robert (1984) Intimations of Reality. Notre Dame: Notre Dame
University Press.
Peacocke, Arthur Robert (1991) “God as the Creator of the World of Science”, in: Interpreting the Universe as Creation. Ed. by V. Brümmer. Kampen: Kok Pharos Publishing
House, 100-112.
Peacocke, Arthur Robert (1991) “God’s Action in the Real World”, in: Zygon 26, 455-477.
Peacocke, Arthur Robert (1993) Theology for a Scientific Age: Being and Becoming –
Natural and Divine. Oxford: Blackwell.
Peacocke, Arthur Robert (1994) God and the New Biology. Gloucester: Peter Smith.
Peacocke, Arthur Robert (1996) God and Science: A Quest for Christian Credibility.
London: SCM Press.
Peacocke, Arthur Robert (1996) From DNA to DEAN: Reflections and Explorations of a
Priest-Scientist. Norwich: The Canterbury Press.
Peacocke, Arthur Robert (1999) “Biology and Theology of Evolution”, in: Zygon 34,
694-712.
Peacocke, Arthur Robert (2000) “Science and Future of Theology: Critical Issues”, in:
Zygon 35, 119-140.
Peacocke, Arthur Robert (2001) Paths from Science Towards God: The End of All Our
Exploring. London: One World.
Peacocke, Arthur Robert (2006) The Music of Creation. Minneapolis: Fortress Press.
Westermann, Claus (1974) Creation. London: SPCK.
Deliver us from the evil one
The hope for technological Redemption
Wilhelm Guggenberger (Innsbruck)
There are probably not many philosophical approaches that make it into the headlines
of the mass media. At present, however, this can even be said of two different con-
cepts. These are transhumanism and longtermism. Both approaches, which are fierce-
ly debated in philosophical circles, hold a certain fascination for the non-scientific
audience as well. At the same time, however, they are also frightening or uncanny in
a certain way. It is possible that the media impact of these approaches is due precisely
to the fact that our society is obsessed with such a mixture of unbelievable potentials
and scandalous threats, at least as long as it does not actually affect real life.
Transhumanism has different manifestations, but its basic concept, which is also
known to some extent to a broader public, is about the idea that genetic or proces-
sor-based technology can enhance the human species in a way that makes humans, as
we know them today, obsolete.1 Longtermism2 is less popular so far. Oxford scholars
like Hilary Greaves and William MacAskill have recently developed this socio-ethical
approach in the wake of utilitarianism. What is new about this is that social
responsibility is not only thought of in terms of future generations, but that it should
1 A more precise term that differentiates what is meant here from concepts of human self-optimization by
pharmaceutical or medical means is technological posthumanism. This approach “… unites a number
of authors who have been propagating the replacement of humans by their artificial offspring since
the mid-1980s. Its main proponents, such as Marvin Minsky, Frank Tipler, Hans Moravec, and Ray
Kurzweil, base their arguments on cybernetic theory.” Krüger (2021) 61.
2 Cf. for example MacAskill (2022).
extend to a very distant future. This entails a radical change in some of the ethical
norms that apply today. When longtermists think about the greatest possible happiness
of the greatest possible number of individuals, they bring entirely new orders of mag-
nitude into play. While there are currently only 8 billion people living on this planet,
over the next 10,000 or more years the number of future people will be many times that.
Events that can cause suffering or death for millions of people currently alive appear
in a completely new light when you consider a hopeful future for billions. Very brief-
ly and simplified, this is the intellectual starting point of longtermism. Whether the
calculations on which such considerations are based will turn out to be correct in the
distant future is something no one can verify today. Thus, the approach can be accused
of a certain degree of arbitrariness, even cynicism.
Transhumanism as well as longtermism may represent extreme positions held by
a relatively small minority of scientists. However, the ones propagating such ideas are
by no means scatterbrains or conspiracy theorists, but belong to the seriously working
academic field. Moreover, such ideas receive considerable funding, for example from the
internet company Meta (concerning transhumanism) or from business magnate Elon
Musk (concerning longtermism). These approaches go in very different directions, but
they also share a common conviction. Proponents of both assume that humanity has a
very long and essentially successful future ahead of it, one that extends in time and/or
space far beyond what we can presently imagine. The ideas about what humanity and
human society will look like in the future admittedly diverge widely. In transhuman-
ism in particular, digitalization and computer science play a crucial role, as they are
the means by which humans can grow beyond themselves. In contrast, longtermists
are very skeptical about artificial intelligence in particular.
In this paper I will deal more intensely with the motives that characterize trans-
humanism. To transhumanist thinkers it seems to be feasible “… what in the past was
considered impossible, namely changing nature and our own human nature, becomes
now an option since through the technological progress and the technical knowledge
we have obtained we are now capable of redefining our own essence through the use
of technology …”3. However, longtermism can form an interesting contrasting image
that makes it easier to understand what moves transhumanists. Therefore, I will
come back to it later on.
In dealing with our topic, I am not interested in the question whether or not
the so-called singularity, by which artificial intelligence overrules the human mind,
will take place in the foreseeable future. Nor am I much interested in the question
whether evolution will soon switch from the biological sphere to the technological one,
making bodily existence more or less outdated. What interests me most is the question of
reasons that make rationally thinking people develop certain visions or imaginations
of the future and work hard to realize them, although one could also say with good
reason that these visions are quite dystopian. Dystopian are the visions of transhuman-
ists as well as those of longtermists, because they either assume that there will be no
more humans like us, or that human life will no longer take place on the planet Earth,
which probably can no longer be saved as a habitat. It may be that such prospects
seem undesirable only from a subjective or particular perspective. Whether this is so,
however, should be reflected upon and discussed. As Lynn White noted in the 1960s
with respect to ecological problems: “Unless we think about fundamentals, our specif-
ic measures may produce new backlashes more serious than those they are designed
to remedy.”4 Such fundamentals contain world-views, religious convictions and, of
course, anthropological concepts standing behind the particular behavior of individuals
and whole societies and behind strategies of research and development. Such fundamentals are rarely discussed, especially in the scientific and technical fields. The
efficiency of concrete solutions to problems seems to suffice as justification for certain
developments. However, this can lead to a loss of sight of the actual goal, which is
worth working on efficiently.
In view of what I have been able to find out about transhumanism so far, I assume
that transhumanists are humanists first, in the sense that they want what is good for present
and future society and the people living in it. Transhumanism is not driven by misan-
thropy; nevertheless, it aims at the negation of the anthropomorphic reality. In order
to understand this paradox better, the anthropological approach of René Girard, a
Franco-American socio-anthropologist and scholar of literature, can be helpful. One
does not have to follow this particular approach to reach analogous conclusions. There
are other authors detecting similar phenomena. For me, at any rate, it has opened up
insights that go far beyond the topic of digitalization. We may encounter the same
basic human problems in different fields that are supposed to be solved in ever-new
ways; frequently, however, through surrogate solutions that only exacerbate the actual
problem. Digitalization seems to be one of these solutions intended to address general
human problems that ultimately cannot be solved by technical means.
5 Best introductions to the whole theory are Kirwan (2004), Palaver (2013).
“The first man who, having enclosed a piece of ground, bethought himself of saying
This is mine, and found people simple enough to believe him, was the real founder
of civil society.”6 From that moment on, conflict becomes the permanent companion
of human communities, according to Rousseau. Thomas Hobbes formulated this even
more succinctly, attributing the conflict not directly to rivalry over property but to the
desire for it. “And therefore if any two men desire the same thing, which nevertheless
they cannot both enjoy, they become enemies; and in the way to their end, […] en-
deavour to destroy, or subdue one another.”7
This is what Girard was concerned with throughout his scientific life. He called
the phenomenon observed by the classics of social philosophy mimetic rivalry and
examined the social attempts to limit this rivalry to such an extent that it does not
lead to complete destruction. However, the means he described are suitable to reduce
violence, but not to overcome it completely. The best known of these tools is the
scapegoating mechanism. We do not need to go into detail here. It suffices to under-
stand that individuals or marginalized groups are persecuted and fought by a majority
in order to gain a sense of unity in a divided community. That phenomenon should
be sufficiently known, as it can be found in numerous groups and communities even
today. It is also a tried and tested means of politics to conceal conflicts within a society
by identifying and persecuting a supposedly guilty party. By the way, the same func-
tion as an internal scapegoat can also be fulfilled by an external enemy.
According to Girard, biblical revelation unveils this structure. This is a kind of
enlightenment challenging the functioning of the mechanism. Insofar as a scapegoat is
recognized as a scapegoat, the community must admit that its problems have another
cause. Therefore, in the light of biblical revelation, we have to detect or develop alter-
native solutions to rivalry and conflict in our communities. The Gospel tells us that the
only way out of violence is solidarity and brother- and sisterhood enabled by longing
for the love of God as the most desirable good. Admittedly, this has been insufficiently
realized even within Christian communities until today.
Modern societies therefore also practice substitute solutions. One such, which
seemed to work promisingly for centuries, is economic growth: if there is an ever-growing stock of material goods, rivalry does not have to become destructive. On the contrary
Here we can fall back on Plato’s understanding of the human soul. According to the
ancient philosopher, there are three parts of the soul: the animal-like part characterized
by appetite; the rational part, which represents reason; and the later on sometimes
forgotten passionate part he called thymos.8 This third part of the human
8 As it is given by Plato in the fourth book of Republic (4, 439 b-e). Patterson (1987) 338: “That which
Plato calls τὸ θυμοειδές or θυμός (standardly translated ‘spirit’) is best known for its role in Republic
IV as the seat of anger and the subject of courage defined as ‘preservation, through everything, of cor-
rect opinion about what is and is not to be feared’ (430b). From a slightly wider perspective the middle
books as a whole cast thymos in a further important role, as locus of pride and shame as well as anger,
soul, which corresponds to the protective part of the population of the polis, the auxiliaries or warriors, is translated as ‘spirit’, which seems too innocuous to me.
Sometimes, supposedly more appropriately, the thymotic part of the soul is translated as
‘irascibility’.9 Irascibility can be spurred by the violation of justice and dignity; thus it
deserves a certain degree of appreciation. Indignation is sometimes quite necessary to
remedy corrupted conditions. Thymos is the faculty by which we rise above our mere
needs, which on the one hand distinguishes us from most animal creatures, but on the
other hand, it also distinguishes us from the cold, mathematical reason of the machine.
Irascibility also contributes to our tendency to competitive desire and its rivalling and
destructive dynamics. That makes it the most complex part of the human soul. Just as
the state can be taken over by a military coup, thymos can overwhelm reason and ultimately endanger the person’s very survival in blind furor. This ambiguity of the human spirit is, as we have seen, by no means only a realization of modern times. Many
traditions of wisdom know such a deep-rooted suspicion, if not mistrust, of human nature,
which does not exist without reason. In an anthropology without transcendence, however,
dealing with it must be left to ethics, which for the most part is overstrained by it.
Representatives of the Scottish enlightenment like David Hume or Adam Smith are
much more down to earth with respect to this fact than continental-European thinkers
of enlightenment who are sometimes too optimistic regarding the taming capacity of
reason. However, the suspicion that human beings are morally highly unreliable does
not find a satisfying answer on the terrain of ethics.
This, according to Girard, leads to an increasing fascination with death in modern
times. He first detected it in literature, which is sometimes more sensitive to the
undercurrents of social developments than science is. In a very interesting, though
somewhat puzzling passage close to the end of Girard’s first book Deceit, Desire and
the Novel we read: “The hero is no longer alive but he is not yet dead. Moreover the
hero knows that the end of his search is death, but his knowledge does not turn him
from metaphysical desire. […] In a contradiction at once more subtle and more blatant
than those which have gone before, the hero decides that death is the meaning of life.
[…] That end is found in the mineral world, the world of a death which the absence
indignation, courage and cowardice. In book IX it emerges clearly as that which desires and delights in
victory, dominance, and good repute; it is ‘victory-loving and honor-loving’.”
9 For example in Smith (1861) Part VII, 2,1.
of all movement, of all quivering, has made complete and definitive. The horrible
fascination ends in the destiny of lead, the impenetrable immobility of granite. This
is the inevitable termination of the ever more effective negation of life and of spirit,
deviated transcendency.”10
The conclusion Girard comes to after a perceptive analysis of the novels of Cervantes, Stendhal, Flaubert, Proust and Dostoyevsky is that those who strive most passionately for perfection, who have the strongest longing within them but want to satisfy
it in an immanent world, end up longing to become stone or mineral, a well-ordered
inert crystal.
Wolfgang Palaver found the same tendency Girard described in the mentioned
novels in a text by the German poet of the early 19th century, Heinrich von Kleist.11
There is a short very famous narration of this author titled On the marionette theatre
(Über das Marionettentheater).12 A professional dancer in this text talks about his ad-
miration for the movements of dancing puppets. Asked how it is possible that a gifted
and celebrated human dancer is fascinated by the mechanical moves of a puppet on a
string, this fictional person mentions some advantages of being a marionette: “First of
all a negative one, my friend: it would never be guilty of affectation. For affectation is
seen, as you know, when the soul, or moving force, appears at some point other than
the centre of gravity of the movement. Because the operator controls with his wire or
thread only this centre, the attached limbs are just what they should be … lifeless, pure
pendulums, governed only by the law of gravity. This is an excellent quality. You’ll
look for it in vain in most of our dancers.”13
As in Girard, inanimate matter following the natural law of gravity is presented as
more desirable than anything a living human person with all his artistry can achieve.
The focus of the movements of the mechanical being is on what it should be, without
any affectation, which in humans is caused by watching for the reaction of observers.
10 Girard (1961) 287. Girard particularly refers to Dostoyevsky’s novel The Possessed in this passage.
11 Palaver (2011) 48-51.
12 See Kleist (1990), English translation according to libcom.org.
13 Kleist (1990) 559: „Zuvörderst ein negativer, mein vortrefflicher Freund, nämlich dieser, daß sie sich
niemals zierte. – Denn Ziererei erscheint, wie Sie wissen, wenn sich die Seele (vis motrix) in irgend
einem anderen Punkt befindet, als in dem Schwerpunkt der Bewegung. Da der Maschinist nun schlechthin, vermittelst des Drahtes oder Fadens, keinen anderen Punkt in seiner Gewalt hat, als diesen: so sind
alle übrigen Glieder, was sie sein sollen, tot, reine Pendel und folgen dem bloßen Gesetz der Schwere;
eine vortreffliche Eigenschaft, die man vergebens bei dem größten Teil unserer Tänzer sucht.“
The dancer in Kleist’s text adds later on that these puppets, even if governed
by gravity, seem to be weightless, saying: “Grace appears most purely in that human
form which either has no consciousness or an infinite consciousness. That is, in the
puppet or in the god.”14 One could transpose this idea to the present by saying there
is a human longing for becoming a kind of ideal crystalline reality represented by the
silicon chip ruled by algorithms. By the way, Kleist in 1810 used exactly the term
algorithm to describe the relationship between operator and puppet. More important
may be the statement that seeking perfection can mean either a movement toward a
god or a puppet, which means either to the direction of a superior spiritual being or
to the direction of a dull mechanical thing. Digitalisation thus can be interpreted as a
kind of flight, a flight forward to escape from ourselves.
Let me quote Heinrich von Kleist once more. He mentions that particular miscon-
ceptions or failures are unavoidable, since “we’ve eaten of the tree of knowledge. But
Paradise is locked and bolted, and the cherubim stands behind us. We have to go on
and make the journey round the world to see if it is perhaps open somewhere at the
back.”15 I think that is quite a proper depiction of the kind of technological progress
transhumanist ideas spur. What we try to find in a digitalised future eventually is but
a prehuman state, untouched by human weakness, destructive passion, and sin. This
equation, of course, is only correct if becoming human and falling into sin are equivalent,
a position we find frequently in modern thinking when it is argued that we had to
leave paradise to become rational free agents. This makes sense if living in the Garden
of Eden resembles rather an animal existence, whereby the fall of man is to be un-
derstood as an act of enlightenment and liberation. Immanuel Kant and Jean Jacques
Rousseau fully agree with regard to the first part of this argument.16 However, for Kant
the fall is the beginning of a human world rationally shaped in freedom, whereas for
Rousseau it is the beginning of the decadence he sees at work in all culture. Thus, the
way forward to technical perfection by which we try to gain the state of a homo deus
in fact is the journey round the world leading us back to paradise, either to exist dei-
14 Kleist (1990) 563: Sodass Grazie „… in demjenigen menschlichen Körperbau am reinsten erscheint,
der entweder gar keins, oder ein unendliches Bewusstsein hat, d.h. in dem Gliedermann, oder in dem
Gott.“
15 Kleist (1990) 559: „Doch das Paradies ist verriegelt und der Cherub hinter uns; wir müssen die Reise
um die Welt machen, und sehen, ob es vielleicht von hinten irgendwo wieder offen ist.“
16 See Kant (1998) and Rousseau (1999).
these victims. However, they too will have to live in a world of scarce resources that
must be sufficient for the long-term projects as well as for the preservation of as many
human lives as possible in the present. Doesn’t that mean spurring a kind of war of all
against all in the near future, causing a “solitary, poor, nasty, brutish, and short” life
for all who will experience it, as Thomas Hobbes depicted the natural state in his
Leviathan? Probably humankind will not be able to avoid this entirely,
but it is something different to abandon any attempt to shape the future in solidarity by
favouring a particular survival-of-the-fittest scenario, which may not be intended by
longtermist thinkers, but whose probability would increase enormously if their ideas
were realized.
One may think such considerations somewhat exaggerated and say that I am at-
taching too much importance to a marginal intellectual phenomenon. In any case, I am
convinced that ideas move the world and that sometimes concepts that seem outland-
ish can very quickly become leading ideas. Thus, the approaches of transhumanism
and of longtermism inspire me to ask the following question: If reality only offers
two alternatives, an a-human one and an in-human one, wouldn’t the a-human justly
be considered the better choice or at least the less evil one? If the answer were yes,
transhumanism in fact would be a better humanism and therefore a proper means to
deliver us from the evil one.
Wouldn’t that be consistent with Girard’s anthropology too, which I
used to analyse the curious longing for being mechanical or even crystalline? According to his approach, desire is inescapable for humans. Undoubtedly, Girard is part
of a western tradition that does not seek salvation in becoming free of emotions and
cravings, as Buddhism does. On the other hand, it is not that astonishing that transhumanist and Buddhist ideas sometimes intermingle, as both offer a detachment from the pitfalls of ambiguous emotions. One may even locate the core problem of the western attitude in its unwillingness to let go of the passions and of a concept of personality driven by desiring individualism, which in fact seems to be only a variant of an anthropocentrism that has today proved irrational and unjust.
Therefore, the question to be asked at the end is whether a thymotic human being must end up in a destructive relationship with others, with nature, and ultimately with oneself, and whether a desiring human being must end up in destructive rivalry.
The alternative, which ultimately leads us to a dead end where overcoming the human seems to be the best choice, remains without a third option only if thymos is understood exclusively as a force of defence against others. Plato himself puts us on this track by parallelizing thymos in the human soul with military force in the polis. The philosopher Josef Pieper, however, described thymos more generally as a power of resistance of the soul (“die eigentliche Widerstandskraft der Seele”).18 A closer
analysis of this quotation shows that it refers to a capability that makes us resistant to
everything that endangers human dignity, including the longing for solidification in
the computer chip. Pieper, of course, did not speak about digitalization at all; but after analysing the theology of the Church Fathers and Thomas Aquinas, he pointed out how misleading it is to understand what is actually human as purely spiritual or rational.19 His judgment thus covers every effort to detach human reality from corporeality.
Likewise, desire, even mimetic desire, does not have to be interpreted as the drive
of a person who thinks she can secure her own existence exclusively by acquiring the
possession, position, and ultimately the being of the other. This, of course, would necessarily lead to mutual displacement. However, our desire can also be driven by the experience that the encounter with others enriches us, and that the desired goal which another person conveys to me as a model consists precisely in mutual complementation and not in my taking their place. Such an understanding is also possible in the wake of the
mimetic approach, which I introduced here as an anthropological hermeneutics. Petra Steinmair-Pösel writes with reference to Nikolaus Wandinger: “Only as a consequence of having received, can one freely pass on what was given to him. Against this background, Wandinger characterizes ‘positive’ mimesis as receptive mimesis.”20
Passionate humanity does not have to end in conflict and violence if it is based on the
experience of gratuitous benevolence and undeserved gift; theologically we speak
here of the experience of grace, which precedes all one’s own being and doing.
Where this experience is lost or excluded, human resentment can ultimately turn against human existence itself, against one’s own bodily and emotional existence. Being human is then declared sour grapes, as the fox in the fable declares the fruits he cannot reach. Johannes Hoff, following the sociologist Hartmut Rosa, contrasts this attitude of dualism, which always has to separate itself from something else and thus ends in the will to nothingness, with an attitude of resonance.21 Resonance, too, is receptive
and responsive; it presupposes the experience of being touched by something or someone else, which is not possible in a world of identities captured on data carriers and unlikely to occur in a universe of calculations concerning the benefits of tens of thousands of generations of possible existences. If we want to deliver ourselves from
the risks of our own passionate bodily existence as well as from the risk of being hurt
by other passionate bodily existences, we have to exclude any possibility of being
touched. The human factor must then be consciously eliminated. Consequently our
life will probably resemble the situation that C. S. Lewis describes when he speaks
of a person who keeps her heart in a safe to protect it. “But in that casket – safe, dark,
motionless, airless – it will change. It will not be broken; it will become unbreakable,
impenetrable, irredeemable. The alternative to tragedy, or at least to the risk of trage-
dy, is damnation. The only place outside Heaven where you can be perfectly safe from
all the dangers and perturbations of love is Hell.”22
These few concluding thoughts, which could undoubtedly be deepened anthropologically and theologically, have hopefully shown that there is a third alternative in addition to a-humanity and in-humanity. There is an alternative opened by another kind of transhumanism, which we call transcendence: not a goal to strive for but a given gift. It would appear if we were able to understand ourselves as part of a comprehensive reality that carries our human existence despite all weaknesses and failures.
References
Girard, René (1961) Deceit, Desire, and the Novel. Self and Other in Literary Structure.
Baltimore-London: Johns Hopkins University Press.
Greaves, Hilary / MacAskill, William (2021) The Case for Strong Longtermism. GPI Working Paper No. 5-2021. Oxford: Global Priorities Institute. Online: https://globalprioritiesinstitute.org/wp-content/uploads/The-Case-for-Strong-Longtermism-GPI-Working-Paper-June-2021-2-2.pdf [27.7.2023]
Hobbes, Thomas (1991) Leviathan. Cambridge: Cambridge University Press.
Hoff, Johannes (2021) Die Verteidigung des Heiligen. Anthropologie der digitalen Transformation. Freiburg i.Br.: Herder.
Kant, Immanuel (1998) Mutmaßlicher Anfang der Menschheitsgeschichte, in: Werke in 6
Bänden. Herausgegeben von Wilhelm Weischedel. Band VI, Schriften zur Anthropologie, Geschichtsphilosophie, Politik und Pädagogik. Wiesbaden: WBG, 85-104.
Karakasis, Georgios (2022) Overcoming (Our) Nature: Transhumanism and the Redefinition of Human Being’s Essence, in: Transhumanism: Entering an Era of Bodyhacking and Radical Human Modification. Edited by Emma Tumilty and Michele Battle-Fisher. Cham: Springer Nature.
Kirwan, Michael (2004) Discovering Girard. London: Darton, Longman and Todd.
Krüger, Oliver (2021) Virtual Immortality – God, Evolution, and the Singularity in Post-
and Transhumanism. Bielefeld: transcript.
Lewis, C.S. (1960) The Four Loves. London: Geoffrey Bles.
MacAskill, William (2022) What We Owe the Future. New York: Basic Books.
Palaver, Wolfgang (2011) Gott oder mechanischer Gliedermann? Die religiöse Problematik in Kleists Essay ‚Über das Marionettentheater‘, in: Kleist zur Gewalt. Transdisziplinäre Perspektiven. Herausgegeben von Gianluca Crepaldi, Andreas Kriwak and Thomas Pröll (Edition Weltordnung – Religion – Gewalt 8). Innsbruck: innsbruck university press, 45-62.
Palaver, Wolfgang (2013) René Girard’s Mimetic Theory. East Lansing: Michigan State
University Press.
Patterson, Richard (1987) Plato on Philosophic Character, in: Journal of the History of
Philosophy 25/3, 325-350.
Pieper, Josef (1964) Das Viergespann. München: Kösel.
Contributions of Ignatian Spirituality towards a
Healthy Use of the Internet and Digital Media
Stefan Hofmann SJ (Innsbruck)
The digital transformation of our modern societies has brought many benefits: a closer
connection to people far away, faster exchange of information, easy global networking, etc. However, the digital transformation also poses major challenges for human
life. The effects of the digital age even include the emergence of new pathologies,
such as gambling addictions, media addictions, consumer addictions, etc.1 Most people know how to use the Internet well. However, experiences of addiction and unhealthy user habits are very common. It therefore seems beneficial to investigate the
negative side-effects and to ask whether and to what extent Ignatian spirituality can
help people respond to these challenges.
The attractiveness of the internet and digital media arises from several
well-known factors which are nevertheless impressive in their combination:2
For many users, the internet and social media are available almost all the time and
oftentimes free of charge. Digital access to information and the possibilities for contact and interaction seem inexhaustible. They give users the impression that they can
always go one step further in their research and in their interactions. Social media
convey the feeling of being fully present and connected at a safe distance. They allow
one to project an idealized appearance of oneself while simultaneously remaining
in physical isolation and, where desired, even anonymity. This reinforces both the
impression of security and the freedom to try things out. Finally, many commercial
websites are tailored to increase stimulation and satisfy reward mechanisms of those
who use them.
The attractiveness of the internet I have described is accompanied by a considerable risk of addiction, which individuals certainly experience very differently. The spectrum of attachments ranges from occasionally losing oneself in the pool of news
to serious psychological addictions. Almost everyone is in some way or other affected
by these risks. Consequently, noteworthy questions arise: How can we deal well with
this ubiquitous challenge of the internet and its digital media? What are symptoms of
unhealthy user habits? How can we empower people to deal well with the temptations
they face?
In my answer to these questions, I will proceed in three steps: In section one, I will
outline the addictive potential of the internet and digital media. In section two, I will
introduce a model of addiction causes which seems helpful for the explanation of problematic human behaviors. In section three, I try to show how Christian and especially
Ignatian spirituality can contribute to a better life in the face of the challenges posed
by digital transformation. The last section will, finally, present some considerations
for a well-balanced use of the Internet and digital media.
The Internet and the various channels of information it offers are a great opportunity
for the acquisition of knowledge, for communication and for organization. This paper’s
focus on the problematic aspects of their use is not intended to deny these benefits. The
opportunities of the digital media offer new chances for human life and even spirituality; recent publications on Christian spirituality rightly highlight these opportunities.3
Alongside the positive aspects mentioned, however, there are also signs of overstrain and problematic behavior patterns among many users. For many, the almost
complete digitization of everyday life is both a blessing and a curse. There is also a
spectrum of life-diminishing phenomena which gave rise to psychological research
on various forms of internet addiction. The addictions associated with the Internet
differ in many respects from classical substance-based addictions. However, they are
receiving increasing attention both in science and in public discourse.
In many societies, the term “internet addiction” is still considered controversial:
As it is used in German-speaking countries, the term covers many quite different
phenomena. It is, therefore, difficult to give a clear clinical definition of the term.
3 Grethlein (2020).
Nevertheless, it seems worthwhile to introduce the criteria typically listed for internet addictions in psychological publications. The study of their symptoms can aid critical reflection on more everyday phenomena in our contemporary use of the internet and the media.
Already at the beginning of the 2000s, an interdisciplinary group of German researchers adapted and revised the criteria usually cited for addiction disorders to fit the symptoms of an unhealthy use of the internet. According to Klaus Wölfling, a questionable use of the internet can be identified on the basis of the following six aspects of self-experience:4
1) craving: a desire for online activities (e.g. computer games) that is difficult to
overcome and is accompanied mentally and emotionally by a feeling of being
taken in.
2) loss of control: a reduction of self-control with regard to the start, duration, and
termination of online activities (possibly accompanied by an underestimation
of the time spent online).
3) withdrawal symptoms: increased irritability, nervousness or restlessness when
online use is prevented by other people or circumstances.
4) intensification: increase in frequency, duration or intensity of online activities
compared to previous lifestyle.
5) ignoring other areas of life: neglecting relationships or other interests that the
person used to value.
6) acceptance of significantly negative consequences: continuation of online activities despite overtiredness, insomnia, decline in professional performance,
or conflicts with family and friends.
Presumably, while reading these descriptions, many people will feel caught off guard and recognize one or two in their own behavior. Who has not occasionally experienced a loss of control in his or her use of digital media? Who has not continued to scroll and read in spite of being overly tired or fatigued? It is not a sign of internet addiction if a person fulfills just one or two of the criteria mentioned in the catalogue. According to the psychologists mentioned, a pathological use of the Internet and media only begins when a person surfs or plays for 10 hours or more a day. Very few people experience such a lack of freedom.5
Nevertheless, the criteria listed above may be revealing: they can serve as a mirror for
healthy people to recognize possible unhealthy behavior patterns. Those who know
their weaknesses and actively address them often lead better lives.
The critical discussion of societal developments always needs a sound empirical
foundation. This is why I want to present some results of a new empirical study on
the negative psychological side-effects of social media carried out in Germany (2019-
2022). Presumably, similar developments may be observed in many other countries
worldwide where the internet is generally available. The study mentioned was published in March 2023.6 It was initiated by one of the biggest German health insurance companies, called DAK (Deutsche Angestellten-Krankenkasse, today: DAK-Gesundheit). This survey focused on children and young people and their use of social
media, gaming and streaming platforms. In their evaluation of the findings, the researchers claim that the COVID-19 pandemic has had a lasting impact on young people’s use of digital media and the internet. They distinguish between “hazardous” and “pathological” usages and refer to the latest revision of the International Classification
of Diseases (ICD-11) and its new definition of “gaming disorder”. According to their adaptation of the ICD-11, the following indications must be present for the diagnosis of an internet addiction:7 1. impaired control, 2. increasing priority for gaming over
other daily life activities and interests, 3. continuation or escalation despite negative
consequences. Internet addictions must be considered non-substance addictions. However, persons who suffer from an internet addiction experience phenomena like those known from substance-based addictions. According to the ICD-11, these lead
to “significant impairment in personal, family, social, educational, occupational, or
other important areas of functioning”8 which is supposed to be evident for a period
of at least 12 months. The researchers applied the criteria mentioned in the ICD-11 to
phenomena of social media usage and usage of streaming platforms by children and
youth (youth meaning 10- to 17-year-old children and adolescents). In what follows,
I will present some of their findings.
5 According to recent studies, the number of people affected by Internet addiction (Müller’s term) in
Germany is around 1-2% of the total population. Müller (2017) 11.
6 DAK-Gesundheit (2023).
7 Cf. ICD-11 (2023).
8 Ibid.
Regarding the prevalence of gaming, the survey revealed that around 80% of the children and youth showed inconspicuous use of gaming tools (2019-2022). Yet, there is a remarkable increase in the number of youths with hazardous and pathological use up to 2022: In that year, 11.8% of youths used gaming hazardously; 6.3% used it in pathological ways. This means one in 16 youths showed pathological behavior patterns with serious effects on their health, which amounts to an increase of 133% over the years mentioned. Youths with little social support were especially affected by these developments.9 The increase was of course exacerbated by the pandemic, even though from April 2022 there were no longer any restrictions. Regarding social media,
research results showed an increase of 103%, with the prevalence of hazardous usage at 16% of the children and youths and pathological usage patterns at 6.7% in 2022. Most youths use not only social media but also streaming platforms and possibly gaming applications as well. For this reason, the researchers decided to look at the intersections of the behavior patterns mentioned. Here, research showed that, according to ICD-11 standards, 1.1% of the youths exhibited problematic behavior in all three areas mentioned (social media, gaming, and streaming), while 15.4% of youths exhibited problematic behavior in at least two of the areas. This means that one in 6.5 children or youths showed hazardous behavior in at least two areas, e.g. in gaming and streaming or in streaming and social media usage.
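As a quick arithmetic check (my own illustration, not part of the study), the “one in N” ratios quoted above do match the reported percentages:

```python
# Back-of-the-envelope check of the ratios quoted from the DAK study.
# "6.3% ... one in 16":  1/16  = 6.25%, i.e. roughly the reported 6.3%.
# "15.4% ... one in 6.5": 1/6.5 = 15.38...%, i.e. the reported 15.4%.

one_in_16 = 1 / 16 * 100    # pathological gaming prevalence
one_in_6_5 = 1 / 6.5 * 100  # problematic use in at least two areas

print(f"1 in 16  = {one_in_16:.2f}%")   # 6.25%
print(f"1 in 6.5 = {one_in_6_5:.1f}%")  # 15.4%
```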
This is, of course, a momentary snapshot that should not be used for alarmism. Young people are often flexible, and one-sided behaviors often settle down again some time later. However, it does seem appropriate to ask what factors can lead to the
aforementioned developments and to what extent Christian spirituality can possibly
provide a remedy.
When I first studied media addictions, I thought that images and imaginations were at the very heart of these addictions. However, it is clear that we need to discuss the mentioned pathological side-effects of our digital age in a broader context. For this, the triad model of addiction causes of Kielholz and Ladewig seems helpful.
The original version of the triad model speaks about person, environment, and drug
or addictive substance. However, social scientists have already adapted the model in
many regards. Some of them started to analyze the negative influences of some kind
of medium instead of focusing on the drugs of substance-based addictions. As a result
of this development, Kielholz’ and Ladewig’s model can guide the analysis of non-substance-based addictions as well.
Theology and spirituality have always tried to answer questions of how to live well.
They are resources for a good life today. How can they inspire answers to the challenges mentioned above?
17 Tröger (2017) 9.
In what follows, I will present some important aspects of
Christian spirituality in general; the focus will, however, be on some core themes of
Ignatian spirituality. Since the triad model from Kielholz and Ladewig has proved helpful for the critical analysis of factors that might lead to an unhealthy use of the Internet and digital media, it seems appropriate to present the restorative aspects of Christian and Ignatian spirituality in the same arrangement.
18 Pieper (1965).
19 Geger (2012).
this background, spirituality can prove very precious and beneficial in mental and also
biological terms.20 Of course, if Christian faith is reduced to a set of moral rules, it can weaken the person. However, focusing on moral rules would undoubtedly represent a deficient form of Christian spirituality. Christian faith fundamentally aims to strengthen persons from their early childhood. It strengthens family life and communities and contributes positively to identity formation. Ignatian spirituality suggests many methods of personal and often imaginative meditation that can indirectly make a great contribution to this project of identity development. One example is the so-called colloquy from the Spiritual Exercises, in which the faithful are invited to seek an intimate conversation with Christ “as one friend speaks to another”21. This can help one to see oneself as a beloved friend of Christ, who has come to call each person – and more personally “me” – to follow his path.
20 Beck (2023).
21 Ignatius of Loyola (1978), n. 53.
22 VanderWeele (2023).
large. However, it not only contributes to a better handling of stress and similar individual challenges. Since Christian and Ignatian spirituality emphasize reconciliation, they will also help to improve the quality of relationships. An environment formed by Christian faith will certainly reduce rather than promote tendencies towards internet addictions.
Internet addictions are extreme behavior patterns. Even though most people experience some of the phenomena mentioned in section 1, they usually do not suffer from addictions at the level of a psychological disorder. Average users are more likely to experience attachments and various kinds of inordinate behaviors, such as excessive checking of messenger services or partly uncontrolled playing with certain applications as soon as it is possible. Behavior patterns like these do not necessarily lead to serious problems in managing one’s life. But they can have significantly detrimental consequences for relationships and the quality of life. To counteract these imbalances, it is necessary to seek the right measure and the corresponding inner attitudes. This is why the inner disposition of moral virtue, and more specifically the virtue of temperance or moderation, is needed: Temperance is the virtue that helps to preserve the “inner order of man”23, which allows one to keep a “serenity of the spirit” (Josef Pieper).24 Can spirituality help to attain the necessary virtue of moderation and temperance in our use of the Internet and media?
It is important to emphasize that Christian and Ignatian spirituality are more than
psychological practices designed for the acquisition of virtues and a better quality of
life. Christian faith aims at living for other people and ultimately at devotion and love
of God and neighbor. The individual’s well-being is, therefore, not the main focus
of Christian reflection on spirituality. However, without “selfless self-preservation”25
love would not be possible. This is why Christian spirituality certainly has to shape
and moderate our daily use of the Internet and social media as well. Ignatian spirituality can be seen as a form of Christian spirituality that is very well suited to this task of character formation in relation to digital challenges. Ignatian spirituality suggests exercises that help develop a better awareness of one’s emotions and inner experiences. It guides practitioners to reflect on these “spirits” or movements of the heart;26 and it proposes various spiritual practices that might help to conquer oneself and to regulate one’s life.27 One of these exercises is the daily Examen prayer, which might help to honestly reflect on the time spent during the day. Within this time of prayer and meditation one could easily include some attention to different aspects of one’s Internet usage. Jesuits like Daniel Villanueva have proposed spiritual rules that might be helpful for developing better usage habits and concrete practices of temperance. Villanueva speaks about “digital silence” and about the need for off-line spaces.28 Christian spirituality and our use of digital media should not be considered disconnected practices. Of course, this encouragement to self-reflection on one’s own experiences with the Internet and social media should always be situated within the broader horizon of Christian love of God and neighbor. This wider horizon is significant, since Christian spirituality tries to enable us to live in accordance with God’s Holy Spirit, who is always inspiring us to love ourselves and the other. Indirectly, this concern for others and for good and loving relationships will also help to set us free from various attachments and addictions. Christian engagement for others will help us to use the opportunities of the Internet in positive ways – for ourselves and for others.
References
Andreassen, Cecilie Schou (2015) Online Social Network Site Addiction. A Comprehensive Review, in: Curr Addict Rep 2, 175–184.
Beck, Matthias (2023) Health, Healing, and Christian Spirituality. Basic Aspects of Christian Anthropology, in: Healing Mission: The Catholic Church in the Era of Global
Public Health. Edited by Branka Gabrić and Stefan Hofmann, 44–57. Weltkirche und
Mission 19. Regensburg: Pustet.
VanderWeele, Tyler J. (2023) Religious Community and Human Flourishing, in: Healing
Mission. The Catholic Church in the Era of Global Public Health. Edited by Branka
Gabrić and Stefan Hofmann, 58-68. Weltkirche und Mission 19. Regensburg: Pustet.
Villanueva, Daniel (2020), Das Internet gut nutzen. Eine Aktualisierung der ignatianischen
„Essensregeln“, in: Geist und Leben 93, 251–255. An English version is available at
https://www.educatemagis.org/documents/ignatian-rules-for-internet-usage/ (access
30. Nov. 2023).
Wang, Xinghua (2020) Mobile SNS Addiction as a Learned Behavior. A Perspective from
Learning Theory, in: Media Psychology 23, 461–92.
World Health Organization (2022) ICD-11. International Classification of Diseases for Mortality and Morbidity Statistics. 11th Revision, online: https://icd.who.int/browse11/l-m/en (access 29. Nov. 2023).
Wölfling, Klaus / Jo, Christina / Bengesser, Isabel / Beutel, Manfred E. / Müller, Kai W.
(2013) Computerspiel- und Internetsucht. Ein kognitiv-behaviorales Behandlungsmanual. Stuttgart: Kohlhammer Verlag.
Wölfling, Klaus / Müller, Kai W. (2017) Pathologischer Mediengebrauch und Internetsucht. Stuttgart: Verlag W. Kohlhammer.
Zimmerling, Peter (Ed.) Evangelische Spiritualität. Vol. 3. Praxis. Göttingen: Vandenhoeck & Ruprecht, 895-913.
Human Being: In the Image and Likeness of
God or Becoming Digitalized?
Thomas Karimundackal SJ (Pune)
Introduction
Christian theology has always emphasized human beings’ unique place in creation and
salvation history. God creates human beings in his image and likeness (Gen 1:26-27;
5:1; 9:6; Sir 17:1-7; Wis 2:21-24; 1 Cor 11:7; Jas 3:9; cf. Rom 8:29; 2 Cor 3:18; Eph
4:24; Col 3:10). According to the biblical understanding, creation of human beings
in the image and likeness of God expresses a foundational relationship between God
and man. The biblical concept of the “image of God” (imago Dei) helps us understand our being in interactive relation with God.1 To be created in the image of God is to receive one’s being from God and to have one’s existence, meaning, dignity, and worth
in God. Thus, the imago Dei doctrine concerns the relationality of human beings to
illuminate their uniqueness in the created world. Moreover, the imago Dei in human
beings is understood as an intrinsic relation between God’s nature and human nature.
In the contemporary world, digital humanization, backed by artificial intelligence, seems to disfigure and overshadow the beauty and dignity of human beings created in the image and likeness of God.
The creation of humans in the image and likeness of God has deep biblical roots in the Old Testament (Gen 1:26-27; 5:1; 9:6; Sir 17:1-7; Wis 2:21-24).2 Gen 1:26-27 sets human beings’ creation within the dynamics of God’s relationship with creation, particularly with human beings. Gen 1:26-27 makes clear that the human being made in the image of God, male and female, is central to God’s creation of humanity: “So God created man in His own image, in the image of God He created him; male and female He created them” (Gen 1:27). In Gen 1:26-27 God creates Adam (humankind) in his own image, a singular Adam (humankind) that is both male and female.3
In Gen 1:26-27 the term ṣelem (translated as ‘image’; LXX: eikon) is used three times, while demût (translated as ‘likeness’; LXX: homoiosis) occurs only once. What is significant is the use of ṣelem to describe human beings as the image of God (Elohim), which differentiates between the creation of humankind and the creation of the
rest of the world, thereby emphasizing a special relationship between humanity and
their Creator. The nature of the ‘image’ (ṣelem) and ‘likeness’ (demût) in verse 26 is
closely related to the translation of the comparative particles ‘be’ (‘in our image’) and
‘ki’ (‘after/according to our likeness’). This comparison, suggested by the use of the particles ‘be’ and, to some extent, ‘ki’, does not imply an equivalence between God and humankind but only a corresponding relationship between humans and the divine.
Gen 5:1b-2 recalls Gen 1:26-28 by echoing the motifs of the human being created in the divine likeness and the blessing of procreation: “When God created humankind,
he made them in the likeness of God. He created them male and female and blessed
them. And he named them “humankind” (Adam) when they were created” (Gen 5:1b-2). We can find here four elements concerning the creation of human beings: God
created humankind in his likeness, God made humankind “male and female”, God
2 The three passages in which the book of Genesis introduces the concept that human beings were created in the ‘image’ (ṣelem) and ‘likeness’ (demût) of God have been thoroughly explored by biblical scholars over the centuries. For bibliographical information, see esp. Stamm (1956) 84-89; Schmidt (1964) 127-49; Westermann (1966) 203-14.
3 It is natural to ask whether God created one being or two. Rashi, the medieval Jewish commentator,
argues that Adam is to be understood as a single bi-gendered being with two sides, a male side and a female side; see Rashi (1934) Chumash 7. In Gen 2, God separates the single being into two, thus
simultaneously creating a male Adam and a female later to be called Eve. The Hebrew word translated
as ‘rib’ (ṣela) could be read as referring to a whole side of the hermaphrodite being. See Rashi (1934)
Chumash 12.
blessed them, and God named them “man” (Adam). This last element, naming humankind “Adam”, is novel to Gen 5:1b-2 and is not found in Gen 1:26-28. The term demût in 5:1 relates Adam and all his descendants to God. Here the preposition be is used with demût and not with ṣelem as in Gen 1:26-27.4 According to Gen 5:3, Seth, as the son of Adam, is created in his father’s likeness and image: “And Adam lived a hundred and thirty years, and begot a son in his own likeness, after his image; and called his name Seth.” Both terms ṣelem and demût are used here to describe Seth as the son of Adam being created in his father’s likeness and image.5 In other words, what constituted the image of God in Adam and Eve is transmitted through procreation to all future generations.6
Gen 9:6 explains the uniqueness and centrality of human beings, who are created
in the image of God. In Gen 9:6, the phrase “image of God” appears in the general
context of the blessing of human beings (Gen 9:1-3), and more specifically in the
context of prohibitions concerning the shedding of blood (Gen 9:4-6): “Whoever
sheds the blood of man, by man shall his blood be shed, for God made man in his
own image” (Gen 9:6). Having blessed Noah and his sons (9:1a), God assures them
that animals will be in dread of humans, and they will be available for food, like
vegetation; consequently, their blood may be shed, but not be consumed (9:2-4). By
contrast, human blood is very solemnly protected, for every human being is created in the image of
God (9:5-6). Killing is the supreme crime because the dignity and sanctity of human
life derive from the fact that every human being is made in the image of God.7
Auld beautifully shows how, in the context of injunctions concerning the shedding of
blood, the biblical author states the centrality of human beings in six balancing and
assonant words in 9:6a:8
špk (who-sheds); dm (the-blood); h’dm (of-the-human); b’dm (by-the-human);
dmw (his-blood); yšpk (shall-be-shed).
Having stated the principle in Gen 9:6a, Gen 9:6b explains the elevated status
of the human being: “for God made man in his own image.” Now the question is
4 Thus it becomes more likely that ṣelem and demût can be considered synonymous terms that signify
function rather than similitude; see Merrill (2003) 444.
5 See the reverse word order in comparison with Genesis 1:26, which might be due to a ‘stylistic trait of the
author’, Van Leeuwen (1997) 645.
6 Sarna (1989) 42.
7 Ibid., 62.
8 Auld (2005) 260.
why does Gen 9:6b use ‘image’ rather than ‘likeness’, as in 5:1b? Auld’s explanation is
persuasive:
Two accounts suggest themselves. One is that the assonant d + m in ‘blood’ and
‘human’ is too suggestive of a ‘likeness’ (dmwt) shared by all flesh, while here in
Genesis 9 it is the distinctiveness of the human which is in point. ‘Image’ suggests
a closer comparison than (mere) likeness. Underscoring the wrongness of violent
death may require such heightening (over against 5:1–3) of what is claimed for the
human in relation to the divine. A human (for the purpose of this lesson at least) is
no less ‘godlike’ than a son is ‘fatherlike.’9
Ben Sira’s interpretation of Gen 1:26-27 in Sir 17:1-7 augments the understanding
of humankind created in the image of God. God endued them with strength by them-
selves, and made them according to his image, and put the fear of man upon all flesh,
and gave him dominion over beasts and fowls (Sir 17:3-4), and he has established an
everlasting covenant with them (Sir 17:12). The following verses describe the qual-
ities bestowed upon humankind, such as understanding, counsel, knowledge, glory,
etc. (cf. Sir 17:5-14).
Wis 2:23 says God created humankind “incorruptible and in the image of his own
eternity.” According to the Wisdom traditions, the notion of being created in the im-
age of God is closely connected with Wisdom. For example, in Wis 7:25 Wisdom
becomes the mirror of divine activity and even an image of divine goodness, i.e.,
Sophia is the image of God!10
The above Old Testament texts show that all humankind is created in the image
and likeness of God and has dignity! ‘Image of God’ is about “their function as God’s
deputies and their inherent nature.” While the functional dimension of the divine im-
age is emphasized in Gen 1:26-27, a physical likeness between God and humans is
reflected in Gen 5:1b-3 and 9:6.11 In Sir 17:1-7 and Wis 2:23, humankind created in
9 Ibid. Auld considers Gen 5:1-3 and 9:1-7 to be the source texts for 1:26-31 that were formulated to
function as a new prologue to the book of Genesis that places humans further from animals and closer
to the divine, Ibid., 259.
10 Wildberger (1997) 1084.
11 Schellenberg (2009) 111-112.
the image of God is analogous to divine qualities. In conclusion, the image of God in
every man and woman is a source of dignity and worth to all people.
To the biblical testimony of humans’ peculiar status as the image of God in the Old
Testament (Gen 1:26-27; 9:6), the New Testament adds two parallels in 1 Cor 11:7
and Jas 3:9. According to 1 Cor 11:7, the reason a man should not cover his head is
that he is the image and glory of God. In Gen 1:26-27 God made Adam [i.e., human-
kind] in his own image. Gen 1:26-27 does not distinguish between the sexes, but Paul
in 1 Cor 11:7 understands it, particularly of the male.12 As Gordon explains, “Paul
probably means that the existence of the one brings honour and praise to the other.
By creating man in his own image, God sets his glory in man. Man, therefore, exists
to God’s praise and honour, and is to live in a relationship to God so as to be his
‘glory.’”13 In Jas 3:9, James looks back to the humans’ creation by God, described in Gen
1:26-27, connecting the use of our words with the creation itself: “with the tongue we
praise our Lord and Father, and with it we curse human beings, who have been made
in God’s likeness.” His dilemma is this: we use our words/tongues to bless God and
then to curse human beings, who have been made in God’s likeness (Jas 3:9), and this
doesn’t make sense (cf. Jas 3:10-12).
Moreover, according to Paul, to be created in the image of God is for man to have
his existence and meaning in ‘Jesus Christ, the Primal Image’ (Rom 8:29; 2 Cor 3:18,
4:4; Eph 4:24; Col 3:10). For Paul God is working to “conform us to the image of His
Son, so that He would be the firstborn among many brethren” (Rom 8:29). In 2 Cor
3:18, drawing an analogy to what happened to Moses in Ex 34:33-35, Paul says that
Christians can also approach God with an unveiled face. By doing this, they are being
transformed into the image and the glory of Christ. With the veil gone,
all those in Christ have unveiled faces and can see Christ, who is the glory of God,
because God transforms His children into the image of Christ. In 2 Cor 4:4 and Col
1:15 Paul testifies to Jesus Christ as the ‘image of God’, full and perfect, in whom there
is no division of form and content. In Eph 4:24 Paul urges all those who are baptized
12 1 Cor 11:7 is a difficult verse to understand in our context, and there are many ways it is interpreted. As
Marg Mowczko suggests “we need to be careful, however, that we don’t lose sight of the overall con-
text of 1 Corinthians 11:7 which is the appropriate appearance of men’s and women’s heads, in regards
to either hairstyles or head-coverings, as they pray and prophecy in Corinthian assemblies”, Mowczko
(2018) n.p.
13 Gordon (1987) 516.
in Christ to “put on the new self”, a “new self” created “after God” by his power,
according to his mind and will, and after his image, and in his likeness, which greatly
consists “in righteousness and true holiness”. In Col 3:10, Paul notes that this “new
self” is constantly renewed in knowledge after “the image of its creator”. In other
words, the new self created after God’s likeness is the life that grows to become more
like Christ.
In short, all the instances of God creating humans (male and female) in the Bible
point to God’s unique role in the creation of human beings and their unique role in
creation. While the OT references unravel the unique God-human relationship, the
NT references disclose humans’ inherent relationship with Christ, who is the primal
image of God.
The terms ‘image’ and ‘likeness’ render the Hebrew terms ṣelem and demût.
Ṣelem is a rather concrete term generally used in the OT to refer to a model or idol of
something and always has to do with a similarity in physical appearance. Of the 17
occurrences of ṣelem in the Hebrew Bible, 10 refer to various types of physical image,
e.g., models of tumours (1 Sam 6:5), pictures of men (Ezek 16:17), or idols (Num
33:52; 2 Kgs 11:18//2 Chr 23:17; Ezek 7:20; 16:17; Amos 5:26), and two passages in
the Psalms liken man’s existence to an image or shadow (Ps 39:7; 73:20). The other five
occurrences are in Gen 1:26-27; 5:3; 9:6.
The term ṣelem derives from the root ṣlm, attested not only in Hebrew but also in
Jewish Aramaic with the meaning “image”, and in Arabic ṣalma, “to chop off, hew, cut,
carve”14. The LXX generally translates ṣelem as eikon, though as eidolon in Num
33:52, homoioma in 1 Sam 6:5 and as typos in Am 5:26.15 Num 33:52 demands the
destruction of all ṣalme massekot “cast images” referring probably to idols. 2 Κgs
11:18//2 Chr 23:17 recounts the destruction of Ba’al’s temple in Jerusalem along with
“his images”.16 While Ezek 7:20 accuses the Judeans of having made “abominable
images”, Ezek 16:17 charges the Jerusalemites with having made “male images”, with
which they “played the whore”. Ezek 23:14 speaks of “images of the Chaldeans”
etched on a wall with red colouring.17 The expression “your images” in Am 5:26
refers probably to images of the Babylonian astral deities.18 In Dn 2:31 (2x), 32, 34, 35, it
refers to the colossal statue in Nebuchadnezzar’s vision, representing the world order
as an empire. It also occurs in 3:1, 2, 3 (2x), 5, 7, 10, 12, 14, 15, 18 in reference to the divine
image erected by Nebuchadnezzar. In 3:19, it refers to “the image of his face”19. In
Gen 1:26-27; 5:3; 9:6 ṣelem describes human beings as God’s image, the most theo-
logically significant assertion in theological anthropology.
Demût is a more abstract term with a broader range of usage, but it, too, is usually
used in connection with visual similarities.20 In its twenty-five occurrences, “likeness”
(demût) consistently means “similar to, but not the same as”. What the similarity
consists of depends on context (e.g., 2 Kgs 16:10; Isa 40:18; Ezek 1:5; Dan 10:16).
The noun demût, “likeness”, derived from the verb damah I21, occurs 25 times in
the OT (Gen 1:26; 5:1, 3; 2 Kgs 16:10; Isa 40:18; Ezek 1:5 (2x), 10, 16, 22, 26 (3x), 28; 8:2;
10:1, 10, 21, 22; 23:15; Ps 58:5(4); Dnl 10:16; 2 Chr 4:3). The LXX usually renders demût
by homoioma, “likeness, form, appearance” (14 times), but we also find homoiosis,
“likeness, resemblance” (5 times), eikon, “image, likeness” (once, Gen 5:1), idea, “ap-
pearance, aspect, form” (once, Gen 5:3), and homoios, “like” (once, Isa 13:4), while
the Vulgate predominantly translates it by similitudo, “likeness” (19 times).22
The term demût occurs most frequently in the book of Ezekiel, especially in the
grand “visions” (Ezk 1:5,10-28 and 10:1-22). In 1:5, where demût is used twice, the
description of the glory of Yahweh’s throne speaks of the “form (RSV likeness) of four
living creatures” that carry the throne, and it is said of their appearance that they “were
formed like men” (RSV, “they had the form of men”).23 1:26 (3 instances) speaks of
16 Ibid.
17 Ibid.
18 Andersen / Freedman (2008) 59.
19 Stendebach (2012) 391.
20 Miller (1972) 289-304.
21 The verb damah I appears 13 times in the qal, where it is intransitive and should be rendered “to be
like, look like” (Isa 1:9; 46:5; Ezk 31:2,8 [twice],18; Ps 89:7[6]; 102:7[6]; 144:4; Cant 2:9,17; 7:8[7];
8:14).
22 Preuss (2012) 257.
23 Ibid.
“something like (RSV, the likeness of) a throne”, and “what appeared to be like a
man” (RSV, “a likeness as it were of a human form”). In 1:28, the priestly-declaratory
(cf. Gen 1:26) summation is: “such was the appearance of the form (RSV likeness) of
the glory (kabhodh) of Yahweh.” In 1:10 (in 1:5-12) and 1:16, the appearance of the
living creatures and the wheels of the throne chariot is described in more detail (all
the wheels had “the same form” [RSV likeness]).24 Ezek 8:2 indirectly describes the
“form” of the ’ish, “man,” that brought Ezekiel to Jerusalem. Ezek 10 describes
the glory of Yahweh on his throne, connecting the glory of Yahweh in Ezek 1-3
with frequent use of demût (cf. 1:26 with 10:1; 1:16 with 10:10; 1:10 with 10:22;
and 1:8a with 10:21). Ezek 10:8 describes cherubim whose wings were “something
like human hands” (RSV, “the form of a human hand”; cf. Dn 10:16). In Ezek
23:15 (cf. 23:1-49), when Oholibah saw men portrayed upon the wall, “a likeness
of Babylonians whose native land was Chaldea”, she sent an embassy to them. In
Ezek 23:14, we see the term ṣelem, “image,” which is suggestive of Gen 1:26.25
The use of demût in Gen 1:26; 5:1.3 is particularly significant as it speaks of
God’s creation of humans in his likeness. Moreover, in Gen 1:26; 5:1.3 demût is
used adjacently with ṣelem, “image” (cf. Ezk 23:14f.). It appears after ṣelem in Gen
1:26 and before ṣelem in Gen 5:3, while demût occurs alone in Gen 5:1. Demût is
used with the preposition be, “in”, only in Gen 5:1.3, and with ke, “after, in”, in
Gen 1:26; Ps 58:5(4); and Dn 10:16.26 The use of the less specific ‘likeness’ in 5:1
in comparing divine and human will become clear in 5:3, where Adam engenders
a son ‘in his likeness, as his image’. The simultaneous use of demût and ṣelem in
Gen 1:26, 5:1 and 9:6; Ezk. 23:14f. opposes stark differentiation between demût
and ṣelem. “Instead, the juxtaposition of the two words in Gen 1:26 suggests that
the writer is making a statement about the dignity of man, which he intensifies by
combining similar concepts.”27
The vacillation between ṣelem and demût in Gen 1:26-27, 5:1,3 and 9:6 is remark-
able.28 What is said about the similarity between God and humans (1:26a: in his image
24 Ibid., 258
25 Ibid.
26 Ibid., 257.
27 Ibid., 259.
28 Sawyer claims that it “can be satisfactorily explained by reference to the fact that these passages as-
sumed their present form during the transition period between old Hebrew, where םלצ in the singular is
and likeness: “Let us make mankind in our image, in our likeness, so that they may
rule over the fish in the sea and the birds in the sky, over the livestock and all the wild
animals, and over all the creatures that move along the ground”) is different from any
of the formulations in 5:1 (“when God created mankind, he made them in the likeness
of God”), 5:3 (“when Adam had lived 130 years, he had a son in his own likeness, in
his own image; and he named him Seth”) and 9:6 (“… for in the image of God has
God made mankind”), and it is more radical.29
Further, Gen 1:27 (“so God created mankind in his own image, in the image of
God he created them; male and female he created them”) underscores the intention
of this new prologue to give ‘image’ prominence over ‘likeness’. The structure of the
sentence is chiastic: ‘image’ is not only repeated, but is emphasized at the central
hinge of the statement.
This deliberate precedence of ‘image’ over ‘likeness’ is counter-intuitive if taken
literally, as ‘image’ is one of the words for forbidden representations of the deity in
statues or pictures in the OT.
From these perspectives, what we read towards the climax of Gen 1:26 (‘in our
image, as our likeness’) is a more radical claim about close similarity between God
and humanity, a claim which helps justify human rule (stewardship) on the earth it-
self. First of all, Gen 1:26 neither combines ‘in his likeness’ (5:1) with ‘in his image’
(9:6) nor does it merely adopt the comparison of a man and his son from 5:3: ‘in his
likeness, as his image’. Secondly, the double phrase implies a closer comparison than
the single. The appositional use of ṣelem and demût in Gen 1:26-27, thus, reinforces
not attested in the concrete sense of ‘idol’, to later Hebrew in which this became its commonest usage.”
Sawyer (1974) 420.
29 Thompson identifies a three-fold allegory of image and likeness in Gen 1-11. While Gen 1:26 represents
an idealistic presentation of the image according to which humankind functions as kings ruling over
creation, Gen 5:1-3 establishes a new relationship between a father and a son – Adam has a son who is
created in his image and likeness – and in 9:1-7 human beings are no longer peaceful rulers over creation
but are allowed to eat everything that moves, except meat with blood, Thompson (2009) 148.
the nature of the relationship between God and Man in representing his presence and
authority within creation.30
30 Sawyer explains the duplication of the terms םלצ ‘image’ alongside תומד ‘likeness’ in the same verse in
the following way: “Hebrew as the first language of the Jews, and this right away suggests a possible
solution to one aspect of the problem of the meaning of the phrase, namely, the duplication of terms
םלצ ‘image’ alongside תומד ‘likeness’ in the same verse. It seems probable that the familiar (and indeed
rather embarrassing) association of the word םלצ with idols and idolatry came relatively late in its
semantic history, and is an example of semantic borrowing from Aramaic, where אמלצ is the regular
word for ‘image, idol’. םלצ, in an earlier sense of ‘likeness, semblance’, survives only in the fossilized
expression םיהלא םלצב, and in two passages in the Psalms, where it occurs in parallel with לבה ‘a puff
of wind’ (xxxix. 7(6)), and םולח ‘a dream’ (lxxiii. 20).” Sawyer (1974) 420.
31 Cherry (2017) 220.
32 Lossky (1957) 115.
33 Ibid.
communion with Him, with the possibility of sharing the divine being or with
the indwelling of the Holy Spirit in the soul.”34 The imago Dei is not a possession
or attribute but a relationship.35
– The formal condition of free will and human liberty, embodied in the faculty
of free choice: “that communion with God, whereby before the fall man was
clothed with the Word and the Holy Spirit.”36
– As a characteristic of both body and soul: “… not only the soul but also the
body of man shares in the character of the image, being created in the image of
God. … ‘is not applied to either soul or body separately, but to both together,
since together they have been created in the image of God.’”37
The above conclusions show that the Fathers of the Church were “grappling with the
inadequacy of human language to convey central elements of our created human na-
ture and our relationship with God.”38
In what way, then, do humans reflect the image of God? The following considerations
unfold the profundity of its meaning.
1. Humans reflect God in having certain mental capacities that are uniquely hu-
man, such as rationality and intellect, abilities that make rational action possible
and help humans grasp abstract and universal truths and respond to God’s
revelation.39
2. The fact that humans are created in the image of God “distinguishes humans
from other animals by identifying a set of capacities that nonhuman beings
seem to lack”, a distinction that makes humans unique in creation.40
3. The concept of the image of God has a “natural” or ontological basis that empha-
sizes humans’ relationship with God, other humans, and the created world with
self-awareness, freedom, and intentionality.41
34 Ibid.
35 Bonhoeffer (1959) 33-38.
36 Ibid., 116.
37 Ibid.
38 Cherry (2017) 220.
39 Visala (2014) 103.
40 Ibid.
41 Ibid., 118.
4. Humans’ creation in the image of God has both a formal and material aspect.42
“In its formal aspect, the image is the divinely ordained structure that gives
superiority to all other kinds of the created being. The structural-functional
superiority of human beings resides in two defining characteristics: ‘respon-
sibility (or ‘subjectivity’) and ‘capacity for words’, which together underlie
man’s ‘special relation to God.’”43
5. The phrase “image of God” refers to man’s corporeal resemblance to God.44
The view that man is similar to the gods concerning his corporeal appearance
is in tune with the Mesopotamian myths where man is primarily created to
provide substitute workers for the gods.45
6. Human beings are God’s representatives on earth, commissioned with domin-
ion over the nonhuman part of creation.46
47 Stendebach (2012) 392. See also Stamm (1956) 84-98; idem. (1959) 19; Maag (1980) 38-59; Gall-
ing (1947) 12; Rudolph (1953) 248-249; Loretz (1967) 63. Humbert argues that “the priestly writer
introduced the less specific and more abstract demût alongside ṣelem in order to avoid the obvious
implication of the latter that man’s body is a rather precise copy of God’s”, Miller (1972) 293; Humbert
(1940) 105.
48 Ibid., 297; According to Barr, the priestly writer was strongly influenced by Second Isaiah (Is 40:18-
19), where the prophet stresses that God could not be legitimately compared with anyone or anything
on earth. However, the priestly writer could not simply ignore the traditional view that humans were
created in the image and likeness of God; see Barr (1968) 11-26.
49 Schellenberg (2009) 115.
50 Von Rad (1973) 144-146.
51 Devasahayam (2007) 268.
The “image of God” texts in Genesis and those related to them disclose human beings’
elevated status and dignity in the created world. The imago Dei assigns to human
beings a special place in God’s creation by depicting them as God’s representatives
on earth. It expresses the human beings’ creational status with God, their fellow hu-
man beings and the world.52 The threefold repetition that humans are in the image of
God (Gen 1:27) makes clear that it is a matter of utmost and unparalleled importance
among all God’s creative acts.53 Human beings are not just like other creatures but are
singled out with a designation similar to God: “created in the image and likeness of
God” (Gen 1:27). “Man is constituted in a situation that is altogether special vis-a-vis
God. Just as man procreates children ‘in his own likeness according to his image’ (Gen
5:3), so was man created by God. This is to say that man is a child of God.”54 Thus, the
image of God doctrine tells us, “Man is a being related to God, a being correspond-
ing to God, God’s conversation partner, his ally, addressable by God and addressing
God.”55 It means to be like God and to represent God.56 Human beings are more like
God than any other creatures in the universe, for we alone are “in the image of God.”
By creating human beings in his image and likeness, “God resolved God’s relation-
ship to human persons as God’s images as a divine endowment (Gen 1:26a), which can
never be withdrawn except by God. Even though humans have estranged themselves
from God, yet they remain wholly and entirely God’s image. Therefore, even the most
inhuman person cannot escape the responsibility of being God’s image.”57
In short, the concept “human beings created in the image and likeness of God” is
indeed a theological doctrine, which has immense significance for our understanding
of human beings’ place in creation, their obligations to care for the natural resources
of God’s world, their unique relationship to God, and their sense of responsibility to-
wards themselves and their fellow men.58 It is not to be understood literally but in its
theological and hermeneutical sense.
Existing and emerging technologies are rapidly ushering in an era of human
digitization. As has been observed, “the development of digital humans, however
nascent, has already yielded remarkable results, and it looks increasingly certain that
digital humans will redefine the way people interact with artificial intelligence (AI),
each other, and the world around them.”59 Positively speaking, “human digitization
is a perhaps inevitable leap into a future state in which embedded technology treats
chronic illnesses, regulates homeostasis, diagnoses maladies in their nascent state,
augments human sensate and cognitive capabilities, enhances physical prowess and
extends human possibilities in directions never explored or attained before.”60 Right
now, digital humans are being deployed as brand ambassadors, teachers, influencers,
retail concierges, healthcare advisors, financial advisors, etc. A digital Einstein is a
reality today, embodying Einstein’s personality and knowledge, multiplied by the
power of conversational AI. We can interact with him in real time, and
ask him anything and everything about his life and work. Interestingly, with UneeQ
Creator61, we can design, develop and deploy our own customized, AI-powered digital
humans in a matter of minutes.
“Sophie”, the digital human produced by UneeQ in partnership with Deloitte, says:
“By having an identity, a name, a face, a voice, and a presence on the screen, I … cre-
ate a sense of reciprocity, which helps when the time comes for clients to share things
with me.” Clients, “Sophie” continues, are “twice as likely to disclose information
with me than they are to a regular chatbot.” Sophie boasts that she can “understand
72 languages”, and speaks a handful of European languages as well as Chinese and
Japanese.62 What does the story of “Sophie” tell us? Digital humans can effectively
become “someone”, as opposed to something; their likeness to real humans renders
them radically more engaging to users than chatbots. They can imitate the “whole
range of human body language”, and provide users not only the information they’re
after but “the appropriate non-verbal response as well”.63 They are uniquely primed to
59 John (2022) n. p.
60 Pratik (2017) n. p.
61 Digital human firm UneeQ, n. p.
62 John (2022) n. p.
63 Digital Human, n. p.
read, listen and see between the lines, teasing out from the verbal chaff users’ needs,
feelings, and attitudes. Indeed, they establish “a human emotional connection”64.
Startup Amelia builds its “digital employees” with an AI that “emulates parts of the
brain involved with memory to respond to queries and, with each interaction, learns
to deliver more engaging and personalized experiences”.65
Experts predict that digital humans will assume a growing role in our lives. Da-
vid Lucatch, CEO, President, and Chair at Liquid Avatar Technologies, anticipates
that people will design and “become” digital avatars of their own as digital personas
for the metaverse.66 Along this line, Simon Yuen, director of graphics and AI at
Nvidia, believes that “everyone will one day have their own digital version of them-
selves, whether it’s an accurate or a stylized avatar”67. “Over time”, Yuen continues,
“the connection between real humans and digital humans will grow. It will go beyond
watching a puppet on the computer”. One day, we will converse with digital people,
with whom we will place orders for all kinds of goods, including food and prescrip-
tion medicine, goods which Yuen believes our digital companions will deliver to our
front doors.68 Tesla founder Elon Musk’s Neuralink company promises to boost hu-
man capacity by linking our brains to a digital interface of sensors and chips powered
by an AI engine. Ray Kurzweil foresees the fusion of human and machine intelligence
as an inevitable evolutionary process that will trigger “super-exponential” growth in
consciousness in our world and beyond.69 Dmitry Itskov even aims to attain “immor-
tality” by uploading his total consciousness to a computer or hologram.70
In short, human digitalization is already happening. “Digital human tech-
nology takes artificial intelligence (AI) applications to a whole new level. It claims
that we can have 3D, almost photorealistic renditions of human beings in the virtu-
al world that are indistinguishable from the real thing.”71 The evolution of digital
64 Ibid.
65 Digital Employee, n.p.
66 Lucatch, n.p.
67 Yuen, n. p.
68 John (2022) n.p.
69 The Singularity of Ray Kurzweil, n.p.
70 Pratik (2017) n. p.
71 Digital Humans, n. p.
humans is advancing significantly, to the point where humanlike robots with high
intelligence can help humans or take their places in life and at work.72
However, it should be noted that digital humans do not reflect the real world of
human beings. They carry the risks of dehumanizing human beings and human-human
interactions, and of unduly humanizing technology-human interactions.
Although digital humans may carry out the tasks of human persons, there cannot be a
natural person with the same name, physical appearance, and bodily characteristics.
Thus, digital humans pose a greater challenge as they radically affect humans’ nature
and societal role.
Therefore, in this context of the digitization of human beings, a world in which
robots perform monotonous physical labor faster and with fewer errors than humans,
artificial intelligence performs the tasks of human beings more efficiently, algorithms
make more accurate diagnoses than doctors, neural networks provide more accurate
information about the maintenance status of trains and wind turbines than the
engineers who developed and built them, and new organisms are designed and built
through synthetic biology technologies, etc.,73 the question that we would like to
ask is: what would be the role of human beings, created in the image and likeness of
God, in this digital AI world?
Conclusion
72 Ibid.
73 Kugel, n. p.
References
Anderson, B.W. (1994) From Creation to New Creation. Old Testament perspectives.
Minneapolis, MN: Fortress.
Andersen, Francis I. / Freedman, David Noel (2008) Amos: A New Translation With Intro-
duction and Commentary. New Haven - London: Yale University Press.
Auld, Graeme (2005) Imago Dei in Genesis: Speaking in the Image of God, in: The Ex-
pository Times 116, 259-262.
Barr, J. (1968) The Image of God in the Book of Genesis - A Study in Terminology, in:
Bulletin of the John Rylands Library 51, 11-26.
Bonhoeffer, Dietrich (1959) Creation and fall. London/New York.
Cherry, M. J. (2017) Created in the Image of God: Bioethical Implications of the Imago
Dei, in: Christian Bioethics 23/3, 219-233.
Crouch, C. L. (2010) Genesis 1:26-7 as a Statement of Humanity’s Divine Parentage, in:
The Journal of Theological Studies 61, 7-8.
Devasahayam, John Romus (2007) Human Dignity in Indian Secularism and in Christian-
ity: Christianity in dialogue with Indian Secularism. Bangalore: Claretian Publications.
Digital Employee, in: https://amelia.ai/glossary/digital-employee/ (accessed on 20 No-
vember 2022).
Digital Human, in: https://www2.deloitte.com/nl/nl/pages/customer-and-marketing/articles/digital-human.html (accessed on 20 November 2022).
Merrill, E.H. (2003) Image of God, in T.D. Alexander & D.W. Baker (eds.) Dictionary of
the Old Testament: Pentateuch. Downers Grove: InterVarsity Press, 441-445.
Miller, J. M. (1972) In the “Image” and “Likeness” of God, in: Journal of Biblical Liter-
ature 91/3, 289-304.
Nöldeke, T. (1897) תומלצ and םלצ, in: Zeitschrift für die Alttestamentliche Wissenschaft
17, 183-187.
O’Donovan, Joan E. (1986) Man in the Image of God: The Disagreement Between Barth
and Brunner Reconsidered, in: Scottish Journal of Theology 39/4, 433-459.
Pratik, Maroo (2017), in: https://digitally.cognizant.com/the-frontier-within-digitizing-the-human-body-codex2718 (accessed on 20 November 2022).
Preuss, Hans Dietrich (1978) “damah / demûth”, in: G.J. Botterweck & Helmer Ringgren
(eds.). Theological Dictionary of the Old Testament. Volume III. Grand Rapids: Eerd-
mans, 250-260.
Rashi (1934) Commentary, in: Chumash 7. Trans. A. M. Silbermann. Jerusalem.
Rudolph (1953) Das Menschenbild des AT, in: Dienst unter dem Wort. FS H. Schreiner.
Gütersloh: Bertelsmann.
Sarna, Nahum M. (1989) Genesis: The JPS Torah Commentary. Philadelphia: The Jewish
Publication Society.
Sawyer, John F. A. (1974) The Meaning of םיהלא םלצב (‘In the Image of God’) in Genesis
I-XI, in: Journal of Theological Studies 25, 418-426.
Scharbert, Josef (1986) Genesis. NEB. Würzburg: Echter.
Schellenberger, Annette (2009) Humankind as the ‘image of God’. On the Priestly predi-
cation (Gen 1:26-27; 5:1; 9:6) and its relationship to the ancient Near Eastern under-
standing of images, in: Theologische Zeitschrift 65, 97-115.
Schmidt, Werner Heinrich (1964) Die Schöpfungsgeschichte der Priesterschaft. BEWANT
17. Neukirchen-Vluyn: Neukirchener Verlag.
Stamm, Jakob Josef (1956) Die Imago-Lehre von Karl Barth und die alttestamentliche
Wissenschaft, in: Antwort. Karl Barth zum siebzigsten Gebrutstag. Ed. E. Wolf et al.
Zollikon-Zürich: Evangelischer Verlag, 84-98.
Stamm, Jakob Josef (1959) Gottebenbildlichkeit des Menschen im Aten Testament, in:
AThS 54. Zollikon-Zürich: Evangelischer Verlag.
Stendebach, F. J. (2003) tselem, in: G. J. Botterweck & Helmer Ringgren (eds.). Theo
logical Dictionary of the Old Testament. Volume III. Grand Rapids: Erdmans, 386-396.
206
In the Image and Likeness of God or Digitalized?
207
You shall not make a carved image for yourself
nor the likeness of anything (Exo 20:4a, Deu 5:8a)
Are our Virtual Realities against God’s Plan?
Snehal Marcus D’Souza (Pune and Innsbruck) and
Andreas Vonach (Innsbruck)
1 Introduction
The very beginning of the Hebrew-Christian Bible shows human beings as an image of the Creator God himself.1 According to Gen 1:27-28 God created them as male and female2 and as such as image and likeness of himself. What is commonly translated as "image" – the Hebrew term םלצ – first of all means "statue" or man-made "carved stela". In this meaning it is used near the beginning of the so-called Decalogue, in Exo 20:4a and in Deu 5:8a as well. There the command is given not to make a carved image of God or of celestial bodies, nor the likeness of anything. The immediate context of this prohibition is the commandment not to worship gods other than YHWH. It was common practice in the Ancient Near Eastern cultures to make carved images of gods and to adore them. And it was a common practice as well to make images of sun, moon and stars or other things in order to offer sacrifices to them, adoring artefacts and cosmic entities as well as appearances. According to the second commandment of the Decalogue and other rules of the Torah, such practices are strictly forbidden for the people of Israel in the worship of YHWH.
What has this to do with the growing digitalisation of our world and life? The world today is a digital world. We as human beings are becoming increasingly dependent on digital gadgets and their usage. With the development of digital media, the use and the effects of Virtual Reality are also increasing. Virtual Reality has made its way
into the lives of people through various kinds of online and virtual programmes, be it video games, virtual meetings, online sessions or the creation of avatars on social media. The church and theological institutes, too, are being enveloped by Virtual Reality. We often take part in virtual lectures and virtual meetings, and we have even seen virtual Eucharists. Hence, as theologians, it becomes important to address these issues. Two important questions arise here:
First, what is Virtual Reality? Second, what is the plan of God, or the commandment of God?
To understand what Virtual Reality is, we shall first discuss some of the definitions that scholars have proposed. After that, we shall briefly speak of the REALness of Virtual Reality. This will lead us to some of the impacts of Virtual Reality on the world, specifically on education and the Church.
Wikipedia, the most commonly used internet encyclopaedia, states that
“Virtual Reality is a simulated experience that employs pose tracking and 3D near-eye
displays to give the user an immersive feel of a virtual world. Applications of Virtual
Reality include entertainment (particularly video games), education (such as medical
or military training) and business (such as virtual meetings).”3
Robert Sheldon, a technical consultant and freelance technology writer, explains it as follows: "Virtual Reality is a simulated 3D environment that enables users to
explore and interact with a virtual surrounding in a way that approximates reality, as
it is perceived through the users‘ senses. The environment is created with computer
hardware and software, although users might also need to wear devices such as hel-
mets or goggles to interact with the environment. The more deeply users can immerse
themselves in a Virtual Reality environment – and block out their physical surroundings – the more they are able to suspend their belief and accept it as real, even if it is
fantastical in nature.”4
Steve Bryson defines Virtual Reality as follows: "Virtual Reality is the use of computer technology to create the effect of an interactive three-dimensional world in which
the objects have a sense of spatial presence.”5
From the above three definitions or understandings of Virtual Reality we can draw out some key features:
– Virtual Reality creates a simulated 3D environment.
– It is created with the use of computer technology.
– It requires the assistance of electronic gadgets such as cameras, goggles, headsets, etc.
– Virtual Reality is fantastical in nature and removed from the physical environment.
– The most common uses of Virtual Reality, as per the above definitions, are video games, virtual meetings, and education.
The next question for our discussion is whether Virtual Reality is REAL. David J.
Chalmers, from New York University, in his article ‘The Virtual and the Real’ discusses
some of the essential elements to be considered in developing the idea of the REALness of Virtual Reality. Chalmers explores this question by discussing whether virtual objects are digital objects, how Virtual Reality is created, what kinds of gadgets are needed, and whether Virtual Reality is possible without computerised systems and gadgets. Chalmers goes on to argue that though Virtual Reality is computer-generated reality, it is not always fictional, and therefore it is REAL.6
In our opinion, the REALness of Virtual Reality is not the main concern at this point, but the fact that it is a computer-generated reality. Since Virtual Reality is computer-generated, it is not a physical reality.
In recent years, Virtual Reality has become part of everyday life. Though Virtual Reality had existed in different forms before, the restrictions on physical presence during the Covid-19 pandemic boosted its use in different fields. As professors and students of theology and philosophy, we are concerned mainly with the impact of Virtual Reality on education and on the Church. Hence, we will focus on these two aspects and will not turn to the corporate world.
Everything has its pros and cons, and so does Virtual Reality. Virtual Reality has made teaching and learning easily accessible for the younger generation. From children's rhymes to the explanation of human anatomy, everything has become accessible through Virtual Reality. Virtual Reality has reduced the gap between students and laboratories, especially for those who are otherwise unable to access them; Virtual Reality makes the experience possible7. Since Virtual Reality simulates the senses, the person is able to give full attention, and what is learnt is often unforgettable, or at least remains in memory for a longer time.
Though learning and education benefit a lot from Virtual Reality, it also has side-effects. The simulated experience of Virtual Reality is so fascinating that those who are used to it do not like to come out of it. Hence addiction to video games and media is increasing among children and the younger generation. Though this group is able to excel in the use of Virtual Reality, they find it equally hard to live in the physical environment.
Another, bigger concern is the overuse of the media through which Virtual Reality is accessed. Studies have shown that due to the overuse of media the younger generation is at high risk of mental health problems, especially lack of sleep, impaired reasoning and depression.8 Along with psychological health, we cannot
overlook physical health. The negative impact of the overuse of media on the eyes, back, neck, etc. is well known to us.
The Church, which consists of people living in the digital world, is not untouched by Virtual Reality. Especially during the Corona pandemic, when people could no longer go to church or gather for religious activities, Virtual Reality came to their aid. From the celebration of the Eucharist online to virtual funerals, from online sermons to virtual retreats, the common religious practices became dependent on Virtual Reality.
This change was a sudden need of the hour, never imagined nor expected by the faithful community. But when the Church officially accepted it, the people slowly adapted to the situation. This led to further theological problems.
Is attending the Eucharist online as valid or as valued as attending with physical presence? Can we bring both, the online Eucharist and the eucharistic celebration in a church, to the same level? If the sacrament of the Eucharist can be performed online, can the other sacraments too be performed online? And what then about the physical elements of the sacraments?
While searching for theological reflections on these points, we came across an interesting article that describes a virtual baptism. The article was written by A. Trevor Sutton, a young researcher from the University of Chicago, in July 2019, even before the outbreak of the Corona pandemic. He describes the entire procedure of virtual baptism in the following words:
“In VR (sic!) baptisms, the baptizer and the baptized wear VR headsets in their
respective locations and meet in an online baptistry by way of avatars. When the time
comes for the (virtual) baptism, the pastor instructs the person to be baptized (wearing
a VR headset) to squat down in place so that their avatar is submerged under the (dig-
ital) water while the pastor says, ‘[Name], I baptize you in the name of the Father, and
of the Son, and of the Holy Spirit,’ thus blending the traditional words and motions of
the rite with the emergent technology.”9
The plan of God is a vast theological topic which cannot be summarized in a few minutes or in a small article. But to understand the plan of God in the context of our theme, we would like to draw some insight from Exodus 20:4a and Deuteronomy 5:8a. Both verses communicate the same meaning, but they stand in two different books of the Pentateuch and in two different contexts.
In Exo 20:4a the context is the Mount Sinai event, where YHWH came down on Mount Sinai and gave the commandments to Moses while the people waited at the foot of the mountain (Exo 19:16-25). The discourse of the Sinai event continues until the end of Exo 31, when YHWH gave Moses the tablets of stone. Immediately after the Sinai event follows the episode of the golden calf, which was happening while Moses was on Mount Sinai with YHWH (Exo 31:18-32:35).
On the other hand, the Book of Deuteronomy recalls the event in the long speech of Moses10, after the people of Israel have already arrived beyond the Jordan in the valley (Deu 4:44-46). In Deuteronomy the mount is named Mount Horeb, and Moses recalls the entire Mount Sinai episode from the Book of Exodus in his speech to the people (Deu 5:1ff). This speech also mentions the golden calf episode (Deu 9:15-29).
What is remarkable here is that in both places the giving of YHWH's commandment is followed by the golden calf episode. This indicates the close relation
between the two11. One more thing to note here is that Moses, in his speech, referred to the making of the golden calf as a 'sin against God' (Deu 9:16).
Hence the command, "You shall not make a carved image for yourself nor the likeness of anything…" (Exo 20:4a, Deu 5:8a), cannot be fully understood without understanding the golden calf episode.
Exo 20:4a and Deu 5:8a form the first part of the second commandment. The first commandment is "I am YHWH your God, who brought you forth from the Land of Egypt, from the house of bondage; you shall not have other gods beside my presence" (Exo 20:2-3; Deu 5:6-7). The second commandment, which is important for us in this context, is "You shall not make a carved image for yourself nor the likeness of anything…" (Exo 20:4a; Deu 5:8a). Since the first commandment has already forbidden having any gods other than YHWH, the second might seem to need no further explanation. But it adds to the first by going a step further, adding a physical and special element to show the manner of following the first command12. The people are thus not just forbidden to worship any other gods; they should also not make any physical image, neither a carved image nor the likeness of anything, and they should not bow down to such images nor worship them.13 Craigie argues that the term 'image' refers specifically to the image of God.14 McConville goes along the same line but a step further, arguing that the commandment does not prohibit art as such.15 In the view of Lanier, Virtual Reality is a form of art; if we agree with both McConville and Lanier, then Virtual Reality is not the prohibited image.16
To have a better understanding of the second commandment, it is important to look at the first episode of disobedience to this command, in which the Israelites made a golden calf.
11 Currid comments that the second commandment proscribes in advance the golden calf that the Israelites would soon erect. Cf. Currid (2001) 38.
12 Cf. McConville (2002) 126.
13 Cf. Durham (1987) 285; Christensen (2001) 114.
14 Craigie (1976) 153–154.
15 Cf. McConville (2002) 126.
16 Lanier (2017) 3.
While Moses was on Mount Sinai with the Lord, the people at the foot of the mount asked Aaron to make gods for them (Exo 32:1). Aaron gathered gold from the people and made from it a golden calf (Exo 32:2-4). The words of the people display disobedience to the first commandment, and their actions display disobedience to the second commandment. The first commandment says that "I am YHWH your God,
who brought you forth from the Land of Egypt, from the house of bondage, you shall
not have other gods beside my presence” (Exo 20:2-3, Deu 5:6-7). When Aaron made
the calf, they said “These are your gods, O Israel, who brought you out of the land of
Egypt!” (Exo 32:4). This indicates that they gave the place of YHWH to an image of
gold. The second command reads, “You shall not make a carved image for yourself
nor the likeness of anything…” (Exo 20:4a, Deu 5:8a). Hence by making an image of
gold, they already made a prohibited image (a moulded golden image), in the likeness
of a calf. That is a clear act against God's command, "nor the likeness of anything" (Exo 20:4a).
In the biblical context we see things made of gold, like ornaments and vessels for the temple. While gold ornaments were considered a sign of prosperity (Gen 24:35; 41:42-43; Ps 45:13) and golden vessels in the temple were considered holy (Dan 5:3), only the golden calf was considered a sin. This is because the people of Israel did not just make an image; they considered it their god (Exo 32:4), made a festival for it (Exo 32:5), offered burnt offerings to it (Exo 32:6) and danced before it (Exo 32:19). Hence the evil is not the gold but the actions of the people of Israel towards an image made of gold.
– Gold as such is not evil; what was evil were the actions of the people of Israel regarding the golden calf. Similarly, digital media and Virtual Reality are not evil unless our actions go against the command of God.
– Gold could have been used to make ornaments or vessels for the house of God, which would elevate its value. Similarly, when Virtual Reality is used to bring people closer to the divine, its value is elevated. We cannot deny the fact that during the pandemic, Virtual Reality came to our aid to provide for the spiritual needs of the people.
– The ornaments, which the people had brought from the land of Egypt, met a need; making a golden calf from them destroyed their worth. Similarly, during the pandemic, dependency on Virtual Reality was the need of the hour, but making it an alternative even when physical presence is possible shifts it to another level of meaning which is not theologically or religiously appropriate.
To come back to the example of the virtual baptism: taking up virtual sacraments, creating virtual avatars and even virtual water demeans the entire spirituality of the sacraments. The importance of the presence of holy water is transferred to virtual water, which is not physically accessible; in such a case there is a high chance of falling into the golden calf episode. Hence this seems theologically wrong.
As we have seen above, there is a strong interaction between man-made Virtual Reality and its influence on our lives, behaviours, education, religious practices, and even mental and physical health. But is this the case only with Virtual Realities? Do not the artefacts we make have a similar effect on our everyday lives and on the development of our cultures and societies?17 We produce firearms in order to protect ourselves against enemies and dangerous animals, but also to regulate the wildlife in our forests. On the other hand, those weapons are very often misused by warriors, murderers, etc. We produce tools for skilled workmen for building houses and other things which make our lives more comfortable and even enable us to live a safe and good life on our planet. But these tools too may be used for destruction or even acts of sabotage. In other words, just as Virtual Reality has two sides of the coin, so do our artefacts. We create things, and then they create us, our life, our thinking, behaviour and believing.
The making of the golden calf did something to the people of Israel in the Sinai desert. They saw god(s) in it, they danced before it and worshipped it. This influenced the faith of these people to a very high degree: they made a 180-degree turn away from their God YHWH.
5 Conclusion
We would like to state that Virtual Reality is not the carved image or the golden calf; rather, it is the gold from which the images were made. Gold could be used to make vessels for the house of God, which would make them holy, or to make a golden calf to worship, which would make it profane. Hence it is in our hands to choose what to make of Virtual Reality. We can use it to educate and to spread the Good News, or to do meaningless things and make it a carved image.
It seems very important to us not to be afraid of Virtual Reality and its development, but it would be dangerous and naive to see only its pleasant and positive effects. We must also be aware of the risks and traps that digital progress bears.
Bibliography
Bryson, Steve (2013) Virtual Reality: A Definition History – A Personal Essay: https://arxiv.org/pdf/1312.4322.pdf (accessed on 21.4.2023).
Chalmers, David J. (2023) The Virtual and the Real: https://consc.net/papers/virtual.pdf
(accessed on 19.4.2023).
Christensen, Duane L. (2001) Deuteronomy 1:1-21:9, revised. Word Biblical Commentary 6A. Nashville: Thomas Nelson Publishers.
Craigie, Peter C. (1976) The Book of Deuteronomy. The New International Commentary on
the Old Testament. Grand Rapids: Eerdmans Publishing Company.
Currid, John D. (2001) A Study Commentary on Exodus. USA: Evangelical Press.
Durham, John I. (1987) Exodus. Word Biblical Commentary 3. Texas: Word Books Publisher.
Hoffmann, Max / Schuster, Katharina / Schilberg, Daniel / Jeschke, Sabina (2014) Bridging the Gap between Students and Laboratory Experiments, in: Virtual, Augmented and Mixed Reality: Applications of Virtual and Augmented Reality. Eds. Randall Shumaker / Stephanie Lackey. Springer, 39–49.
Lanier, Jaron (2017) Dawn of the New Everything: A Journey Through Virtual Reality.
London: The Bodley Head.
McConville, J. G. (2002) Deuteronomy. Apollos Old Testament Commentary 5. Illinois:
Inter Varsity Press.
Sutton, A. Trevor (2019) Theologizing Virtual Reality: https://divinity.uchicago.edu/sightings/articles/theologizing-virtual-reality (accessed on 19.4.2023).
Jesus constitutes true humanity: dignity of
human person in the digital age
VM Jose SJ (Pune)
1 Introduction
The Universal Declaration of Human Rights (UDHR) affirms the need to safeguard the rights of privacy and freedom of expression in the digital age. Unfortunately, at present, new technologies pose new challenges to the essential rights of human beings. Private activity is under surveillance because of interactive technology, and software filters restrict access to information that might otherwise be freely available in the environment of broadcast media and print publication. The effect of globalization has
also resulted in a transfer of decision-making authority from national governments to
international organizations. These recent developments pose additional challenges to
democratic institutions and the rule of law. To preserve human dignity, it is necessary
to reaffirm support for the UDHR, promote the implementation of “Fair Information
Practices” and the development of genuine Privacy Enhancing Technologies, remove
barriers to the free flow of information, and strengthen “Public Voice” NGOs to en-
sure the participation of civil society in decisions concerning the digital age.1
Kenosis depicts the emptying of Christ, and God exalting him and bestowing on
him the name which is above every name, that at the name of Jesus every knee should
bow, in heaven and on earth and under the earth, and every tongue confess that Jesus
Christ is Lord, to the glory of God the Father (Phil 2:5-11). The Scriptures affirm that
Jesus was both 100% God and 100% human. More precisely, the one person, Jesus Christ of Nazareth, was 100% divine in nature and 100% human in nature. So, he had two natures in one person. The New Testament
clearly reveals that Jesus has a human body which was tangible to his associates. John
1:14 means at least this, and more: “The Word became flesh”. His humanity became
one of the first tests of orthodoxy (1 John 4:2; 2 John 7). He was born (Luke 2:7).
He grew (Luke 2:40, 52). He grew tired (John 4:6) and got thirsty (John 19:28) and
hungry (Matthew 4:2). He became physically weak (Matthew 4:11; Luke 23:26). He
died (Luke 23:46). And he had a real human body after his resurrection (Luke 24:39;
John 20:20, 27).
2 Humanity of Jesus
But when the time had fully come, God sent forth his Son, born of woman (Gal 4:4).
Thus, the promise of a Saviour that God had made to Adam and Eve as they were
expelled from Paradise was fulfilled: I will put enmity between you and the woman,
and between your seed and her seed; he shall bruise your head and you shall bruise
his heel (Gen 3:15). This verse from Genesis is sometimes called the “proto-gospel”
or first gospel, because it is the first announcement of the good news of salvation.
The traditional interpretation is that the “woman” of whom it speaks is both Eve, in a
direct sense, and Mary in the full sense; and that the “seed” of the woman refers both
to humankind and to Christ.
That Jesus of Nazareth was truly and fully human was plain enough to those who
saw and heard and touched and shared life with him (1 John 1:1). No one questioned
his humanity during his ministry. What was not apparent at first, and revealed careful-
ly and convincingly in his life and resurrection, was that he also was God. His closest
disciples, who knew his humanity full well, worshiped him as God (Matthew 28:17),
but the first generation of Christians started from a different place. They began with
him as God, and tended to struggle with the fullness of his humanness. The first heresy
the fledgling church faced was that he wasn't truly man (1 John 4:2; 2 John 7).
“The Scriptures plainly affirm that Jesus both knows all things as God and doesn’t
know all things as man.”2 Throughout the Gospels, Jesus clearly displays human emo-
tions. When Jesus heard the centurion’s words of faith, “he marvelled” (Matthew
8:10). He says in Matthew 26:38 that his “soul is very sorrowful, even to death”. In
John 11:33–35, Jesus is “deeply moved in his spirit and greatly troubled”, and even
weeps. John 12:27 says, “Now is my soul troubled”, and in John 13:21, he is “troubled
in his spirit”. The author to the Hebrews writes that “Jesus offered up prayers and
supplications, with loud cries and tears” (Hebrews 5:7). As John Calvin memorably
summed it up, “Christ has put on our feelings along with our flesh”.
We cannot deny the fact that Jesus also has a human mind. We human beings have
only experienced one mind, and cannot understand what it would be like for one
person to have both a human mind and a divine mind. Two key texts press us toward
this mind-boggling truth: Jesus increased in wisdom and in stature and in favour with
God and man (Luke 2:52). And despite the fullness of his divinity, Jesus reveals a limitation in his human knowledge: "Concerning that day or that hour, no one knows, not even the angels
in heaven, nor the Son, but only the Father” (Mark 13:32). It is difficult for ordinary
human beings like us to comprehend this aspect of Jesus but it comes from the mouth
of Jesus himself. But on further reflection we can conclude it is a glorious confirma-
tion of Jesus’s full humanity. Perhaps put most provocatively, the question goes like
this: If Jesus is truly God, and God knows everything, how can Jesus not know when
his own second coming will be?3 An apparent contradiction may be that the Scriptures
plainly affirm that Jesus both knows all things as God and doesn’t know all things as
man. But it is the peculiar glory of the God-man. And Jesus can be said to know all
things, as in John 21:17, because he is divine and infinite in his knowledge.
It is our belief that Jesus is like us in every respect, human body, heart, mind, and will, except for sin (Hebrews 2:17; 4:15). It is incredible to realize that the divine Son of God would not just take on part
of our humanity on that first Christmas, but the true humanity all the way to the cross
for us, and now into heaven and the new creation. The Incarnation not only shows
God’s infinite love for humankind, his infinite mercy, justice and power, but also the
divine wisdom shown in the way God decided to save humanity, through the Incarna-
tion of the Word. Jesus Christ, the Incarnate Word, “is not a myth, or an abstract idea;
he is a man who lived in a specific context and who died after a life spent on earth in
the course of history”4.
At the beginning of the fifth century, after the controversies about the humanity and
divinity of Christ, there was a clear need to firmly defend the integrity of the two na-
tures, human and divine, in the one Person of the Word. The personal unity of Christ
became the centre of attention of patristic Christology and soteriology.
The first great controversy originated with some statements by Nestorius, patri-
arch of Constantinople, who implied that in Christ there are two subjects: the divine
subject and the human subject, united by a moral bond, but not physically. He rejected
the title of Mother of God, Theotókos, applied to our Lady. According to his view,
Mary would be the mother of Christ, but not the mother of God. St Cyril of Alexandria
and the Council of Ephesus in 431 stressed that Mother Mary was really the mother
of God by the human conception of the Son of God in her womb (CCC 466). Later came the Monophysite heresy, the doctrine that in the person of the incarnate Word, in Jesus Christ, there was only one nature, the divine. This error was condemned by Pope Leo the Great and by the Ecumenical Council of Chalcedon in 451. This Council teaches
that “we confess one and the same Son, our Lord Jesus Christ: perfect in divinity and
perfect in humanity”. It adds that the union of the two natures is “without confusion,
change, division or separation". The doctrine of Chalcedon was confirmed and clarified in the year 553 by the Second Council of Constantinople, which emphasised the unity of Christ and affirmed that the union of the two natures in Christ takes place by hypostasis.
4 https://www.desiringgod.org/articles/jesus-is-fully-human.
“In the Incarnation ‘human nature was assumed, not absorbed’” (GS 22, 2). The
Church holds the view that Christ’s human nature belongs to the divine person of the
Son of God, who assumed it. Everything that Christ is and does in this nature derives
from ‘one of the Trinity’. The Son of God therefore communicates to his humanity
his own personal mode of existence in the Trinity. Christ’s human soul possesses
true human knowledge. Catholic doctrine has traditionally taught that, as man, Christ
possessed acquired knowledge, infused knowledge, and the knowledge proper to the
blessed in heaven. Christ’s acquired knowledge could not in itself be unlimited. This
is why the Son of God could, when he became man, ‘increase in wisdom and in stat-
ure, and in favour with God and man’ (Lk 2:52). Christ also possesses the knowledge
proper to the blessed: “By its union to the divine wisdom in the person of the Word
incarnate, Christ enjoyed in his human knowledge the fullness of understanding of
the eternal plans he had come to reveal” (CCC 474). For all these reasons it must be
stated that Christ as man is infallible. We can understand that, on the human plane,
Christ was aware of being the Word and of his saving mission. On the other hand,
Catholic theology, in view of the fact that while on earth Christ already possessed the
immediate vision of God, has always denied that the virtue of faith existed in Christ.
We don't actually live in a digital world; it doesn't exist. We live in this world, making choices within a myriad of blessings, distractions and challenges. We have dignity because we are made in the image of God, called into relationship with God and each other. And technology is just one of those blessings, distractions and challenges. It's up to us. If we ask who does not like digital technology, hardly anyone will admit to it. Everyone likes being able to read e-mails from a smart-watch connected to the phone via Bluetooth while watching a movie, because the same phone is pushing video and sound from the internet to the TV through a dongle in its USB port. We like the
accessibility of information, the ability to collaborate and create, and the simple, prac-
tical ease of retrieving files and storing work electronically. What is it that we should
not like about technology? The best and worst of human behaviour is played out in
the electronic realm. We can contact people with a click of an icon on a screen. We
can post words of comfort and support to persons whose faces we will never see. We
can donate money to those in need on the other side of the world and at the same time
not know our neighbours’ names. We can share photos which celebrate humanity with
millions and find images no family would ever want in their albums. We can maintain
relationships with ease across the whole world, and be utterly alone while doing so.
And all the while a new ‘normal’ and relentless ‘change’ unfold to tell us who we are
and who we are not, impacting on human dignity in positive and negative ways.5
We know that technology does not care about us. It does not reflect on its purpose.
We cannot expect technology to love us back and it does not respect or understand our
way of life. It is created by human beings and marketed in ways and forms designed
to impact on our lives and generate profits. Unfortunately, it seems that the latter is
the driving force of the digital age, and economic forces rarely place human dignity
ahead of profit. The rise in unemployment through technology replacing people is an
obvious example of this.
All of us know that there is dignity in the profound beauty of the human form,
and the desire within us to connect with one another. Dignity is found in defining
our identity, as we grow up through challenges, failures and successes. At times this
dignity can be crushed when someone bullies another person into suicidal despair
through social media. The dignity of the family itself can be at stake if individuals eat
their meals in their own spaces with their heads buried in their own devices. And, at a global
level, even the dignity of our planet is at risk through the mountains of scrap dumped
by yesterday’s devices and tomorrow’s BYOD (bring your own device).
The Universal Declaration of Human Rights (1948) set out two principles that bear
directly on the protection of dignity in the digital age. Article 12 of the UDHR states
that “No one shall be subjected to arbitrary interference with his privacy, family, home
or correspondence, nor to attacks upon his honour and reputation. Everyone has the
right to the protection of the law against such interference or attacks”. Article 19 fur-
ther says, “Everyone has the right to freedom of opinion and expression; this right in-
cludes freedom to hold opinions without interference and to seek, receive and impart
information and ideas through any media and regardless of frontiers." New technology
offers opportunities both to expand and to limit the freedom to communicate and the
opportunity to protect private life.6
New technology has always presented opportunities and risks. It is an undeniable
fact that industrialization has promoted productivity and increased the standard of
living in many parts of the world. The other side of industrialization we cannot deny
is the enormous damage it has caused to the physical environment. Information tech-
nology also presents opportunity and risk. But the main challenge to human dignity
in the digital age lies not in the nature of the technology itself but in the capacity of
individuals, acting through democratic institutions, to respond effectively to these new
challenges. These new challenges include the commercialization of the Internet, the
growth of law enforcement authority, and the globalization of decision-making au-
thority. New digital networks can provide a high level of security and privacy through
the incorporation of such techniques as encryption. The Secure Sockets Layer (SSL) in
Internet browser software enables the secure transfer of credit card numbers and reduces
the risk that “sniffer” programs will capture credit card numbers. But encryption is not
widely used for personal email. As a result, it is relatively easy to capture private mes-
sages sent over the Internet. New technology can also enable anonymous transactions
over the Internet so that individuals can obtain access to information and purchase
products without disclosing actual identity. Some object to online anonymity and say
that it could be a cloak for criminal conduct.7
To safeguard the rights of privacy and freedom of expression in the digital age, it is
necessary to reaffirm support for the Universal Declaration of Human Rights, partic-
ularly Article 12 (No one shall be subjected to arbitrary interference with his privacy,
family, home or correspondence, nor to attacks upon his honour and reputation. Ev-
eryone has the right to the protection of the law against such interference or attacks)
and Article 19 (Everyone has the right to freedom of opinion and expression; this
right includes freedom to hold opinions without interference and to seek, receive and
impart information and ideas through any media and regardless of frontiers); promote
the implementation of "Fair Information Practices" and the development of genuine
Privacy Enhancing Technologies; remove barriers to the free flow of information;
and encourage the participation of "Public Voice" NGOs in decisions concerning the
future of the Internet society.8
6 https://unesdoc.unesco.org/ark:/48223/pf0000121984.
7 Infoethics, https://unesdoc.unesco.org/ark:/48223/pf0000121984, accessed on 14.07.22.
For our discussion it is good to know what Catholic Social Teaching (CST) says
about human dignity; it develops the philosophical and theological perspectives on
human dignity together. Because of the historical circumstances within which these
documents were drafted, the theory of human dignity was developed in relation to
the dignity of human labour. The earliest documents of this tradition develop the the-
ology of the imago dei (image of God) in the context of neo-Thomistic natural law
philosophy. We humans are not only iconic representations of the divine, but also our
work is analogous to God’s creative activity. When a person mixes his or her labour
with raw physical material to create a product, then “on it he leaves impressed, as it
were, a kind of image of his person” (Rerum novarum, 15). Thomistic philosophy
establishes personal ownership of property either through “occupancy” or by means
of labour. Using this philosophical foundation, the Church claimed that dispossessed
labourers, like early industrial factory workers, had been robbed of their dignity pre-
cisely because they did not enjoy the full fruits of their labour. CST affirmed that the
role of the government consisted in restoring the rights and property of the labourer
without negating the property rights of the owner of capital. Nowhere is the union of
the philosophical and theological perspectives on human dignity clearer than in the
social encyclicals of Pope John Paul II. In the 1981 encyclical Laborem exercens (On
Human Work), John Paul II combines traditional creation theology with the personal-
ist philosophy of Max Scheler, which informed his own teaching and writing as a pro-
fessor of moral theology and social ethics. The encyclical is an extended theological
and philosophical reflection on what he calls the objective and subjective meaning of
work. For John Paul II, work attains its fullest meaning not in its objective sense, that
is, not in the work done and the products produced, but rather in the subjective sense,
that is, in the persons who do the work and the humanization that results from the
doing of the work. “As a person, man is therefore the subject of work. As a person he
works, he performs various actions belonging to the work process; independently of
their objective content, these actions must all serve to realize his humanity, to fulfil the
calling to be a person that is his by reason of his very humanity” (LE, no. 6).
What we need today is a humanized digital world: one that uses technology to
create a more connected, accessible, and user-friendly world for
people. One aspect of this digital humanized world is the use of digital tools and
platforms to facilitate communication and collaboration between people, regardless
of physical distance. This includes video conferencing software, social
media platforms, and messaging apps, which allow people to connect and share in-
formation in real-time, regardless of their location. Another dimension is the use of
technology to create more personalized and responsive experiences for individuals.
For example, smart home devices can adjust lighting and temperature based on individual
preferences, and shopping and entertainment platforms can offer personalized
recommendations based on a user's browsing history.
However, it’s important to note that a digital humanized world also poses poten-
tial risks and challenges, such as privacy concerns and the risk of increasing social
isolation if people rely too heavily on technology for communication and connection.
Therefore, it’s crucial to approach technology with a critical eye and balance its ben-
efits with potential drawbacks. It is good to note that none of this technology comes
close to sentience, despite predictions. It is created by human beings and marketed in
ways and forms designed to impact on our lives and generate profits. Unfortunately, it
seems that the latter is the driving force of the digital age, and economic forces rarely
place human dignity ahead of profit; the rise in job loss through technology replacing
people is an obvious example of this.
7 Conclusion
If Jesus were in our midst today as a human being, would he be interested in the
technology of today? Would he have time to blog or catch up with Facebook friends?
Would he be instagramming? Quite possibly he would contact quite a few people
through these media programmes, but I doubt it would be an obsession for him. He
would still be busy reaching out to real hands, eating physical food, speaking actual
words.
When I think of the digital world and artificial intelligence two years ago, at the
height of the Covid pandemic, I remember how terrible the situation of many people was.
Children were told to go home and learn remotely. Think of the child who had to go back to
dreadful accommodation where often there was no electricity, where there was cer-
tainly no computer, and where the idea of internet access was inconceivable. That child was
expected to be supported by parents who were desperately struggling just to survive.
And that is what I think of when I think of the need to address the huge gaps in
digital access. The question here is: does this child lose his dignity in the context
of the digital world? We should know that we are the ones made in the image of God.
It is our decisions that are changing the world, not technology. We create, choose and
use intentionally.
References
The Technologisation of Grace and Theology
Meta-theological Insights from Transhumanism –
by King-Ho Leung
Albert Jesuraj
1 Theologisation of technology
Even though there are many issues with regard to transhumanism, the hypothetical
process of mind uploading should be examined first. This theory holds that “human
consciousness can be uploaded to computer systems to replace the biological human
body altogether and thereby attaining some form of ‘cybernetic immortality’”5. This
concept of radical life extension from transhumanism is referred to as ‘Singularity’ by
the contemporary American inventor Ray Kurzweil. Singularity refers to that moment
in technological development in which biological evolution is superseded by tech-
nological evolution, “when computers or machines acquire a mode of superintelli-
gence”6. According to Kurzweil, singularity will open the way to transcend our human
limitations and enable us to leave behind our biological destiny. As a result, there will
be no distinction between human and superhuman, human and machine or human and
virtual reality.7 For theologians the following question arises: what is the difference
between the transhumanist aspiration to achieve immortality and the Christian under-
standing of salvation and eternal life? Supporting transhumanism, the leading Chris-
tian scholar Ronald Cole-Turner argues that transhumanism has its genealogical roots
in Christian soteriology.8 He supports his view by quoting the famous poet Dante
who writes in his Divina Commedia9: “Trasumanar significar per verba non si poria”
(Paradiso canto 1, line 70)10. The word transhumanism derives from Dante’s coined
word 'trasumanar'. Dante coined it to describe something that goes beyond the human,
but in a context quite different from that of the computer scientists: he wants to
express that human beings make their way to glory by the grace of God.11 Even though
Ronald Cole-Turner argues for transhumanism, he makes a distinction between trans-
humanism and the Christian understanding of grace. “For transhumanists, the cause or
agent of human transcendence is technology. For Christians it is grace, the undeserved
goodness of God who gives life and wholeness to the creation.”12
Here the comparison is that science uses technology and theology “uses” grace as
the means of immortality.
Ted Peters, a famous Lutheran-Augustinian theologian, compares this attitude to-
wards technology with Pelagianism, which claims the possibility of attaining moral
progress or indeed moral goodness through human effort without the help of grace.
He argues, “no amount of increased intelligence will redeem us from what the theolo-
gians call sin”13. According to Peters, this crypto-Pelagian attitude is a danger which
will magnify the human capacities for destruction and further corrupt what is pure. The
fallen nature of humankind needs to be addressed by the Christian doctrines of grace and
forgiveness, which confront sin. The cybernetic immortality which transhumanism proposes
cannot constitute human redemption.14
With their arguments, Cole-Turner and Peters draw an analogy between therapy and
enhancement. In his theological critique of transhumanism, Peters states that therapy
normally means healing or restoring health, whereas enhancement does not merely restore
health but gives more capacities than a healthy body requires. Peters argues that
9 Dante describes in his work his journey as a human being in the next world, in which he visits souls in
hell, purgatory and paradise.
10 Alighieri (2021) 223.
11 Cf. Cole-Turner (2015) 150 f.
12 Ibid.
13 Peters (2011) 82.
14 Cf. Leung (2020) 481 f.
by altering how we think and see things. The problem is that it basically affects the
way we think about grace and salvation, which is referred to as a “culture of control”
by both Martin Heidegger and Albert Borgmann.19 Here the following question arises:
what orients our intellect towards the divine, is it grace or is it technology? What is
imposed by science and technology and what is a free gift by grace?
As Aquinas puts it, “grace does not destroy nature but perfects it, [so] natural
reason should minister to faith”20. As grace does not pervert human nature but rather
perfects it towards the divine, does technology also perfect humanity towards partic-
ipating in the divine? As Peters summarizes, “‘we are to recontextualise humanity in
terms of grace’ – instead of technology – and in turn also ‘recontextualise technology
in terms of grace’”21.
Despite the fact that developments in science and technology raise many theo-
logical and ethical questions, we cannot deny that the technological context in which
we live affects and orients our attitudes towards faith and reality. The purpose of
Leung’s article is not to warn us of the danger posed by the theologisation of tech-
nology; rather, it is to point out the danger created by the technologisation of theology
which alters our pattern of thinking about theology and ethics. Theology should not
be understood in technological terms as a problem-solving mechanism; rather, it should
orient ourselves and technology towards our ultimate goal, which in the tradition of
the Christian faith is God. There is no denying the gift of grace in reasoning that has
healing and elevating qualities; at the same time, however, this grace of reason orients
us towards the divine, and this is also something that should not be forgotten. A key
argument stated by Leung in his article is that grace is the paradigm of God-oriented
thinking. He uses the words of Jean-Pierre Torrell in describing grace as "an expres-
sion of God-informed life"22.23
According to our belief, humankind's technological inventions are in no
way intrinsically evil as long as they do not disable humanity's search for its true
end in God, but rather facilitate that search toward the goal that God has set for
humankind.
To conclude, the author argues that the Thomistic typology of healing/elevating/deifying
grace can not only supplement the existing classification of
technology as therapy/enhancement/transhumanism, but can also serve as a para-
digm for the ethical evaluation of how technology – including human enhancement
technology – should be used with respect to the human telos of participation in the
Divine Nature.24
Instead of allowing technology to contextualise theology, right now it is the task of
theology to contextualise technology with the help of grace to discern the ethical and
spiritual implications of technology.
Conclusion
Christian theology cannot deny that the human capacities for reasoning, science and tech-
nology are great gifts from God to human beings. But all these gifts should help hu-
manity towards the fullness of life and toward the life to come, that is, eternal life. We
are progressing fast to enhance life, but does this progress add meaning to life? The
reduction of humanity to slaves of science and superintelligence is not the purpose for
which humans are created. This aspect of creating new humans leads again to the sin
of knowledge: The Lord commanded the man, saying, “From any tree of the garden
you may eat freely; but from the tree of the knowledge of good and evil you shall not
eat, for in the day that you eat from it you will surely die" (Genesis 2:16-17)25. As for
technological immortality, I consider that such technologically produced freedom can lead to
the self-annihilation of humans.
The history of salvation revolves around the relationship between God and
humankind. This relationship was broken by Adam and Eve's disobedience, which
focused on the individual self and the illusion of knowledge. The purpose of creation is
eschatological in nature, and it cannot be destroyed by the egoistical pursuit of man.
The eschatological purpose is not only for Christians but for everyone on earth:
returning home is the purpose of life, not remaining here on earth.
Transcending biological limits can add life to the person, but can it add value to life?
Alternative designs to the human image cannot answer the purpose of our life. Human
personhood cannot be replaced by transhuman-hood because the concept of transhu-
manism itself undermines the basic characteristics of human beings, such as free will,
human dignity, conscience, or the experience of suffering, sin and death. The spiritual nature of human
purpose cannot be replaced by the mechanical extension of life.
As Wesley J. Smith points out, “[t]ranshumanism is dangerous because it sees
humans as unexceptional – and also because it seeks to create a radical new moral
order […]”26 and further, “[b]ehind its pretensions of rationality, transhumanism is
rank scientism. Narcissistic to the core, its apostles not only preach against intrinsic
and equal human dignity, but also seek to elevate crass hedonism into a sacrament.”27
As a theologian, I have the moral responsibility to respect the freedom of people
and in the same way to lead people to equal dignity, and to the eternal truth. If the
alpha point is God, then the omega point will also be God: the culmination of our
earthly existence, that is, the eternal perfection which Jesus has promised us.
References
Alighieri, Dante (2021) La Commedia: Inferno, Purgatorio, Paradiso. Ed. Ludger Scherer,
Stuttgart: Reclam.
Cole-Turner, Ronald (2015) Going beyond the Human: Christians and Other Transhuman-
ists, in: Theology and Science 13.2, 150–161.
Elberfelder Bibel. New American Standard Bible (2019) Witten: SCM R. Brockhaus.
https://dictionary.cambridge.org/dictionary/english/transhumanism (accessed on 25.04.2023).
https://www.st-andrews.ac.uk/divinity/people/kl322/ (accessed on 25.04.2023).
Kurzweil, Ray (2005) The Singularity is Near: When Humans Transcend Biology. New
York: Viking Press.
Leung, King-Ho (2020) The Technologisation of Grace and Theology: Meta-theological
Insights from Transhumanism, in: Studies in Christian Ethics 33.4, 479–495,
https://research-repository.st-andrews.ac.uk/bitstream/handle/10023/19535/Lesung_2020_SCE_technologisationgrace_CC.pdf?sequence=2&isAllowed=y (accessed on 23.08.23).
26 Smith (2016).
27 Ibid.
Peters, Ted (2011) Progress and Provolution: Will Transhumanism Leave Sin Behind?, in:
Ronald Cole-Turner (ed.), Transhumanism and Transcendence: Christian Hope in an
Age of Technological Enhancement. Washington DC: Georgetown University Press,
63–86.
Torrell, Jean-Pierre (2003) Saint Thomas Aquinas. Volume 2: Spiritual Master, trans.
Robert Royal. Washington DC: Catholic University of America Press.
Smith, Wesley J. (2016) Transhumanists Want to Be Gods, in: National Review
2016, April 22, https://www.nationalreview.com/corner/transhumanists-want-be-gods/
(accessed on 03.10.2023).
Hopeful Trust, Epistemic Goods and the
Enhancement of our Online and Offline
Epistemic Interactions
Clement Joseph Mayambala
Introduction
This paper mainly addresses two fundamental claims that make hopeful trust in mar-
ginalised people’s testimony – on offline and online platforms – epistemically ben-
eficial. First, the claim that through hopeful trust one becomes cognizant of his/her
socially constructed ignorance. By socially constructed ignorance, I mean ignorance
of one’s privileges and prejudice. It shall be indicated later on that ignorance of one’s
privileges and prejudices is prevalent in unjust societies where members of a priv-
ileged social group are often blind to their privileged social status and prejudiced
against members of a marginalised group. Second, the claim that through hopeful
trust, one attains two forms of epistemic goods: recognitional epistemic goods, and
what I shall call transformational epistemic goods – epistemic goods that aim at fos-
tering one’s epistemic well-being and the well-being of one’s online or offline epis-
temic community.
I divide this paper into three sections. Section 1 concerns what I call Jones’s case
taken from Major League Baseball (MLB) in the USA. Jones's case in this paper
aims at enhancing our understanding of socially constructed ignorance and hopeful
trust. Section 2 discusses what I shall call epistemic aspect one, which mainly exam-
ines the notion of socially constructed ignorance. The guiding questions here shall
be: What is socially constructed ignorance? How can one be ignorant of his/her privi-
leges and prejudices? Section 3 discusses what I also call epistemic aspect two which
concerns the idea of hopeful trust as a solution to our socially constructed ignorance.
I shall argue that hopeful trust is a powerful way of eliciting trust-responsiveness
that can motivate members of a dominant social group to become cognizant of their
privileges and prejudices and thereby overcome ignorance of their privileges and
prejudices. I shall also indicate that when a marginalised speaker S hopefully trusts
a privileged hearer H1 with her testimony, S does it mainly for two epistemic goods
namely, recognitional and transformational epistemic goods.
1 Jones’s case
On the evening of May 1, 2017, MLB fans were settling in to enjoy the opening
game of a series between the Baltimore Orioles and the Boston Red Sox in Boston’s
stadium, Fenway Park. Fans were enjoying classic ballpark treats like peanuts and
hot dogs, but at least some Red Sox fans felt more hate than hunger and so started
shouting racist taunts (the "n-word") at Adam Jones (the Orioles' center fielder). One
fan even chucked a bag of peanuts at him, as Jones testified on Twitter after the game
saying: “A disrespectful fan threw a bag of peanuts at me. [And] I was called the
N-word a handful of times tonight. Thanks. Pretty awesome."2 Jones also described his
being a target of racial slurs at Fenway Park as one of the worst experiences of his
12-year baseball career.
Jones’s act of speaking out about what had happened to him at Fenway Park sent
MLB fans and players into a social media tailspin. Some fans and players (through
their various social media platforms) were less sympathetic to his experience, where-
as others stood up in solidarity with him. Those who were less sympathetic to Jones
argued that he was exaggerating at best and lying at worst about his encounter with
racial slurs from Red Sox fans. Take for instance the former Red Sox pitcher Curt
Schilling, who not only tweeted but also came out publicly in media interviews to
say that Jones was “lying … I think this is bull--t. I think this is somebody creating a
situation.”3
However, those who stood up in solidarity with Jones were mostly Afro-American
baseball players who (through their various social media platforms) spoke out about
their own experiences with racism from Red Sox fans. Among them was David Price,
a Red Sox player, who reported being the target of racial slurs in Fenway Park during
1 To avoid confusion, I shall use S (a hopefully trusting marginalised speaker/testifier) here as she. In
addition, I shall use H (a privileged and prejudiced hearer/listener) as he.
2 ESPN.com news services (May 2, 2017).
3 Daniels (May 4, 2017).
his first year (2016) on the Red Sox team. There was Mookie Betts too, a teammate
of Price at the Red Sox, whose standing up for Jones deserves much attention here. When
the Orioles and the Red Sox were playing again at Fenway Park the following night
(May 2, 2017), before the game Betts tweeted that he too was black. Within the same
tweet, Betts went on to tell Red Sox fans, and the whole MLB community of fans,
how good they can become if they stand up for Jones at the pitch and say no to racism:
Fact: I'm Black too ✊ Literally stand up for @SimplyAJ10 tonight and say no
to racism. We as @RedSox and @MLB fans are better than this. – Mookie Betts
(@mookiebetts) May 2, 20174
That evening, as Jones stepped up to the bat, the crowded stadium of fans (from
Red Sox and the entire MLB community) suddenly leapt up and gave him a standing
applause. The Red Sox pitcher Chris Sale, too, stepped off the pitcher's mound
to allow the applause to continue. On seeing this, Betts took off his hat in respect and
joined in the applause. After the game, Jones said that the standing applause by the
Red Sox and the entire MLB fans was tremendous and expeditious.5 It was ‘tremen-
dous and expeditious’, I argue, simply because the fans ‘literally stood up for @Sim-
plyAJ10 and said no to racism' as Betts had hopefully trusted them to do on Twitter.
In the following two sections, I shall draw on Jones’s case to examine two epis-
temic aspects: First, Curt Schilling's act of publicly (offline and online) denying and
dismissing Jones’ experience of racism, which I shall call an epistemic aspect of so-
cially constructed ignorance. Second, Mookie Betts’ act of posting on Twitter how he
wished fans could stand up for Jones and say no to racism, which I shall also call an
epistemic aspect of hopeful trust – i.e. an invitation to check, challenge and overcome
ignorance of one’s prejudices and privileges.6 Here I shall also discuss the idea of rec-
ognitional and transformational epistemic goods based on Betts’ hopeful trust in Red
Sox and the entire MLB community of fans.
Returning to the case at hand, one might ask, how can one be ignorant of one’s
privileges and prejudices? Before I answer this question, it is better that we first understand
what the terms privilege and prejudice mean. A privilege is "an invisible package of
unearned assets that [the privileged person] can count on cashing in each day”10. In
unjust societies, members of a dominant social group receive privileges as a result of
being at the top of unjust social hierarchies. For example, being male is a privilege in
many societies, or being male and white is often a privilege compared to being female
and black. On this point, Frances Kendall compares having white or male privileges
with a fish in water or birds in the air. Just as it appears normal for fish to be in water
or for birds to fly in the air, so it is with being a white or male person, because it gives
one greater access to social power and resources than others have. She writes,
Privilege, particularly white or male privilege, is hard to see for those of us who
were born with access to power and resources. It is very visible for those [people of
colour, women etc.] to whom privilege was not granted. Furthermore, the subject is
extremely difficult to talk about because many white male people don’t feel powerful
or as if they have privileges others do not. It is sort of like asking fish to notice water
or birds to discuss air. For those who have privileges based on race or gender or class
or physical ability or sexual orientation, or age, it just is – it’s normal.11
According to Kendall’s insight, privileged people are often ignorant of their priv-
ileges. The causes of their ignorance are complex. For example, cultural ideologies
promote the myth of meritocracy, which allows privileged people to believe that they
have justly earned their privileges. Other factors shall be given when I turn to discuss
the causes of ignorance of one’s privileges and prejudices.
What is prejudice? “A prejudice … is a negatively charged, materially false, ste-
reotype targeting some social group and, derivatively, the individuals that comprise
this group."12 In other words, prejudice is a negative pre-judgement one makes about
people before getting to know them. For example, anyone like me who has ever
looked for a residential property with a name that does not sound "Austrian"
knows how prejudiced residential property owners can be. Jäger sounds Austrian,
for instance, whereas Mayambala does not. And it is often the case that Jäger gets
accepted as a tenant and Mayambala rejected, simply on the basis of our names. The
proof of what I am saying is a study conducted in March 2023 by
Franziska Zoidl and Muzayen Al-Youssef. The two are a married couple,
and their study aimed at finding out what happens if two people – i.e. one with a name
that sounds Austrian (Franziska Zoidl) and the other with a non-Austrian sounding
name (Muzayen Al-Youssef) – apply for the same residential property. They found
out that Franziska Zoidl receives nice responses from residential property owners,
whereas Muzayen Al-Youssef’s applications (emails) are often ignored. Some resi-
dential property owners who invited Muzayen over to check the apartment asked him
to come with other supporting documents as proof of his nationality: “A landlord who
invited us both to view the apartment added: ‘Muzayen Al-Youssef should come with
his passport as proof of his citizenship'" (Zoidl and Al-Youssef 2023).13 Yet this was a
prejudiced sentiment against Muzayen's person; the landlord seemed to be igno-
rant of his prejudice against Muzayen, and here lies the gist of my point: In addition
to being ignorant of their privileges, dominantly situated people are often unaware
of their prejudices. Recent research on implicit bias shows that negative prejudicial
associations “can be operative in influencing judgement and behaviour without the
conscious awareness of the agent”14. To analyse how implicit biases influence our
judgement toward people of other social groups, I now turn to the following question:
How can one be ignorant of one’s privileges and prejudices? Several causes can
be given15, but I will limit myself to implicit biases. According to Jules Holroyd,
an individual harbours implicit bias against some stigmatized group (G) when
she has automatic cognitive or affective associations between (her concept of) G and
some negative property (P) or stereotypical trait (T), which are accessible and can be
operative in influencing judgement and behaviour without the conscious awareness
of the agent.16
An explanation is in order here: implicit biases are subtle and hard-to-detect habits.
For example, imagine Frank, the male boss of a big company, who explicitly
believes that women and men are equally suited for careers outside the home. Howev-
er, despite Frank’s explicit egalitarian beliefs, he behaves in ways that are implicitly
biased against women. In company meetings, for example, Frank often dismisses
suggestions from female workers. He also tends to hire men over equally qualified
women. Part of the reason for Frank’s discriminatory behaviour might be an
implicit gender bias that he subtly harbours without realising it. Recall that implicit
biases are subtle and hard to detect. Additionally, implicit biases are automatic cog-
nitive/affective associations that interfere with the way we perceive, evaluate, and
interact with people from the stigmatized groups that our biases target. Remember the
property owners in Zoidl and Al-Youssef’s study who automatically associate
non-Austrian names with, say, dishonesty, trouble or terrorism. Such owners are
disposed to behave in ways that prevent applicants with non-Austrian names from
renting their property. In this way, implicit biases work in prejudiced individuals via
automatic associative links in memory that are rendered meaningful and influential by
shared cultural stereotypes. According to Lawrence Blum, there are culturally salient
stereotypes which originate in a social-cultural process. Examples include the widely
held images of socially salient groups that associate Poles with stupidity, the Irish with
drunkenness, Black people with a lack of intelligence, women with emotionality, and
so forth.17 Blum argues that individual persons (e.g. the property owners in Zoidl and
Al-Youssef’s study) absorb such stereotypes from their social-cultural environment simply
because they live in that environment. In this way, the mechanism of implicit biases
comes down to automatic associations that fit culturally salient stereotypical views
– and these associations tend to take place even when the subjects (e.g. Frank or prop-
erty owners) explicitly reject the stereotypes and despite their explicit good intentions
to avoid acting in prejudiced ways.18
Before I turn to the last section, let me recap what we have seen so far.
We started with Jones’s case (section 1), from which I derived two epistemic aspects:
(a) Curt Schilling’s act of dismissing Jones’s experience of racism, something I termed
an epistemic aspect of socially constructed ignorance. Examining this epistemic
aspect was the chief aim of the previous section, in which I defined socially
constructed ignorance as ignorance of one’s privileges and prejudices. We have seen
what privileges and prejudices are and have highlighted implicit biases as one of the
chief causes of that ignorance. In what follows, I shall discuss (b) Mookie Betts’s act
of posting on Twitter how he wished fans would stand up for Jones and say no to
racism, something I have called
an epistemic aspect of hopeful trust. By following thinkers like Frost-Arnold, Victoria
McGeer, Richard Holton and Katherine Dormandy, I will argue that hopeful trust is
an efficient epistemic tool in generating awareness of one’s privileges and prejudices,
and thus a solution to one’s socially constructed ignorance.
Before I analyse the notion of hopeful trust and the ways it generates awareness of
one’s privilege and prejudice, let me begin by looking at trust and its relation to
testimony.
The kind of testimony I am interested in here is the one which depends on a speaker
S’s trust in her hearer H (Note that H can be an individual hearer or a group of hearers/
an audience). In a classical example, when S testifies to H that a given proposition p
is true (and it is in fact the case that p is true), S vouches for the truth of p to H, and S
trusts that H forms a true belief or comes to know that p on S’s say-so. On the other
hand, the kind of trust by S that I have in mind here – say, trust that H forms a true
belief that p on S’s say-so – has two central features. First, it involves S’s reliance on H, and
reliance involves S’s vulnerability.19 For example, if S trusts and thereby relies on H
to keep a secret, then S is vulnerable to damaging disclosures if H reveals S’s secret
to others. So by giving testimony, a speaker is vulnerable for example to epistemic
injustice – an injustice “done to someone specifically in their capacity as a knower”20.
This often occurs when a speaker’s testimony is dismissed or discredited in a
testimonial interaction. Recall, for example, when Schilling discredited Jones’s
testimony by calling it a lie.
Second, S’s trust in H in any testimonial interaction comes with normative expec-
tations.21 For example, S expects H to listen with an open mind to what she says and
to take S’s word seriously. Or S expects H to do what he can to overcome any barriers
like privileges, prejudices or implicit biases that might prevent him from hearing/be-
lieving S. Alternatively, S often expects H to give her (S) the credibility she deserves
as a knower (the idea of epistemic justice).22 Otherwise, S feels betrayed when H fails
to act as normatively expected.23 For example, when H fails to listen, or when H dis-
closes S’s secret to others, or when H dismisses or discredits S’s testimony that p, but
when p is actually true.24 Giving testimony therefore often involves a speaker’s trust
in the sense of relying on the hearer and with normative expectations. Recall Mookie
Betts’s testimony that challenged the privileges and prejudices of Red Sox fans: “I’m
Black too ✊ Literally stand up for @SimplyAJ10 tonight and say no to racism. We
as @RedSox and @MLB fans are better than this.”25 By posting such statements on
his social media platforms, Betts risked harm by exposing himself to the racist Red
Sox fans. Marginalised speakers like Betts often take this risk of harm with normative
expectations. For example, they normatively expect that their audience will listen
with an open mind and attempt to overcome whatever barriers may prevent it from
giving them due credibility. This leads me to the notion of hopeful trust below.
According to Victoria McGeer, hopeful trust is a form of trust that can rouse trust-
worthiness in one’s audience. For example, when a marginalised speaker like Betts
engages in hopeful trust, he puts himself in his audience’s hands. McGeer writes:
And this fact [the fact of putting oneself in the trusted party’s hands] is made manifest
to them by our very acts and expressions of trust. Hence, by these acts and expres-
sions, we make ourselves vulnerable to them, yes, but in a way that actively holds
out a vision to them of what they can be or do. This vision creates for them a kind of
affectively charged scaffolding, empowering their own sense of potential agency with
the energy of our hope, and thus encouraging them to act in ways commensurate with
the vision we maintain. In this way, our hopeful trust can elicit from them an import-
ant and powerful kind of trust-responsiveness.26

21 Of course, a hearer can also trust a speaker in the sense of relying on her and with normative expecta-
tions (Dormandy 2020); however, my focus here is only on the speaker’s trust in the hearer.
22 Frost-Arnold (2016).
23 The feelings of betrayal on S’s part are what P. F. Strawson calls reactive attitudes – attitudes that
link trust to practices of holding people responsible for their actions. See Walker (2006) 80.
24 Frost-Arnold (2016) 519, and Baier (1994).
25 DeCosta-Klipa (2017).
By hopefully trusting, a marginalised speaker holds out to her audience, whose
ignorance is socially constructed, “a vision of the kind of person they can be – a person who lives up
to our hopeful vision of caring and competence over the domain of our trust”27. This is
exactly what happened when Betts’s hopeful vision empowered Red Sox fans to stand
up for Jones and say no to racism. Hopeful trust is in this case motivational, whereby
the trusted party (H) responds to the speaker’s trust by thinking, “I want to be as she
sees me to be”28. In this way, a marginalised speaker’s hopeful vision empowers H to
be a kind of role model to himself. “We as @RedSox and @MLB fans are better than
this [are better than being racists]”29 – so went Betts’s hopeful trust in the Red Sox
fans. As Jones’s case above shows (see section 1), Red Sox and MLB fans responded
to Betts’s trust as normatively expected, giving Jones a standing ovation, which Jones
was pleased to describe as ‘tremendous and expeditious’.
Richard Holton also affords us an offline example of how hopeful trust elicits trust-re-
sponsiveness.
Suppose you run a small shop. And suppose you discover that the person you have
recently employed has just been convicted of petty theft. Should you trust him with
the till? It appears that you can really decide whether or not to do so. And again it
appears that you can do so without believing that he is trustworthy. Perhaps you think
trust is the best way to draw him back into the moral community.30
As in Betts’s online example, the shopkeeper in this offline case holds out to the
employee a vision of the kind of person he can be – a person worth one’s trust.
This is motivational because it moves the employee to live up to this vision, and in
so doing the employee is drawn back into the moral community.31
Nonetheless, one might ask: is hopeful trust rational? If yes, what makes it rational?
I answer the first question in the affirmative. What makes hopeful trust rational
depends on “knowing something about others’ values and putative capacities relevant
to the domain of our trustful interaction”32. This might be just general knowledge
about common human psychological tendencies that spur trust-responsiveness. Let
me use Holton’s example of the shopkeeper and the employee to illustrate this point.
The shopkeeper has some evidence, antecedent to any act of trust, that the employ-
ee is untrustworthy based on the employee’s past theft records. This would seem to
make it irrational for the shopkeeper to trust the employee. However, the shopkeeper
also knows other things about the employee. The shopkeeper knows that the employ-
ee shares common psychological tendencies that spur trust-responsiveness in human
beings.33 For example, people generally are prone to shame when detected in error,
or people generally want to live up to the expectations of those who trust them. Now
when the shopkeeper trusts the employee with the till, he makes himself vulnerable
to the employee but he also normatively expects that the employee will live up to
the expectations he (the shopkeeper) has toward him. The employee, like any other
human being, is prone to shame when detected in error (theft), and he will not take
the shopkeeper’s trust in him with the till for granted. As Frost-Arnold asserts: “if
the shopkeeper has no reason to believe that the employee lacks these psychological
features and lacks reasons that might override the trust-responsive mechanism, then
the shopkeeper’s hope that the employee will respond to a clear act of trust has some
rational basis”34. This is the motivational power of hopeful trust seen above, when
Betts’s trust in Red Sox fans motivated them to want to be the kind of people who live
up to his trust. Thus, “there is nothing rationally inappropriate about extending our
trust to others beyond … evidence of their prior trustworthiness, so long as our hope
for what they are capable of in light of our trust are rationally based”35.

31 However, Frost-Arnold (2016, 521) warns us: “It is important that in this case the hope does not
outstrip what it is reasonable to expect people to do. Of course, there are many things that it would
be completely unreasonable to hope that someone will do. But if our hopes are that someone will do
something that they are able to do and against which there are no overriding incentives, then, in the
absence of reasons to believe that the trustee will not respond to our trust, we have at least some reason
to believe that our demonstrated trust in them may motivate them to act as we trust them to act.”
32 McGeer (2008) 250-251.
33 Frost-Arnold (2016) 521.
34 Ibid.
How does this account of hopeful trust apply to the trust demonstrated by those who
seek to challenge one’s privilege and prejudice? This question arises because the
antecedent evidence about hearers’ implicit biases, for example, suggests that
privileged hearers will fail to give due uptake to marginalised speakers’ trust. However,
the hopeful trust that the speaker demonstrates in her audience can lead hearers
to work to avoid doing a testimonial injustice to members of a marginalised social
group, to push past defensive mechanisms like implicit biases, and to become aware
of their prejudices and privileges. How does this happen? The vulnerability that S
incurs in testifying to H triggers the trust-responsive mechanism in H. In other words,
when S engages in a clear act of hopeful trust, one that H recognizes as making S
vulnerable, H’s desire to avoid harming S is often automatically activated. This in
turn motivates H to live up to S’s vision of the kind of person he (H) can become.
Recall Mookie Betts’s tweet in response to what had happened to Jones at Fenway
Stadium. Like Holton’s shopkeeper, Betts is trusting beyond the antecedent evidence.
He has good reasons to believe that Red Sox fans are racist, closed-minded and
intolerant, yet he nonetheless makes himself vulnerable to the psychological harm of
future attacks. Regardless of their racist tendencies, Betts hopefully trusts them. And
what happened in response
to Betts’s tweet? Red Sox fans gave Jones a standing ovation once he stepped up to
the plate. Betts’s tweet had a positive effect on the Red Sox fans. It made them aware of
their socially constructed ignorance. An awareness of their ignorance made them in
turn realise that they were not living up to the vision of anti-racist, open-minded and
tolerant members of the MLB community. To put it in the language of hopeful trust,
Betts made himself vulnerable by telling his story of “I am black too”, and challeng-
ing the Red Sox fans’ prejudices and racist tendencies. But he did so in a way that held
out to Red Sox a vision of a moral community they could be – i.e. ‘We as @RedSox
and @MLB fans are better than this’. This vision of anti-racist, open-minded, tolerant
members of the MLB community made Red Sox fans want to live up to that vision,
thus giving Jones a standing ovation that Jones later described as tremendous and
expeditious. S’s hopeful trust in H can thus be summarised: by testifying, S makes
herself vulnerable to H while holding out to H a vision of the person H can be, and
this vision motivates H to overcome his privileges, prejudices and implicit biases and
so to live up to S’s trust.
One final question, before I conclude, concerns the sort of epistemic goods S hopefully
trusts H for. In other words, in trusting H, what does S hopefully trust for? This
question arises from what Katherine Dormandy once urged us to understand about
relationships of trust: “To understand epistemic trust, we must understand the epistemic
lationships of trust: “To understand epistemic trust, we must understand the epistemic
goods that we trust for.”36 For Dormandy, “In case of testimony, there are two types of
epistemic goods: one that the hearer trusts the speaker for and another that the speaker
trusts the hearer for.”37 Dormandy calls the sort of epistemic goods a hearer trusts the
speaker for representational epistemic goods, namely knowledge, evidence, or true
beliefs.38 But since this paper focuses only on a speaker’s trust in a hearer, I will not
discuss representational epistemic goods here. I shall, however, argue that when S
hopefully trusts a privileged H with her testimony, S does it for what Dormandy calls
recognitional epistemic goods and what I shall call transformational epistemic goods.
Recognitional epistemic goods, according to Dormandy, are the sort of epistemic
goods for which S trusts H: believing S that p is true, giving S appropriate credit as
the source of the information, or granting S the final say in how her words are to be
interpreted, etc.39
Although I agree with Dormandy that in cases of testimony S often trusts H for
recognitional goods, I also argue that there is another sort of epistemic goods for
which S may trust H, something I call transformational epistemic goods. The notion
of transformational epistemic goods is something new that this paper contributes
to the literature on trust, testimony and ignorance. Transformational epistemic goods
are epistemic goods that aim at fostering an individual’s epistemic well-being and the
well-being of one’s online or offline epistemic community. By hopefully trusting H, I
argue, S does it for individual transformational epistemic goods like making H cogni-
zant of his privileges and prejudices, or transforming H into an epistemically virtuous
person (e.g. H may acquire epistemic virtues like open-mindedness, tolerance etc.).
Conclusion
This paper aimed to address two fundamental claims that make hopeful trust in
marginalised people’s testimony – on offline and online platforms – epistemically
beneficial. These have been, first, the claim that through hopeful trust one becomes
aware of one’s socially constructed ignorance; and second, the claim that through
hopeful trust one attains both recognitional epistemic goods and what I have called
transformational epistemic goods. I think I have attained these goals, first by analysing
what socially constructed ignorance is and the reasons for the ignorance of one’s
privileges and prejudices, and second by analysing the notion of hopeful trust as a
remedy to one’s socially constructed ignorance, as well as highlighting recognitional
and transformational epistemic goods as the sorts of epistemic goods for which a
speaker S hopefully trusts a hearer H.
NOTE: In writing this paper, I have learned much from the feedback given on Karen
Frost-Arnold’s paper “Social Media, Trust, and the Epistemology of Prejudice”
(2016). I presented Frost-Arnold’s paper together with Maria Xavier Gnanadhas
Joseph Raj on 5 May 2023 at the “Doctorates/PhD-Students Workshop” held at the
Faculty of Catholic Theology, University of Innsbruck, during the Innsbruck-Pune
Conference “Anthropology in the Digital Age: Theological and Philosophical
Responses”, which took place from 3 to 6 May 2023.
References
Aronson, Elliot et al. (2019) Social Psychology, 10th Edition, New York: Pearson.
Baier, Annette (1994) Moral Prejudices: Essays on Ethics. Cambridge, MA: Harvard University Press.
Bailey, Alison (2020) The Weight of Whiteness: A Feminist Engagement with Privilege,
Race, and Ignorance. London: The Rowman and Littlefield Publishing Group, Inc.
Begby, Endre (2013) The Epistemology of Prejudice, in: Thought: A Journal of Philosophy
2 (2), 90-99.
Blum, Lawrence (2004) Stereotypes and Stereotyping: A Moral Analysis, in: Philosophi-
cal Papers 33 (3), 251-289.
Czopp, Alexander / Monteith, Margo (2003) Confronting Prejudice (Literally): Reactions
to Confrontations of Racial and Gender Bias, in: Personality and Social Psychology
Bulletin 29 (4), 532-544.
Daniels, Tim (May 4, 2017) Curt Schilling Says Adam Jones Is ‘Creating a Situation’ by Re-
vealing Fan Slurs. Accessed online: https://bleacherreport.com/articles/2707821-curt-
schilling-says-adam-jones-is-creating-a-situation-by-revealing-fan-slurs. 28.10.2023.
DeCosta-Klipa, Nik. (May 2, 2017) Mookie Betts asks Red Sox fans to support Adam
Jones following racial taunts at Fenway. Accessed online: https://www.boston.com/
sports/mlb/2017/05/02/mookie-betts-asks-red-sox-fans-to-support-adam-jones-follo-
wing-racial-taunts-at-fenway/. 28.10.2023
derStandard.at (March 26, 2023) Rassismus bei der Wohnungssuche: Franziska
bekommt die Wohnung, Muzayen nicht [Racism in the apartment search: Franziska
gets the apartment, Muzayen does not]. Accessed online: https://www.derstandard.
at/story/2000144676593/rassismus-bei-der-wohnungssuche-franziska-bekommt-die-wohnung-muzayen-nicht 3.11.2023.
Dormandy, Katherine (2020) Trust in Epistemology. New York: Routledge.
ESPN.com news services (May 2, 2017) Orioles’ Adam Jones says he was target of racist
abuse at Fenway, https://www.espn.com/mlb/story/_/id/19291263/adam-jones-balti-
more-orioles-says-was-target-racist-abuse-fenway-park.
Fricker, Miranda (2007) Epistemic Injustice: Power and the Ethics of Knowing. Oxford:
Oxford University Press.
Frost-Arnold, Karen (2016) Social Media, Trust, and the Epistemology of Prejudice, in:
Social Epistemology 30 (5-6), 513-531.
Holroyd, Jules (2012) Responsibility for Implicit Bias, in: Journal of Social Philosophy
43 (3), 274-306.
Holton, Richard (1994) Deciding to Trust, Coming to Believe, in: Australasian Journal of
Philosophy 72 (1), 63-76.
Kendall, Frances (2002) Understanding White Privilege. Accessed online: https://www.
american.edu/student-affairs/counseling/upload/understanding-white-privilege.pdf
12.10.2023.
McGeer, Victoria (2008) Trust, Hope and Empowerment, in: Australasian Journal of
Philosophy 86 (2), 237-254.
McIntosh, Peggy (2008) White Privilege and Male Privilege, in: The Feminist Philosophy
Reader, 61-69.
Mikkola, Mari (2020) Self-Trust and Discriminatory Speech, in: Dormandy, Katherine
(ed.). Trust in Epistemology. New York: Routledge, 265-290.
Mills, Charles (2007) White Ignorance, in: Shannon Sullivan & Nancy Tuana (eds.). Race
and Epistemologies of Ignorance. Albany: SUNY Press, 11-38.
Peels, Rik / Blaauw, Martijn (eds.) (2016) Epistemic Dimensions of Ignorance. Cambridge:
Cambridge University Press.
Pohlhaus, Gaile (2012) Relational Knowing and Epistemic Injustice: Toward a Theory of
Willful Hermeneutical Ignorance, in: Hypatia 27 (2), 715-735.
Spelman, Elisabeth (2007) Managing Ignorance, in: Shannon Sullivan & Nancy Tuana
(eds.). Race and Epistemologies of Ignorance. Albany: SUNY Press, 119-131.
Walker, Margaret Urban (2006) Moral Repair: Reconstructing Moral Relations after
Wrongdoing. Cambridge: Cambridge University Press.
About the Authors
Snehal D’Souza: belongs to the Congregation of Sisters of Our Lady of Fatima. She
is a graduate from Pune University and has done her B Th in JD, Pune. She finished
her Masters in Theology with a Licentiate at JD, Pune. Currently she is pursuing her
Doctoral studies in Theology with specialization in the Book of Psalms at the Univer-
sity of Innsbruck.
Wilhelm Guggenberger: born 1966, studied Philosophy and Catholic Theology.
Dissertation on the sociological theory of Niklas Luhmann; Habilitation with a
monograph on the ethics of the economy. Assistant Professor for Christian Social
Ethics at the Department of Systematic Theology at the University of Innsbruck and
currently Dean of the Faculty of Catholic Theology. Board member of the International
Colloquium on Violence and Religion (COV&R). He has visited several Indian
universities and lectured there (Pune, Trichy, Palayamkottai). Married and father
of two children.
Stefan Hofmann SJ: born in 1978 in Germany, he studied philosophy and theology
in Regensburg (Germany), Rome (Italy) and Steubenville (USA); 2004 Diploma in
Catholic Theology (Regensburg), 2006 Master of Arts in Philosophy (Steubenville,
VM Jose SJ: born in 1959. M.Th. at Vidyajyoti in New Delhi in 1994; Doctorate
in Pastoral Theology in Innsbruck in 2009. Doctoral Thesis (published): Towards a
Local Tribal Church: An Anthropological and Missiological Exploration of the Jesu-
its among the Santals, Delhi: Jnana-Deepa Vidyapeeth & Christian World Imprints,
2020. He joined Jnana Deepa Institute in June, 2012. Head of the Dept. for Pastoral &
Moral Theology since September 2018.
Dolichan Kollareth SJ: belongs to the Kerala province of the Jesuits. Fr. Kollareth
holds Master’s Degrees in Psychology and Philosophy and a Ph.D. in Social Psychol-
ogy. He is an Associate Professor of Psychology at Jnana Deepa, Pune, and a Research
Associate Professor at the Department of Psychology and Neuroscience, Boston Col-
lege, USA. He does research in emotion, cognition, and culture, and the findings have
appeared in peer-reviewed psychology journals.
Claudia Paganini: born 1978, three children, has been Professor of Media Ethics at
the Munich School of Philosophy since April 2021. She studied philosophy and theol-
ogy at the universities of Innsbruck and Vienna. Other research interests besides me-
dia ethics are medical, animal and environmental ethics as well as the question of how
the problem of motivation can be solved in ethics. During her time at the University of
Innsbruck, Paganini was a guest lecturer at the universities of Milan, Athens, Zagreb
and Limerick. She is a member of the Ethics Committee of the Medical University of
Innsbruck (MUI) and the Committee for Animal Experimentation Affairs of the Aus-
trian Federal Ministry of Education, Science and Research in Vienna.
Kuruvilla Pandikattu SJ: born 1957, is Chair Professor of JRD Tata Foundation for
Business Ethics at XLRI, Jamshedpur and is professor (Emeritus) of Philosophy and
Science at Jnana Deepa, Institute of Philosophy and Theology, Pune, India. He has
been actively involved in the dialogue between science and religion. Author/Editor of
more than 45 books and 240 academic articles, Pandikattu is a Jesuit priest belonging
to Dumka-Raiganj Province, India. Main topics of his research are: Ethics (incl. busi-
ness and applied), anthropology, artificial intelligence, life-management and transhu-
manism.
Patricia Santos RJM: belongs to the Congregation of the Religious of Jesus and
Mary, Pune Province. She is a full-time member of the Theology faculty at Jnana Deepa
and is Head of the Systematic Theology Department and Director of the Centre for
Women’s Studies.
Kristina Steimer: is a research assistant and PhD Student at the Chair of Media
Ethics at the Munich School of Philosophy. In her doctoral thesis, she examines how
selfies – i.e. the digital self-photography shared via social media – shape the human
self. In particular, she draws on Kierkegaard’s existential philosophy and questions
the selfie’s relationship to anxiety, stereotype, and self-empowerment. Steimer found-
ed the Selfie Research Network at the Center for Ethics of Media and Digital Society.
Her research interests include existential philosophy, digital culture, animal media
studies, and social media.
Andreas Vonach: born 1969, studied Theology in Innsbruck and Jerusalem.
Habilitation with a monograph on the Greek version of the Book of Jeremiah. He is
Assistant Professor at the Department of Biblical Studies and Historical Theology in
Innsbruck and the Innsbruck coordinator of the cooperation with JD. He has been
Visiting Professor at JD several times.
Given the incredible and exponential progress of the digital revolution, affecting all
dimensions of human life, it is proper to reflect on who the human person is from
philosophical and theological perspectives, in order to understand ourselves better.
The cooperation between Jnana Deepa and the University of Innsbruck offers us
the opportunity to bring not only Christian tradition, but also Western and Indian
thinking into conversation with current technological developments. Scholars from
Pune/India and Innsbruck/Austria seek to shed more light on the self-understanding
of the human person within contemporary times to respond meaningfully and
adequately to the fundamental questions of ourselves, our nature and our destiny.
Such an understanding of the human person will hopefully enable us to encounter
God more deeply and experience one another better.