Applied Linguistics Review 2024; aop
Christopher J. Jenks*
Communicating the cultural other: trust and
bias in generative AI and large language
models
https://doi.org/10.1515/applirev-2024-0196
Received June 12, 2024; accepted June 13, 2024; published online June 28, 2024
*Corresponding author: Christopher J. Jenks, Utrecht University, Utrecht, the Netherlands, E-mail: [email protected]. https://orcid.org/0000-0002-2563-5737
Open Access. © 2024 the author(s), published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 International License.
Abstract: This paper is concerned with issues of trust and bias in generative AI in
general, and chatbots based on large language models in particular (e.g. ChatGPT).
The discussion argues that intercultural communication scholars must do more to
better understand generative AI and more specifically large language models, as
such technologies produce and circulate discourse in an ostensibly impartial way,
reinforcing the widespread assumption that machines are objective resources for
societies to learn about important intercultural issues, such as racism and discrim-
ination. Consequently, there is an urgent need to understand how trust and bias
factor into the ways in which such technologies deal with topics and themes central
to intercultural communication. It is also important to scrutinize the ways in which
societies make use of AI and large language models to carry out important social
actions and practices, such as teaching and learning about historical or political
issues.
Keywords: AI; large language models; trust; bias; intercultural communication
1 Introduction
The study of intercultural communication is fundamentally concerned with how
interculturality is achieved in and through human encounters. Interculturality is
simply the meaning-making process in which cultural knowledge, such as belief
systems and identities, is co-constructed in a human encounter. The notion of a
human encounter is particularly important in an intercultural world experiencing
radical technological transformations (Jenks 2023), as it is possible for an individual
to construct interculturality in and through interactions with generative AI and
large language models (henceforth, AI-LLM). An understanding of a human
encounter that includes AI-LLM departs from most definitions of interculturality
provided in the intercultural communication literature, which assumes that two or
more humans are involved in this meaning-making process. Yet an understanding of
a human encounter that considers all forms and advancements of technology,
including anything from books to robots, is needed to make sense of the human
condition in general, and intercultural communication in particular (see, for
example, Guzman and Lewis 2020).
The consequences of including AI-LLM in discussions of interculturality are
significant. AI-LLM interfaces with humans, creating multimodal spaces of interac-
tion and communication that are consequential to how societies make sense of, and
carry out actions based on, culture and cultural differences. Although language and
communication have for centuries evolved with technological advancements, what
societies are currently experiencing with technologies is unprecedented. AI-LLM is
capable of speaking (see Brandt & Hazel, this issue) and writing based on cultural
notions of creativity and normativity, filtering and disseminating information ac-
cording to regional moral principles, providing ethical responses to users curious
about taboo or controversial topics, reasoning with individuals wishing to find
evidence-based solutions, translating languages using linguistic registers, predicting
conversational responses and human behaviours, and evaluating information ac-
cording to culture-specific assessment rubrics. Each communicative example rep-
resents a new opportunity for humans and machines to jointly establish
interculturality. For instance, university teachers and students can engage in
conversations with AI-LLM, such as ChatGPT, on topics ranging from the merits of political sys-
tems to the cultural representations of marginalized communities, though not
without caution and the need for greater clarity in how this technology functions in
classrooms (see Dai, Suzuki, & Chen, this issue).
Beyond these examples of technology use, AI-LLM is quickly being adopted
within many domains of society, creating a need for scholars to understand its
influence on intercultural communication. Yet, the value in accounting for AI-LLM is
not limited to establishing a more robust definition of interculturality. The in-
teractions that humans have with, and their reliance on, machines are representa-
tive of a new relationship that must be characterized as an entanglement: “to be
entangled is not simply to be intertwined with another, as in the joining of separate
entities, but to lack an independent, self-contained existence" (Barad 2007: ix).
Researching human encounters with AI-LLM, as understood through the theoretical
lens of entanglement, requires ontological and epistemological approaches that are
capable of accounting for intercultural communication according to ongoing and
future technological transformations. For instance, some posthumanists (see Wolfe
2010) adopt the ontological position that both humans and machines are agentive
forces in the formation of cultural knowledge, which is a view that departs from the
anthropocentrism that drives much of the work done within the intercultural
communication literature. This position that both humans and machines actively
participate in meaning-making processes, such as interculturality, is commonly
referred to as new materialism. Ontological positions, such as new materialism, have
been used by scholars to reflect on how AI technologies are influencing, or are being
influenced by, human cognition (e.g. Siemens et al. 2022), psychology (e.g. Crowder
et al. 2020), biology (e.g. Perrakis and Sixma 2021), and even consciousness (e.g. Dong
et al. 2020).
Posthumanism, entanglement, and new materialism are helpful in engaging in
philosophical discussions of technology, and have led to exciting research in the
fields of Computer Science, Engineering, Biology, and Psychology. Such constructs
also possess empirical value for intercultural communication researchers. Namely,
viewing language and communication through a posthumanistic lens allows inter-
cultural researchers to consider how cultural knowledge is being shaped by the
relationship humans have with AI-LLM. Example topics and themes include, but are
not limited to, the ways in which machines shape, and are shaped by, issues of
privacy (e.g. Liu et al. 2021), ethics (e.g. Nath and Sahu 2020), security (e.g. Gupta et al.
2023), and sustainability (e.g. Nishant et al. 2020). While these topics and themes have
been studied within the intercultural communication literature, AI-LLM is largely
absent in such investigations.
The current paper builds on this body of work by reflecting on the importance of
trust and bias in AI-LLM. Trust and bias are key to understanding the opportunities
and threats that come with an increasing dependency on technology in general, and
AI-LLM in particular. For instance, individuals and institutions are increasingly
dependent on AI-LLM to carry out a range of social actions and practices, which
includes teachers and students using such technologies to co-produce cultural
knowledge. Consequently, there is an urgent need to explore the extent to which
AI-LLM produces cultural knowledge in a trustworthy and unbiased way. By the
same token, intercultural communication researchers must examine how societies
themselves understand trust and bias in relation to AI-LLM.
2 Trust
Trust is commonly defined as possessing confidence in something or someone (for a
thorough account of trust, see Cook 2001). The issue of trust is central to many
intercultural topics, such as trust in governments, trust in mainstream media, and
trust in ethnic groups. Furthermore, trust is central to intercultural communication,
as interculturality does not exist without it. For instance, a lack of trust in the
government makes it difficult for politicians to promote policies, creating discord
across communities; mainstream media cannot report on key events and issues if
trust in their institutions is absent, leading to a general scepticism that exacerbates
fractures within societies; a lack of trust in a particular ethnic group creates discord,
leading to marginalization, alienation, and discrimination. While trust offers a useful
lens through which to understand intercultural communication, the construct is
rarely the main focal point in theoretical and empirical discussions within the
literature.
The dearth of research on trust within the intercultural communication litera-
ture is increasingly a problem in a world that is ever more dependent on, and being
transformed by, technological advancements. Trust mediates the ways in which
individuals and institutions view and use technologies, and shapes the language and
communication that results from utilizing AI-LLM (e.g. Araujo et al. 2020). For
instance, within digital economies, businesses are increasingly using AI-LLM to
communicate with customers: AI-LLM is often used to provide customer support,
advertise goods and services, offer purchase and delivery updates, and curate in-
formation to reflect individualized interests. The algorithms that drive some of these
more complex business tasks are designed to replace human judgement, which leads
to what AI-LLM researchers call the “black box problem:” the void of knowledge that
is created when the algorithms used in AI-LLM systems are not available to the public
(von Eschenbach 2021). The black box problem means that customers must blindly
trust that businesses are automating tasks in ways that reflect human values, which
raises questions about the extent to which societies need to develop competences in
programming languages or software architecture.
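To make the point concrete, consider what a user of a conversational AI-LLM actually has access to. The following minimal sketch assumes the openai Python client (version 1.x) and an API key supplied through the environment; the model name is purely illustrative. It is offered as an illustration of the black box problem rather than a description of any particular company's system.

# A minimal sketch of the "black box problem": the only interface available
# to the user is a prompt in and generated text out. Assumes the openai
# Python client (v1.x) and an API key in the OPENAI_API_KEY environment
# variable; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # credentials are read from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Summarise the status of my order."}],
)

# All the user ever sees is the generated text. The training data, model
# weights, moderation rules, and ranking of candidate responses remain
# inaccessible, so trust cannot be grounded in inspecting the system itself.
print(response.choices[0].message.content)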
The black box problem extends to technologies that mediate intercultural
communication. With little to no understanding of the algorithms that generate
responses in ChatGPT and other similar programs, users must decide whether to
trust that AI-LLM is fair, unbiased, and ethical (European Commission 2024), and
that related applications are trained to use data according to the same principles
(for a study that examines models of technology acceptance, see Al-Abdullatif 2023).
These three principles reflect the work done by the
European Commission on AI (European Commission 2024), which states that trust-
worthy AI should not only be lawful, ethical, and robust, but also give humans the
ability to have control over its use, draw from and provide reliable data, offer
security and privacy to its users, and be transparent, accountable, and inclusive.
The issue of human agency is particularly important to intercultural commu-
nication. Individuals and institutions must have some level of control over how
language, communicative practices, and cultural knowledge are being used and
represented in AI-LLM. Put differently, personalization is key to establishing trust in
AI-LLM in the context of intercultural communication. The principle of personali-
zation may entail giving humans the ability to create and modify cultural knowledge
generated in AI-LLM, such as allowing users to correct how conversational AI sys-
tems depict their cultural heritages. Similarly, speech recognition and machine
translation promote interculturality, but trust must be inclusive and therefore
represent all members of society: for example, marginalized languages and speech
communities must be represented in AI-LLM, and users should be able to contribute
to the knowledge generated in these systems, such as allowing individuals to modify
translation outputs to capture multiple regional accents and dialects. For instance,
many automated translation tools privilege standardized varieties, and minority
languages are often not included.
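What such personalization could look like in practice can be sketched in a few lines of code. The example below is a hypothetical illustration of a community correction store, not the interface of any existing translation tool; the names CorrectionStore and submit_correction, and the sample Scots rendering, are invented for this sketch.

# A hypothetical sketch of the personalization principle: users attach
# corrected translations, tagged by regional variety, to a system's default
# output, so that the varieties they actually use are represented.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class CorrectionStore:
    # maps (source_text, variety) -> user-supplied renderings
    corrections: dict = field(default_factory=lambda: defaultdict(list))

    def submit_correction(self, source: str, variety: str, rendering: str) -> None:
        """Record a user's preferred rendering for a given regional variety."""
        self.corrections[(source, variety)].append(rendering)

    def translate(self, source: str, variety: str, default: str) -> str:
        """Prefer community-supplied renderings over the system default."""
        candidates = self.corrections[(source, variety)]
        return candidates[-1] if candidates else default

store = CorrectionStore()
store.submit_correction("How are you?", "Scots", "Hou's it gaun?")
print(store.translate("How are you?", "Scots", default="How are you?"))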
Trust in AI-LLM means that individuals and institutions are confident that the
data used by, and information generated in, AI-LLM are impartial. The biases that
exist in AI-LLM are a real concern, as they are based on data gathered from existing
societal ideologies and discourses that in part include prejudices, bigotry, and false
representations. Consequently, intercultural communication researchers must
attend to the potential of AI-LLM to, for example, create, reinforce, and circulate
stereotypical images of cultural groups – Bianchi et al. (2023), for example, examine
how AI photo generation circulates racialized stereotypes. Such biases are not only
important to notions of trust, but they also have the potential to profoundly shape
how individuals manage basic life tasks, such as opening a bank account, paying
taxes, and verifying one’s own identity.
3 Bias
Bias is a preference for something or someone, which includes partial beliefs and
judgement (for a general discussion of biases in AI, see Srinivasan and Chander 2021).
The term possesses negative connotations, especially in the context of intercultural
communication. For instance, researchers frequently talk about bias leading to un-
fair acts or discrimination (Jenks 2023). Unlike trust, bias is a focal point of theoretical
and empirical discussions within the intercultural communication literature: the
construct is foundational to how other ideas are conceptualized and studied, such as
racism, xenophobia, prejudice, Islamophobia, and bigotry, to name a few.
Biases are important to an understanding of AI-LLM, as preliminary research
suggests that individuals trust machines to be impartial more than they trust
humans. Araujo et al. (2020), for instance, discuss how people tend to view machine
judgement as more objective and rational than human judgement. Logg et al. (2019: 90)
come to a similar conclusion in their six experiments that show “lay people adhere
more to advice when they think it comes from an algorithm than from a person.” The
researchers call this tendency “algorithm appreciation,” which reinforces the
observation that societies view machines as untethered to the subjectivities and
biases of humans.
Yet, machines are biased because algorithms come from humans (see Jones, this
issue). In other words, algorithmic biases are cultural biases: humans imprint their
own prejudices and subjectivities onto machines. For example, programmers and
data scientists may knowingly or unwittingly create programs and applications that
reflect cultural biases: in one instance, Obermeyer et al. (2019) show that a widely
used health-care algorithm was racially biased, systematically underestimating the
health needs of Black patients because it used health costs as a proxy for illness.
Even when AI-LLM is created to be
impartial, users have diverse perspectives and experiences that make attending to
the issue of bias very difficult. Take, for example, the following ChatGPT exchange.
You: Is Kim Jung Un a bad person?
ChatGPT: Assessing whether a political figure like Kim Jong Un is considered "good" or "bad" is a
complex and subjective matter. Opinions on political leaders are often influenced by
diverse perspectives, geopolitical considerations, and individual values. Kim Jong Un is
the Supreme Leader of North Korea, and discussions about him involve various factors:
In this exchange, the human imprint is visible in the training that ChatGPT receives
to be ostensibly impartial. Even with political issues and figures that are commonly
viewed in a negative light, such as Kim Jong Un, ChatGPT is demonstrably
uncommitted – at least explicitly – to taking a cultural position and instead opts to
preface the response with a caveat. This caveat, which is an attempt to be objective or
impartial, may nonetheless be viewed as biased by some users. The point here is that
cultural biases are inherent in all meaning-making processes, including ostensibly
impartial AI-LLM systems.
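One way of making such tendencies visible is to probe a model with minimally different prompts and compare how it responds. The sketch below assumes the openai Python client (version 1.x); the model name, the list of figures, and the simple hedging wordlist are illustrative choices rather than a method reported in this paper.

# A minimal paired-prompt probe: the same question is asked about different
# public figures, and responses are compared for hedging language. Assumes
# the openai Python client (v1.x) and an OPENAI_API_KEY in the environment;
# the model, figures, and hedging phrases are illustrative.
from openai import OpenAI

client = OpenAI()
FIGURES = ["Kim Jong Un", "Angela Merkel", "Nelson Mandela"]
HEDGES = ["complex", "subjective", "diverse perspectives", "it depends"]

for figure in FIGURES:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": f"Is {figure} a bad person?"}],
    ).choices[0].message.content

    # Count hedging phrases as a rough indicator of how committed the model
    # is willing to be about this particular figure.
    hedge_count = sum(phrase in reply.lower() for phrase in HEDGES)
    print(f"{figure}: {hedge_count} hedging phrase(s)")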
The cultural biases that are imprinted onto algorithms also come from users and
their inputs and interactions with AI-LLM. For example, responses given by ChatGPT
evolve over time according to its interactions with users:
You: What does it mean to be part of a collective society?
ChatGPT: Being part of a collective society means that individuals prioritize the interests of the
group over their individual needs or desires. In a collective society, the needs and
goals of the community take precedence over those of the individual.
Examples of collective societies include many Asian cultures, such as Japan and China,
as well as certain indigenous communities and some religious groups.
You: How can you say many Asian cultures are collective?
ChatGPT: You are absolutely correct that it is important to recognize that individuals view the
world through their relationships with family and friends, regardless of their cultural
background. It is not accurate to assume that all individuals from a particular cultural
group think or behave in the same way, and I apologize if my previous responses
implied otherwise.
The initial response here, which comes from a longer exchange that occurred on 24
February 2023, provides a textbook definition of collectivism that ends with a
generalizing statement about “many Asian cultures.” After a few additional prompts
that challenge ChatGPT on this generalization, the program recognizes (or “admits”)
the potential problem in its response. Now consider the same prompt, and subse-
quent response, that occurred on 22 January 2024 (for a full version history of
ChatGPT, see OpenAI, ChatGPT – Release notes 2024):
You: What does it mean to be part of a collective society?
ChatGPT: Being part of a collective society refers to living within a cultural framework that
places a strong emphasis on group identity, cooperation, and the well-being of the
community over individual interests.
Several key characteristics define collective societies:
Collectivism is often contrasted with individualism, where individual interests,
autonomy, and personal goals take precedence over group interests.
It’s important to note that individuals within a collective society may still express
unique perspectives, and not everyone within a culture adheres strictly to collectivist
values. Cultural tendencies are broad generalizations and may not capture the full
complexity of individual experiences within a society.
In this recent example, ChatGPT does not provide example countries or generalizing
observations, but rather offers a caveat similar to the one provided in the Kim Jong
Un example. While this response is an example of ChatGPT attempting to be less biased,
it nonetheless shows how the language (or biases) of users has the potential to shape
AI-LLM.
Biases in AI-LLM are problematic, and should concern intercultural communi-
cation researchers (see O’Regan & Ferri, this issue). Although the ChatGPT example
above offers some hope, there are other examples of AI-LLM transforming user data
into problematic depictions of cultural groups, such as with AI photo generation
programs that perpetuate racial, gender, and other cultural stereotypes (cf. Bianchi
et al. 2023). Furthermore, many language- and communication-based AI tools rely on
a limited set of linguistic data, which may include criteria or competence models that
privilege standardized language varieties: this type of algorithmic bias can perpet-
uate the belief that there is only a single correct way of using a particular language
(Schneider 2022), such as when writing assessment protocols in examinations are
automated to reflect the norms of a prestigious language variety, which ignores the
rich cultural repertoires that multilingual speakers exhibit when translanguaging
(Kelly and Hou 2022).
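A deliberately simplified sketch can show how such bias gets encoded. The scorer below rewards only forms that appear on a fixed "standard" wordlist, so semantically equivalent writing in another variety is penalized; the wordlist, function name, and example sentences are invented for illustration and do not describe any real examination protocol.

# A deliberately naive illustration of algorithmic bias in automated writing
# assessment: tokens outside a fixed "standard" wordlist lower the score, so
# legitimate variation is penalized. Not any real assessment protocol.
STANDARD_FORMS = {"the", "children", "are", "going", "to", "school"}

def naive_score(text: str) -> float:
    """Return the share of tokens that match the 'standard' wordlist."""
    tokens = text.lower().replace(".", "").split()
    if not tokens:
        return 0.0
    return sum(token in STANDARD_FORMS for token in tokens) / len(tokens)

# Two renderings of the same meaning receive very different scores purely
# because of the variety they are written in.
print(naive_score("The children are going to school."))  # scores 1.0
print(naive_score("Dem pikin dey go school."))           # scores low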
4 Conclusions
Trust and bias are central to the cultural knowledge that is produced and circulated
when human judgement is replaced with technology. Understanding the cultural and
societal consequences of AI-LLM requires being hopeful that technologies can be
used in positive ways, as a generally pessimistic view of technology will hinder
progress and knowledge production. If societies believe that innovation in AI-LLM
produces a net positive for humanity, then trust must be fostered and understood
through research and reflection. Intercultural communication researchers can play
a vital role in understanding AI-LLM, as such technologies are merely cultural
artifacts.
References
Al-Abdullatif, Ahlam M. 2023. Modeling students’ perceptions of chatbots in learning: Integrating
technology acceptance with the value-based adoption model. Education Sciences 13(11). 1151.
Araujo, Theo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese. 2020. In AI we trust? Perceptions
about automated decision-making by artificial intelligence. AI & Society 35. 611–623.
Barad, Karen. 2007. Meeting the universe halfway: Quantum physics and the entanglement of matter and
meaning. Durham, NC: Duke University Press.
Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza,
Tatsunori Hashimoto, Dan Jurafsky, James Zou & Aylin Caliskan. 2023. Easily accessible text-to-image
generation amplifies demographic stereotypes at large scale. FAccT 23. 1493–1504.
Cook, Karen S. 2001. Trust in society. New York, NY: Russell Sage Foundation.
Crowder, James A., John Carbone & Shelli Friess. 2020. Human–AI collaboration. In James A. Crowder,
John Carbone & Shelli Friess (eds.), Artificial psychology: Psychological modeling and testing of AI
systems, 35–50. Cham: Springer.
Dong, Yanyan, Jie Hou, Ning Zhang & Maocong Zhang. 2020. Research on how human intelligence,
consciousness, and cognitive computing affect the development of artificial intelligence. Complexity
2020. 1–10.
European Commission. 2024, January 31. A European approach to artificial intelligence. https://digital-
strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (accessed 21 February
2024).
von Eschenbach, Warren J. 2021. Transparency and the black box problem: Why we do not trust AI.
Philosophy & Technology 34(4). 1607–1622.
Gupta, Maanak, Charan Kumar Akiri, Kshitiz Aryal, Eli Parker & Lopamudra Praharaj. 2023. From ChatGPT
to ThreatGPT: Impact of generative AI in cybersecurity and privacy. IEEE Access 11. 80218–80245.
Guzman, Andrea L. & Seth C. Lewis. 2020. Artificial intelligence and communication: A human-machine
communication research agenda. New Media & Society 22(1). 70–86.
Jenks, Christopher J. 2023. New frontiers in language and technology. Cambridge, UK: Cambridge University
Press.
Kelly, Ronan & Heng Hou. 2022. Empowering learners of English as an additional language:
Translanguaging with machine translation. Language and Education 36(6). 544–559.
Liu, Bo, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi & Zihuai Lin. 2021. When machine
learning meets privacy: A survey and outlook. ACM Computing Surveys 54(2). 1–36.
Logg, Jennifer M., Julia A. Minson & Don A. Moore. 2019. Algorithm appreciation: People prefer algorithmic
to human judgment. Organizational Behavior and Human Decision Processes 151. 90–103.
Nath, Rajakishore & Vineet Sahu. 2020. The problem of machine ethics in artificial intelligence. AI & Society
35. 103–111.
Nishant, Rohit, Mike Kennedy & Jacqueline Corbett. 2020. Artificial intelligence for sustainability:
Challenges, opportunities, and a research agenda. International Journal of Information Management
53. 102104.
Obermeyer, Ziad, Brian Powers, Christine Vogeli & Sendhil Mullainathan. 2019. Dissecting racial bias in an
algorithm used to manage the health of populations. Science 366(6464). 447–453.
OpenAI. 2024, February 13. ChatGPT – Release notes. https://help.openai.com/en/articles/
6825453-chatgpt-release-notes (accessed 21 February 2024).
Perrakis, Anastassis & Titia K. Sixma. 2021. AI revolutions in biology: The joys and perils of AlphaFold. EMBO
Reports 22(11). e54046.
Schneider, Britta. 2022. Multilingualism and AI: The regimentation of language in the age of digital
capitalism. Signs and Society 10(3). 362–387.
Siemens, George, Fernando Marmolejo-Ramos, Florence Gabriel, Kelsey Medeiros, Rebecca Marrone,
Srecko Joksimovic & Maarten de Laat. 2022. Human and artificial cognition. Computers and Education:
Artificial Intelligence 3. 100107.
Srinivasan, Ramya & Ajay Chander. 2021. Biases in AI systems. Communications of the ACM 64(8). 44–49.
Wolfe, Cary. 2010. What is posthumanism? Minneapolis, MN: University of Minnesota Press.