Chatbot Anthropomorphism in Services: An Anthropomorphism Perspective
Author
Mai Nguyen, Lars-Erik Casper Ferm, Sara Quach, Nicolas Pontes, Park Thaichon
Published
2023
Journal Title
Psychology & Marketing
Version
Version of Record (VoR)
DOI
10.1002/mar.21882
Rights statement
© 2023 The Authors. Psychology & Marketing published by Wiley Periodicals LLC. This is an
open access article under the terms of the Creative Commons Attribution License, which
permits use, distribution and reproduction in any medium, provided the original work is properly
cited.
Downloaded from
[Link]
RESEARCH ARTICLE
1 Department of Marketing, Griffith University, Brisbane, Queensland, Australia
2 Centre of Science and Technology Research and Development, Thuongmai University, Hanoi, Vietnam
3 UQ Business School, The University of Queensland, Brisbane, Queensland, Australia
4 Department of Marketing, Griffith University, Gold Coast, Queensland, Australia
5 Faculty of Business Education, Law and Arts, University of Southern Queensland, City of Ipswich, Queensland, Australia

Correspondence
Mai Nguyen, Department of Marketing, Griffith University, 170 Kessels Rd, Brisbane, QLD 4111, Australia.
Email: m.nguyen2@[Link] and maidthm@[Link]

Abstract
This study examines the effects of chatbot anthropomorphic language on customers' perceptions of chatbot competence and authenticity, and in turn on customer engagement, while taking into consideration the moderating roles of humanlike appearance and brand credibility. We conducted two experimental studies to examine the conceptual framework. Study 1 tests the moderating effect of a chatbot's anthropomorphic appearance on the relationship between the chatbot's language and customer engagement. Study 2 tests the moderating effect of brand credibility on the relationship between a chatbot's anthropomorphic language and customer engagement. The findings confirm that the interaction between humanlike appearance (via the use of avatars) and anthropomorphic language (such as the use of emojis) in conversations with customers influences customer engagement, and that this effect is mediated by perceived chatbot competence and authenticity. Further, the positive effect of anthropomorphic language on perceived competence, and subsequently on authenticity and engagement, is only significant when brand credibility is low (vs. high). This study offers insights into the effect of chatbots' anthropomorphic language and provides suggestions on how to devise efficient strategies for engaging customers using chatbots.
KEYWORDS
anthropomorphism, authenticity, brand credibility, chatbots, competence, emojis
Flavián et al., 2021). Hence, it is important to further explore new horizons in customer experience.

Chatbots are an important strategic brand asset as they provide the chance to deliver efficient customer service around-the-clock (Rajaobelina et al., 2021; Thomaz et al., 2020). For instance, over the course of a year, more than 100,000 chatbots were created to interact with customers on the Facebook Messenger platform (Johnson, 2017). Chatbots are capable of simple jobs, like sending customers airline tickets, as well as more difficult ones, like giving them shopping, health, or financial advice (Jiménez-Barreto et al., 2021). By 2025, 95% of all consumer–brand interactions will either be improved upon or replaced by chatbots, according to market analysis (Servion, 2020). Thus, it is critical for customer experience researchers to investigate chatbots' role in customer experience and its implications (Mariani et al., 2022; Mehta et al., 2022).

Despite the fact that chatbots are frequently used to communicate with customers, recent reports show that these interactions frequently go wrong (e.g., Orlowski, 2017). Consumer scepticism toward user–chatbot interactions and a preference for interacting with human agents (as opposed to chatbots) are frequent outcomes of the accumulation of these negative experiences and a general customer mistrust of technology (Elsner, 2017). The lack of perceived competence and authenticity in the customer–chatbot interaction may be the cause of consumers' scepticism of and opposition to chatbots, but these factors have not received sufficient attention in previous studies. It is crucial that chatbot designers and programmers comprehend how to give chatbots vital social-emotional and relational aspects that increase the humanlikeness of chatbot interactions, which raises the perceived competence and authenticity of the conversation. This is because advances in AI technology have made it possible for businesses to program chatbots to provide personalized responses based on cutting-edge speech recognition technologies, enabling a more anthropomorphic interaction (Mozafari et al., 2021; Wilson et al., 2017).

While some research has looked at how chatbots resemble human language, more research is needed to determine how different language styles affect customer perceptions of a chatbot's humanlikeness, which may lead to different levels of perceived competence and authenticity (S. Y. Kim et al., 2019; Pizzi et al., 2023). Additionally, the effort to anthropomorphize chatbots faces the difficulty that consumers find it increasingly challenging to correctly distinguish humans from chatbots (Pizzi et al., 2023). A chatbot is the representative of a brand when interacting with customers. Thus, brand credibility, as information received from a brand, may also form customer expectations, which in turn may influence the interaction with chatbots (Hamilton et al., 2021). However, brand credibility has been overlooked in the interaction between chatbots and online customers: as far as the authors' literature review extends, no studies have examined the role of brand credibility in this interaction (Appendix A).

The research question of this paper is: "How does chatbot anthropomorphic language affect customers' perceptions of chatbot competence and authenticity and customer engagement?" We draw on social response theory (e.g., Moon, 2000) to capture the set of drivers behind customer engagement resulting from chatbots and their anthropomorphism. Social response theory is an appropriate theoretical foundation because of its role in establishing how individuals treat technology (or, for our study, chatbots) in their interactions, and it helps explain their behavioral outcomes (Miao et al., 2022). Discerning how anthropomorphic elements enhance individuals' social responses to chatbots will prove meaningful in understanding how anthropomorphism impacts a chatbot's perceived competence and authenticity (Grazzini et al., 2023; Huang & Lee, 2022). Thus, we employ social response theory in our paper to explicate and join this important and emerging arena of chatbot research. The current research has two main objectives: first, to examine how the anthropomorphic language and appearance of chatbots influence customers' perceptions of competence, authenticity, and customer engagement; and second, to explore the role of brand credibility in shaping consumers' interactions with chatbots. We conducted two experiments to test our conjectures. We are among the first to examine the effect of emojis as a form of anthropomorphic language: using social response theory, we extend prior research and include emojis as a vital form of anthropomorphic language. In addition, our results show that brands with low credibility should avoid the use of emojis in their chatbots—but this effect is negated when brand credibility is high.

Our research contributes to the existing body of knowledge in the field of digital services by providing new insights into the role of chatbots' anthropomorphic language in predicting customer perception and engagement. Our study makes three main contributions to the literature. First, we propose that chatbots' use of emojis can positively influence customers' social responses, as customers begin to perceive the chatbot as a social actor (Miao et al., 2022). Our findings also suggest that emojis can be used to further reduce feelings of anger when interacting with a chatbot. Second, our study provides a more nuanced understanding of how anthropomorphizing a chatbot can lead to customer engagement, mediated by authenticity and perceived competence. Finally, our research may pave the way for future studies to incorporate emojis in chatbots as a means of expressing emotions and reducing feelings of uncanniness in a nonintrusive manner. The current research also provides suggestions for marketers on how to create efficient plans for interacting with clients through chatbots.

The paper is structured as follows. First, the literature review and hypotheses development are presented. Next, we go over the two experiments that test the hypotheses. Finally, we address the general discussion, implications, limitations, and directions for future research to form the paper's conclusion.

2 | LITERATURE REVIEW

2.1 | Anthropomorphism

Anthropomorphism is "the attribution of human characteristics to nonhuman entities" (Zhou et al., 2019, p. 954). Anthropomorphism is proposed to be drawn from a theory of mind where humans
attribute intentions, attitudes, and belief schemas to explain the actions of a human or nonhuman entity (Epley, 2018). Such attribution may include emotional traits (e.g., desire and openness; Fan et al., 2016), physical traits (e.g., a humanoid face or body), or cognitive traits (e.g., knowledge; Nguyen et al., 2022). Humans often anthropomorphize objects as a way of helping them understand and relate to the world around them. For example, humans often attribute agency to animals, natural forces, or deities to explain their behaviors, even if there is no intentionality behind these entities' actions (e.g., "The dog is affectionate" vs. "The dog loves me"; Epley et al., 2007). Therefore, anthropomorphism can be specifically thought of as a process that imbues nonhuman entities with a sense of agency as a means of forming a relationship with the focal object (Newman, 2018; Tam et al., 2013).

Anthropomorphism has been used in the marketing space to generate relationships with customers. For instance, brand logos and brand personas are attempts by a brand to be perceived as a living entity to encourage relationships with customers (Aggarwal & McGill, 2007). Indeed, the Michelin Man logo, invented in 1889, was amongst the first anthropomorphic brand logos created that (1) humanized a brand and (2) provided an engaging brand experience that resonated with customers (Jurberg, 2020; Newman, 2018). Anthropomorphism has also been used in autonomous vehicles to increase trust (Waytz et al., 2014), implemented as product features to increase product preference (MacInnis & Folkes, 2017), and employed in social cause campaigns to increase message compliance (Ahn et al., 2014). Overall, brands strive for customers to develop a relationship with them and use anthropomorphic strategies to do so (Steinhoff & Palmatier, 2021).

In the digital age, technology has removed most semblance of direct human presence from social interactions (Steinhoff & Palmatier, 2021). To instil a sense of social presence, firms have begun anthropomorphizing customers' experiences across a wide variety of platforms (Brandão & Popoli, 2022) and devices (e.g., voice assistants such as Alexa; Mende et al., 2019). With the growth of online shopping, implementing anthropomorphic elements within a customer's journey has enabled the presence of a perceived partner (Hamilton et al., 2021). This generally leads to increased positive feelings and experiences on the part of the customer. However, anthropomorphism can be a double-edged sword, leading to feelings of reactance and anger (Crolic et al., 2022). For example, if an algorithm fails (Srinivasan & Sarial-Abi, 2021) or if agency is attributed to a specific anthropomorphic actor (Waytz et al., 2014), negative feelings may be greater than if those actors were non-anthropomorphic (Garvey et al., 2023). This effect is stronger when there is an interaction with an anthropomorphic actor, as negative attributions are more likely to occur if the actor is highly anthropomorphic (T. W. Kim et al., 2023). As a result, anthropomorphism can reap great benefits but can also be quite risky, as is most clear in the case of chatbots, discussed next.

2.2 | Chatbots and current research

Chatbots can be broadly thought of as "any software application that engages in a dialog with a human using natural language" (Rese et al., 2020, p. 2), and they are often adopted as customer-facing agents in contexts ranging from retailing and travel to legal and medical services (Garvey et al., 2023). Chatbots can act as a 24/7 touchpoint answering customer queries and provide a 15%–90% cost-reduction opportunity as they replace the need for a human agent (Bakhshi et al., 2018). Indeed, 80% of firms have incorporated, or plan to use, chatbots in their service provision (W. Kim et al., 2022). As a result, a flurry of research has emerged to determine what factors contribute to the efficacy of chatbots (Appendix A).

Most of this research has investigated the effectiveness of chatbots and their specific elements, namely (1) the effect of chatbot anthropomorphism and (2) the effectiveness of the communication elements the chatbot uses.

First, anthropomorphism has been widely studied to determine what aspects of chatbots are most effective in enabling adoption intent (Sheehan et al., 2020), purchase intentions (Crolic et al., 2022), engagement (Kull et al., 2021), or satisfaction and loyalty (Esmark Jones et al., 2022). Studies have often compared the presence of humanlike (vs. non-humanlike) avatars to understand whether anthropomorphizing a chatbot works to increase positive outcomes (Jin & Youn, 2022). Interestingly, this has produced a variety of mixed effects in the literature. For instance, anthropomorphizing a chatbot can be detrimental in scenarios where a customer is angry (Crolic et al., 2022). Chatbots have also been found to be better than humans at giving bad news to customers (Garvey et al., 2023). Further, Drouin et al. (2022) found that individuals who spoke to a chatbot had fewer negative emotions but reported a greater sense of homophily with a human—which contrasts with Pizzi et al. (2021), who found that lower anthropomorphizing of chatbots increased feelings of reactance.

These fragmented findings have been attributed to situational factors (T. W. Kim et al., 2023), for example, medical diagnosis (Longoni et al., 2019), financial contexts (Luo et al., 2019), or travel and banking (Kull et al., 2021). Yet even in the face of these fragmented findings, research has forged forward to find what aspects of chatbots work and what do not. This has led to research classifying which communication elements are most effective.

Second, the ways in which a chatbot interacts with customers are a cause of great scrutiny in the literature (see Appendix A for previous studies on chatbots). The majority of papers focus on text-based communications (e.g., Rese et al., 2020), with fewer focusing on imagery (e.g., Roy & Naidoo, 2021) and speech (e.g., Luo et al., 2019). Text-based elements often focus on using schemas to determine the most effective way to anthropomorphize chatbot–customer interactions (Pizzi et al., 2021), such as the use of humor (Shin et al., 2023) or warmth (Roy & Naidoo, 2021) to reduce customers'
uncertainty when interacting with a nonhuman entity (Yu et al., 2022). These specific elements of chatbots serve to simulate social presence, which leads to a better customer experience (Rizomyliotis et al., 2022) and is a good predictor of chatbot usage continuance (Jin & Youn, 2022).

As chatbots become more advanced, imagery is an emerging factor in chatbot interactions. For example, chatbots can provide images of the product (W. Kim et al., 2022) or use GIFs (animated images) and memes (widespread inside jokes; Zhang et al., 2022). Combining both text and imagery may contribute to stronger intentions and attitudes in chatbot–customer interactions. However, research has focused primarily on text and imagery but "ignored social media afforded language forms, e.g., emojis" (Ge & Gretzel, 2018, p. 1272). Scrutiny of emojis is important in chatbot interactions, especially as they can provide nonverbal cues to convey feelings—a sign of anthropomorphism (Wu et al., 2022). Emojis are proposed to generate 57% more likes on Facebook and increase click-through rates by 241% (Cyca, 2022a). Emojis are purported to convey meaning, and posts containing emojis have been found to increase positive emotions and purchase intentions over social media (Das et al., 2019). Further, the presence of emojis is proposed to prompt a greater effect on the helpfulness of online reviews (Wu et al., 2022). As emojis have become an integral part of online communications, viewing chatbots only through text and imagery elements is a major shortcoming (Ge & Gretzel, 2018). As a result, this study will build on prior research and focus on the use of emojis as a form of anthropomorphism in chatbot–customer interactions.

2.3 | Social response theory

Social response theory suggests that the more anthropomorphic a chatbot is, the greater the likelihood that people will react to it as a human actor and apply social rules toward it (Miao et al., 2022). Specifically, when any technology possesses a set of humanlike characteristics (such as language, turn-taking, politeness, and interactivity), individuals begin treating it as a social actor (Moon, 2000; Nass et al., 1996). We propose that social response theory plays a central role in the reactions customers have toward chatbots where, as mentioned prior, the more anthropomorphic the chatbot, the more human characteristics and social rules are attributed to it (Crolic et al., 2022). It is the level of anthropomorphism attributed to a chatbot that may increase individuals' tendency to perceive and behave toward it as more (vs. less) human. In essence, the anthropomorphic elements of a chatbot, from its visual appearance to its language ability, will influence the level of social response humans will have toward it.

Turning toward our study's focus, these anthropomorphic cues that chatbots present, among which we argue emoji usage is prominent, will influence an individual's social response to a chatbot. For example, emojis play a double-edged role in customers' reactions due to contextual factors influencing social responses (e.g., emojis in professional vs. communal contexts; Li et al., 2019). In fact, the level of realism of ChatGPT's and Bing's chatbot's use of emojis has sparked ethical concerns wherein emoji usage may make a chatbot indistinguishable from humans and induce mistakes through empathetic social responses (Véliz, 2023). For instance, researchers placed a pair of eyes on an "honesty box" in a university coffee shop and found people paid up to three times as much as the control group due to feelings of observation (Bateson et al., 2006).

We extend such thinking, via social response theory, proposing that emojis convey emotion and connection with individuals, increasing a chatbot's anthropomorphism and individuals' likelihood to treat the chatbot as a social actor (Moon, 2000). As such, we propose that the use of emojis in chatbots is becoming increasingly important to understand, as their use by AI agents begins to blur the distinction between humans and nonhumans, leading to greater influence on individuals' social responses (Véliz, 2023). Overall, we build our research on social response theory due to its explanatory power for why individuals engage with high (vs. low) anthropomorphic chatbots and its ability to explain their behavioral outcomes.

3 | HYPOTHESES DEVELOPMENT

3.1 | Chatbot anthropomorphism and customer engagement

Customer engagement, for this study, is defined as "a customer's behavioral manifestations that have a brand or firm focus, beyond purchase, resulting from motivational drivers" (Van Doorn et al., 2010, p. 254). We adopt this definition as we find traditional definitions of customer engagement frequently assume affective, cognitive, and behavioral facets (e.g., Chi et al., 2022; Hollebeek et al., 2014). Such studies often view engagement through social media or online brand communities in relation to a focal actor (the brand) and attribute an emotional and cognitive component to the interactions (i.e., Khan et al., 2016). This is reasonable for brand communities due to their ongoing social components, but chatbot interactions are short and goal-driven, and the interaction ceases upon goal attainment or failure (e.g., whether the chatbot answered the customer's question correctly or not). Due to our studies' context, we propose that customers' interactions with a chatbot influence their behaviors (i.e., word-of-mouth and review intentions) as driven by the customer's specific motivations to interact (e.g., information seeking). Moreover, for customer engagement in the realm of chatbots, behavioral facets are most relevant, which coincides with prior research (see Kull et al., 2021; Mostafa & Kasamani, 2022).

Customer engagement has a wide array of outcomes, ranging from word-of-mouth recommendations to knowledge sharing and helping other customers (Mostafa & Kasamani, 2022). Through the lens of chatbot interactions, a customer's engagement may heighten based on their perceived demographic similarity to the chatbot and enable a better co-creation process (Esmark Jones et al., 2022). Additionally, the language and tone used by a chatbot have been found to increase customer engagement and their social response
(Kull et al., 2021). With regard to emojis, prior research has shown that emojis increase the number of likes on social media posts by 72% and comments by 70% when compared to posts that do not feature them (Ko et al., 2022). Further, customers who received a smiley face were significantly happier, while those who received a negative emoji felt worse (Das et al., 2019). We thus propose that the inclusion of emojis (vs. none) in chatbot communication is a sign of anthropomorphism and will lead to a higher level of customer engagement.

The physical appearance of a chatbot has been the center of much anthropomorphism discussion in the extant literature (Garvey et al., 2023; T. W. Kim et al., 2022; Sheehan et al., 2020). Research has essentially agreed that the greater the anthropomorphism of an agent, the greater customers' expectations are of that agent. For example, aligning with social response theory, Miao et al. (2022) propose a typology of avatar behavior and form realism. The authors' typology suggests that the higher (vs. lower) an avatar's form realism, the greater (vs. lesser) the expectations of its behavioral realism. To illustrate, Go and Sundar (2019) found that the anthropomorphism of chatbot agents triggered human/machine heuristics of how customers would evaluate a subsequent interaction. As discussed earlier, the effect of anthropomorphizing a nonhuman entity is to assign agency and intentionality to it (Epley et al., 2007). Therefore, as a chatbot's humanlike appearance increases, more human capabilities are attributed to the chatbot (Garvey et al., 2023). We propose that the anthropomorphic appearance of a chatbot (vs. none) will have an interaction effect with the use of emojis (i.e., humanlike communication). The interaction between these two factors will increase the overall anthropomorphism of the chatbot and impact customer engagement. Formally, we propose:

H1. The interaction of a chatbot's anthropomorphic appearance and language affects customer engagement.

The presence of anthropomorphism increases perceived competence (Crolic et al., 2022). Competence reflects the ability of robots (such as chatbots) to perform a task with intelligence, skill, and efficacy (Belanche, Casaló, Schepers, et al., 2021). Despite the objective competence of chatbots, customers generally have a lesser preference to interact with them (Luo et al., 2019). This effect can be attenuated by the perceived realism of chatbot communications (Kull et al., 2021). For example, a higher (lower) level of anthropomorphism may increase (decrease) levels of perceived competence due to the attribution of human characteristics, even before any interaction has taken place (Miao et al., 2022). In a human service employee setting, Li et al. (2019) found that the use of emoji by human agents was perceived as less competent in most conditions. Contrastingly, the use of emotional expression, such as emojis, in chatbots may be perceived as enhancing competence (Rizomyliotis et al., 2022). Interestingly, individuals with a future (vs. present) time orientation reported higher levels of brand attitude and purchase intentions based on the perceived competence of a chatbot (Roy & Naidoo, 2021). Overall, these studies show that evidence on the perceived competence of chatbots using emojis is mixed, but we agree with Rizomyliotis et al. (2022) and extend their findings to propose that the perceived competence of chatbots may be heightened in the presence of emojis in customer–chatbot interactions.

Authenticity can be thought of as being "the real thing" (Rese et al., 2020). The authenticity of a chatbot can refer to a customer's ability to communicate with the chatbot in a natural way (Rese et al., 2020). Yet algorithms are perceived as less authentic than their human counterparts (Yu et al., 2022). Anthropomorphic cues, such as language or visual elements, may provide a direct signal to a customer about the authenticity of a chatbot and heighten feelings of engagement and social response (Esmark Jones et al., 2022; Nass et al., 1996). The social presence of an anthropomorphic chatbot has been found to increase its perceived authenticity and extend behavioral intentions such as continued usage (Rese et al., 2020). Therefore, in processing and interacting with chatbots with higher levels of anthropomorphism, we expect customers to view them as more authentic than their non-anthropomorphic counterparts.

As a result, we can expect that when customers perceive a chatbot with a higher (vs. lower) level of anthropomorphism, the authenticity and perceived competence of the chatbot will increase in tandem, by shifting perceived authenticity and perceived competence cues toward specific heuristics (i.e., anthropomorphic elements). This reflects the social response of individuals whereby, the higher (vs. lower) the anthropomorphism they perceive within a chatbot, the greater (vs. lesser) the extent to which they will perceive the chatbot as a social actor. As emojis are proposed to increase the authenticity (Kull et al., 2021) and perceived competence (Rizomyliotis et al., 2022) of a chatbot, we argue that higher (vs. lower) anthropomorphic cues (such as text and visuals) will have different effects as mediated by authenticity and perceived competence. Consequently, we propose the following hypothesis.

H2. The influence of high (vs. low) anthropomorphic language on customer engagement, as moderated by anthropomorphic appearance, is serially mediated by (a) perceived competence and (b) authenticity.

3.2 | Chatbot anthropomorphic language and brand credibility

Brand credibility can be understood as the information received from a brand that is contingent on the brand's willingness and ability to deliver what it promises (Spry et al., 2009). Generally, the more a brand invests in its marketing-mix consistency, the greater its perceived credibility (Erdem et al., 2006). A chatbot, which can be viewed as a part of a brand's marketing mix, is a proxy frontline actor for a brand and, by default, represents the brand during customer interactions. As chatbots are replacing frontline employees as an integral touchpoint in a customer's experience and journey (Lemon & Verhoef, 2016), they can begin to serve as a substitute for human entities and leverage brand credibility (Hamilton et al., 2021). For instance, in the context of a luxury brand (e.g., Burberry), even though consumers perceived the brand's chatbot as failing to provide a diversity of
service chatbot (Larini, 2023). To ensure data quality, we consistently used attention-check questions in which we asked the respondents to recall the name of the customer service agent, the product, and whether the agent wore glasses. We then removed participants who failed all attention checks.

4.1 | Study 1: the interplay of chatbot's language and appearance on customer engagement

4.1.1 | Study design and procedure

For this first study, we recruited 318 participants (39% males, 59.2% females, and 1.8% others; Mage = 36.86, SD = 13.823) located in the United States from the online crowdsourcing platform Prolific. Participants were randomly assigned to a 2 (anthropomorphic language: low, high) × 2 (humanlike appearance: chatbot, human) between-subjects design. Anthropomorphic language was manipulated using humanlike conversational traits such as the display of empathy and the use of emojis. To confirm that our scenario would be a suitable proxy for the manipulation of language anthropomorphism, we conducted a pre-test with a separate group of 42 participants (57% males) recruited from the same online panel. Results from this pre-test showed that the use of emojis and the display of empathy in language were a successful proxy for anthropomorphized language. Specifically, participants in our pre-test indicated that they perceived the language used by the virtual assistant as warmer (Memj = 7.90, Mnoemj = 6.57, t(40) = 2.09, p = 0.043), more intimate (Memj = 6.80, Mnoemj = 4.61, t(40) = 3.81, p < 0.001), more personal (Memj = 7.80, Mnoemj = 6.04, t(40) = 2.94, p = 0.005), friendlier (Memj = 8.52, Mnoemj = 7.61, t(40) = 2.04, p = 0.047), more empathetic (Memj = 7.42, Mnoemj = 5.71, t(40) = 2.78, p = 0.008), more emotional (Memj = 6.71, Mnoemj = 5.38, t(40) = 2.12, p = 0.040), more humanlike (Memj = 7.47, Mnoemj = 6.33, t(40) = 2.07, p = 0.045), and less formal (Memj = 3.76, Mnoemj = 6.61, t(40) = −5.46, p < 0.001) when emojis and a more empathetic tone were used.

For the humanlike appearance factor, participants were shown either the image of a human (human condition) or an avatar image (chatbot condition), which was created based on the human image using a free online tool. A sample of the manipulation scenarios is displayed in Appendix B. Participants were asked to consider an online shopping scenario in which they decided to buy a jacket for the upcoming winter season from a fictitious online store (Maza), where a chat box popped up and a sales agent started a conversation with them. After reading the conversation script, participants were asked to answer a series of dependent measures. Each participant only saw the scenario they were randomly assigned to and were not

validity of the measurement. To increase the internal validity of our experiment and avoid familiarity or attachment effects, we used a fictional brand in our studies (Belanche, Casaló, Schepers, et al., 2021). Competence was measured by asking the respondents whether the customer service agent had the following traits: competent and capable (S. Y. Kim et al., 2019). Authenticity was measured by asking the respondents to indicate how they feel about the customer service representative: unnatural/natural, inorganic/organic, and artificial/real (Esmark Jones et al., 2022). To measure customer engagement, participants were asked to indicate the extent to which they agree or disagree with the following statements: "I feel motivated to learn more about X," "I encourage friends and relatives to do business with X that offers this online chat support," and "I consider X that offers this online chat support to be my first choice when buying products" (Kull et al., 2021; Mostafa & Kasamani, 2022). Technical anxiety, measured by four items ("I have difficulty understanding most technological matters relating to online chat support services," "I feel apprehensive about using online chat support services," "I hesitate to use online chat support services for fear of making mistakes I cannot correct," and "Online chat support services are somewhat intimidating to me"; Meuter et al., 2003; Thatcher & Perrewe, 2002), was used as a control variable. Finally, participants reported their demographics. To minimize common method bias, we applied several strategies, including developing a research information coversheet with a clear research purpose and set of instructions, improving scale item clarity via the pretest, and removing common scale properties, such as using both Likert and unipolar scales (Jordan & Troth, 2020; Podsakoff et al., 2012).

4.1.3 | Results and discussion

Manipulation checks
As expected, participants in the high (vs. low) anthropomorphic language condition perceived the language to be warmer (Mhigh = 8.30, Mlow = 7.30, t(469.763) = 5.738, p < 0.001), more intimate (Mhigh = 6.68, Mlow = 5.09, t(480) = 7.375, p < 0.001), more personal (Mhigh = 7.75, Mlow = 6.54, t(473.036) = 5.775, p < 0.001), friendlier (Mhigh = 8.85, Mlow = 7.98, t(455.508) = 5.681, p < 0.001), more empathetic (Mhigh = 7.81, Mlow = 6.67, t(472.141) = 6.640, p < 0.001), more emotional (Mhigh = 7.34, Mlow = 5.67, t(472.052) = 8.367, p < 0.001), more humanlike (Mhigh = 7.50, Mlow = 6.77, t(478.221) = 3.074, p < 0.01), and less formal (Mhigh = 4.82, Mlow = 7.13, t(464.337) = −10.151, p < 0.001). This shows that our manipulation of humanlike language was successful.
exposed to any other condition. Reliability and validity check
The measures of customer engagement (Cronbach α = 0.833),
authenticity (α = 0.933), competence (α = 0.908), and technical anxi-
4.1.2 | Measures ety (α = 0.772) were all reliable. To confirm the dimensional structure
of the scales, we conducted a confirmatory factor analysis using
The items we used to measure the latent constructs were taken from AMOS. The factor was statistically significant (at 0.01) and greater
validated scales in prior research, which helps ensure the content than 0.5 (Jöreskog & Sörbom, 1993). The model was a good fit to the
8 | NGUYEN ET AL.
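The group comparisons reported in this section are independent-samples t-tests; the fractional degrees of freedom (e.g., t(469.763)) indicate Welch's unequal-variance version. As a minimal, stdlib-only sketch of how such a statistic is computed (the ratings below are invented for illustration, not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's unequal-variance t-test: returns (t, df).

    df uses the Welch-Satterthwaite approximation, which is why
    published degrees of freedom can be fractional (e.g., 469.763).
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sea, seb = va / na, vb / nb                    # squared standard errors
    t_stat = (ma - mb) / math.sqrt(sea + seb)
    dof = (sea + seb) ** 2 / (sea ** 2 / (na - 1) + seb ** 2 / (nb - 1))
    return t_stat, dof

# Illustrative 1-10 warmth ratings for the two language conditions
high = [8, 9, 7, 9, 8, 10, 7, 9]
low = [7, 6, 8, 6, 7, 5, 8, 6]
t_stat, dof = welch_t(high, low)
```

When the two group variances are equal, this converges to the pooled t-test with integer df reported elsewhere in the section (e.g., t(40), t(359)).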
FIGURE 2 Customer engagement by anthropomorphic language and appearance.
data (χ2(17) = 35.708, p = 0.005; NFI = 0.988; CFI = 0.994; IFI = 0.994; RMSEA = 0.048). The composite reliability (CR) values were larger than the 0.70 cutoff (Nunnally & Bernstein, 1994). AVEs were above the recommended 0.50 (Hair et al., 2006), supporting convergent validity. For each construct, the square root of the AVE was greater than its correlations with other constructs (Table 1), demonstrating discriminant validity (Fornell & Larcker, 1981).

Customer engagement
We conducted a two-way ANOVA with anthropomorphic language, appearance, and their interaction as independent variables, and customer engagement as the dependent variable. We also controlled for technical anxiety. Results revealed nonsignificant main effects of anthropomorphic language (F(1, 313) = 1.695, p = 0.194) and appearance (F(1, 313) = 1.188, p = 0.276), and a significant effect of technology anxiety (F(1, 313) = 8.759, p = 0.027). However, and more importantly, we found a significant interaction effect (F(1, 313) = 4.627, p = 0.032; see Figure 2). Therefore, H1 is supported. Follow-up tests showed that when interacting with a chatbot avatar, a high level of anthropomorphic language (vs. a low level of anthropomorphic language) increased customer engagement (Mhigh = 3.457, SDhigh = 0.978; Mlow = 3.071, SDlow = 1.066; t(159) = 2.395, p = 0.018). However, such differences did not emerge among participants interacting with a human avatar (Mhigh = 3.320, SDhigh = 1.160; Mlow = 3.424, SDlow = 0.907; t(141.896) = −0.621, p = 0.536).

Perceived competence
We expected the effect of anthropomorphic language on competence to be moderated by anthropomorphic appearance. We conducted a two-way ANOVA with anthropomorphic language, appearance, and their interaction as independent variables, and perceived competence as the dependent variable. We also controlled for technical anxiety. Results revealed nonsignificant main effects of anthropomorphic language (F(1, 313) = 0.603, p = 0.438) and appearance (F(1, 313) = 0.827, p = 0.364), and a significant effect of technology anxiety (F(1, 313) = 25.976, p < 0.001). However, and more importantly, we found a significant interaction effect (F(1, 313) = 5.294, p = 0.022). Follow-up tests showed that when interacting with a chatbot avatar, a high level of anthropomorphic language (vs. a low level of anthropomorphic language) increased perceived competence (Mhigh = 8.963, SDhigh = 1.296; Mlow = 8.469, SDlow = 1.754; t(159) = 2.035, p = 0.044). However, such differences did not emerge among participants interacting with a human avatar (Mhigh = 8.691, SDhigh = 1.755; Mlow = 8.963, SDlow = 1.348; t(155) = −1.094, p = 0.276).

Authenticity
We conducted a two-way ANOVA with anthropomorphic language, appearance, and their interaction as independent variables, and authenticity as the dependent variable. We also controlled for technical anxiety. Results revealed nonsignificant main effects of anthropomorphic language (F(1, 313) = 0.183, p = 0.669), appearance
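The 2 × 2 analyses in this section are two-way between-subjects ANOVAs with an interaction term. A simplified sketch of the sums-of-squares decomposition for a balanced design follows; it omits the technical-anxiety covariate the authors included, and the cell scores are invented purely to show a crossover interaction:

```python
def two_way_anova_2x2(cells):
    """Balanced 2 x 2 between-subjects ANOVA (no covariate).

    cells maps (language_level, appearance_level) -> list of scores.
    Returns F ratios (1 numerator df each; error df = N - 4).
    """
    scores = [s for v in cells.values() for s in v]
    n_total = len(scores)
    grand = sum(scores) / n_total

    def marginal_ss(axis):
        # Sum of squares for one factor's main effect.
        ss = 0.0
        for level in {k[axis] for k in cells}:
            pooled = [s for k, v in cells.items() if k[axis] == level
                      for s in v]
            m = sum(pooled) / len(pooled)
            ss += len(pooled) * (m - grand) ** 2
        return ss

    ss_a, ss_b = marginal_ss(0), marginal_ss(1)
    cell_mean = {k: sum(v) / len(v) for k, v in cells.items()}
    ss_cells = sum(len(v) * (cell_mean[k] - grand) ** 2
                   for k, v in cells.items())
    ss_inter = ss_cells - ss_a - ss_b          # interaction sum of squares
    ss_error = sum((s - cell_mean[k]) ** 2
                   for k, v in cells.items() for s in v)
    df_error = n_total - 4
    ms_error = ss_error / df_error
    return {"F_language": ss_a / ms_error,
            "F_appearance": ss_b / ms_error,
            "F_interaction": ss_inter / ms_error,
            "df_error": df_error}

# Invented scores showing a pure crossover interaction (no main effects)
cells = {("high", "chatbot"): [5, 6, 7], ("low", "chatbot"): [2, 3, 4],
         ("high", "human"): [2, 3, 4], ("low", "human"): [5, 6, 7]}
result = two_way_anova_2x2(cells)
```

A crossover pattern like this yields near-zero main-effect F ratios but a large interaction F, mirroring the pattern of results reported for customer engagement and perceived competence.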
(F(1, 313) = 0.004, p = 0.950), and a significant effect of technology anxiety (F(1, 313) = 12.406, p < 0.001). There was a nonsignificant interaction effect of anthropomorphic language and anthropomorphic appearance on authenticity (F(1, 313) = 2.950, p = 0.087).

Serial mediation analysis
We propose that the effect of anthropomorphic language on customer engagement is sequentially mediated by competence and authenticity. We conducted a serial moderated mediation analysis using Hayes' PROCESS Model 85 (Hayes, 2017). This analysis examined the indirect effects of anthropomorphic language (high = 1, low = −1), moderated by anthropomorphic appearance (human avatar = 1, chatbot avatar = −1), on customer engagement via competence and authenticity (as serial mediators). We also controlled for technical anxiety. The results are shown in Table 2.

Perceived competence had a significant impact on authenticity. Competence and authenticity were significant predictors of customer engagement. The index of moderated mediation was significant and negative (b = −0.063, SE = 0.030, 95% CI: −0.127 to −0.007), with the indirect serial effect being significant and positive in the low level of anthropomorphic appearance condition (i.e., chatbot avatar; b = 0.042, SE = 0.021, 95% CI: 0.004 to 0.086) but not in the high level of anthropomorphic appearance condition (i.e., human avatar; b = −0.021, SE = 0.021, 95% CI: −0.063 to 0.017). These results supported H2a and H2b.

4.2 | Study 2: the moderating role of brand credibility

4.2.1 | Study design and procedure

Study 2 employed a 2 (anthropomorphic language: low, high) × 2 (brand credibility: low, high) between-subjects design. For this study, we recruited 361 participants (47.1% males, 52.4% females, and 0.6% others; Mage = 41.68, SD = 13.572) located in the United States from the online crowdsourcing platform Prolific. Anthropomorphic language was manipulated in the same manner as in Study 1. For the manipulation of brand credibility, we selected two real brands from the fashion industry. Specifically, H&M was chosen as the high-credibility brand and
Forever21 as the low-credibility brand. The choice of these brands was derived from a pre-test conducted with a separate group of 82 participants who rated H&M and Forever21, amongst other brands, on two 10-point scales measuring perceived brand honesty and credibility (1 = low, 10 = high). Results from a repeated-measures ANOVA revealed that participants perceived H&M to be more honest (MH&M = 6.68 vs. MForever21 = 3.04, F(81) = 214.54; p < 0.001) and more credible (MH&M = 6.39 vs. MForever21 = 3.08, F(81) = 234.48; p < 0.001) than Forever21, thus supporting the choice of H&M as the higher-credibility brand and Forever21 as the lower-credibility brand for our brand credibility manipulation.

Similar to Study 1, participants were asked to consider a shopping scenario in which they decided to buy a jacket for the upcoming winter season and either visited the H&M or the Forever21 online store, where they engaged with the online store's chatbot agent. After reading the conversation script, participants answered the same dependent measures as in Study 1. In addition, participants in this study were specifically told that the sales agent was a chatbot. Each participant saw only the scenario to which they were randomly assigned and was not exposed to any other condition. A sample of the manipulation scenarios is displayed in Appendix B.

4.2.2 | Results and discussion

Manipulation checks
As expected, participants in the high level of anthropomorphic language condition (vs. the low level of anthropomorphic language condition) perceived the language to be warmer (Mhigh = 7.56, Mlow = 6.62, t(359) = 4.074, p < 0.001), more intimate (Mhigh = 5.92, Mlow = 5.14, t(359) = 3.130, p < 0.01), more personal (Mhigh = 6.96, Mlow = 6.22, t(355.926) = 2.980, p < 0.01), friendlier (Mhigh = 8.25, Mlow = 7.60, t(359) = 3.443, p < 0.001), more empathetic (Mhigh = 6.92, Mlow = 6.20, t(359) = 3.057, p < 0.01), more emotional (Mhigh = 6.44, Mlow = 5.15, t(359) = 5.263, p < 0.001), more humanlike (Mhigh = 6.67, Mlow = 5.37, t(359) = 2.347, p = 0.019), and less formal (Mhigh = 4.88, Mlow = 6.09, t(359) = −5.031, p < 0.001). This shows that our manipulation of humanlike language was successful.

Reliability and validity check
The measures of customer engagement (Cronbach α = 0.880), authenticity (α = 0.922), competence (α = 0.906), and technical anxiety (α = 0.798) were all reliable. To confirm the dimensional structure of the scales, we conducted a confirmatory factor analysis using AMOS. The factor loadings were statistically significant (at 0.01) and greater than 0.5 (Jöreskog & Sörbom, 1993). The model was a good fit to the data (χ2(48) = 60.398, p = 0.108; NFI = 0.981; CFI = 0.996; IFI = 0.996; RMSEA = 0.027). The composite reliability values were larger than the 0.70 cutoff (Nunnally & Bernstein, 1994). AVEs were above the recommended 0.50 (Hair et al., 2006), supporting convergent validity. For each construct, the square root of the AVE was greater than its correlations with other constructs (Table 3), demonstrating discriminant validity (Fornell & Larcker, 1981).

To check that the manipulation of brand credibility worked as planned, brand credibility was measured using a four-item scale adapted from Morhart et al. (2015): "X is a brand that accomplishes its value promise," "X is an honest brand," "X is a brand that will not betray you," and "I am very familiar with X." As expected, participants in the high brand credibility condition (i.e., H&M) perceived the brand to be more credible than those in the low brand credibility condition (i.e., Forever21) (Mlow = 4.88, Mhigh = 6.09, t(359) = −5.031, p < 0.001). This shows that our manipulation of brand credibility was successful.

Customer engagement
We conducted a two-way ANOVA with anthropomorphic language, brand credibility, and their interaction as independent variables, and customer engagement as the dependent variable. We also controlled for technical anxiety. Results revealed nonsignificant main effects of anthropomorphic language (F(1, 356) = 1.557, p = 0.456) and brand credibility (F(1, 356) = 0.001, p = 0.976), and a marginally significant effect of technology anxiety (F(1, 356) = 34.656, p = 0.057). We found a nonsignificant interaction effect (F(1, 356) = 0.199, p = 0.656). Overall, this does not support our H3.

Perceived competence
We expected the effect of anthropomorphic language on competence to be moderated by brand credibility. We conducted a two-way ANOVA with anthropomorphic language, brand credibility, and their interaction as independent variables, and competence as the dependent variable. Results showed nonsignificant main effects of anthropomorphic language (F(1, 356) = 11.733, p = 0.189), brand credibility (F(1, 356) = 2.273, p = 0.132), and technology anxiety (F(1, 356) = 1.548, p = 0.214). However, and more importantly, we found a significant interaction effect (F(1, 356) = 4.239, p = 0.040; see
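The reliability figures reported in both studies are Cronbach's α values. The coefficient compares the sum of the item variances with the variance of the summed scale score; a minimal stdlib sketch (the item responses are invented for illustration, not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list per item, aligned across respondents.
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across the k items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(map(sample_var, items)) / sample_var(totals))

# Invented 10-point responses from six respondents to a four-item scale
alpha = cronbach_alpha([
    [7, 8, 6, 9, 7, 8],
    [6, 8, 5, 9, 6, 7],
    [7, 9, 6, 8, 7, 8],
    [6, 7, 5, 9, 6, 7],
])
```

Values above the conventional 0.70 threshold, as reported for all four scales, indicate acceptable internal consistency.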
FIGURE 3 Perceived competence by anthropomorphic language and brand credibility.
Figure 3). When brand credibility was low, participants in the high level of anthropomorphic language condition (vs. the low level of anthropomorphic language condition) reported a lower level of competence (Mhigh = 7.360, SD = 2.246; Mlow = 8.086, SD = 1.873; t(178) = −2.357, p = 0.020). However, when brand credibility was high, there was no difference between participants in the high and low levels of anthropomorphic language conditions (Mhigh = 8.115, SD = 1.889; Mlow = 7.936, SD = 1.723; F(1, 179) = 0.666, p = 0.506).

Authenticity
We conducted a two-way ANOVA with anthropomorphic language, brand credibility, and their interaction as independent variables, and authenticity as the dependent variable. We also controlled for technical anxiety. Results revealed nonsignificant effects of anthropomorphic language (F(1, 356) = 0.331, p = 0.565), brand credibility (F(1, 356) = 0.568, p = 0.451), and technology anxiety (F(1, 356) = 0.456, p = 0.500). There was a nonsignificant interaction effect of anthropomorphic language and brand credibility on authenticity (F(1, 356) = 1.275, p = 0.260).

Serial mediation analysis
We propose that the effect of anthropomorphic language on customer engagement is sequentially mediated by competence and authenticity. We conducted a serial moderated mediation analysis using Hayes' PROCESS Model 85 (Hayes, 2017). This analysis examined the indirect effects of anthropomorphic language (high = 1, low = −1), moderated by brand credibility (high = 1, low = −1), on customer engagement via competence and authenticity (as serial mediators). We also controlled for technical anxiety. The results are shown in Table 4.

Competence had a significant impact on authenticity. The effects of competence and authenticity on customer engagement were significant. The index of moderated mediation was significant and positive (b = 0.092, SE = 0.046, 95% CI: 0.003 to 0.183), with the indirect serial effect being significant and negative in the low brand credibility condition (b = −0.076, SE = 0.034, 95% CI: −0.144 to −0.011) but not in the high brand credibility condition (b = 0.017, SE = 0.030, 95% CI: −0.043 to 0.073). These results supported H4a and H4b.

5 | GENERAL DISCUSSION

The goal of this research was to provide empirical insights into the influence of chatbot anthropomorphic language, in interaction with chatbot anthropomorphic appearance and brand credibility, on customer engagement. Prior research reflects the increasing interest in chatbots and the contrasting effects of their anthropomorphism (e.g., Garvey et al., 2023) and their communication elements (e.g., Jin & Youn, 2022). Most research, however, has neglected social-media-afforded communication tools as an anthropomorphic mechanism (Ge & Gretzel, 2018), a gap this research sought to address. Study 1 shows that chatbot humanlikeness, combining humanlike appearance via an avatar with anthropomorphic language such as the use of emojis in conversation with customers, increases customer engagement. Humanlike language makes customers perceive the interaction as more intimate, personal, friendly, and less formal. Humanlike appearance or anthropomorphic language alone is not sufficient to engage customers; it is the combination that makes the difference.

In Study 1, we found that chatbot humanlikeness, as the combination of language and appearance, affects customer perceptions of competence, which in turn affect perceived authenticity and customer engagement. Previous literature has shown that AI chatbots with certain communication styles lead to positive outcomes (e.g., perceived competence and warmth; Roy & Naidoo, 2021) and has addressed the effects of anthropomorphism (T. W. Kim et al., 2022), but our research is the first to show that the effect of perceived competence on consumer intentions is mediated by perceived authenticity. We not only extend the anthropomorphism literature, in which previous studies focus on either chatbot appearance or language (e.g., Esmark Jones et al., 2022), but are also the first to examine emoji use in chatbot language as a cue for perceptions of chatbot anthropomorphism.
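The serial mediation analyses in both studies estimate an indirect effect as the product of path coefficients (X → M1 → M2 → Y) and judge significance with a bootstrap confidence interval; the index of moderated mediation is the change in that indirect effect per unit of the moderator. The sketch below shows only the core product-of-paths and percentile-bootstrap logic with plain OLS on synthetic data; it is not the authors' PROCESS Model 85 specification, which also includes the moderator and the technical-anxiety covariate:

```python
import random

def ols(y, xs):
    """Coefficients of y ~ 1 + xs by solving the normal equations."""
    n = len(y)
    cols = [[1.0] * n] + [list(c) for c in xs]
    k = len(cols)
    # Augmented system [X'X | X'y], solved by Gauss-Jordan elimination.
    a = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         + [sum(cols[i][t] * y[t] for t in range(n))] for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(a[r][i]))  # partial pivoting
        a[i], a[p] = a[p], a[i]
        for r in range(k):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [a[r][c] - f * a[i][c] for c in range(k + 1)]
    return [a[i][k] / a[i][i] for i in range(k)]

def serial_indirect(x, m1, m2, y):
    """Product-of-paths serial indirect effect of X on Y."""
    a_path = ols(m1, [x])[1]         # X -> M1
    d_path = ols(m2, [x, m1])[2]     # M1 -> M2, controlling for X
    b_path = ols(y, [x, m1, m2])[3]  # M2 -> Y, controlling for X and M1
    return a_path * d_path * b_path

def bootstrap_ci(x, m1, m2, y, reps=500, seed=1):
    """Percentile-bootstrap 95% CI for the serial indirect effect."""
    rng = random.Random(seed)
    n = len(x)
    draws = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        draws.append(serial_indirect([x[i] for i in idx], [m1[i] for i in idx],
                                     [m2[i] for i in idx], [y[i] for i in idx]))
    draws.sort()
    return draws[int(0.025 * reps)], draws[int(0.975 * reps)]

# Synthetic data with a known chain: x -> m1 -> m2 -> y
rng = random.Random(0)
x = [rng.choice([-1.0, 1.0]) for _ in range(150)]
m1 = [0.5 * v + rng.gauss(0, 0.3) for v in x]
m2 = [0.7 * v + rng.gauss(0, 0.3) for v in m1]
y = [0.6 * v + rng.gauss(0, 0.3) for v in m2]
effect = serial_indirect(x, m1, m2, y)  # near 0.5 * 0.7 * 0.6 = 0.21
low_ci, high_ci = bootstrap_ci(x, m1, m2, y)
```

A CI that excludes zero corresponds to the "significant" conditional indirect effects reported above; computing the indirect effect at each moderator level and differencing gives the moderated-mediation pattern the studies describe.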
In addition, we show that when individuals are aware they are engaging with an AI (i.e., the avatar condition), they become more motivated to engage in central-route processing and are therefore more likely to critically evaluate information and form attitudes based on the content of the information (e.g., text, emojis). That is, when consumers are exposed to a chatbot sales agent, an AI schema is triggered, and consumers judge text interactions in more detail than if they were to engage with a human sales agent. As such, our results demonstrate that more (vs. less) humanlike language leads to higher perceived competence in chatbot interactions but not when consumers interact with a human. Furthermore, we show that consumers attribute higher levels of authenticity when a chatbot is perceived as more (vs. less) competent. However, because consumers do not have a reason to question the authenticity of human-to-human interactions, we did not find evidence that consumers engaged in a more nuanced analysis of text-based interactions.

In Study 2, the findings indicated that brand credibility sets an expectation in the minds of customers that influences the impact of chatbot anthropomorphic language on customer engagement. Customers often wonder whether chatbots can represent a brand well (Chung et al., 2020). Brand credibility has been overlooked in the anthropomorphism literature, and the link between brand credibility and chatbot characteristics has not yet been explored. Previous research on AI and chatbots has mostly found that AI/chatbot anthropomorphism has a positive influence on perceptions of competence (e.g., Pizzi et al., 2023) and warmth (e.g., S. Y. Kim et al., 2019; Pizzi et al., 2023). However, it should be noted that the relationship between anthropomorphism and perceived competence/authenticity is not always linear or universal. In this study, we show that this effect is moderated by brand credibility such that there is a reversal of the effect of anthropomorphism on perceived competence and perceived authenticity. In particular, our results are consistent with the notion that when consumers interact with a brand that is perceived to be more credible, they are less motivated to engage in central-route processing because their previous interactions with the brand give them assurance that the brand will perform as promised; therefore, new encounters (e.g., interaction with a chatbot sales assistant) are likely to be processed through the peripheral route. On the other hand, when consumers interact with a brand perceived to be less credible, they are more likely to process information thoroughly, via central processing, as there is higher uncertainty about the brand's ability
TABLE 5 Summary of findings and implications.

Theoretical implication

Emoji usage
• Addressing the growing stream of literature on chatbots, and in conjunction with social response theory, this paper is the first to analyze the effect of emojis.
• Emoji usage is recommended for chatbots where anthropomorphism is required.

Authenticity and competence
• Perceived competence and authenticity serially mediate anthropomorphic chatbots' effect on customer engagement.
• Emojis had a negative effect when low-credible brands utilized them, but this effect was not present for brands with high credibility.

Anthropomorphism
• Using emojis may overcome feelings of unease in highly anthropomorphic chatbots.

Managerial implication

Brand credibility
• Brands with low credibility should not use chatbots with emojis. But if emoji usage is desired, this negative effect can be offset by lower anthropomorphic appearance.

Authenticity and competence
• Marketing managers of lesser credible brands should test highly anthropomorphic chatbots and develop appropriate metrics for success (e.g., churn, feedback scores, and time spent interacting).
to deliver on its promise. When doing so, consumers' prior beliefs of lower credibility also translate into lower trust, sincerity, and reliability, which is demonstrated by lower consumer ratings of perceived competence. In consequence, when less credible brands try to improve perceptions of chatbot competence with anthropomorphism, consumer scepticism is triggered and the attempt backfires such that interactions are perceived to be more artificial, less organic, and more inauthentic: a complete reversal of the chatbot anthropomorphism effect.

6 | IMPLICATIONS

In addressing the emerging interest in the anthropomorphism of chatbots, this study contributes to the extant literature. Its findings have implications for practitioners and academics alike. We provide a summary of our findings and their implications in Table 5.

6.1 | Theoretical implications

The current study contributes to the literature in three ways. First, a growing stream of literature has begun investigating the replacement of human agents with AI agents and customer preferences between them (Yu et al., 2022). Literature in this area has shown the influence of humor (Shin et al., 2023) and language schemas (Zhang et al., 2022), but this study is, to the authors' knowledge, the first to empirically test the effect of emojis in chatbots. As the capabilities of chatbots emerge, the use of emojis may humanize not only the chatbot but also the brand. Further, when combined with anthropomorphic visual cues, the use of emojis leads to higher levels of customer engagement. We propose that a chatbot's use of emojis positively influences customers' social response as they begin to perceive it more as a social actor (Miao et al., 2022). Taken together, the use of emojis can thus be utilized in conjunction with similar research, such as Garvey et al. (2023), who found that sending an anthropomorphized agent can be more effective than a human. Building on our findings, emojis may also be used to further attenuate feelings of anger when using such an agent (Crolic et al., 2022).

Second, our findings provide a more nuanced piece of the puzzle of how anthropomorphizing a chatbot can lead to customer engagement. We established the mediating roles of authenticity and perceived competence. Prior research identified the importance of these factors (e.g., Kull et al., 2021; Roy & Naidoo, 2021) but identified their direct effects in conjunction with the perceived warmth of the chatbot. Our results indicate that a chatbot's perceived competence and authenticity serially mediate the influence of anthropomorphic chatbots under two specific conditions: (1) the anthropomorphic appearance of the chatbot and (2) low brand credibility. In particular, the presence of emojis enhanced low (vs. high) anthropomorphized chatbot agents, while the use of emojis had a negative effect when a chatbot of a low-credibility brand used them. These findings are interesting as emojis can thus act as a double-edged sword: individuals' social response to, and perception of, a chatbot as a social actor can benefit overall engagement but harm smaller and less credible brands. We validated these constructs for future researchers as strong mediators of the effect of anthropomorphism on customer engagement.

Finally, we contribute to the overarching literature on anthropomorphism. Prior research has often dealt with high levels of anthropomorphism using the uncanny valley hypothesis (e.g., Mende et al., 2019). For instance, if a robot physically resembles a human and acts in an agentic manner, feelings of discomfort may emerge (Crolic et al., 2022). This was further addressed by extant literature on chatbots, which predominantly used a human (vs. no) avatar (e.g., Esmark Jones et al., 2022). However, utilizing emojis, a form of cultural and emotional expression (Li et al., 2019), in a chatbot context addresses this area of research. Our contribution may allow future researchers to incorporate emojis to
communicate feelings and placate feelings of uncanniness in a nonintrusive manner (S. Y. Kim et al., 2019).

6.2 | Managerial implications

Our findings indicate that the use of emojis may act as a double-edged sword and backfire if used incorrectly. For instance, for brands that have low credibility, emojis may have a negative effect. Therefore, marketing managers of brands with lesser credibility, such as those with lower brand awareness (Spry et al., 2009), should use chatbots without emojis. Further, emojis and the anthropomorphic appearance of a chatbot had both main and mediating effects on customer engagement. This suggests that a multidimensional chatbot (i.e., one combining anthropomorphic appearance and language) is the most effective at building customer engagement overall. Taken together, we recommend that marketing managers of brands with lower brand credibility, such as small-to-medium enterprises (SMEs), be cautious in their use of emojis with chatbots and, if using them, deploy chatbots with a lower anthropomorphic appearance (Miao et al., 2022). As such, the more anthropomorphic a chatbot is, the more a mistake can harm a less credible brand (Moon, 2000).

In addition, our results can be beneficial in understanding how competence and authenticity increase customer engagement. It is critical that the chatbot be perceived as competent and authentic, particularly if anthropomorphic, as social response theory indicates that customers will attribute human characteristics to it (Mende et al., 2019). It is important that marketing managers test their chatbots with customers, particularly chatbots (1) with more humanlike elements and (2) of lesser credible businesses such as SMEs. Developing metrics such as churn, time spent with a chatbot, goal completion rates, or feedback scores is essential to improve the chatbot, its competency and authenticity, as well as the customer's experience (Cyca, 2022b). For example, Telenor implemented specific metrics for their chatbot and increased customer satisfaction and revenue by 20% and 15%, respectively (Leah, 2022). These metrics will help managers understand the effectiveness of their chatbots and which characteristics of anthropomorphizing, such as emojis or humanlike appearance, are appropriate (Li et al., 2019; Pizzi et al., 2021).

7 | LIMITATIONS AND FUTURE RESEARCH DIRECTIONS

Although the current research provides insights into chatbots' anthropomorphic language, it has a number of limitations. First, we collected data from the online Prolific platform, where each participant completed the questionnaire on a device of his or her own choosing. While this approach provides results that are more generalizable and trustworthy than samples made up of college students, it may bring "noise" into the data collected (Goodman et al., 2013). Second, the study context was the customer service environment in the apparel sector. It may be worthwhile to validate the findings for chatbots used for customer service in different industries, even though we do not expect the research setting we chose to have a substantial impact on our findings. To examine the findings' broader application, it could be worthwhile to replicate the study in a new environment, such as one that makes use of chatbots for teaching or medical support. Third, the chatbot's persona and conversational intelligence were left out of the current research and could have an effect on how the user perceives the chatbot's humanlikeness (Go & Sundar, 2019). Despite the aforementioned drawbacks, we think that our work has greatly advanced theory and practice and will serve as a source of inspiration for further study.

8 | CONCLUSION

This article makes important contributions to research in marketing and for practitioners. First, while research into chatbots and anthropomorphism has experienced a flurry of contributions, there is a dearth of literature on the role of emojis, which is where our study is situated. Second, it identifies the importance of both anthropomorphic language and appearance, a novel finding as prior studies focus on one element of anthropomorphism. Third, we find that anthropomorphic language (i.e., emoji use) contributes to the perceived competence and authenticity of an AI service agent, particularly in low brand credibility contexts. Finally, we offer a series of implications for theory and practice to explore greater elements of chatbot anthropomorphism.

ACKNOWLEDGEMENTS
Open access publishing facilitated by Griffith University, as part of the Wiley - Griffith University agreement via the Council of Australian University Librarians.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available on request from the corresponding author.

ORCID
Mai Nguyen [Link]
Lars-Erik Casper Ferm [Link]
Sara Quach [Link]
Nicolas Pontes [Link]
Park Thaichon [Link]

REFERENCES
Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445.
Aggarwal, P., & McGill, A. L. (2007). Is that car smiling at me? Schema congruity as a basis for evaluating anthropomorphized products. Journal of Consumer Research, 34(4), 468–479.
Ahn, H.‐K., Kim, H. J., & Aggarwal, P. (2014). Helping fellow beings: Anthropomorphized social causes and the role of anticipatory guilt. Psychological Science, 25(1), 224–229.
Ameen, N., Cheah, J. H., & Kumar, S. (2022). It's all part of the customer journey: The impact of augmented reality, chatbots, and social media on the body image and self‐esteem of Generation Z female consumers. Psychology & Marketing, 39(11), 2110–2129.
Bakhshi, N., van den Berg, H., Broersen, S., de Vries, D., Bouazzaoui, H. E., & Michels, B. (2018). Chatbots point of view. Deloitte. Retrieved December 20, 2022, from [Link]dam/Deloitte/nl/Documents/deloitte-analytics/deloitte-nl-[Link]
Bateson, M., Nettle, D., & Roberts, G. (2006). Cues of being watched enhance cooperation in a real‐world setting. Biology Letters, 2(3), 412–414.
Belanche, D., Casaló, L. V., Schepers, J., & Flavián, C. (2021). Examining the effects of robots' physical appearance, warmth, and competence in frontline services: The Humanness‐Value‐Loyalty model. Psychology & Marketing, 38(12), 2357–2376.
Belanche, D., Casaló, L. V., Flavián, M., & Ibáñez‐Sánchez, S. (2021). Understanding influencer marketing: The role of congruence between influencers, products and consumers. Journal of Business Research, 132, 186–195.
Belk, R. (2022). Artificial emotions and love and sex doll service workers. Journal of Service Research, 25(4), 521–536.
Borau, S., Otterbring, T., Laporte, S., & Fosso Wamba, S. (2021). The most human bot: Female gendering increases humanness perceptions of bots and acceptance of AI. Psychology & Marketing, 38(7), 1052–1068.
Brandão, A., & Popoli, P. (2022). “I'm hatin' it”! Negative consumer–brand relationships in online anti‐brand communities. European Journal of Marketing, 56(2), 622–650.
Castillo, D., Canhoto, A. I., & Said, E. (2020). The dark side of AI‐powered service interactions: Exploring the process of co‐destruction from the customer perspective. The Service Industries Journal, 41(13–14), 900–925.
Chen, S., Li, X., Liu, K., & Wang, X. (2023). Chatbot or human? The impact of online customer service on consumers' purchase intentions. Psychology & Marketing. [Link]
Chi, M., Harrigan, P., & Xu, Y. (2022). Customer engagement in online service brand communities. Journal of Services Marketing, 36(2), 201–216.
Chmielewski, M., & Kucker, S. C. (2020). An MTurk crisis? Shifts in data quality and the impact on study results. Social Psychological and Personality Science, 11(4), 464–473.
Chung, M., Ko, E., Joung, H., & Kim, S. J. (2020). Chatbot e‐service and customer satisfaction regarding luxury brands. Journal of Business Research, 117, 587–595.
Crolic, C., Thomaz, F., Hadi, R., & Stephen, A. T. (2022). Blame the bot: Anthropomorphism and anger in customer–chatbot interactions. Journal of Marketing, 86(1), 132–148.
Cyca, M. (2022a). Emoji meanings: Communicate without embarrassing yourself. Hootsuite. [Link]
Cyca, M. (2022b). Chatbot analytics 101: Essential metrics to track. Hootsuite. [Link]
Das, G., Wiener, H. J. D., & Kareklas, I. (2019). To emoji or not to emoji? Examining the influence of emoji on consumer reactions to advertising. Journal of Business Research, 96(3), 147–156.
Drouin, M., Sprecher, S., Nicola, R., & Perkins, T. (2022). Is chatting with a sophisticated chatbot as good as chatting online or FTF with a stranger? Computers in Human Behavior, 128, 107100. [Link]org/10.1016/[Link].2021.107100
Elsner, N. (2017). KAYAK mobile travel report: Chatbots in the UK. [Link]/news/mobile-travel-report-2017/
Epley, N. (2018). A mind like mine: The exceptionally ordinary underpinnings of anthropomorphism. Journal of the Association for Consumer Research, 3(4), 591–598.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three‐factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
Erdem, T., Swait, J., & Valenzuela, A. (2006). Brands as signals: A cross‐country validation study. Journal of Marketing, 70(1), 34–49.
Esmark Jones, C. L., Hancock, T., Kazandjian, B., & Voorhees, C. M. (2022). Engaging the avatar: The effects of authenticity signals during chat‐based service recoveries. Journal of Business Research, 144(5), 703–716.
Fan, A., Wu, L., & Mattila, A. S. (2016). Does anthropomorphism influence customers' switching intentions in the self‐service technology failure context? Journal of Services Marketing, 30(7), 713–723.
Flavián, C., Pérez‐Rueda, A., Belanche, D., & Casaló, L. V. (2021). Intention to use analytical artificial intelligence (AI) in services—The effect of technology readiness and awareness. Journal of Service Management, 33(2), 293–320.
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50.
Garvey, A. M., Kim, T., & Duhachek, A. (2023). Bad news? Send an AI. Good news? Send a human. Journal of Marketing, 87(1), 10–25.
Ge, J., & Gretzel, U. (2018). Emoji rhetoric: A social media influencer perspective. Journal of Marketing Management, 34(15–16), 1272–1295.
Go, E., & Sundar, S. S. (2019). Humanizing chatbots: The effects of visual, identity and conversational cues on humanness perceptions. Computers in Human Behavior, 97(8), 304–316.
Goodman, J. K., Cryder, C. E., & Cheema, A. (2013). Data collection in a flat world: The strengths and weaknesses of Mechanical Turk samples. Journal of Behavioral Decision Making, 26(3), 213–224.
Grand View Research. (2022). Chatbot market size, share & trends. [Link]/industry-analysis/chatbot-market
Grazzini, L., Viglia, G., & Nunan, D. (2023). Dashed expectations in service experiences: Effects of robots' humanlikeness on customers' responses. European Journal of Marketing, 57(4), 957–986.
Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate data analysis.
Hamilton, R., Ferraro, R., Haws, K. L., & Mukhopadhyay, A. (2021). Traveling with companions: The social customer journey. Journal of Marketing, 85(1), 68–92.
Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression‐based approach. Guilford Publications.
Hollebeek, L. D., Glynn, M. S., & Brodie, R. J. (2014). Consumer brand engagement in social media: Conceptualization, scale development and validation. Journal of Interactive Marketing, 28(2), 149–165.
Huang, S. Y. B., & Lee, C. J. (2022). Predicting continuance intention to fintech chatbot. Computers in Human Behavior, 129, 107027.
Jiménez‐Barreto, J., Rubio, N., & Molinillo, S. (2021). “Find a flight for me, Oscar!” Motivational customer experiences with chatbots. International Journal of Contemporary Hospitality Management, 33(11), 3860–3882.
Jin, S. V., & Youn, S. (2022). Social presence and imagery processing as predictors of chatbot continuance intention in human‐AI‐interaction. International Journal of Human–Computer Interaction, 39(9), 1874–1886. [Link]
Johnson, K. (2017). Facebook Messenger hits 100,000 bots. [Link]/2017/04/18/facebook-messenger-hits-100000bots/
Jordan, P. J., & Troth, A. C. (2020). Common method bias in applied settings: The dilemma of researching in organizations. Australian Journal of Management, 45(1), 3–14.
Jöreskog, K. G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Scientific Software International.
Jurberg, A. (2020). The greatest logo in history. Better Marketing. [Link]/the-greatest-logo-in-history-f33dcaacd455
Khan, I., Rahman, Z., & Fatma, M. (2016). The role of customer brand engagement and brand experience in online banking. International Journal of Bank Marketing, 34(7), 1025–1041.
Kim, S. Y., Schmitt, B. H., & Thalmann, N. M. (2019). Eliza in the uncanny valley: Anthropomorphizing consumer robots increases their perceived warmth but decreases liking. Marketing Letters, 30(1), 1–12.
Kim, T. W., Jiang, L., Duhachek, A., Lee, H., & Garvey, A. (2022). Do you mind if I ask you a personal question? How AI service agents alter consumer self‐disclosure. Journal of Service Research, 25(4), 649–666.
Kim, T. W., Lee, H., Kim, M. Y., Kim, S., & Duhachek, A. (2023). AI increases unethical consumer behavior due to reduced anticipatory guilt. Journal of the Academy of Marketing Science, 51, 785–801. [Link]/10.1007/s11747-021-00832-9
Kim, W., Ryoo, Y., Lee, S., & Lee, J. A. (2023). Chatbot advertising as a double‐edged sword: The roles of regulatory focus and privacy concerns. Journal of Advertising, 52(4), 504–522. [Link]1080/00913367.2022.2043795
Kirk, R. E. (2012). Experimental design: Procedures for the behavioral sciences. Sage Publications.
Ko, E., Kim, D., & Kim, G. (2022). Influence of emojis on user engagement in brand‐related user generated content. Computers in Human Behavior, 136, 107387. [Link]
Kull, A. J., Romero, M., & Monahan, L. (2021). How may I help you? Driving brand engagement through the warmth of an initial chatbot message. Journal of Business Research, 135(10), 840–850.
Larini, L. (2023). Artificial intelligence. Ipsos. [Link]default/files/ct/publication/documents/2023-03/Ipsos%20AI%20Tracker%20Data%20March%[Link]
Leah. (2022). Chatbot case studies: Real‐life results across industries. Userlike. [Link]
Lee, H., & Yi, Y. (2022). The impact of self‐service versus interpersonal contact on customer–brand relationship in the time of frontline technology infusion. Psychology & Marketing, 39(5), 906–920.
Lemon, K. N., & Verhoef, P. C. (2016). Understanding customer experience throughout the customer journey. Journal of Marketing, 80(6), 69–96.
Li, X., Chan, K. W., & Kim, S. (2019). Service with emoticons: How customers interpret employee use of emoticons in online service encounters. Journal of Consumer Research, 45(5), 973–987.
Li, Y., & Shin, H. (2023). Should a luxury brand's chatbot use emoticons? Impact on brand status. Journal of Consumer Behaviour, 22(3), 569–581. [Link]
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence. Journal of Consumer Research, 46(4), 629–650.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs. humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 913–1084.
MacInnis, D. J., & Folkes, V. S. (2017). Humanizing brands: When brands seem to be like me, part of me, and in a relationship with me. Journal of Consumer Psychology, 27(3), 355–374.
Mariani, M. M., Perez‐Vega, R., & Wirtz, J. (2022). AI in marketing, consumer research and psychology: A systematic literature review and research agenda. Psychology & Marketing, 39(4), 755–776.
Mehta, P., Jebarajakirthy, C., Maseeh, H. I., Anubha, A., Saha, R., & Dhanda, K. (2022). Artificial intelligence in marketing: A meta‐analytic review. Psychology & Marketing, 39(11), 2013–2038.
Mende, M., Scott, M. L., Van Doorn, J., Grewal, D., & Shanks, I. (2019). Service robots rising: How humanoid robots influence service experiences and elicit compensatory consumer responses. Journal of Marketing Research, 56(4), 535–556.
Meuter, M. L., Ostrom, A. L., Bitner, M. J., & Roundtree, R. (2003). The influence of technology anxiety on consumer use and experiences with self‐service technologies. Journal of Business Research, 56(11), 899–906.
Miao, F., Kozlenkova, I. V., Wang, H., Xie, T., & Palmatier, R. W. (2022). An emerging theory of avatar marketing. Journal of Marketing, 86(1), 67–90.
Moon, Y. (2000). Intimate exchanges: Using computers to elicit self‐disclosure from consumers. Journal of Consumer Research, 26(4), 323–339.
Morhart, F., Malär, L., Guèvremont, A., Girardin, F., & Grohmann, B. (2015). Brand authenticity: An integrative framework and measurement scale. Journal of Consumer Psychology, 25(2), 200–218.
Mostafa, R. B., & Kasamani, T. (2022). Antecedents and consequences of chatbot initial trust. European Journal of Marketing, 56(6), 1748–1771.
Mozafari, N., Weiger, W. H., & Hammerschmidt, M. (2021). Trust me, I'm a bot–repercussions of chatbot disclosure in different service frontline settings. Journal of Service Management, 33(2), 221–245.
Nass, C., Fogg, B. J., & Moon, Y. (1996). Can computers be teammates? International Journal of Human‐Computer Studies, 45(6), 669–678.
Newman, G. E. (2018). Bringing narratives to life: Animism, totems, and intangible value. Journal of the Association for Consumer Research, 3(4), 514–526.
Nguyen, T. M., Quach, S., & Thaichon, P. (2022). The effect of AI quality on customer experience and brand relationship. Journal of Consumer Behaviour, 21(3), 481–493.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. McGraw‐Hill.
Orlowski, A. (2017). Facebook scales back AI flagship after chatbots hit 70% f‐AI‐lure rate. [Link]facebook_ai_fail/
Paharia, N., & Swaminathan, V. (2019). Who is wary of user design? The role of power‐distance beliefs in preference for user‐designed products. Journal of Marketing, 83(3), 91–107.
Palan, S., & Schitter, C. (2018). [Link]—A subject pool for online experiments. Journal of Behavioral and Experimental Finance, 17, 22–27.
Peer, E., Rothschild, D., Gordon, A., Evernden, Z., & Damer, E. (2022). Data quality of platforms and panels for online behavioral research. Behavior Research Methods, 54(4), 1643–1662.
Pizzi, G., Scarpi, D., & Pantano, E. (2021). Artificial intelligence and the new forms of interaction: Who has the control when interacting with a chatbot? Journal of Business Research, 129(5), 878–890.
Pizzi, G., Vannucci, V., Mazzoli, V., & Donvito, R. (2023). I, chatbot! The impact of anthropomorphism and gaze direction on willingness to disclose personal information and behavioral intentions. Psychology & Marketing, 40(7), 1372–1387.
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569.
Rajaobelina, L., Prom Tep, S., Arcand, M., & Ricard, L. (2021). Creepiness: Its antecedents and impact on loyalty when interacting with a chatbot. Psychology & Marketing, 38(12), 2339–2356.
Rajavi, K., Kushwaha, T., & Steenkamp, J. B. E. M. (2019). In brands we trust? A multicategory, multicountry investigation of sensitivity of consumers' trust in brands to marketing‐mix activities. Journal of Consumer Research, 46(4), 651–670.
Rese, A., Ganster, L., & Baier, D. (2020). Chatbots in retailers' customer communication: How to measure their acceptance? Journal of Retailing and Consumer Services, 56, 102176. [Link]1016/[Link].2020.102176
Rizomyliotis, I., Kastanakis, M. N., Giovanis, A., Konstantoulaki, K., & Kostopoulos, I. (2022). “How may I help you today?” The use of AI
chatbots in small family businesses and the moderating role of customer affective commitment. Journal of Business Research, 153, 329–340.
Roy, R., & Naidoo, V. (2021). Enhancing chatbot effectiveness: The role of anthropomorphic conversational styles and time orientation. Journal of Business Research, 126, 23–34.
Schuetzler, R. M., Grimes, G. M., & Scott Giboney, J. (2020). The impact of chatbot conversational skill on engagement and perceived humanness. Journal of Management Information Systems, 37(3), 875–900.
Servion. (2020). AI will power 95% of customer interactions by 2025. [Link]
Sheehan, B., Jin, H. S., & Gottlieb, U. (2020). Customer service chatbots: Anthropomorphism and adoption. Journal of Business Research, 115, 14–24.
Shin, H., Bunosso, I., & Levine, L. R. (2023). The influence of chatbot humour on consumer evaluations of services. International Journal of Consumer Studies, 47(2), 545–562. [Link]12849
Spry, A., Pappu, R., & Bettina Cornwell, T. (2011). Celebrity endorsement, brand credibility and brand equity. European Journal of Marketing, 45(6), 882–909.
Srinivasan, R., & Sarial‐Abi, G. (2021). When algorithms fail: Consumers' responses to brand harm crises caused by algorithm errors. Journal of Marketing, 85(5), 74–91.
Steinhoff, L., & Palmatier, R. W. (2021). Commentary: Opportunities and challenges of technology in relationship marketing. Australasian Marketing Journal, 29(2), 111–117.
Tam, K. P., Lee, S. L., & Chao, M. M. (2013). Saving Mr. Nature: Anthropomorphism enhances connectedness to and protectiveness toward nature. Journal of Experimental Social Psychology, 49(3), 514–521.
Thatcher, J. B., & Perrewe, P. L. (2002). An empirical examination of individual traits as antecedents to computer anxiety and computer self‐efficacy. MIS Quarterly, 26, 381–396.
Thomaz, F., Salge, C., Karahanna, E., & Hulland, J. (2020). Learning from the dark web: Leveraging conversational agents in the era of hyper‐privacy to enhance marketing. Journal of the Academy of Marketing Science, 48(1), 43–63.
Van Doorn, J., Lemon, K. N., Mittal, V., Nass, S., Pick, D., Pirner, P., & Verhoef, P. C. (2010). Customer engagement behavior—Theoretical foundations and research directions. Journal of Service Research, 13(3), 253–266.
Véliz, C. (2023). Chatbots shouldn't use emojis. Nature. [Link]/articles/d41586-023-00758-y
Viglia, G., Zaefarian, G., & Ulqinaku, A. (2021). How to design good experiments in marketing: Types, examples, and methods. Industrial Marketing Management, 98, 193–206.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52(5), 113–117.
Wilson, H. J., Daugherty, P., & Bianzino, N. (2017). The jobs that artificial intelligence will create. MIT Sloan Management Review, 58(4), 14–16.
Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931.
Wu, R., Chen, J., Lu Wang, C., & Zhou, L. (2022). The influence of emoji meaning multipleness on perceived online review helpfulness: The mediating role of processing fluency. Journal of Business Research, 141(3), 299–307.
Yu, S., Xiong, J., & Shen, H. (2022). The rise of chatbots: The effect of using chatbot agents on consumers' responses to request rejection. Journal of Consumer Psychology, 1–14. [Link]jcpy.1330
Zhang, T., Feng, C., Chen, H., & Xian, J. (2022). Calming the customers by AI: Investigating the role of chatbot acting‐cute strategies in soothing negative customer emotions. Electronic Markets, 32, 2277–2292. [Link]
Zhou, X., Kim, S., & Wang, L. (2019). Money helps when money feels: Money anthropomorphism increases charitable giving. Journal of Consumer Research, 45(5), 953–972.

How to cite this article: Nguyen, M., Casper Ferm, L.‐E., Quach, S., Pontes, N., & Thaichon, P. (2023). Chatbots in frontline services and customer experience: An anthropomorphism perspective. Psychology & Marketing, 1–25. [Link]
APPENDIX A: LITERATURE REVIEW ON CHATBOTS

Column legend: Author(s); Context; Method; independent variable (IV), mediator (M), moderator (MO), dependent variable (DV); explanatory concept/theory; chatbot features (text, imagery, speech); relevant findings.
Adam et al. (2021). Context: Overseas debit card use (banking). Method: Experiment. IV: Anthropomorphic design cues, foot‐in‐the‐door technique; M: Social presence; DV: User compliance. Theory: Social response theory, commitment–consistency theory. Findings: Anthropomorphism and the need to stay consistent increase users' compliance with a chatbot's request for service feedback; social presence mediates the effect of anthropomorphic design cues on user compliance.
Ameen et al. (2022). Context: Luxury. Method: Experiment. IV: Perceived augmentation; M: Body image; MO: Chatbot support (assistant vs. friend), external influences of social media (trust in social media celebrities, addictive use of social media); DV: Actual purchase behavior, self‐esteem. Theory: Social comparison theory. Findings: The more friendly a chatbot's anthropomorphic language was, the more likely a customer was to purchase and improve self‐esteem.
Borau et al. (2021). Context: Chatbot imagery. Method: Experiment. IV: Robot (vs. humanoid) imagery; M: Perceived humanness; MO: Gender; DV: Dual model of dehumanization; infrahumanization model; animalistic and mechanistic dehumanization; competent, warm, moral model; human (person; people; humanity; nature; soul); machine (thing; robots; program; mechanism; computer). Theory: Humanness. Findings: The appearance of a chatbot, and its gender, impacted customers' preference and its capability to consider the customers' needs.
Chen et al. (2023). Context: Customer service chatbot. Method: Experiment. IV: Service type (AI vs. human), product type (search vs. experience); M: Processing fluency, perceived service quality; MO: Consumer demand certainty (high vs. low); DV: Purchase intention. Theory: Schema congruity theory. Findings: AI chatbots (vs. human) trigger higher (vs. lower) purchase intention for search (vs. experience) products.
Crolic et al. (2022). Context: Customer service (Study 1: Telecommunications; Studies 2–5: E‐commerce). Method: Field and lab experiment. IV: Chatbot anthropomorphism; M: Increased preinteraction expectations; MO: Customer anger; DV: Customer satisfaction, purchase intention, firm evaluation. Theory: Anthropomorphism. Findings: Anthropomorphism has a negative effect on the DVs for angry customers but not for non‐angry customers.
Drouin et al. (2022). Context: AI chatbot (Replika) versus face‐to‐face interaction versus online chat with a human. Method: Field experiment. IV: FTF chat with a human versus online chat with a human versus AI chatbot; DV: Emotional outcomes (PANAS), perceived degree of similarity, liking for the other, other's responsiveness, self‐presentation concerns. Theory: None reported. Findings: Those who chatted FTF with a human reported more negative emotions than those who chatted with a bot. Those who chatted with a human also reported more homophily with, and liking for, their chat partner, and that their partner was more responsive. Participants had the fewest conversational concerns with the chatbot.
Esmark Jones et al. (2022). Context: Study 1: Service failure (online ordering); Studies 2a, 2b, 3a, and 3b: similar to Study 1. Method: Experiment. IV: Female/male avatar; M: Authenticity, engagement, efficiency, effectiveness; MO: Same/different race, professional/casual; DV: Loyalty, satisfaction. Theory: Communication accommodation theory. Findings: Avatar authenticity can be enhanced when the avatar is female, and these effects are amplified when the avatar is dressed professionally or is a different race than the consumer.
Study 3b: Ridesharing. MO: Anthropomorphizing of AI agent (human vs. AI); DV: Likelihood of accepting offer, customer satisfaction. Findings: …expected, they respond better to a human.
Kull et al. (2021). Context: Studies 1 and 2: Travel; Study 3: Banking. Method: Experiment. IV: Warm (vs. competent) message; M: Brand–self distance; MO: Brand affiliation; DV: Brand engagement. Theory: Social response theory. Findings: When using a warm (vs. competent) opening message, brand engagement increases and, as mediated by brand–self distance, customers feel closer to the brand.
Luo et al. (2019). Context: Internet‐based financial services; the authors used automated robotic calls (vs. human) to encourage repeat loans. Method: Field experiment. IV: Underdogs (bottom 20th percentile workers), proficient workers (top 20th percentile workers), chatbot without disclosure, chatbot with disclosure, chatbot disclosure after conversation, chatbot disclosure after decision; DV: Call length, purchase rate. Theory: None reported. Findings: Disclosure of chatbots reduced purchase rates by 79.7%. Chatbot disclosure reduces call length. Customers perceive chatbots as less knowledgeable and empathetic.
Rajaobelina et al. (2021). Context: Car insurance. Method: Survey. IV: Chatbot user perceptions (privacy concerns, usability), individual characteristics (technology anxiety, need for human interaction); M: Creepiness, negative emotions, trust; MO: Creepiness; DV: Loyalty. Theory: Paradoxes of technology; trust–commitment theory. Findings: Privacy concerns and technological anxiety impact the creepiness of chatbots. The authors posit that the personal nature of car insurance exacerbates these feelings.
Rese et al. (2020). Context: Uses “Emma,” a shopping chatbot on Facebook Messenger that helps customers in their prepurchase phase. Method: Survey. IV: U&G model (technology [convenience, authenticity], hedonic [enjoyment, passing time], risks [privacy concerns, immature technology]); TAM model (perceived usefulness, perceived ease of use, perceived enjoyment); DV: Behavioral intention. Theory: Use and gratification theory (U&G); technology acceptance model (TAM). Findings: Authenticity, perceived usefulness, and hedonic factors positively influence chatbot acceptance. However, privacy concerns and the immaturity of the technology had a negative effect on usage intention and frequency.
M: Social perceptions of brand (warm vs. competent); DV: Brand attitude, purchase intention. Findings: Future‐oriented subjects prefer a competent (vs. warm) conversation; brand perceptions mediate these effects.
Pizzi et al. (2023). Context: Multiple (Study 1: Car rentals; Study 2: Travel insurance). Method: Experiment. IV: Gaze direction (direct vs. averted), anthropomorphism (low vs. high); M: Chatbot warmth, theory of mind, chatbot competence, scepticism, trust; DV: Willingness to disclose, future intentions. Theory: Theory of mind. Findings: Perceptions of chatbot warmth are influenced by gaze direction, and competence perceptions are affected by anthropomorphism.
Schuetzler et al. (2020). Context: Studies 1 and 2: Image description. Method: Experiment. IV: Conversational skill (tailored responses, response variety); M: Social presence; DV: Perceived humanness, partner engagement. Theory: Social presence theory. Findings: People perceive a more skilled chatbot to be more socially present and anthropomorphic than a less skilled chatbot.
…anthropomorphism‐adoption relationship.
Shin et al. (2023). Context: Multiple (Study 1: Telecommunication; Study 2: Mobile service provider; Study 3: Flights). Method: Experiment. IV: Humor, type of humor; M: Perceived anthropomorphism, perceived interestingness of interaction; MO: Service agent type; DV: Service satisfaction. Theory: Anthropomorphism. Findings: The use of humor enhances service satisfaction when it is used by a chatbot but not when it is used by a human agent. This chatbot humor effect is serially mediated by enhanced perceptions of anthropomorphism and interestingness of the interactions with the chatbot. Socially appropriate (vs. inappropriate) humor leads to higher service satisfaction.
Zhang et al. (2022). Context: E‐commerce. Method: Experiment. IV: Chatbot acting strategies (whimsical, kindchenschema); M: Customer's negative emotions; MO: Gender (male/female), technology anxiety (high/low), product or service failure severity (high/low); DV: Post‐recovery satisfaction (repurchase intention, recommendation intention, satisfaction with store). Theory: Information processing theory. Findings: Both schema types are effective, but in high‐failure conditions the strategies weaken. The whimsical strategy works well for high technology anxiety, kindchenschema for low technology anxiety.
This study. Context: E‐commerce. Method: Experiment. IV: Anthropomorphic language; M: Perceived competence, perceived authenticity; MO: Chatbot appearance, brand credibility; DV: Customer engagement. Theory: Social response theory. Chatbot features: Text, imagery. Findings: The interaction between humanlike appearance via the use of avatars and anthropomorphic language is mediated by perceived chatbot competence and authenticity. The positive effect of anthropomorphic language on perceived competence, and subsequently on authenticity and engagement, is only significant when brand credibility is low (vs. high).
APPENDIX B

Study 1: Stimuli Examples
Human, Low Anthropomorphic Language Condition
Chatbot, High Anthropomorphic Language Condition