0% found this document useful (0 votes)
12 views24 pages

Design Principles For Artificial Intelligence Augmented Decision Making

Uploaded by

Soulinchaos
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
12 views24 pages

Design Principles For Artificial Intelligence Augmented Decision Making

Uploaded by

Soulinchaos
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 24

European Journal of Information Systems

ISSN: (Print) (Online) Journal homepage: www.tandfonline.com/journals/tjis20

Design principles for artificial intelligence-


augmented decision making: An action design
research study

Savindu Herath Pathirannehelage, Yash Raj Shrestha & Georg von Krogh

To cite this article: Savindu Herath Pathirannehelage, Yash Raj Shrestha & Georg von
Krogh (20 Mar 2024): Design principles for artificial intelligence-augmented decision
making: An action design research study, European Journal of Information Systems, DOI:
10.1080/0960085X.2024.2330402

To link to this article: https://doi.org/10.1080/0960085X.2024.2330402

© 2024 The Author(s). Published by Informa


UK Limited, trading as Taylor & Francis
Group.

View supplementary material

Published online: 20 Mar 2024.

Submit your article to this journal

View related articles

View Crossmark data

Full Terms & Conditions of access and use can be found at


https://www.tandfonline.com/action/journalInformation?journalCode=tjis20

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS
https://doi.org/10.1080/0960085X.2024.2330402

RESEARCH ARTICLE

Design principles for artificial intelligence-augmented decision making: An


action design research study
Savindu Herath Pathirannehelagea, Yash Raj Shresthab and Georg von Krogha
a
Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland; bFaculty of Business and Economics (HEC),
University of Lausanne, Lausanne, Switzerland

ABSTRACT ARTICLE HISTORY


Artificial intelligence (AI) applications have proliferated, garnering significant interest among Received 11 August 2022
information systems (IS) scholars. AI-powered analytics, promising effective and low-cost Accepted 6 March 2024
decision augmentation, has become a ubiquitous aspect of contemporary organisations. KEYWORDS
Unlike traditional decision support systems (DSS) designed to support decisionmakers with AI-augmented decision
fixed decision rules and models that often generate stable outcomes and rely on human making; artificial
agentic primacy, AI systems learn, adapt, and act autonomously, demanding recognition of intelligence; decision
IS agency within AI-augmented decision making (AIADM) systems. Given this fundamental shift support systems; design
in DSS; its influence on autonomy, responsibility, and accountability in decision making within principles; action design
organisations; the increasing regulatory and ethical concerns about AI use; and the corre­ research
sponding risks of stochastic outputs, the extrapolation of prescriptive design knowledge from
conventional DSS to AIADM is problematic. Hence, novel design principles incorporating
contextual idiosyncrasies and practice-based domain knowledge are needed to overcome
unprecedented challenges when adopting AIADM. To this end, we conduct an action design
research (ADR) study within an e-commerce company specialising in producing and selling
clothing. We develop an AIADM system to support marketing, consumer engagement, and
product design decisions. Our work contributes to theory and practice with a set of actionable
design principles to guide AIADM system design and deployment.

1. Introduction
management, and the construction of decision mod­
The past decade has witnessed rapid growth in the els, thereby aiding decisionmakers in problem identi­
deployment of AI-based systems within organisations, fication, process execution, and decision making
creating human-AI ensembles (Choudhary et al., (Arnott & Pervan, 2014, Power, 2001). With the ever-
2023, Rai et al., 2019, Van den Broek et al., 2021). increasing amount of data accessible to organisations,
A key application of AI-based systems in organisations AI-powered DSS promise significant organisational
is for augmenting decision making with AI-generated value by boosting human productivity, reducing coor­
predictions and insights – a phenomenon referred to dination costs and enhancing decision-making speed
as AI-augmented decision making (AIADM) (Keding and accuracy (Brynjolfsson & McElheran, 2016,
& Meissner, 2021, Raisch & Krakowski, 2021). Interest Lebovitz et al., 2022, Shrestha et al., 2019, Tinguely
in AIADM is fuelled by AI’s capacity to mine complex et al., 2020).
patterns from large volumes of data and generate Given these developments, IS researchers endea­
accurate predictions, which when combined with vour to design effective DSS that empower humans
human judgement and decision making can generate and AI to jointly make decisions, creating superior
value for organisations (Shrestha et al., 2019). business value for organisations while also producing
Consequently, AIADM systems are increasingly emer­ societal impact (Fang et al., 2021, Gregor & Hevner,
ging as a prominent subclass of decision support sys­ 2013, Padmanabhan et al., 2022, Samtani et al., 2021).
tems (DSS) (Arnott & Pervan, 2014, Hevner & Storey, Furthering this objective, design science research
2023, Rai, 2016). With recent advancements in deep (DSR) attempts to create novel artefacts that solve
learning architectures (Shrestha et al., 2021) and large previously unresolved issues or improve existing solu­
language model-powered generative AI (Dasborough, tions (Hevner et al., 2004). Action design research
2023), this uptrend shows no signs of stagnation. (ADR) adopts a DSR approach wherein an IS artefact
DSS have been instrumental in organisations as IS is constructed within a specific client context to glean
artefacts, delivering significant benefits by supporting prescriptive design knowledge to address a class of
communication, data processing and knowledge problems (Iivari, 2015, Maedche et al., 2021,

CONTACT Savindu Herath Pathirannehelage [email protected]


Supplemental data for this article can be accessed online at https://doi.org/10.1080/0960085X.2024.2330402.
© 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits
unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The terms on which this article has been published allow the posting
of the Accepted Manuscript in a repository by the author(s) or with their consent.

Electronic copy available at: https://ssrn.com/abstract=4071519


2 S. HERATH PATHIRANNEHELAGE ET AL.

Mandviwalla, 2015). Recent calls for design research et al., 2022). This dearth of design knowledge on
on human-AI systems underscore its practical utility AIADM systems further aggravates the already
in addressing practitioner problems and its potential polarised academic debate around the bright and
to enhance our comprehension of AI technology dark side of AI in decision making (Mikalef et al.,
(Padmanabhan et al., 2022, Rai et al., 2019). 2022). Conflicting expectations and assumptions
The challenge is that DSR has thus far primarily about the use of AI may partly stem from the lack of
concentrated on the formulation of design theories in-depth examination of AI artefact design or limited
(Miah et al., 2019) based on conventional (i.e., non- first-hand understanding of how decision-making
AI-based) DSS artefacts (Golovianko et al., 2022, Pan processes unfold in human-AI ensembles. Further,
et al., 2021), overlooking increasingly important lack of prescriptive knowledge also limits practi­
AIADM systems. Recent research shows the extrapo­ tioners’ capacity to apply AI in decision making
lation of prevailing prescriptive design knowledge (Padmanabhan et al., 2022). Popular accounts show
from conventional DSS to AIADM systems faces that despite the technological superiority of AI algo­
four key design challenges (Hevner & Storey, 2023). rithms, challenges in adopting them could result in
First, while conventional DSS employs fixed, prede­ AIADM being dismissed altogether (Joshi et al., 2021,
fined decision rules and models, mostly leading to Ransbotham et al., 2019).
deterministic outcomes, AI algorithms learn from Against this background, the current study exam­
data and adapt over time, resulting in stochastic out­ ines the following research questions:
puts (Padmanabhan et al., 2022). Such outputs influ­
ence the decisions within organisations markedly (1) What are the challenges involved in designing
differently compared to conventional DSS, demanding and deploying AIADM systems in
oversight and careful evaluation. For instance, biases organisations?
and errors in AIADM systems are comparatively dif­ (2) What are the principles for designing AIADM
ficult to expose and handle (Shrestha et al., 2021). systems in organisations?
Second, ambiguity in agency within AIADM systems
gives rise to uncertainty around decision making To this end, we design, deploy, and evaluate an
authority, autonomy, responsibility, and accountabil­ AIADM system consisting of three AI use cases in
ity in organisations (Abdul et al., 2018, von Krogh a young online fashion retailing company (TBô
et al., 2021). As a result, AIADM systems may face Clothing) serving a global customer base (Iivari’s
significant organisational resistance driven by the per­ (2015) strategy 2). TBô envisaged AI design, develop­
ceived loss of managerial control and costly organisa­ ment, and deployment and effectively and efficiently
tion-wide transformations (Feuerriegel et al., 2022). using data-driven insights, as core components of its
Third, AIADM systems are increasingly raising regu­ strategy to operationalise a business model of co-
latory and ethical concerns that extend far beyond creating products as a community-led brand. The
those associated with conventional DSS (Berente AIADM system augmented decisions across three
et al., 2021, Mikalef et al., 2022). Finally, conventional pivotal domains: a) customer segmentation and tar­
DSS exhibits limited configurability and contextual geting, b) customer retention, and c) redesign of the
sensitivity (Arnott & Pervan, 2008, Miah et al., 2019), product and service portfolio.
raising concerns about the transferability of design We use ADR (Sein et al., 2011) to design an
knowledge across decision contexts. Practitioners AIADM system at TBô. ADR applies an iterative
often find extant DSS research irrelevant due to build-intervene-evaluate process covering key IS
a lack of configurability and contextual adaptability, design and deployment stages within organisational
demanding the incorporation of contextual idiosyn­ contexts (Peffers et al., 2018). We critically examine
crasies and domain knowledge in DSS design (Arnott, the three AI use cases, drawing on a rich set of primary
2006, Arnott & Pervan, 2014). data, including interviews with the firm’s executive
The markedly distinct characteristics of AIADM members and data scientists; archival data related to
compared to traditional DSS (see Appendix A) influ­ sales and customer feedback, the website, and corpo­
ence various facets of organising (Abbasi et al., 2016, rate presentations; field notes from weekly meetings;
Bailey et al., 2022, Baird & Maruping, 2021) and and experiments in TBô’s customer portals and cus­
underscore the need for new prescriptive design tomer surveys. The AIADM system deployment is
knowledge incorporating contextual idiosyncrasies followed by evaluation, reflection and learning and
(Miah et al., 2019) and practice-based domain knowl­ systematising the research-based design knowledge
edge (Padmanabhan et al., 2022). Therefore, how to as a set of design principles (Chandra et al., 2015),
design AIADM systems while navigating unique orga­ advancing the IS literature on AIADM systems design
nisational challenges remains an important open and deployment. Our design principles guide IS prac­
question for design theory and practice (Abbasi titioners to successfully overcome the challenges of
et al., 2016, Hevner & Storey, 2023, Padmanabhan transforming to AIADM.

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 3

2. Background 2.2. Decision support systems


2.1. Decision making in organisations
Prior work on behavioural decision making laid the
Within the decision theory literature, scholars have theoretical foundations for DSS development and
explored decision-making processes from different research. The seminal works in the Carnegie school
vantage points (e.g., individual and collective). The (Cyert & March, 1963, Galbraith, 1974, Simon, 1947,
two primary approaches to decision making include: 1956, 1960, Tushman & Nadler, 1978) provided the
a) following the logics of preferences and expectations key theoretical constructs to examine the influence of
(March, 1994, Schoemaker, 1982) and b) following DSS on organisational decision-making processes,
appropriateness, obligation, rules, and routines decision outcomes, and decision performance
(March & Olsen, 1989, March & Simon, 1993). (Huber, 1990, Leidner & Elam, 1995, Molloy &
Scholars adopting the former approach have tradition­ Schwenk, 1995, Sharma et al., 2014). Arnott and
ally assumed that choices are innately rational given Pervan (2014) found strong evidence for a shift in
perfect information espoused by neo-classical eco­ decision-making orthodoxy in DSS research from
nomic theory (March, 1994). However, later scholars, a behavioural view characterised by bounded ration­
following the Carnegie school, questioned the infor­ ality and satisficing to prospect theory characterised
mation completeness and perfect rationality assump­ by human biases and judgement heuristics. Following
tions by introducing the concept of bounded this shift in decision theory, DSS research became
rationality – limitations of human information proces­ increasingly grounded on Tversky and Kahneman’s
sing resulting in satisfactory, rather than optimal, (1974) theory (e.g., Chen & Koufaris, 2015).
decisions (Simon, 1960). Besides bounded rationality, The DSS literature has advanced both in terms of
Simon’s (1960) major conceptual contribution – the general theory and design theory. With respect to gen­
decision-making phase theorem – identified three eral theory, Huber (1990) developed a theory on the
iterative and recursive phases in managerial decision effects of computer-assisted communication and DSS
making: intelligence, design, and choice. Building on on organisational design, intelligence, and decision
Simon’s (1947, 1960) seminal works, a subsequent making. A large body of research examined DSS use
body of research has examined how organisations (Abouzahra et al., 2022, Kamis et al., 2008), develop­
process information – information processing view ment (Lynch & Gregor, 2004), and the impact on
of the firm (Galbraith, 1974) – and integrate it into decision-making behaviour, capturing both advantages
decision-making processes (Joseph & Gaba, 2020, (Barkhi, 2002, Lilien et al., 2004, Todd & Benbasat,
Tushman & Nadler, 1978). 1999) and challenges (Chen & Koufaris, 2015,
Drawing on the concept of bounded rationality, Giermindl et al., 2022, Mikalef et al., 2022, Rinta-
Tversky and Kahneman (1974) advanced decision the­ Kahila et al., 2022). In terms of design theory, Keen’s
ory towards a post-Simon orthodoxy by developing an (1980) adaptive design framework for DSS develop­
array of empirically validated theories about the cog­ ment has been highly influential as a kernel theory for
nitive processes involved in decision making. They put subsequent design studies (Miah et al., 2019).
forth theoretical explanations for systematic failures of According to Keen (1980), not all uses of DSS can be
human decision making, emphasising human biases stipulated during the design phase, but a design evol­
and judgement heuristics (the prospect theory). This ving through use can overcome the foibles of the semi­
theory suggests that due to humans’ limited informa­ nal Gorry and Morton (1971) framework that assumed
tion processing capabilities, they apply simplifying a static and technical view on DSS design. Following the
heuristics to complex decisions typified by uncertainty prospect theory (Tversky & Kahneman, 1974), DSS
and opt for satisficing actions that deviate from the design aimed to mitigate humans’ cognitive limitations
rational optimal alternative. This also applies to deci­ by focusing on enhancing both primary (action selec­
sions within organisations, which are made under tion) and secondary (protocol selection) decisions
uncertainty and are fraught with biases and heuristics (Arnott, 2006, Remus & Kottemann, 1986). Recent
(Remus & Kottemann, 1986). Acknowledging these work in DSS literature has shown a significant rise in
systematic decision-making failures, the follow-up DSR (Arnott & Pervan, 2014), developing design the­
work focused on designing corrective actions to ories (Miah et al., 2019) and contextual DSS artefacts
improve decision making using statistical methods often featuring in European IS (Collins et al., 2010,
(Grove & Meehl, 1996, Grove et al., 2000) and tech­ Golovianko et al., 2022, Klör et al., 2018, Pan et al.,
nology (Huber, 1990, Molloy & Schwenk, 1995). With 2021, Seidel et al., 2018).
the advent of mainframes and computers and their With the advent of big data and machine learning
mainstream adoption, DSS emerged as a prominent (ML), the focus in IS artefacts – including DSS (Arnott
sub-field of IS scholarship aimed at facilitating and & Pervan, 2014) – has shifted from systems, functions,
improving decision making in organisations (Arnott features/requirements, and technology towards deriv­
& Pervan, 2005). ing knowledge and insights from data (i.e., information

Electronic copy available at: https://ssrn.com/abstract=4071519


4 S. HERATH PATHIRANNEHELAGE ET AL.

and analytics) (Abbasi et al., 2016). Data-driven deci­ about who has responsibility and accountability
sion-support solutions – business analytics, business (Abdul et al., 2018) in decision-making protocol
intelligence, and big data analytics – have been on the development and ultimate action selection (Murray
rise, generating significant rejuvenation of DSS research et al., 2020). For instance, Lebovitz et al. (2022) ques­
and practice (Arnott & Pervan, 2014, Rai, 2016). ML- tion the locus of accountability when AIADM systems
based AI technologies can extract business relevant diagnose patients.
patterns from large volumes of data (Ågerfalk, 2020, Other striking distinctions between conventional
Berente et al., 2021). This is arguably the most signifi­ DSS and AIADM systems warrant assessing it as
cant movement within the history of DSS, resulting in a separate class within DSS. First, in conventional
a substantial footprint of AI across technological, orga­ DSS, decision rules are programmed to produce an
nisational, economical, and social domains simulta­ output based on an input. Such designs involve neither
neously (Berente et al., 2021, Jain et al., 2021, Rai training nor learning, as the decision rules often lead
et al., 2019, von Krogh, 2018). Thus, AI-based DSS are to a definitive output. These systems with predefined
gaining prominence as a sub-class of DSS. The resulting rules and models do not learn from data or adapt over
human-AI hybrids have the potential to shape decision time. Due to this, conventional DSS models and out­
making on a spectrum ranging from automation (AI puts are often more interpretable than their AI coun­
substitutes humans) to augmentation (AI and humans terparts, making it easy to understand the reasoning
complement each other) (Rai et al., 2019, Raisch & behind specific decisions. AI algorithms, on the other
Krakowski, 2021). On this spectrum, we chose decision hand, are not programmed to perform a fixed task, but
augmentation and positioned our study within the class to learn to perform the task from data and adapt over
of AIADM systems where AI insights enhance manage­ time (Padmanabhan et al., 2022). Second, stemming
rial decisions. from the first distinction, AIADM systems are more
dynamic, stochastic, unpredictable, and less explain­
able with respect to operations and outcomes
2.3. AI-augmented decision making
(Shrestha et al., 2021). Therefore, AIADM systems
AI-based IS artefacts learn, adapt, and act with limited can lead to unintended results, causing significant
or no human intervention (Baird & Maruping, 2021). risks and damages to the organisation. As an antidote,
These aspects of AI challenge the primacy of human the human-in-the-loop literature endorses the pre­
agency in organisations while shifting the focus sence of humans in ML workflows to identify
towards recognising IS agency (Ågerfalk, 2020). instances where systems might fail, assess associated
Hence, AI is not merely a technology that harnesses risks, and develop contingency plans to mitigate risks
knowledge and insights from data, but it also spurs (Grønsund & Aanestad, 2020, Xin et al., 2018). Third,
paradigmatic shifts in relationships between humans as opposed to traditional DSS, AIADM systems may
and machines (Ågerfalk, 2020, Lyytinen et al., 2021) face intensive organisational resistance due to the per­
and how they relate and co-organise to process infor­ ceived loss of managerial authority, their opaque and
mation to make decisions (Bailey et al., 2022, von complex algorithms that transcend managerial intui­
Krogh, 2018). A new literature stream has emerged tion, their unquantifiable economic benefits, and the
to study different facets of these human-AI ensembles fact that they trigger swift, organisation-wide changes
and resulting augmented intelligence. Lyytinen et al. (Feuerriegel et al., 2022).
(2021) proposed the concept of “metahuman systems” Moreover, extant DSS research and artefacts suffer
as a hybrid of humans and machines learning jointly from a lack of configurability and contextual sensitiv­
while mutually reinforcing each other’s strengths. ity (Arnott & Pervan, 2008, Miah et al., 2019). This
Murray et al. (2020) identified four forms of conjoined casts doubt on whether design knowledge can be
agency between humans and technologies and the applied across different decision contexts, problem
impact of these agency forms on the evolution of domains, and underlying technologies. There have
organisational routines. They identified ML as an been repeated claims that practitioners find DSS
augmenting technology that (1) increases the degree research and artefacts irrelevant as they fail to meet
of a routine’s change, (2) decreases the predictability the practitioners’ needs (Arnott, 2006, Arnott &
of a routine’s change, and (3) decreases routine Pervan, 2014). The lack of configurability and contex­
responsiveness. These findings point towards the far- tual dynamism across different application domains,
reaching organisational implications of AI and the domain-specific languages, and different enabling
recognition of AI agency. While the extant IS artefact technologies impedes the wider adoption of DSS
(including DSS) literature is eloquent on human knowledge contributions (Miah et al., 2019).
agency, it gives scant attention to IS agency Therefore, in the current study, capturing contextual
(Ågerfalk, 2020, Baird & Maruping, 2021). Agentic idiosyncrasies and practice-based domain knowledge
primacy is ambiguous and fluid in AIADM systems in AIADM system design is crucial to ensure practi­
(Baird & Maruping, 2021), but there is limited clarity tioner relevance and acceptance.

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 5

For these reasons, we argue that extrapolating the We also had several practical considerations in
extant prescriptive design knowledge from conventional choosing TBô, such as the alignment between our
DSS to novel and context-specific AIADM systems is research interests and the company’s strategy and
contentious. AIADM systems are an emerging phenom­ vision to leverage AI adoption. TBô’s young age and
enon, distinct from traditional DSS (Abbasi et al., 2016, limited resources, notably in terms of human talent,
Baird & Maruping, 2021), and their potential to shape render it transparent and receptive to collaboration.
multiple aspects of organising simultaneously (Bailey Consequently, the ADR intervention can be conveni­
et al., 2022) calls for novel prescriptive knowledge con­ ently implemented and meticulously examined.
tributions to the design of AIADM systems while
remaining alert to contextual idiosyncrasies (Miah
3.2. AIADM use cases
et al., 2019) and contemporary problems in practice
(Padmanabhan et al., 2022). Little can be designed The strategic data roadmap of TBô officially proposed
a priori, but instead these systems need to be rapidly by the co-founders to all employees and investors
adjusted to the specific client context (Iivari, 2015). In highlighted three high-priority AI use cases that
this study, we contribute to the design knowledge on centred on three key decision-making areas: (1) cus­
AIADM in organisations by describing the design and tomer segmentation and targeting, (2) customer reten­
deployment of an AIADM system in a specific context. tion, and (3) redesigning the product and service
portfolio. While (1) focused on increasing co-
creation participation and the efficiency of co-
3. Research context and methodology creation campaigns, (2) and (3) aimed to maximise
customer lifetime value (CLV) and subsequently sales
3.1. Case selection revenue. Given TBô’s digital business model, rich
We selected TBô Clothing (https://tbo.clothing/ch-en/), accumulated customer data enabled the development
a globally operating online fashion retail company of AIADM to identify relationships between purchas­
headquartered in Switzerland, as our research context. ing and co-creation participation to purposefully
Established in 2019, TBô is a young company with its nudge the customer community to purchase as well
employees spread across Europe, North America, and as co-create. TBô anticipated that AIADM would
Asia. TBô caters to a diverse customer base across three guide the decisionmakers in designing and imple­
continents, relying exclusively on digital platforms and menting meaningful interventions for enhancing
online stores. It also positions itself as a community-led both sales and co-creation.
brand, with its entire product range being co-created We conducted the end-to-end process of AIADM
using customer input. To do so, TBô has created and system design and deployment, from when the co-
maintained an online community where customers can founders first conceived of the idea to adopt AIADM
participate as co-creators. Curating ideas on product to the final implementation and the company’s post-
design and development, TBô routinely (usually hoc evaluation of the system. At TBô, AIADM was
weekly) circulates online questionnaires where co- built on online purchase (order), survey (co-creation),
creators can engage and contribute by answering ques­ and advertising and promotional campaign data they
tions on personal information, user experience, product collected to inform managerial decisions. Before
ideas, personal preferences, and personal aspirations. adopting AIADM, TBô relied on manual reading and
TBô is a suitable research setting to investigate our coding of textual data collected via online surveys to
research questions. First, TBô envisaged AIADM identify and evaluate prominent, attractive, and lucra­
design and deployment as a core component of its tive ideas and integrate a subset of them into products.
strategy. Second, TBô is a “clean slate” in which we Subsequently, based on analysis of the number of pre-
can observe AIADM system design and deployment orders they received, decisions were made on whether
without much interference from legacy systems, prior to promote products in the core collection or as lim­
routines, decision-making processes, and experience ited editions. In essence, the AIADM system at TBô
with similar projects. Third, the evolution of a project aimed to augment managerial decision making in
could be tracked from ideation to deployment from operationalising its co-creation business model.
both managerial and operational perspectives. Fourth,
the unique technological, organisational, operational,
3.3. Action design research
and market conditions of TBô engender certain con­
textual characteristics and peculiar challenges for We adopt the ADR methodology for three reasons.
AIADM deployment that are congruent with our pro­ First, AIADM system design and deployment in an
blem concept. Hence, the design principles we develop organisation comprises the “inseparable and inherently
can be used to guide AIADM initiatives in similar interwoven activities of building the information tech­
settings by overcoming the challenges identified. nology (IT) artefact, intervening in the organisation,

Electronic copy available at: https://ssrn.com/abstract=4071519


6 S. HERATH PATHIRANNEHELAGE ET AL.

and evaluating it concurrently” (Sein et al., 2011, p. 37), customer demand and preferences. The online store’s
which aligns with the research process conceptualised sales data could be exactly and routinely monitored.
in ADR. Extant literature attests that the ADR approach Data richness and complementarities among the three
not only fosters richer insights into the interactions of datasets encouraged TBô to find ways to leverage AI-
technology and organisation (Altendeitering et al., driven insights to augment decisions such as segment­
2021, Ebel et al., 2016, Sun et al., 2019), but also per­ ing and targeting customers, enhancing marketing
forms a dual mission of contributing to theory and efforts, and redesigning the product and service
providing practical insights (Sein et al., 2011). Second, portfolio.
ADR builds on the premise that IS artefacts are ensem­ Manual data analysis was restrictive in building
bles: a collection of software/hardware tools, shaped by models that could predict customer journeys, enhance
the organisational and technological context during customer engagement, and increase customer
development and use (Sein et al., 2011, Sun et al., repurchasing. Furthermore, manual methods required
2019). Relatedly, AIADM represents multiple soft­ dedicated organisational roles and employees, increas­
ware/hardware systems and is embedded within the ing labour costs. According to I1, AI-based decision
organisational context (Shrestha et al., 2019). Finally, models were critical when competitors such as
ADR facilitates a dynamic and flexible research process, Zalando and Zara began applying them at scale.
cycling between building the IS artefact and evaluating The TBô executive team initially experimented with
its utility (Sein et al., 2011). The artefact emerges third-party tools such as Google Analytics, Facebook
through the contemporaneous interaction between (1) Business Manager, and Shopify Analytics. Experience
design and use and (2) organisational and technological and early success with these tools, as well as the CEO’s
context (Orlikowski & Iacono, 2001), facilitating dis­ firm belief that using AI tools could improve the firm’s
covery of both intended and unintended organisational decision making, became the catalyst to transition to
consequences of a specific artefact design and accom­ an in-house designed and developed AIADM system.
panying organisational challenges and mitigating The CEO (I1) succinctly summarised this as follows:
strategies.
Following Sein et al. (2011), we conducted ADR in The main advantage of moving from manual to AI
tools was the quick summaries and making a nice
four stages: (1) problem formulation; (2) building,
dataset for us to analyse, making it fast and accurate.
intervention, and evaluation (BIE); (3) reflection and Now we also see a big advantage in developing our
learning; and (4) formalisation of learning. In each own tools to further bring co-creation into the com­
stage, we gathered data from multiple sources (e.g., munity to make it more dynamic.
interviews and field notes) to capture an unbiased,
holistic view of AIADM systems design and deploy­ During artefact formulation, three key challenges,
ment while navigating through various challenges and related to a) lack of experience and scepticism, b)
trade-offs (see Table 1). managing multiple objectives, and c) competing
interests, emerged. First, TBô lacked specialised
knowledge of the underlying algorithmic mechanics
resulting in initial scepticism of the management
4. Artefact development and evaluation
team about the possibility to design an AI system
4.1. Artefact formulation to enhance decision making and subsequently create
value. Second, we encountered challenges in aligning
TBô was driven by an immediate need for automated
business problem formulation with AIADM system
analysis as the volume of data being collected expo­
design. AI algorithms necessitate specific problem
nentially increased over time, making their traditional
formulation, which can be impractical in real-
manual analysis impractical. The CEO (I1) expressed
world scenarios, leading to difficulties in aligning
the tediousness and slowness of traditional manual
AI problem formulation with actual business objec­
efforts of survey analysis:
tives and metrics. For instance, TBô struggled
We used to read every customer survey response by between its dual business objectives of sales and
ourselves to find the product ideas. This is impossible customer co-creation and in formulating this for
with the increasing number of customers and their AI. TBô operated with a co-creation business
responses. model in which all its products were designed and
Within its co-creation model, TBô had a unique developed based on customer insights. The impor­
opportunity to accumulate diverse and complemen­ tance of listening to their customers is exemplified
tary data about customer interactions through multi­ on TBô’s website:
ple channels (order, co-creation/survey, and campaign
TBô Bodywear is the world’s first DirectByConsumer
data; see Appendix D). Given purely online interac­ brand. It’s TBô’s customers—the 400,000-strong
tions with customers, digital trace data facilitated an Tribe—who decide the brand’s direction and which
opportunity to identify seasonality and trends in products get made.

Electronic copy available at: https://ssrn.com/abstract=4071519


Table 1. Summary of the ADR process at TBô Clothing.
ADR stage ADR principles Activities Actors involved Data collection and analysis Artefact
Problem formulation Practice-inspired research (P1): 1. Understanding the co- Researchers, 1. Interviews with CEO (I1; 60 min; see Appendix B) Recognition:
The research was driven by (1) a vision to improve creation business model, CEO, COO, 2. 16 Weekly meetings with CEO/COO and data 1. Understanding the problem and solution
decision making on CLV and co-creation types and sources of data engineer engineer domain as “AIADM system design and
participation by leveraging AI-driven insights and (2) data, the processes, and 3. Exploratory data analysis on archival data deployment”
a lack of prescriptive knowledge about AIADM the systems at TBô 4. Company data workshop presentation (21 slides) 2. Shortcomings of traditional decision-making
system design and deployment. 2. Reviewing the literature 5. Company website approaches in business practice. Identifying the
Theory-ingrained artefact (P2): on AIADM system design 6. Literature review (see Section 2) factors driving AIADM adoption
We chose the decision augmentation thesis (over and deployment 3. Absence of prior prescriptive knowledge and
automation) of the human-AI ensemble theories 3. Formulating research empirical research on design and deployment
(Fügener et al., 2022, Murray et al., 2020). questions of AIADM systems
BIE Reciprocal shaping (P3): 1. Developing data-driven Researchers, 1. Using archival order data and online survey data of Alpha version:
The technological context and organisational context decision models to CEO, COO, 28 consecutive months to develop AI models Three AIADM models for identified use cases (see
mutually influenced the AIADM system design and augment customer data 2. Improving timing of co-creation surveys, marketing Section 3.2) as proof of concept (POC)
deployment, as the challenges encountered were intervention decisions at engineer, efforts, and analysis of customer co-creation Beta version:
technical and/or organisational. TBô community responses using developed AI models Upon POC, recommendations from AIADM models
Mutually influential roles (P4): 2. Implementing marketing 3. Evaluating performance gains of the AIADM system were deployed to evaluate proof of value (POV),
The ADR team was comprised of researchers and recommendations of the director by (1) conducting field experiments based on proof of use (POU), and design activities
practitioners to capture theoretical, technical, and developed decision (CMD) a “one-factor-a-time” design involving four groups
practical insights. The lead designer (a PhD student) models in a real-world (two control-treatment pairs) of 1,000 customers
worked full time on developing AIADM tools with business context each and (2) interviewing the CEO and CMD (I2, I3;
TBô. 3. Evaluating whether the 60 min; see Appendix C)
Authentic and concurrent evaluation (P5): AIADM system delivered 4. 32 Weekly meetings
The ADR team concurrently evaluated whether the improvements in terms
AIADM system delivered business value through of sales and co-creation
field experiments and by interviewing the executive input
users.
Reflection and Guided emergence (P6): 1. Recognising the Emerging version and realisation:
learning The AIADM system was developed and implemented in challenges encountered The revised version of the AIADM system, specific
the firm so that the deployment process would not in AIADM system design challenges in each phase emerging from
only reflect the technical design (P2) but also and deployment organisational and technological context, and
undergo ongoing shaping by organisational use 2. Realisation of design a reflection on how these challenges were
(P3), participant perspectives (P4), and concurrent activities while addressed through design activities
evaluation (P5). navigating through the
challenges identified in

Electronic copy available at: https://ssrn.com/abstract=4071519


design and deployment
Formalisation of Generalised outcomes (P7): Proposing a set of design Researchers, 1. Meetings with CEO and COO2. Literature review to Ensemble version:
learning The resulting ensemble from ADR was a solution to principles for AIADM CEO, COO rationalise the design principles3. Design principle Ensemble embodying an AIADM system for
a problem. We conceptualised the problem (P1) and system design and reusability evaluation (Iivari et al., 2021) customer interventions, challenges
our solution (P6) as an instance of the class to deployment, positioning encountered, and design principles addressing
generalise our findings. In our study, this class is TBô as an instance/case these challenges
“AIADM system design and deployment”.
EUROPEAN JOURNAL OF INFORMATION SYSTEMS
7
8 S. HERATH PATHIRANNEHELAGE ET AL.

Although TBô benefited from design ideas from its The data roadmap that we used in the workshop in
customers and sales growth, it remained unclear how mid-September with an overview of our business
to weight these two related but distinct objectives in provided confidence [to employees and investors] in
our data-driven approach going forward.
the concrete objective function that AI requires. To
circumvent this challenge, we initiated AIADM with
two separate objectives (instead of aggregated objec­
4.2. Artefact development
tives) stemming from AIADM use cases (see
Section 3.2): to (1) increase co-creation participation Following the second principle of developing a theory-
and (2) improve the CLV and subsequently sales rev­ ingrained artefact (P2), we drew on the extant human-
enue. We designed for synergy in input (data, domain AI ensemble literature, which is germane to our class
knowledge), AI model, and the output (predictions, of systems. This literature examines how to integrate
visualisations) iteratively to derive useful insights for decisions involving humans and AI while recognising
above objectives. the agency of AI artefacts (Baird & Maruping, 2021,
Third, to schedule resource utilisation, at the incep­ Murray et al., 2020, Shrestha et al., 2019). Research
tion of operations, TBô concentrated on investments addressing this fundamental question of human-AI
that were likely to immediately strengthen their busi­ ensemble decision making coalesces into two major
ness model (e.g., building website infrastructure, set­ conceptualisations: decision automation and augmen­
ting up co-creation channels, marketing and tation (Raisch & Krakowski, 2021). Raisch and
promotions) and AIADM transformation was consid­ Krakowski (2021) defined automation as machines
ered a second step. substituting humans, whereas augmentation refers to
The CEO took charge of championing the change. humans collaborating with AI in making decisions.
This required a transformation in decision-making Recent work has evidenced the superiority of the aug­
structures, reporting hierarchy, and data management mentation theory, citing improved decision-making
practices, which induced uncertainty in the organisa­ performance. Fügener et al. (2022) found that humans
tion. To garner support and prevent strong risk aver­ and AI working collaboratively can outperform the AI
sion amongst employees, the CEO formulated that outperforms humans when they work indepen­
a concrete data roadmap. The roadmap outlined dently. However, the combined performance
short-, medium-, and long-term goals of AI design improves only when the AI delegates work to
and deployment and thus formed a concrete and humans – not when humans delegate work to the AI.
actionable object for curating organisational support Bouschery et al. (2023) found that AI can augment
and trust, resulting in enhanced coordination (see human innovation teams by fostering divergent pro­
Figure 1). cesses to explore wider problem and solution spaces in
The CEO (I1) highlighted the benefits of a clear new product development. These crucial discoveries
roadmap as follows: align with the augmentation theory and our class of

Figure 1. Goals of TBô as extracted from the company’s data roadmap.

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 9

AIADM systems, where AI parses large amounts of transfer helped the data scientists to better understand
data, detects patterns therein, and provides recom­ what the business problems/tasks are and to formulate
mendations, while humans assume responsibility those into an objective function that an ML algorithm
over decision and action selection. Therefore, we rely can comprehend (see Section 3.2 for AI use cases). The
on augmentation theory. ADR team exchanged domain knowledge (e.g., about
Augmentation triggers a partial shift from human- the co-creation business model and its performance
driven to AI-supported decision making, in which AI metrics) with the data scientists in several collaborative
systems provide recommendations (output of AIADM sessions and meetings (see Table 1) which helped in
systems) for humans to act. This ensures the involve­ formulating evaluation criteria for the effectiveness of
ment of humans without losing the characteristics of AIADM (see Section 4.3). Although at the beginning of
decision making, such as responsibility, accountability, the process, there were misunderstandings (e.g., data
context specificity, and utility expectations (value seek­ scientists lacked experience with the co-creation model
ing). Thus, we formulate the initial design principles and its performance metrics), after several discussions,
(DPs), prescriptive statements to constitute the basis of the team members converged on a common language
the design actions (Chandra Kruse et al., 2016), as con­ and found a way to further collaborate.
text specificity (Miah et al., 2019), utility (Sein et al., The data science team took significant steps in
2011), and responsibility (Mikalef et al., 2022), while describing, exploring, and verifying the quality of the
keeping human involvement (Van den Broek et al., data. This included descriptive statistics, visualisation,
2021) as the primary and fundamental design principle. assessing data quality, and discussing potential use
Following the fourth principle of mutually influen­ cases with the domain experts. We found that data
tial roles (P4), we emphasised learning and cross- understanding and business understanding benefitted
fertilisation between the research team members and from many iterations between the domain experts and
the TBô executives by enabling a combination of aca­ data science team.
demic insights with domain knowledge from industry
and practice (Sein et al., 2011). The lead designer (the 4.2.2. AI modelling
first author) worked full time on developing the First, data was prepared for AI modelling following
AIADM tools with TBô and interacted regularly with standard steps such as removing redundant features,
TBô staff in weekly meetings (see Table 1 and Appendix feature engineering, and treatment of missing values.
D). The co-authors had multiple roles, including facil­ Pre-processing mechanisms such as feature selection
itating the technical development; managing the and reweighting were used to debias training datasets
research partnership; and undertaking the organisa­ before feeding them into learning algorithms
tional and theoretical introspection, synthesis, reflec­ (Kordzadeh & Ghasemaghaei, 2022). We describe
tion, and learning. The CEO and data engineer data preparation and subsequent training and testing
facilitated the ADR procedures by contributing their of predictive ML models in Appendix E. To under­
practical experience (see Figure 2). stand the relationship between purchasing and co-
The artefact development consisted of business and creation behaviour in TBô’s business model, three
data understanding, followed by AI modelling and decision models (DMs) were developed (see Figure 3):
validation.
4.2.2.1. DM1 for customer segmentation and target­
4.2.1. Business and data understanding ing. DM1 aimed at predicting co-creation behaviour
AIADM is the confluence of insight from data (explora­ based on purchasing behaviour. Observing purchasing
tion/induction) and the domain expertise of decision­ behaviour, this ML model guided segmentation of the
makers (Agrawal et al., 2018, Tarafdar et al., 2019). customer base and subsequently targeting promising
Managers’ experience and their understanding of con­ segments with interventions aimed at increasing co-
sumer behaviour and products were necessary for the creation participation.
AIADM system design process. Transferring adequate
domain expertise to data scientists to work on the 4.2.2.2 DM2 for customer retention. DM2 aimed at
problem(s) was crucial. This domain knowledge identifying the difference in purchasing behaviour

Figure 2. ADR team.

Electronic copy available at: https://ssrn.com/abstract=4071519


10 S. HERATH PATHIRANNEHELAGE ET AL.

Figure 3. Architecture of the AIADM system at TBô.

between co-creators and non-co-creators by compar­ constraints were identified during artefact develop­
ing their purchases. This model guided retention of ment. First, TBô faced constraints in talent, capital,
lucrative customer segments. and time while developing AIADM. It struggled to
fulfil its vacancies requiring specialist technical and
4.2.2.3 DM3 for redesigning product portfolio and business domain expertise. The following quote from
services. DM3, via topic modelling, focused on iden­ I1 highlights the CEO’s earnest search for experience
tifying salient product and service issues that custo­ and expertise in AI-related technology:
mers raise as reasons not to place repeat orders. It
guided TBô to redesign their product and service The main bottleneck [in adopting AIADM] is the lack
of engineers and data scientists to develop the tech
portfolio. and algorithms.

4.2.3 AI model validation This statement was corroborated by the job advertise­
Model validation appraised the predictive perfor­ ments posted on the company website, which failed to
mance of the models built. For DM1, a collection of attract suitable applicants for more than nine months
three metrics was used to validate the predictive per­ (see Figure 4).
formance of the models, namely accuracy, mean log- Moreover, IT infrastructure to store and analyse
loss score, and area under the receiver operating char­ data is costly and time consuming to install. The two
acteristic curve (AUC). The best predictive perfor­ co-founders (CEO and COO) of TBô also found it
mance appeared with the random forest model. For difficult to dedicate time and managerial attention to
DM2, we used standard statistical testing. For DM3, implement AIADM while managing their day-to-day
the coherence score was used as the performance business operations.
metric to choose the best hyperparameter combina­ Second, TBô faced several data challenges in the
tion in our grid search. The coherence score is adoption of AIADM. To pursue the AIADM journey,
a measure of semantic similarity between words the firm needed to capture, clean, and store data,
within a topic, and it measures the quality of the because data is the centrepiece of AIADM (Kuguoglu
generated topics. et al., 2021). AIADM initiatives that fail to capture and
We observed that model validation also required store clean and relevant data are destined to be error
collective decisions from managers and data scientists, prone and hence unsuccessful (Kuguoglu et al., 2021).
such as deciding on accuracy metrics, model selection, By “relevant”, we mean that the data can give mean­
and interpreting the topics in topic modelling. By ingful insights to solve the problem(s) under consid­
integrating data science knowledge with business eration. Data management is challenging, especially
expertise, the ADR team was able to significantly the crucial tasks of collecting, cleaning, and storing
improve the predictive performance of the models data. TBô used third-party service providers to run
(see Appendix E) by demonstrating the effectiveness email and social media campaigns to collect co-
of our artefact (POC) (Venable et al., 2012, creation data (see Appendix D). To accumulate order
Nunamaker et al., 2015). data, TBô leveraged a third-party proprietary e-com­
Three key challenges related to a) resource con­ merce platform for online stores and retail point-of-
straints, b) data constraints, and c) technological sale systems. These external service providers offer

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 11

Figure 4. AI-related job advertisements on TBô website.

interaction platforms, methods to collect data in real Another issue is interlinking all the different software
time, data analysis tools, trouble-free integration with seamlessly and having it all in one software or dash­
the firm’s internal systems, and storing this data in board—that is, combining email, SMS, social media,
the website, and other outlets.
(cloud) data storage systems via dedicated application
programming interfaces. The company experienced
initial challenges in curating complementary datasets,
4.3. Artefact deployment and evaluation
such as finding behavioural data for customers, since
order data only contained limited behavioural infor­ The decision recommendations derived from the
mation. TBô thus relied on surveys to collect beha­ developed decision models were deployed. It is impor­
vioural data. Combining these three datasets – order, tant to mention that diverse recommendations were
co-creation, and campaign data – was challenging, as identified. In Table 2, our intention is not to provide
the process demanded a unique customer identifier a comprehensive list of all the recommendations of the
(e.g., email address) across datasets. If customers decision models, but to elucidate the deployment with
used different emails, joining the datasets became a few examples. By doing so, we demonstrate how an
inefficient, creating multiple copies of the same custo­ organisation envisaging transformation to AIADM
mers in a fragmented form. may replicate a similar approach.
Third, systems and technologies, collectively called Best-validated ML models based on suitable per­
the “technology” or “solution stack”, are leveraged for formance metrics such as accuracy, mean log-loss
diverse business tasks in organisations. Even though score, and AUC were deployed (see Section 4.2,
TBô tried to integrate these systems running on het­ model validation). However, it is likely that the mod­
erogeneous technologies through standard interfaces, el’s accuracy might not align with the additional
they often found the interlinking difficult, while also value being generated by AIADM. In artefact evalua­
failing to achieve desired results. The CMD high­ tion, we thus examined the expected gains from our
lighted this issue (I3): AIADM system (P5).

Table 2. Recommendations stemming from decision models.


Decision model 1 Decision model 2 Decision model 3
R1. Leverage the recency effect of sales on 1. Expand community of co-creators From topic modelling, we identified three latent
co-creation Insight: Co-creators account for higher CLV than topics that hamper repeat orders: (T1) no need
Insight: The probability of co-creation drops non-co-creators. Managers should integrate to buy, (T2) high cost, and (T3) dissatisfaction
considerably as the time from the last co-creation initiatives into their business with service.
purchase increases. Hence, segment the models and incentivise co-creation. We present one recommendation for each topic
customers depending on the recency of the identified.
purchase and target the recent customers T1: Expand product portfolio to cater to
with co-creation surveys. different product needs
R2. Leverage repeat customers to optimise T2: Product bundling
co-creation initiatives T3: Improve support services and customer
Insight: Co-creation initiatives can be made more inquiry handling
efficient (cost per response, etc.) if they target
repeat customers.

Electronic copy available at: https://ssrn.com/abstract=4071519


12 S. HERATH PATHIRANNEHELAGE ET AL.

We conducted field experiments to evaluate the methods). We created two groups for each recommen­
model’s actual benefits (POV) and interviewed the dation: a control group and a treatment group. The
responsible stakeholders of the organisation to iden­ treatment of our experiment design was an interven­
tify both desirable and undesirable consequences of its tion in the customer’s purchasing behaviour, that is,
use (POU) at TBô. We adopted the DSR evaluation making a purchase for R1 and making repeat pur­
approach proposed by Venable et al. (2012) and chases for R2. However, we could not force the custo­
Nunamaker et al. (2015) and applied by Tuunanen mers to make (repeat) purchases. To overcome this
and Peffers (2018), Nguyen et al. (2021), and challenge, we categorised existing customers into
Golovianko et al. (2022). treatment and control groups using thresholds for
As discussed above, we created a set of recommen­ days from the last purchase (purchase recency) and
dations from each decision model. We executed the the number of orders (purchase frequency).
two recommendations from DM1 (R1 and R2 in The second challenge was obtaining enough
Table 2), which were then evaluated in the field. engagement in the experiment. This is evident from
Specifically, we created the treatment groups with the the low response rates of our experimental groups (the
new customer segments suggested by our AIADM maximum response rate was 4.4%). When the partici­
system while the control groups were predefined by pants’ engagement was sporadic and sparse, the inter­
TBô. The resulting field experiment returned co- nal and external validity of the experiment’s results
creation survey response rates – the performance suffered. Grouping participants into treatment and
metric reflecting co-creation participation – of 1% control groups also raises ethical and fairness
and 4.4% for treatment groups compared to 0.1% concerns.
and 0.2% for control groups. Interviews I2 (CEO) Third, covariate shifts, that is, changes in data dis­
and I3 (CMD) confirmed that treatment groups’ sur­ tribution, pose risks to AIADM system performance.
vey response rates are significantly superior compared ML algorithms assume stable data-generating pro­
to what was previously observed in the company and cesses, and any changes in the underlying process
the industry in general. In conclusion, the results of could lead to performance deterioration. For instance,
the experiment confirm that the selected recommen­ the COVID-19 pandemic unfolded during the study,
dations derived from DM1 (R1 and R2) are effective in and it was difficult to disentangle the effects of
delivering significant gains in co-creation COVID-19 (e.g., increased remote working, higher
participation. e-commerce, higher savings, etc.) on consumer pur­
After this experiment, the ADR team conducted chasing behaviour and the resulting evaluation of the
meetings with TBô to evaluate the effectiveness of the AIADM system.
topic model (DM3). The CEO confirmed that these
topics are highly relevant, and they find great value in
4.4. Artefact sustenance
such a topic model to extract insights hidden in large
amounts of textual data they gather from various The decision to sustain AIADM at TBô relied on three
channels (I2). Current approaches in practice, includ­ key aspects: (1) the adoption of AIADM delivered
ing manual reading and coding of textual data, were measurable performance gains when juxtaposed with
also discussed. Manual text processing had already conventional decision-making methods, (2) the bene­
identified several issues that had some commonality fits exceeded the recurrent costs of the AIADM sys­
with the topics we found. The CEO expressed the tem, and (3) the general expectation that AIADM
firm’s interest in implementing an AI-based auto­ improves as the models learn. After observing the
mated text analysis tool, especially to analyse customer performance gains of AIADM over traditional deci­
conversations in the newly implemented social space sion-making methods and the benefits that outweigh
on their website. Via successful implementations and the costs of AIADM (demonstrated POC, POV, and
deployments, and practitioners’ intention to extend POU), TBô expressed its eagerness to apply AIADM
the use cases, we demonstrated the POC, POV, and not only to the business cases considered in this study,
POU of our artefact. but also to other future use cases (see Figure 1, long-
We observed several challenges in artefact evalua­ term goals).
tion, related to a) challenges in experiment design, b) We identified three pressing challenges in sus­
consumer/user engagement and fairness, and c) data taining AIADM, related to a) trust and confidence
shifts. First, technical challenges emanated from in AIADM, b) economics of AIADM, and c) man­
implementing experiment design. Experimental set­ agerial over-optimism. First, during I2, I3, and
tings are widely leveraged to validate the effectiveness company meetings, the co-founders and team
of AIADM (Senoner et al., 2022), and they mainly members emphasised the importance of the relia­
compare the change in the performance indicator bility of the decision models. In other words,
between the treatment scenario (AIADM case) against AIADM should be reliable for at least a partial
the control scenario (conventional decision-making delegation of decision making. We observed that

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 13

some results obtained by the decision models (e.g., 5. Prescriptive learning


the total purchase value of a customer negatively
5.1. Reflection and learning
affected co-creation probability) were thoroughly
scrutinised by employees and we observed their We noted two important practices that facilitated the
scepticism in accepting some of the insights the guided emergence (P6) of the AIADM system within
AIADM system presented. The CMD expressed the organisational and business context: pursuing
his concerns about trusting AI (I3): AIADM system deployment in a real-world uncon­
trolled corporate context and leveraging a variety of
Some tools give shallow analysis and insights and data collection procedures to collect a diverse yet rich
require more labour or other software to extract the dataset to reflect on and learn from. This enabled the
insights we need. identification of an eclectic set of challenges that orga­
nisations face from formulation of to sustaining
Second, we increasingly recognised that the AIADM systems. We call these challenges the “unan­
AIADM system could not be sustained without ticipated outcomes” of our IS artefact. Table 3 pro­
demonstrating sufficient economic value. During vides an overview of the challenges we identified for
our study, we identified significant recurring each phase and our design activities addressing these
labour costs (both for employees and outsourced challenges. As we reflected and learned about antici­
work) and costs of maintaining AI infrastructure pated (e.g., strategic data roadmap) and unanticipated
(data storage, computational power, etc.). To this outcomes (identified challenges) demanding ongoing
end, the company had to ensure that the benefits of changes to the preliminary artefact design, we devel­
the AIADM system outweighed the costs in the oped a set of design activities on how the identified
long run. challenges can be addressed in our class of systems.
Third, we found that learning from failures was an Based on our reflections on the design activities in
integral part of the continuous improvement at TBô. Table 3, we refined our initial design principles and
The AIADM initiative should not be viewed as formalised our learning into an expanded set of final
sequential but as cyclical. In essence, AI models are design principles.
not oracles that provide predictions, but instead cross-
learning agents that evolve over time through multiple
interactions. Several iterations can pave the way for 5.2. Design principles of AIADM systems
gradual system improvements over time. Promising
AIADM projects can be scrapped when they are The final stage relates to formalisation of the learning.
audited against utopian managerial expectations. ADR suggests that generalisation of outcomes (P7) has
Hence, it was important to set clear objectives with three levels: problem instance, solution instance, and
proper business understanding and define thresholds derivation of design principles (Sein et al., 2011).
to audit the performance gains of AIADM, keeping in While we deployed an AIADM system within
mind that failures can lead to success in subsequent a single organisation, we aimed to extract insights
iterations. that may extend beyond a single business problem.

Table 3. Overview of challenges and design activities.


Challenges Design activities
Formulation 1. Technological ambiguity 1. Formulate a strategic data roadmap and communicate it to all the project
2. Managing multiple objectives stakeholders.
3. Organisational and cultural inertia 2. Identify measurable AIADM use cases.
4. Competing interests 3. Develop a prototype to prove the business value creation and encourage user
acceptance.
4. Make the AI outcomes understandable (e.g., visualisations—partial dependence
plots [PDPs] and feature importance).
Development 1. Resource constraints 1. Leverage open and free AI resources (code, libraries, etc.).
2. Data constraints—storage, 2. Foster partnerships (e.g., industry-academia, solution providers).
integration, and ethical use 3. Combine multiple datasets using common fields.
3. Technological constraints 4. Adopt regulatory guidelines (e.g., European Commission, 2021, OECD, 2021).
5. Develop AI guiding principles in-house.
6. Audit AIADM to ensure ethical outcomes.
7. Implement interactive user interfaces with customisable parameters for human
inputs.
Deployment and 1. Challenges in experiment design 1. Identify measurable AIADM use cases.
Evaluation 2. Consumer/user engagement and 2. Use flexible quasi-experimental designs.
fairness 3. Adhere to AI regulations and the firm’s code of ethics.
3. Data shifts—dynamic environment 4. Maintain domain expert involvement.
Sustenance 1. Trust and confidence in AIADM 1. Make the AI outcome understandable and hence trustable, e.g., explainability
2. Economics of AIADM libraries.
3. Managerial over-optimism: 2. Involve humans to correct AIADM errors.
expectation vs reality 3. Follow a comprehensive evaluation of the AIADM system (POC, POV, and POU).
4. Accept failures and adopt a continuous improvement mindset.

Electronic copy available at: https://ssrn.com/abstract=4071519


14 S. HERATH PATHIRANNEHELAGE ET AL.

We investigated the general problem of “AIADM sys­ value and usability is key to convincing stakeholders
tem design and deployment in organisations”. The and gaining organisational commitment for additional
final design principles distilled from the design and resources over other competing interests. At TBô, this
deployment of AIADM system at TBô are as follows. ensured that the organisation pursued the value-led
solid AI use cases rather than blindly following AI-
5.2.1. DP1: Design for alignment between the hype-led implementations.
business model and organisational resources and
capabilities
5.2.3. DP3: Design for ethical AI governance
Formulation of an actionable strategic roadmap align­
frameworks
ing with a company’s business model is crucial for AI-
Accompanying the great promises and possibilities of
based decision augmentation in firms. Such
AI is a host of intricate thorny issues related to security
a roadmap should include and explicitly illustrate
and privacy, fairness, deskilling, surveillance, and
(1) measurable and easily interpretable AI use cases;
accountability (Berente et al., 2021, Mikalef et al.,
(2) availability of domain expertise associated with
2022). Organisations are highly susceptible to these
identified business cases; (3) technical feasibility of
perils due to the rudimentary state of the guidelines,
the (proposed) AI tech stack; and (4) clear and con­
inadequate expertise in these guidelines, low institu­
crete goals, sub-goals, timeline, and likely challenges
tional support, and the dire need to scale up rapidly
in implementing AIADM. This principle serves three
(Bessen et al., 2022; Singh et al., 1986). These perils are
purposes. First, it makes the AIADM system specific
circumvented by an AI governance framework—a set
to the decision-making and business context, thus
of normative declarations on how AI is developed,
increasing practitioner relevance and acceptance
deployed, and governed, adhering to legal, ethical,
(Miah et al., 2019). Second, it coordinates the project
social, and organisational values. Through our ADR
communication in line with corporate vision and mis­
study, we offer three pathways to a responsible AI
sion for organisational support. Third, the strategic
design: (1) Adopt extant regulatory guidelines (e.g.,
roadmap helps steer the project in overcoming tech­
European Commission, 2019, 2021, OECD, 2021);
nological ambiguity and managing diverse use cases
(2) develop own AI guiding principles consistent
with multiple objectives. The TBô data roadmap
with customer and user expectations (Bessen et al.,
included all these aspects and thus demonstrates this
2022; Google, 2022); and (3) establish an AI auditing
design principle. TBô’s leadership championed the
and governance framework (Grønsund & Aanestad,
proposed project with an actionable strategic road­
2020). AI auditing should evaluate not only potential
map, remained accountable for the AI project out­
business value but also potential business risks.
comes, and proactively led the process of gathering
A responsible design should be transparent in its
employee support.
operation and should not compromise ethical values
for business value. This principle helps gain customer
5.2.2. DP2: Design for synergy in input, model, and
and user trust, fostering fairness and engagement.
output to ensure business value
Once the AIADM use cases were defined, we employed
an iterative design approach to synergise input ele­ 5.2.4. DP4: Design for human involvement and
ments (data, domain knowledge), the AI model (ML engagement
and natural language processing), and output (predic­ Several crucial advantages arise from human involve­
tions, visualisations) of the system to derive meaningful ment in AIADM systems. First, human domain exper­
insights for the use cases. To overcome data constraints, tise is an essential input for AIADM systems in
we merged multiple datasets. Leveraging these compre­ organisational contexts. Humans possessing tacit
hensive datasets alongside existing domain expertise, knowledge about decision contexts can comprehend
we evaluated several AI models to identify the best intangible information that may elude AIADM sys­
performing models on the chosen ML performance tems. The integration of this tacit knowledge into the
metric (see Appendix E). As an integral part of the AIADM system, whenever possible, improves the sys­
output, we included visualisations (PDPs, feature tem performance. Second, AI algorithms are prone to
importance) and explanations (SHapley Additive errors and might yield unintended results (Shrestha
exPlanations [SHAP], Local Interpretable Model- et al., 2021, Xin et al., 2018). Such errors may lead to
agnostic Explanations [LIME]). Finally, we followed detrimental consequences and incur many types of
a comprehensive three-pronged evaluation of the risks for organisations. For instance, exogenous
AIADM system to demonstrate POC, POV, and POU. shocks such as pandemics and climate disasters
This principle serves two purposes. First, establishing could result in drastic changes in the quality of data
synergy in the key components of AIADM ensures for making predictions. Humans can identify and rec­
accurate and comprehensible recommendations and tify such errors, contributing to user acceptance and
prevents massive failures. Second, demonstrating trust in the systems.

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 15

To attain these benefits, we facilitated human invol­ embrace failures and adopt a continuous improve­
vement in two ways. First, by closely involving the ment mindset to overcome various challenges and
domain experts in every phase of design, development uncertainties that may arise in different stages of AI
and evaluation/auditing, we could integrate tacit development and deployment. E.g., the prototype
knowledge components into our AIADM system, pre­ models of AI might not be highly effective and accu­
vent unintended outcomes, and preserve responsibil­ rate in their predictions due to lack of training data.
ity and accountability. Human decisions also act as The development of AI is a staged process, and as data
benchmarks for AI decisions in evaluation, as we is accumulated over time, AI models and correspond­
showed. We devised interactive user interfaces with ing use cases need to be adapted. An AI model’s
customisable parameters, enabling domain experts to effectiveness increases when various users engage
seamlessly integrate tacit domain knowledge into the with it and the system improves over time. If use is
system during operation. Moreover, by grounding our restricted, opportunities to update become limited.
work in decision augmentation over automation, we Our AIADM system design and deployment was char­
leave the responsibility and accountability of action acterised by many iterations and adaptations. Hence,
selection with humans and illustrate the significance we learned that AIADM system development should
of having the human in the loop (Feuerriegel et al., be guided by adaptive and iterative enhancements to
2022, Grønsund & Aanestad, 2020). Second, to benefit minimise the risk of failure and accumulate the learn­
from aggregation and interaction, AI systems should ing effects over time.
be designed to connect different users (customers and Equally important is keeping the design and expec­
employees) who interact with them. Outputs from the tations around AIADM realistic, as AI is not
AI systems (dashboards, reports, plots, user interfaces, a technological panacea for all business ills (Berente
etc.) should facilitate engagement and interpretability/ et al., 2021). AI implementation is filled with various
explainability. In our AIADM system, we demon­ trade-offs in different stages, as we demonstrated in
strated the explainability of AI outcomes using two this study. Significant trade-offs derive from high costs
concepts of explainable AI: feature importance and for recruiting talent and amassing resources and
feature attribution (SHAP, LIME). We further changes in organisational structures and decision-
enhanced explainability to wider audiences by using making processes. This often induces significant risks
visualisations such as PDPs, variable importance, and (Mikalef et al., 2022). It is thus crucial to curb manage­
topic modelling visualisations using pyLDAvis rial over-expectations and subsequent disappoint­
(Mabey, 2018). In the absence of explainability, gain­ ments. Managers should view AI neither as a magic
ing trust and confidence in AI is particularly challen­ bullet nor a quick fix.
ging (Burkart & Huber, 2021). User feedback should
always be used in updating models. A human-centred 5.2.6. DP6: Design for open knowledge and
design fosters stakeholder and user trust and aegis resource utilisation
(Bauer et al., 2023), overcoming algorithmic aversion Given the massive costs of full internal development
(Dietvorst et al., 2018). Such a design principle is (Tarafdar et al., 2019) and the necessity of
particularly useful for customer-centric business mod­ a multidisciplinary approach (Lyytinen et al., 2021,
els such as that of TBô. von Krogh, 2018), AI projects should follow an open
While embracing the benefits and possibilities of and collaborative design. By “open”, we mean utilisa­
integrating AI into decision making, organisations tion of community-developed open-source code, AI/
should also recognise that lack of human involvement ML libraries, platforms, datasets, tools, etc.
and over-reliance on AI could lead to decisionmakers Developing modern AI models (e.g., large language
losing their domain knowledge and autonomy and models) requires huge initial investments and many
deskilling of the workforce (Xue et al., 2022). One AI experts, which is beyond reach of most organisa­
way of mitigating that is introducing decision- tions. The core reason for adopting open resources –
making designs in which humans are involved (e.g., datasets, source codes, and models – to build corpo­
Choudhary et al., 2023, Te’eni et al., 2023). In such rate AI is the benefit of attracting external knowledge
designs, employees enhance their proficiency in work­ to supplement the internal knowledge while alleviat­
ing efficiently with the system, preserving their skills ing exorbitant development costs (Shrestha et al.,
and knowledge. 2023, von Krogh & Haefliger, 2010). In our ADR
study, technology reuse alleviated talent and resource
5.2.5. DP5: Design for continuous learning and constraints.
adaptation Furthermore, AI deployment in organisations is
Our ADR study revealed that, given the emerging a complex process that requires expertise in multiple
characteristics of technology that AI represents, its disciplines. As we observed, both data science com­
design and deployment cannot be fixed from the out­ petence and business domain competence are
set (Bailey et al., 2022). The organisation should needed to address these challenges. The data

Electronic copy available at: https://ssrn.com/abstract=4071519


16 S. HERATH PATHIRANNEHELAGE ET AL.

scientists bring extensive knowledge in areas such as 6. Discussion


natural language processing, ML algorithms, statisti­
6.1. Implications for research
cal inference, data analysis, and knowledge represen­
tation and reasoning. Business domain experts bring We provide a twofold contribution to the IS design
deep hands-on knowledge about the tasks, work­ literature. First, we investigate the organisational chal­
flows, and business models, and they reckon the lenges facing AIADM implementation by clearly doc­
logic of deriving business value from AI deployment. umenting our ADR approach and illustrating
AI technologies are evolving rapidly, and it is logical potential trade-offs and challenges that managers
to set up industry-academia collaborations and might face in AIADM system design and deployment.
external expert partnerships and to engage in open Second, based on the challenges we identified, we
innovation initiatives to be at the forefront of the AI propose a set of six design principles to guide organi­
frontier (Berente et al., 2021). TBô successfully led sations in designing and deploying AIADM systems.
the industry-academia collaboration by proactively Unlike the traditional view of DSS design as a passive
engaging with the researchers and subsequently con­ tool with static decision rules, lacking the ability to
ducting an ADR study within their firm. This strat­ learn, adapt, initiate decision-making processes and
egy will help uphold high standards for both accept decision rights and responsibilities for achiev­
operational and scientific excellence. ing optimal outcomes under uncertainty, our design
Table 4 stipulates design goals, as well as the principles take into consideration the stochastic, adap­
mechanisms to achieve these goals for each design tive, and agentic nature of AI systems (Baird &
principle (Gregor et al., 2020). Maruping, 2021). This approach, in turn, captures

Table 4. Design principles, design goals, and mechanisms to achieve them.


Design principle Design goals/sub-goals Mechanisms to achieve goals
DP1: Design for alignment between the 1. Align the AIADM system with the Formulate an actionable strategic roadmap illustrating:
business model and organisational business model 1a. measurable and interpretable AIADM use cases
resources and capabilities 2. Align the AIADM system with the 1b. domain expertise needed for the identified business cases
organisational resources and 2a. technical feasibility of the (proposed) AI tech stack
capabilities 2b. clear and concrete goals, sub-goals, timeline, and
challenges in implementing AIADM
DP2: Design for synergy in input, model, 1. Overcome data constraints To demonstrate this principle, a firm should:
and output to ensure business value 2. Make the AI outcomes accurate and 1. Find relevant data for business problems or combine
understandable multiple datasets using common fields
3. Demonstrate value for business and 2a. Pick the right ML performance metric to optimise
stakeholders to help navigate through 2b. Try different models and prioritise according to the chosen
competing interests performance metric
2c. Make visualisations, e.g., PDPs and feature importance
3. Develop the prototype system and follow a comprehensive
evaluation (POC, POV, and POU)
DP3: Design for ethical AI governance 1. Ensure ethical management and use of 1a. Adopt extant regulatory guidelines (e.g., European
frameworks data Commission, 2019, 2021; OECD, 2021)
2. Establish trust and engagement with 1b. Establish an AI auditing and governance framework
customers and users (Grønsund & Aanestad, 2020)
2. Develop own AI guiding principles consistent with customer
and user expectations (Bessen, Impink, & Seamans, 2022);
Google, 2022)
DP4: Design for human involvement and 1. Integrate domain knowledge into 1. Keep the domain experts in the loop to actively integrate
engagement system design, operation, and tacit domain knowledge into system design and, when in
evaluation operation, identify instances in which AI errs. e.g.,
2. Enhance employees’ proficiency in implement interactive user interfaces with customisable
working efficiently with the system parameters
2. Facilitate user engagement and interpretability/
explainability of AI outcomes through elements of
explainable AI and visualisations, e.g., explainability libraries,
SHAP values, or LIME
DP5: Design for continuous learning and 1. Continuous improvement over new 1. Monitor model decay (e.g., through ML performance
adaptation advancements in AI technology metrics)
(algorithms, hardware, data, etc.) 2. Embrace an iterative process to overcome various challenges
2. Adaptability to changes in environment and uncertainties that may arise in different stages of AI
and decision-making contexts design and deployment. As data is accumulated over time,
AI models and corresponding use cases need to be updated
and improved iteratively
DP6: Design for open knowledge and 1. Alleviate knowledge and resource 1. Utilise community-developed open-source code (Github,
resource utilisation constraints Stackoverflow) for collaborative problem solving; AI/ML
2. Alleviate talent and domain expertise libraries (Scikit-learn, PyTorch, Keras, Carat, Rpart, etc.) for
constraints pre-built functionalities; and platforms (TensorFlow,
OpenML, MLflow), datasets (Kaggle, Datahub.io), and tools
(ClearML, CNTK)
2. Foster partnerships (e.g., industry-academia, solution
providers, outsourcing/offshoring)

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 17

contextual idiosyncrasies and practice-based domain humans in decision processes. For the humans in the
knowledge, ensuring practitioner relevance and accep­ loop to function effectively and efficiently, the results
tance (Miah et al., 2019). of AI systems should be sufficiently interpretable and
Our design principles speak to several important explainable. Our study showcased how AI outcomes
aspects of the AIADM phenomenon. DP1 addresses can be explained in practice. Thereby, we mitigate
the management of AIADM by listing the essential problems associated with the black-box nature of
elements of an AI adoption strategy. Managing AI is many contemporary AI systems while garnering
different to typical IT management due to AI’s agentic wider user and stakeholder acceptance (Bauer et al.,
nature, superior learning capabilities, and incompre­ 2023).
hensibility compared to that of IT artefacts in the past DP5 extends the understanding of data-driven
(Baird & Maruping, 2021, Berente et al., 2021). value propositions (Günther et al., 2022, Wiener
Pursuing the unprecedented opportunities realised et al., 2020). Recent work in this domain attests that
through AI, managers of AI initiatives also grapple “the process of creating data-driven value propositions
with myriad new challenges. Therefore, in strategic is emergent, consisting of iterative resourcing cycles”
alignment, we considered five elements to guide man­ (Günther et al., 2022, p. 1). Realising value from data
agers in their AI adoption strategy formulation: use relies on reconstruction and repurposing of both data
cases, domain expertise, goals, feasibility, and likely and algorithms, as it is an interconnected process of
challenges. trial and error (e.g., Chapman et al., 2000). Moreover,
DP2 recommends synergy in the input elements, AI unlike conventional IS, AI systems improve over time
modelling, and output elements as the pathway to and use as they learn from the accumulated deeper
ascertaining business value. This delves into the pools of data. Therefore, the extant literature concurs
polarised academic debate on whether AI fosters about the need for a flexible and iterative design pro­
intended performance improvements. Scholars cess for data projects including AI from a purely tech­
express scepticism that much of the traction on nical standpoint (Abbasi et al., 2016). Our study
AIADM is merely hype and that it may fail to produce broadens the scope of examination by including orga­
measurable performance gains (Aaen et al., 2022, nisational processes surrounding the technical AI
Ermakova et al., 2021, Rana et al., 2022). Attempts to development process. Our study uncovers the organi­
integrate AI into the decision-making process and sational challenges encountered in the process and
value chains often fail (Joshi et al., 2021, Ransbotham observes how these challenges create iterative back-
et al., 2019). Therefore, we emphasise that organisa­ and-forth workflows.
tions should make the system components congruent DP6 endorses open innovation (Chesbrough, 2003)
and rigorously scrutinise their AI systems for value in AI development (Shrestha et al., 2023). During AI
creation and usability. development, organisations can leverage free and open
DP3 underscores the notion of responsible AI – resources – data, code, models, and developer com­
principles that involve ethical, fair, secure, and munities – to overcome resource constraints.
accountable design and deployment of AI Subsequently, firms can decide to open their innova­
(Golovianko et al., 2022, Mikalef et al., 2022). tions to encourage community-driven improvements
Regardless of the elementary AI guidelines put forth to the system at minimal marginal costs.
by regulators and the continually evolving AI technol­
ogy stack, we emphasised three requisites: the adop­
6.2. Implications for practice
tion of extant regulations, developing AI policies in-
house to cater to customer and user needs, and estab­ Data generation, access, and collection is a hallmark of
lishing an AI auditing and governance framework. contemporary organisations. With the rapid scaling of
DP4 endorses two closely related concepts of data, AI becomes indispensable for value creation and
human-AI ensembles: human-in-the-loop frame­ value capture in firms (Iansiti & Lakhani, 2020).
works (Grønsund & Aanestad, 2020, Xin et al., 2018) Within organisations, the rapid scaling of data can
and explainable AI (Bauer et al., 2023, Senoner et al., become an obstacle unless new systems can be
2022). Our study demonstrates how human-in-the- designed and deployed to aid managers in making
loop frameworks unfold in practice to successfully timely and effective decisions (Agrawal et al., 2018).
integrate tacit domain knowledge into AI system Yet, the value from data can only be captured effec­
design and audit AI outcomes to prevent error propa­ tively when the quality of the data is matched with
gation. By opting for decision augmentation over well-designed and deployed AI systems (Bessen et al.,
automation, we held the human accountable and 2022). Our study demonstrates that the design and
responsible for action selection while the protocol deployment of an AIADM system needs to consider
development was vested in AI (Murray et al., 2020). technical, organisational, human, and social factors
By doing so, we successfully combined the benefits of equally. Our design principles are intended to offer
an efficient AI system with the unique abilities of guidance to managers in developing and adopting

Electronic copy available at: https://ssrn.com/abstract=4071519


18 S. HERATH PATHIRANNEHELAGE ET AL.

Figure 5. Design principle reusability evaluation.

powerful AIADM systems in their organisations while finance, pharmaceuticals, healthcare, fast-moving
remaining aware of this wide range of factors. consumer goods, and manufacturing and business
A key insight from our research is that although AI functions like hiring, marketing, distribution, and
algorithms are designed to automate or augment man­ quality assurance are likely to bring interesting AI
agerial decision making, the process of designing and use cases that are different from the case we studied.
deploying AI is itself filled with trade-offs and chal­ Therefore, for the sake of the generalisability of the
lenges that require critical managerial judgements. insights derived, future research needs to be con­
Our study shows some of the challenges and trade-offs ducted in a larger set of organisations with a cross-
managers may encounter when pushing AI within their comparison of identified mechanisms, challenges,
organisations. We found that the organisation requires and design principles. Second, the timeline of this
best practices (e.g., a strategic roadmap for AI adop­ research work coincided with the COVID-19 out­
tion), which we outline as our six design principles to break in Switzerland; as a result, most of the inter­
mitigate technical, social, and organisational barriers. views and meetings took place online via Zoom or
We adopted the Design Principle Reusability offline with protective measures in place. This set-up
Evaluation Framework by Iivari et al. (2021) to assess might have reduced capture of some social cues that
the transferability of our design principles to other are usually available in face-to-face interviews and
contexts (external validity). This encompasses five observations. Based on our work, we see promising
key criteria: (1) accessibility, (2) importance, (3) opportunities for design science research to generate
novelty and insightfulness, (4) actability and guidance, prescriptive knowledge pertinent to AI use in
and (5) effectiveness. The evaluation involved engage­ practice.
ment with 14 managers and developers of AIADM
systems – the target audience of the design principles –
with extensive experience in IT and digital transfor­ 7. Conclusion
mation. Substantial evidence (see Figure 5 and
Appendix F) indicates that our design principles are This study advances the argument that AIADM
helpful in practical applications. systems represent a novel class of information sys­
tems that exhibit unique socio-organisational
dynamics and technological complexity when juxta­
posed with conventional DSS. AIADM systems pre­
6.3. Limitations and future research
sent unprecedented opportunities and challenges for
Our paper also has some limitations which provide contemporary organisations. Accordingly, striking
opportunities for future research. First, we derive a balance between the potential gains and risks
our design principles from a single case sample, associated with AIADM systems requires identify­
characterised by a relatively small company that ing specific design principles to guide practice. To
predominantly works within an e-commerce set-up. this end, we offer actionable design principles fol­
As a result, some challenges of AI deployment that lowing a comprehensive design science-based inves­
we identified may be specific to this company. tigation of the design and deployment of AIADM
Organisations in other data-heavy industries like systems.

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 19

Acknowledgements Arnott, D., & Pervan, G. (2014). A critical analysis of deci­


sion support systems research revisited: The rise of design
We thank Roy Bernheim, Allan Perrottet, Matthew Soroka, science. Journal of Information Technology, 29(4),
and Nina Geilinger for their invaluable support in project 269–293. https://doi.org/10.1057/jit.2014.16
coordination. We acknowledge the exceptional assistance of Bailey, D., Faraj, S., Hinds, P., von Krogh, G., & Leonardi, P.
Noah Hampp, Esther Le Mair, Parinitha Mundra, and (2022). Special issue of organization Science: Emerging
Dmitry Plekhanov in preparing the manuscript. technologies and organizing. Organization Science, 30(3),
642–646. https://doi.org/10.1287/orsc.2019.1299
Baird, A., & Maruping, L. M. (2021). The next generation of
Disclosure statement research on is use: A theoretical framework of delegation
to and from agentic is artifacts. MIS Quarterly, 45(1),
No potential conflict of interest was reported by the author(s). 315–341. https://doi.org/10.25300/MISQ/2021/15882
Barkhi, R. (2002). The effects of decision guidance and
problem modeling on group decision-making. Journal
of Management Information Systems, 18(3), 259–282.
Funding Bauer, K., von Zahn, M., & Hinz, O. (2023). Expl (AI) ned:
The impact of explainable artificial intelligence on users’
This work was supported by the Swiss National Science
information processing. Information Systems Research, 34
Foundation under Grant [197763].
(4), 1582–1602. https://doi.org/10.1287/isre.2023.1199
Berente, N., Gu, B., Recker, J., & Santhanam, R. (2021).
References Managing artificial intelligence. MIS Quarterly, 45(3),
1433–1450.
Aaen, J., Nielsen, J. A., & Carugati, A. (2022). The dark side Bessen, J., Impink, S. M., Reichensperger, L., & Seamans, R.
of data ecosystems: A longitudinal study of the DAMD (2022). The role of data for AI startup growth. Research
project. European Journal of Information Systems, 31(3), Policy, 51(5), 104513. https://doi.org/10.1016/j.respol.
288–312. https://doi.org/10.1080/0960085X.2021. 2022.104513
1947753 Bessen, J., Impink, S. M., & Seamans, R. (2022). The cost of
Abbasi, A., Sarker, S., & Chiang, R. H. (2016). Big data ethical AI development for AI startups. In Proceedings of
research in information systems: Toward an inclusive the 2022 AAAI/ACM Conference on AI, Ethics, and
research agenda. Journal of the Association for Society, Oxford, United Kingdom (pp. 92–106).
Information Systems, 17(2), 3. https://doi.org/10.17705/ Bouschery, S. G., Blazevic, V., & Piller, F. T. (2023).
1jais.00423 Augmenting human innovation teams with artificial
Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & intelligence: Exploring transformer‐based language mod­
Kankanhalli, M. (2018). Trends and trajectories for els. Journal of Product Innovation Management, 40(2),
explainable, accountable and intelligible systems: An 139–153. https://doi.org/10.1111/jpim.12656
HCI research agenda. In Proceedings of the 2018 CHI Brynjolfsson, E., & McElheran, K. (2016). The rapid adop­
conference on human factors in computing systems, tion of data-driven decision-making. American Economic
Montreal, Canada (pp. 1–18). Review, 106(5), 133–39. https://doi.org/10.1257/aer.
Abouzahra, M., Guenter, D., & Tan, J. (2022). Exploring p20161016
physicians’ continuous use of clinical decision support Burkart, N., & Huber, M. F. (2021). A survey on the explain­
systems. European Journal of Information Systems, 33 ability of supervised machine learning. Journal of
(2), 1–22. https://doi.org/10.1080/0960085X.2022. Artificial Intelligence Research, 70, 245–317. https://doi.
2119172 org/10.1613/jair.1.12228
Ågerfalk, P. J. (2020). Artificial intelligence as digital agency. Chandra Kruse, L., Seidel, S., & Purao, S. (2016). Making use
European Journal of Information Systems, 29(1), 1–8. of design principles. In Tackling Society’s Grand
https://doi.org/10.1080/0960085X.2020.1721947 Challenges with Design Science: 11th International
Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Conference, DESRIST 2016. St John’s, NL, Canada, May
machines: The simple economics of artificial intelligence. 23-25, 2016, 11 (pp. 37–51). Springer International
Harvard Business Press. Publishing.
Altendeitering, M., Fraunhofer, I. S. S. T., & Chandra, L., Seidel, S., & Gregor, S. (2015). Prescriptive
Guggenberger, T. (2021). Designing data quality tools: knowledge in IS research: Conceptualizing design princi­
Findings from an action design research project at ples in terms of materiality, action, and boundary
Boehringer Ingelheim. In Proceedings of the 29th conditions. In 2015 48th Hawaii International
European Conference on Information Systems, Conference on System Sciences, Hawaii, USA (pp.
Marrakech, Morocco (pp. 17). 4039–4048). IEEE.
Arnott, D. (2006). Cognitive biases and decision support Chapman, P., Clinton, J., Kerber, R., Khabaza, T.,
systems development: A design science approach. Reinartz, T., Shearer, C., & Wirth, R. (2000). CRISP-
Information Systems Journal, 16(1), 55–78. https://doi. DM 1.0: Step-by-step data mining guide. SPSS Inc, 9
org/10.1111/j.1365-2575.2006.00208.x (13), 1–77. https://mineracaodedados.files.wordpress.
Arnott, D., & Pervan, G. (2005). A critical analysis of deci­ com/2012/12/crisp-dm-1-0.pdf
sion support systems research. Journal of Information Chen, C. W., & Koufaris, M. (2015). The impact of decision
Technology, 20(2), 67–87. https://doi.org/10.1057/pal support system features on user overconfidence and risky
grave.jit.2000035 behavior. European Journal of Information Systems, 24(6),
Arnott, D., & Pervan, G. (2008). Eight key issues for the 607–623. https://doi.org/10.1057/ejis.2014.30
decision support systems discipline. Decision Support Chesbrough, H. W. (2003). Open innovation: The new
Systems, 44(3), 657–672. https://doi.org/10.1016/j.dss. imperative for creating and profiting from technology.
2007.09.003 Harvard Business Press.

Electronic copy available at: https://ssrn.com/abstract=4071519


20 S. HERATH PATHIRANNEHELAGE ET AL.

Choudhary, V., Marchetti, A., Shrestha, Y. R., & Gorry, G. A., & Morton, M. S. (1971). A framework for
Puranam, P. (2023). Human-AI ensembles: When can management information systems. Sloan Management
they work? Journal of Management. https://doi.org/10. Review, 13(1), 1–22.
1177/01492063231194968 Gregor, S., & Hevner, A. (2013). Positioning and presenting
Collins, J., Ketter, W., & Gini, M. (2010). Flexible decision design science research for maximum impact. MIS
support in dynamic inter-organisational networks. Quarterly, 37(2), 337–355. https://doi.org/10.25300/
European Journal of Information Systems, 19(4), MISQ/2013/37.2.01
436–448. https://doi.org/10.1057/ejis.2010.24 Gregor, S., Kruse, L. C., & Seidel, S. (2020). Research per­
Cyert, R. M., & March, J. G. (1963). A behavioral theory of spectives: The anatomy of a design principle. Journal of
the firm. Englewood Cliffs, New Jersey: Prentice-Hall. the Association for Information Systems, 21(6),
Dasborough, M. T. (2023). Awe‐inspiring advancements in 1622–1652. https://doi.org/10.17705/1jais.00649
AI: The impact of ChatGPT on the field of organizational Grønsund, T., & Aanestad, M. (2020). Augmenting the
behavior. Journal of Organizational Behavior, 44(2), algorithm: Emerging human-in-the-loop work
177–179. https://doi.org/10.1002/job.2695 configurations. The Journal of Strategic Information
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Systems, 29(2), 101614. https://doi.org/10.1016/j.jsis.
Overcoming algorithm aversion: People will use imper­ 2020.101614
fect algorithms if they can (even slightly) modify them. Grove, W. M., & Meehl, P. E. (1996). Comparative efficiency
Management Science, 64(3), 1155–1170. https://doi.org/ of informal (subjective, impressionistic) and formal
10.1287/mnsc.2016.2643 (mechanical, algorithmic) prediction procedures: The
Ebel, P., Bretschneider, U., & Leimeister, J. M. (2016). clinical-statistical controversy. Psychology, Public Policy,
Leveraging virtual business model innovation: and Law, 2(2), 293. https://doi.org/10.1037/1076-8971.2.
A framework for designing business model development 2.293
tools. Information Systems Journal, 26(5), 519–550. Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., &
https://doi.org/10.1111/isj.12103 Nelson, C. (2000). Clinical versus mechanical prediction:
Ermakova, T., Blume, J., Fabian, B., Fomenko, E., Berlin, M., A meta-analysis. Psychological Assessment, 12(1), 19.
& Hauswirth, M. (2021). Beyond the hype: Why do https://doi.org/10.1037/1040-3590.12.1.19
data-driven projects fail? In Proceedings of the 54th Günther, W. A., Mehrizi, M. H. R., Huysman, M., Deken, F.,
Hawaii International Conference on System Sciences, & Feldberg, F. (2022). Resourcing with data: Unpacking
the process of creating data-driven value propositions.
Hawaii, USA (pp. 5081).
The Journal of Strategic Information Systems, 31(4),
European Commission. (2019). Ethics guidelines for trust­
101744. https://doi.org/10.1016/j.jsis.2022.101744
worthy AI. https://digital-strategy.ec.europa.eu/en/
Hevner, A., March, S., Park, J., & Ram, S. (2004). Design
library/ethics-guidelines-trustworthy-ai
science in Information Systems research. MIS Quarterly,
European Commission. (2021). Proposal for a regulation of
28(1), 75–105. https://doi.org/10.2307/25148625
the European Parliament and of the council: Laying down
Hevner, A., & Storey, V. (2023). Research challenges for the
harmonised rules on artificial intelligence (artificial intel­
design of human-artificial intelligence systems (HAIS).
ligence Act) and amending certain union legislative acts.
ACM Transactions on Management Information Systems,
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/
14(1), 1–18. https://doi.org/10.1145/3549547
?uri=CELEX:52021PC0206&from=EN
Huber, G. P. (1990). A theory of the effects of advanced
Fang, X., Gao, Y., & Hu, P. J. (2021). A prescriptive analytics information technologies on organizational design, intel­
method for cost reduction in clinical decision making. ligence, and decision making. The Academy of
MIS Quarterly, 45(1), 83–115. https://doi.org/10.25300/ Management Review, 15(1), 47–71. https://doi.org/10.
MISQ/2021/14372 2307/258105
Feuerriegel, S., Shrestha, Y. R., von Krogh, G., & Zhang, C. Iansiti, M., & Lakhani, K. R. (2020). Competing in the age of
(2022). Bringing artificial intelligence to business AI: Strategy and leadership when algorithms and networks
management. Nature Machine Intelligence, 4(7), run the world. Harvard Business Press.
611–613. https://doi.org/10.1038/s42256-022-00512-5 Iivari, J. (2015). Distinguishing and contrasting two strate­
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2022). gies for design science research. European Journal of
Cognitive challenges in human-artificial intelligence col­ Information Systems, 24(1), 107–115. https://doi.org/10.
laboration: Investigating the path toward productive 1057/ejis.2013.35
delegation. Information Systems Research, 33(2), Iivari, J., Rotvit Perlt Hansen, M., & Haj-Bolouri, A. (2021).
678–696. https://doi.org/10.1287/isre.2021.1079 A proposal for minimum reusability evaluation of design
Galbraith, J. R. (1974). Organization design: An information principles. European Journal of Information Systems, 30
processing view. Interfaces, 4(3), 28–36. https://doi.org/ (3), 286–303. https://doi.org/10.1080/0960085X.2020.
10.1287/inte.4.3.28 1793697
Giermindl, L. M., Strich, F., Christ, O., Leicht-Deobald, U., & Jain, H., Padmanabhan, B., Pavlou, P. A., & Raghu, T. S.
Redzepi, A. (2022). The dark sides of people analytics: (2021). Editorial for the special section on humans, algo­
Reviewing the perils for organisations and employees. rithms, and augmented intelligence: The future of work,
European Journal of Information Systems, 31(3), 410–435. organizations, and society. Information Systems Research,
https://doi.org/10.1080/0960085X.2021.1927213 32(3), 675–687. https://doi.org/10.1287/isre.2021.1046
Golovianko, M., Gryshko, S., Terziyan, V., & Tuunanen, T. Joseph, J., & Gaba, V. (2020). Organizational structure,
(2022). Responsible cognitive digital clones as information processing, and decision-making:
decision-makers: A design science research study. A retrospective and road map for research. Academy of
European Journal of Information Systems, 32(5), 1–23. Management Annals, 14(1), 267–302. https://doi.org/10.
https://doi.org/10.1080/0960085X.2022.2073278 5465/annals.2017.0103
Google. (2022) Responsible AI Practices. https://ai.google/ Joshi, M. P., Su, N., Austin, R. D., & Sundaram, A. K. (2021).
responsibilities/responsible-ai-practices/ Why so many data science projects fail to deliver. MIT

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 21

Sloan Management Review, 62(3), 85–89. Communications of the Association for Information
Kamis, A., Koufaris, M., & Stern, T. (2008). Using an Systems, 49(1), 12. https://doi.org/10.17705/1CAIS.04914
attribute-based decision support system for Mandviwalla, M. (2015). Generating and justifying design
user-customized products online: An experimental theory. Journal of the Association for Information Systems,
investigation. MIS Quarterly, 32(1), 159–177. https:// 16(5), 3. https://doi.org/10.17705/1jais.00397
doi.org/10.2307/25148832 March, J. G. (1994). Primer on decision making: How deci­
Keding, C., & Meissner, P. (2021). Managerial overreliance sions happen. Simon and Schuster.
on AI-augmented decision-making processes: How the March, J. G., & Olsen, J. P. (1989). Rediscovering institutions.
use of AI-based advisory systems shapes choice behavior Free Press.
in R&D investment decisions. Technological Forecasting March, J. G., & Simon, H. A. (1993). Organizations. John
and Social Change, 171, 120970. https://doi.org/10.1016/j. Wiley & Sons.
techfore.2021.120970 Miah, S. J., Gammack, J. G., & McKay, J. (2019).
Keen, P. G. (1980). Decision support systems: A research A metadesign theory for tailorable decision support.
perspective. Decision support systems: Issues and chal­ Journal of the Association for Information Systems, 20
lenges, International Institute for Applied systems Analysis (5), 4. https://doi.org/10.17705/1jais.00544
(IIASA). Proceedings Series, 11, 23–27. https://books.google. Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A.
ch/books?hl=en&lr=&id=LF0hBQAAQBAJ&oi=fnd&pg= (2022). Thinking responsibly about responsible AI and
PA23&ots=n3GZiEbnSB&sig=9-r9bblS- ‘the dark side’ of AI. European Journal of Information
oMNS2UXUeAO9qCRqMA&redir_esc=y#v=onepa Systems, 31(3), 257–268. https://doi.org/10.1080/
ge&q&f=false 0960085X.2022.2026621
Klör, B., Monhof, M., Beverungen, D., Bräuer, S., Molloy, S., & Schwenk, C. R. (1995). The effects of informa­
Niehaves, B., Tuunanen, T., & Peffers, K. (2018). Design tion technology on strategic decision making. Journal of
and evaluation of a model-driven decision support sys­ Management Studies, 32(3), 283–311. https://doi.org/10.
tem for repurposing electric vehicle batteries. European 1111/j.1467-6486.1995.tb00777.x
Journal of Information Systems, 27(2), 171–188. https:// Murray, A., Rhymer, J., & Sirmon, D. G. (2020). Humans
doi.org/10.1057/s41303-017-0044-3 and technology: Forms of conjoined agency in
Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic organizations. Academy of Management Review, 46(3),
bias: Review, synthesis, and future research directions. 552–571. https://doi.org/10.5465/amr.2019.0186
European Journal of Information Systems, 31(3), Nguyen, A., Tuunanen, T., Gardner, L., & Sheridan, D.
388–409. https://doi.org/10.1080/0960085X.2021. (2021). Design principles for learning analytics informa­
1927212 tion systems in higher education. European Journal of
Kuguoglu, B. K., van der Voort, H., & Janssen, M. (2021). Information Systems, 30(5), 541–568. https://doi.org/10.
The giant leap for smart cities: Scaling up smart city 1080/0960085X.2020.1816144
artificial intelligence of things (AIoT) initiatives. Nunamaker, J. F., Jr., Briggs, R. O., Derrick, D. C., &
Sustainability, 13(21), 12295. https://doi.org/10.3390/ Schwabe, G. (2015). The last research mile: Achieving
su132112295 both rigor and relevance in information systems
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. (2022). To research. Journal of Management Information Systems,
engage or not to engage with AI for critical judgments: 32(3), 10–47. https://doi.org/10.1080/07421222.2015.
How professionals deal with opacity when using AI for 1094961
medical diagnosis. Organization Science, 33(1), 126–148. OECD. (2021). Artificial Intelligence. https://www.oecd.org/
https://doi.org/10.1287/orsc.2021.1549 going-digital/ai/principles/
Leidner, D. E., & Elam, J. J. (1995). The impact of executive Orlikowski, W. J., & Iacono, C. S. (2001). Research com­
information systems on organizational design, intelli­ mentary: Desperately seeking the “IT” in it research.
gence, and decision making. Organization Science, 6(6), A call to theorizing the it artifact. Information Systems
645–664. https://doi.org/10.1287/orsc.6.6.645 Research, 12(2), 121–134. https://doi.org/10.1287/isre.12.
Lilien, G. L., Rangaswamy, A., Van Bruggen, G. H., & 2.121.9700
Starke, K. (2004). DSS effectiveness in marketing resource Padmanabhan, B., Sahoo, N., & Burton-Jones, A. (2022).
allocation decisions: Reality vs. perception. Information Machine learning in Information Systems research.
Systems Research, 15(3), 216–235. https://doi.org/10. Management Information Systems Quarterly, 46(1), iii–
1287/isre.1040.0026 xix.
Lynch, T., & Gregor, S. (2004). User participation in deci­ Pan, S. L., Li, M., Pee, L. G., & Sandeep, M. S. (2021).
sion support systems development: Influencing system Sustainability design principles for a wildlife manage­
outcomes. European Journal of Information Systems, 13 ment analytics system: An action design research.
(4), 286–301. https://doi.org/10.1057/palgrave.ejis. European Journal of Information Systems, 30(4),
3000512 452–473. https://doi.org/10.1080/0960085X.2020.
Lyytinen, K., Nickerson, J. V., & King, J. L. (2021). 1811786
Metahuman systems= humans+ machines that learn. Peffers, K., Tuunanen, T., & Niehaves, B. (2018). Design
Journal of Information Technology, 36(4), 427–445. science research genres: Introduction to the special issue
https://doi.org/10.1177/0268396220915917 on exemplars and criteria for applicable design science
Mabey, B. (2018). pyLdavis: Python library for interactive research. European Journal of Information Systems, 27(2),
topic model visualization. port of the R LDAvis package. 129–139. https://doi.org/10.1080/0960085X.2018.
https://github.com/bmabey/pyldavis . 1458066
Maedche, A., Gregor, S., & Parsons, J. (2021). Mapping Power, D. J. (2001). Supporting decision-makers: An
design contributions in information systems research: expanded framework. Informing Science, 1(1), 1901–
The design research activity framework. 1915. https://doi.org/10.28945/2384

Electronic copy available at: https://ssrn.com/abstract=4071519


22 S. HERATH PATHIRANNEHELAGE ET AL.

Rai, A. (2016). Editor’s comments: Synergies between big challenges. Journal of Business Research, 123, 588–603.
data and theory. MIS Quarterly, 40(2), iii–ix. https://doi.org/10.1016/j.jbusres.2020.09.068
Rai, A., Constantinides, P., & Sarker, S. (2019). Next gen­ Shrestha, Y. R., von Krogh, G., & Feuerriegel, S. (2023).
eration digital platforms: Toward human-AI hybrids. Building open-source AI. Nature Computational Science,
MIS Quarterly, 43(1), iii–ix. 1–4. https://doi.org/10.2139/ssrn.4614280
Raisch, S., & Krakowski, S. (2021). Artificial intelligence and Simon, H. A. (1947). Administrative behavior: A study of
management: The automation-augmentation paradox. decision-making processes in administrative organization.
Academy of Management Review, 46(1), 192–210. Palgrave Macmillan.
https://doi.org/10.5465/amr.2018.0072 Simon, H. A. (1956). Rational choice and the structure of the
Rana, N. P., Chatterjee, S., Dwivedi, Y. K., & Akter, S. environment. Psychological Review, 63(2), 129–138.
(2022). Understanding dark side of artificial intelligence https://doi.org/10.1037/h0042769
(AI) integrated business analytics: Assessing firm’s opera­ Simon, H. A. (1960). The new science of management deci­
tional inefficiency and competitiveness. European Journal sion. Harper.
of Information Systems, 31(3), 364–387. https://doi.org/ Singh, J. V., Tucker, D. J., & House, R. J. (1986).
10.1080/0960085X.2021.1955628 Organizational legitimacy and the liability of newness.
Ransbotham, S., Khodabandeh, S., Fehling, R., Administrative Science Quarterly, 31(2), 171–193.
LaFountain, B., & Kiron, D. (2019). Winning with AI: https://doi.org/10.2307/2392787
Pioneers combine strategy, organizational behavior, and Sun, D., Ying, W., Zhang, X., & Feng, L. (2019). Developing
technology. MIT Sloan Management Review and Boston a blockchain-based loyalty programs system to hybridize
Consulting Group. business and charity: An action design research.
Remus, W. E., & Kottemann, J. E. (1986). Toward intelligent International Conference on Information Systems 2019
decision support systems: An artificially intelligent Proceedings, Munich, Germany (pp. 6).
statistician. MIS Quarterly, 10(4) , 403–418. https:// Tarafdar, M., Beath, C. M., & Ross, J. W. (2019). Using AI to
www.jstor.org/stable/249197 enhance business operations. MIT Sloan Management
Rinta-Kahila, T., Someh, I., Gillespie, N., Indulska, M., & Review, 60(4), 37–44.
Gregor, S. (2022). Algorithmic decision-making and sys­ Te’eni, D., Yahav, I., Zagalsky, A., Schwartz, D., Silverman,
tem destructiveness: A case of automatic debt recovery. G., Cohen, D., Mann, Y., & Dafna, L. (2023). Reciprocal
European Journal of Information Systems, 31(3), 313–338. human-machine learning: A theory and an instantiation
https://doi.org/10.1080/0960085X.2021.1960905 for the case of message classification. https://doi.org/10.
Samtani, S., Chai, Y., & Chen, H. (2021). Linking exploits 1287/mnsc.2022.03518
from the dark web to known vulnerabilities for proactive Tinguely, P., Shrestha, Y. R., & von Krogh, G. (2020). How
cyber threat intelligence: An attention-based deep struc­ does your labor force react to COVID-19? Employing
tured semantic model. MIS Quarterly, 46(2), 911–946. social media analytics for preemptive decision making.
https://doi.org/10.25300/MISQ/2022/15392 California Management Review. https://cmr.berkeley.
Schoemaker, P. J. (1982). The expected utility model: Its edu/2020/08/social-media-analytics/
variants, purposes, evidence and limitations. Journal of Todd, P., & Benbasat, I. (1999). Evaluating the impact of
Economic Literature, 20(2) , 529–563. https://www.jstor. DSS, cognitive effort, and incentives on strategy selection.
org/stable/2724488 Information Systems Research, 10(4), 356–374. https://
Seidel, S., Chandra Kruse, L., Székely, N., Gau, M., doi.org/10.1287/isre.10.4.356
Stieger, D., Peffers, K., Tuunanen, T., Niehaves, B., & Tushman, M. L., & Nadler, D. A. (1978). Information pro­
Lyytinen, K. (2018). Design principles for sensemaking cessing as an integrating concept in organizational
support systems in environmental sustainability design. The Academy of Management Review, 3(3),
transformations. European Journal of Information 613–624. https://doi.org/10.2307/257550
Systems, 27(2), 221–247. https://doi.org/10.1057/s41303- Tuunanen, T., & Peffers, K. (2018). Population targeted
017-0039-0 requirements acquisition. European Journal of
Sein, M. K., Henfridsson, O., Purao, S., Rossi, M., & Information Systems, 27(6), 686–711. https://doi.org/10.
Lindgren, R. (2011). Action design research. MIS 1080/0960085X.2018.1476015
Quarterly, 35(1), 37–56. https://doi.org/10.2307/23043488 Tversky, A., & Kahneman, D. (1974). Judgment under uncer­
Senoner, J., Netland, T., & Feuerriegel, S. (2022). Using tainty: Heuristics and biases. Science, 185(4157),
explainable artificial intelligence to improve process qual­ 1124–1131. https://doi.org/10.1126/science.185.4157.1124
ity: Evidence from semiconductor manufacturing. Van den Broek, E., Sergeeva, A., & Huysman, M. (2021).
Management Science, 68(8), 5704–5723. https://doi.org/ When the machine meets the expert: An ethnography of
10.1287/mnsc.2021.4190 developing AI for hiring. MIS Quarterly, 45(3),
Sharma, R., Mithas, S., & Kankanhalli, A. (2014). 1557–1580. https://doi.org/10.25300/MISQ/2021/16559
Transforming decision-making processes: A research Venable, J., Pries-Heje, J., & Baskerville, R. (2012).
agenda for understanding the impact of business analy­ A comprehensive framework for evaluation in design
tics on organisations. European Journal of Information science research. In International conference on design
Systems, 23(4), 433–441. https://doi.org/10.1057/ejis. science research in information systems, Las Vegas, USA
2014.17 (pp. 423–438). Springer.
Shrestha, Y. R., Ben-Menahem, S. M., & von Krogh, G. von Krogh, G. (2018). Artificial intelligence in organiza­
(2019). Organizational decision-making structures in tions: New opportunities for phenomenon-based
the age of artificial intelligence. California Management theorizing. Academy of Management Discoveries, 4(4),
Review, 61(4), 66–83. https://doi.org/10.1177/ 404–409. https://doi.org/10.5465/amd.2018.0084
0008125619862257 von Krogh, G., Ben-Menahem, S. M., & Shrestha, Y. R.
Shrestha, Y. R., Krishna, V., & von Krogh, G. (2021). (2021). Artificial Intelligence in Strategizing: Prospects
Augmenting organizational decision-making with deep and Challenges. Strategic Management: State of the Field
learning algorithms: Principles, promises, and and Its Future (pp. 625–646). New York: Oxford

Electronic copy available at: https://ssrn.com/abstract=4071519


EUROPEAN JOURNAL OF INFORMATION SYSTEMS 23

University Press. https://academic.oup.com/book/39240/ Xin, D., Ma, L., Liu, J., Macke, S., Song, S., &
chapter/338769107 Parameswaran, A. (2018). Accelerating human-in-the-
von Krogh, G., & Haefliger, S. (2010). Opening up design loop machine learning: Challenges and opportunities.
science: The challenge of designing for reuse and joint In Proceedings of the second workshop on data manage­
development. The Journal of Strategic Information Systems, ment for end-to-end machine learning, Houston, USA
19(4), 232–241. https://doi.org/10.1016/j.jsis.2010.09.008 (pp. 1–4).
Wiener, M., Saunders, C., & Marabelli, M. (2020). Big-data Xue, M., Cao, X., Feng, X., Gu, B., & Zhang, Y. (2022). Is
business models: A critical literature review and multi­ college education less necessary with AI? Evidence from
perspective research framework. Journal of Information firm-level labor structure changes. Journal of
Technology, 35(1), 66–91. https://doi.org/10.1177/ Management Information Systems, 39(3), 865–905.
0268396219896811 https://doi.org/10.1080/07421222.2022.2096542

Electronic copy available at: https://ssrn.com/abstract=4071519

You might also like