
PUBLIC MANAGEMENT REVIEW

https://doi.org/10.1080/14719037.2022.2048685

Exploring artificial intelligence adoption in public organizations: a comparative case study

Oliver Neumann (a), Katharina Guirguis (b) and Reto Steiner (c)

(a) Swiss Graduate School of Public Administration (IDHEAP), University of Lausanne, Lausanne, Switzerland; (b) Institute of Public Management, Zurich University of Applied Sciences, Winterthur, Switzerland; (c) School of Management and Law, Zurich University of Applied Sciences, Winterthur, Switzerland

ABSTRACT
Despite the enormous potential of artificial intelligence (AI), many public organizations
struggle to adopt this technology. Simultaneously, empirical research on what
determines successful AI adoption in public settings remains scarce. Using the technology
organization environment (TOE) framework, we address this gap with
a comparative case study of eight Swiss public organizations. Our findings suggest
that the importance of technological and organizational factors varies depending on
the organization’s stage in the adoption process, whereas environmental factors are
generally less critical. Accordingly, this study advances our theoretical understanding
of the specificities of AI adoption in public organizations throughout the different
adoption stages.

KEYWORDS Artificial intelligence; AI; public organizations; public administration; technology adoption; TOE
framework

CONTACT Katharina Guirguis [email protected]
Supplemental data for this article can be accessed at https://doi.org/10.1080/14719037.2022.2048685.
© 2022 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction
Whether and how new technologies subsumed under artificial intelligence (AI) could
be used in public organizations has been much debated in recent years. While there is
justified scepticism and fear that governments using AI may become too technocratic
(Janssen and Kuk 2016), jeopardize privacy (Maciejewski 2017), reinforce inequalities,
and even threaten democracy (Eubanks 2017; O’Neil 2016), it has also been pointed
out that AI offers a plethora of opportunities for the public sector.
Thanks to the availability and use of large data sets and transactional data1 and
hardware developments, governments could realize new goals (Ulnicane et al. 2021;
Margetts and Dorobantu 2019; Hitz-Gamper, Neumann, and Stürmer 2019), such
as better decision-making and forecasting, improved communication between
government and citizens, personalized public services, reduced administrative burdens
(Androutsopoulou et al. 2019; Margetts and Dorobantu 2019), a generally
better quality of public services, and improved public value creation (Bullock 2019;
Wang, Teo, and Janssen 2021). A number of AI application areas have been
identified, such as knowledge management, process automation, conversational
agents and assistants, predictive analytics, fraud and threat detection, resource
allocation, and supporting expert tasks (Mehr, Ash, and Fellow 2017; Wirtz,
Weyerer, and Geyer 2019). Unsurprisingly, public organizations are increasingly
considering adopting AI technologies (Sun and Medaglia 2019) and have started to
issue policy documents about the use of AI (Ulnicane et al. 2021). However, while
in certain early-adopter countries (e.g. the US or the UK), the use of AI in the
public sector is increasing, there are many public organizations where productive
applications remain rare (Mikalef et al. 2021; Oxford Insights 2020; Margetts and
Dorobantu 2019; Wirtz and Müller 2019). AI in government is often at an experimental
stage (Margetts and Dorobantu 2019), or traditional automation solutions
are wrongly labelled ‘AI’.
Even if the body of research about AI in the public sector has been growing recently
(Sousa et al. 2019), empirical studies in public sector settings are scarce (Campion et al.
2020; Sun and Medaglia 2019). Some notable exceptions have studied the role of AI in
administrative discretion and transparency (Ahonen and Erkkilä 2020; Bovens and
Zouridis 2002; Justin, Young, and Wang 2020; Criado, Valero, and Villodre 2020; de
Boer and Raaphorst 2021; Peeters, Giest, and Grimmelikhuijsen 2020), organizational
changes caused by introducing AI in predictive policing (Meijer, Lorenz, and Wessels
2021), chief information officer perceptions and expectations of AI in the public sector
(Criado et al. 2020), public value creation through AI (Wang, Teo, and Janssen 2021),
and the application of AI in a pandemic (Cheng et al. 2021). However, only a handful
of empirical studies exist on determinants of successful AI adoption within public
organizations (Campion et al. 2020; Chen, Ling, and Chen 2021; Schaefer et al. 2021;
Sun and Medaglia 2019; Wang, Zhang, and Zhao 2020). Given that AI is a highly
complex, general-purpose technology with many new potential application areas
(Jöhnk, Weißert, and Wyrtki), we believe that the lack of research on the mechanisms
of AI adoption constitutes a significant research gap. Particularly, empirical evidence
is needed about the specific challenges and facilitating factors in the adoption process
of AI projects in public sector practice (Wirtz, Langer, and Fenner 2021) to bridge
theoretical considerations about AI usage and practical implementation.
This study addresses this gap by empirically analysing the adoption process of AI
initiatives in eight different public organizations in Switzerland. It takes an interdisciplinary
approach, connecting streams of research in Public Administration and
Information Systems. Using the AI-adapted technology organization environment
(TOE) framework by Pumplun, Tauchert, and Heidt (2019) as a theoretical basis,
our research question is: What are the technological, organizational, and environmental
factors that facilitate or hamper the adoption of projects involving AI technologies in
public organizations? Given the limited previous empirical research on this topic, we
have used an exploratory qualitative research design to gain in-depth insights. This
study’s main contribution is to better understand the sector-specific challenges and
favourable factors when public organizations adopt AI technologies. As we see adoption
as an ongoing process instead of a single point in time, we extend existing theory
by introducing a time dimension, allowing us to formulate propositions about which
factors are most relevant at each of three consecutive stages (‘assessing’, ‘determined’,
‘managed’) in the adoption process. As such, our study heeds the calls for ‘research
focusing on the wide variety of aspects involved in the phenomenon of AI adoption in
the public sector’ (Sun and Medaglia 2019, 379) and for a ‘distinctive approach to AI in
the public sector’ (Criado et al. 2020).

2 Theory
2.1 AI in the public sector
There is no universally accepted definition of AI (Wirtz, Weyerer, and Geyer 2019). AI
may be understood as machines or computer systems that think and act humanly by
performing tasks that commonly require human intelligence (e.g. decision-making and
learning) or that think and act rationally by focusing on logic and carefully considering
all options (e.g. finding the best solution to a problem) (Russell and Norvig 2021). In
a specific area, AI might outperform humans, but it is ‘unable to autonomously solve
problems in other areas’ (Kaplan and Haenlein 2019) and is thus understood as ‘weak AI’
(Wamba et al. 2021, 2). Others argue that AI will develop abilities that surpass human
intelligence (Kaplan and Haenlein 2019) and ‘will [. . .] supplant us as the dominant
species on the Earth’ (Bundy 2017, 285), which is known as AI singularity or ‘conscious/
self-aware AI’ (Kaplan and Haenlein 2019, 16). In this study, we lean towards the
understanding of ‘weak AI’ to argue that ‘AI applies advanced analysis and logic-based
techniques, including machine learning, to interpret events, support and automate
decisions, and take actions’ (Gartner 2021). Thereby, AI systems ‘correctly interpret
external data [,] [. . .] learn from such data, and [. . .] use those learnings to achieve
specific goals and tasks through flexible adaptation [. . .]’ (Kaplan and Haenlein 2019,
15). One aspect inevitably connected to AI concerns access rights to data and data
ownership (Martens 2018). Legal instruments such as data protection laws form the basis
to regulate data access and data ownership (Martens 2018).
Despite the growing debate, the actual diffusion of AI in public sector practice
remains low, particularly compared to private sector companies (Mikalef et al. 2021;
Wirtz and Müller 2019; Wirtz, Weyerer, and Geyer 2019). Challenges to adopting AI
in public organizations stem from factors more prevalent in the public context: (i)
a lack of technical staff to introduce and assess new technologies, (ii) the risk of
potential erroneous use of AI (e.g. security risks, privacy concerns), (iii) the need to
guarantee transparency in the context of AI, (iv) moral dilemmas such as when to use
AI, and (v) ethical considerations (e.g. non-discrimination of citizens) (Margetts and
Dorobantu 2019).
Nevertheless, research on AI and closely related fields in the public sector has grown
recently (Sousa et al. 2019; Wirtz, Langer, and Fenner 2021). To date, most studies have
involved the what and why when discussing possible applications and advantages or
disadvantages of AI. Many of these studies are conceptual in nature (e.g. Agarwal 2018;
Androutsopoulou et al. 2019; Bullock 2019; Criado and Ramon Gil-Garcia 2019;
Kankanhalli, Charalabidis, and Mellouli 2019; Meijer and Wessels 2019; Peeters and
Schuilenburg 2018; Pencheva, Esteve, and Jankin Mikhaylov 2020; Wirtz and Müller
2019; Young, Bullock, and Lecy 2019; Newman, Mintrom, and O’Neill 2022). For
instance, Pencheva, Esteve, and Jankin Mikhaylov (2020), Criado and Ramon Gil-
Garcia (2019), Wirtz, Weyerer, and Geyer (2019), and Wirtz, Langer, and Fenner
(2021) reviewed the literature on big data and AI in the public sector, identifying key
themes and applications such as efficiency and process automation, legitimacy,
accountability, cost savings, fraud detection, decision-making, knowledge management,
digital agents, improved policy analysis and evaluation, and new transformative
business models. Criado and Ramon Gil-Garcia (2019) and Wang, Teo, and Janssen
(2021) emphasized the need and the mechanisms for public value creation through AI,
while Pencheva, Esteve, and Jankin Mikhaylov (2020) called for research supporting
practitioners by answering relevant questions about AI in public organizations.
Medaglia, Gil-Garcia, and Pardo (2021, 1) invited researchers to focus on ‘governance
of AI, trustworthy AI, impact assessment methodologies, and data governance’.
Similarly, Wirtz, Langer, and Fenner (2021) called for a better balance in research
methodologies and studies focusing on creating new government structures due to AI.
Agarwal (2018) outlined the challenges public administrations face given AI’s radical
changes. Arguing that many of the current processes in government may soon become
irrelevant, he stressed the ‘need to lay the groundwork for governments to rethink how
they will be able to best serve their constituents’ (Agarwal 2018, 917). Peeters and
Schuilenburg (2018) and Meijer and Wessels (2019) critically discussed algorithmic
tools in predictive policing and justice against the lack of empirical research and
questions regarding the role of human judgement, accountability, and transparency.
Relatedly, several studies (Bovens and Zouridis 2002; Bullock 2019; Justin, Young, and
Wang 2020; de Boer and Raaphorst 2021; Young, Bullock, and Lecy 2019) discussed
how AI systems affect street-level bureaucrat discretion, arguing that the context
determined whether to use artificial or human discretion. The former offers improvements
in scalability, cost-efficiency, and quality, while concerns regarding equity,
manageability, transparency, and political feasibility remain. Although caution is
necessary when utilizing AI in governance to prevent ‘administrative evil’, Bullock
(2019, 9) argued that both humans and algorithms may make imperfect choices.
Several studies have focused on challenges and risks of AI, such as privacy, legal,
and ethical issues (Bannister and Connolly 2020; Janssen and Kuk 2016; Wirtz,
Weyerer, and Geyer 2019), which mainly address questions of what and why (not).
In light of the negative consequences of faulty AI for society, these studies are of high
normative and practical relevance (see De la Garza (2020) for the example of the
Michigan MiDAS system that wrongly accused citizens of tax fraud). Janssen and Kuk
(2016) discussed the limitations and challenges of AI in governance, stating that with
autonomous algorithms, there are issues with accountability, bias and discrimination,
embedded political orientations, and other undesirable practices. Newman, Mintrom,
and O’Neill (2022) argued that instead of relieving administrative burdens, AI reinforces
bureaucratic structures. Kernaghan (2014) recommended the development of an
ethics regime for robot applications in public organizations and evaluated the need for
regulation. Wirtz, Weyerer, and Geyer (2019) outlined different applications and the
associated challenges of AI in law and regulations, ethics, societal issues, and technology
implementation in public organizations, while Sun and Medaglia (2019) analysed
how different stakeholders perceive the challenges of applying AI in public healthcare,
proposing some guidelines for the governance of AI adoption in the public sector.
Lastly, Eubanks (2017) as well as Alon-Barkat and Busuioc (2022) explored how
automated decision-making in public services may negatively impact already disadvantaged
groups and reinforce existing biases, while Bannister and Connolly (2020)
provided a taxonomy of decision-making algorithms in public organizations that help
control the risk of introducing such biases.
Other studies focus on how AI should be used by analysing processes and strategies
for the implementation and modes of AI technologies. Chen, Ling, and Chen (2021) used
the TOE framework to study the adoption of AI in Chinese state-owned companies.
They found that the innovation’s compatibility with adopter needs, the new approach’s
relative advantage, complexity, managerial support, government involvement, vendor
partnership, and organizational capability all support adoption. By drawing on the TOE
framework, Mikalef et al. (2021) examined contributing factors to AI capability-building
in public organizations based on data from German, Norwegian, and Finnish municipalities.
The most important factors were perceived financial cost, organizational innovativeness,
governmental pressure, government incentives, and regulatory support. In
contrast, perceived public pressure and the perceived value of AI solutions were less
influential. Wang, Zhang, and Zhao (2020) used empirical evidence from Chinese
government chatbot projects to explore determinants of AI adoption. They found that
pressure and readiness factors play varying roles in the pre- and post-adoption stages and
that ‘pressure can encourage local governments to implement chatbots’ (Wang, Zhang,
and Zhao 2020, 1). Kankanhalli, Charalabidis, and Mellouli (2019) conceptually identified
multiple challenges in adopting AI technologies in the public sector and called for
more domain-specific studies on the implementation and evaluation of AI systems,
challenges and quick-wins, and studies expanding methods and theories. In semi-
structured interviews with German municipalities, Schaefer et al. (2021) analysed perceived
challenges to AI adoption from a public employee perspective, identifying factors
such as technical compatibility, skills, costs, strategic alignment, government pressure,
and innovativeness. Campion et al. (2020) focused on inter-organizational collaborations
in AI adoption. The greatest challenges in such collaborations include data sharing
concerns, insufficient data understanding, and lack of motivation. Wirtz and Müller
(2019) formulated an integrated AI framework for public management, including layers
for public AI policy and regulation, applications and services, functions, and technology
infrastructure, aiming to better understand the ideal embedment of AI systems into
administrative procedures. Similarly, Androutsopoulou et al. (2019) suggested a model
and technical system based on natural language processing for improving communication
between governments and citizens. Finally, Desouza, Dawson, and Chenok (2020)
provided reflections on issues that public organizations face when adopting AI, structured
along the dimensions of data, technology, organization, and environment – including
for instance, complexity in stakeholder management, public value creation,
transparency requirements, and due oversight.

2.2 IT innovation adoption


AI adoption is an example of IT innovation adoption – a process that results in an
outcome that is new to the adopting organization, such as the introduction and use of
a technology, product, process, or practice (Hameed, Counsell, and Swift 2012, 359;
Damanpour and Schneider 2009) and that involves productively ‘using computer
hardware and software applications to support operations, management, and decision
making’ (Thong and Yap 1995, 431). In public sector innovation, outcomes can
typically be new processes, new products, a new positioning of an existing product
or service, or even new paradigms (Bason 2018). The ultimate purpose of adopting
innovations is often to increase organizational performance (Hameed, Counsell, and
Swift 2012), but in public contexts, it is also about creating societal value (Ulnicane
et al. 2021), making processes more efficient and better tailored to citizen needs
(Newman, Mintrom, and O’Neill 2022), or designing new policies to solve societal
problems and introducing and delivering new services and platforms to users (e.g. for
citizen collaboration) (Chen, Walker, and Sawhney 2020; Walker 2007).
Studying IT innovation adoption mechanisms at the individual and organizational
level has a long tradition in information systems research (Lai 2017; Oliveira and
Martins 2011). Over time, the field has developed numerous widely used theoretical
models, such as the technology acceptance model (TAM) (Davis 1989), the diffusion of
innovation (DOI) theory (Rogers 1995), the unified theory of acceptance and use of
technology (UTAUT) (Venkatesh, Davis, and Davis 2003), and the TOE framework
(Tornatzky, Fleischer, and Chakrabarti 1990) – all used to explain different kinds of
technology adoption (see e.g. Oliveira and Martins 2011; Ma 2014; Mergel and
Bretschneider 2013; Grimmelikhuijsen and Feeney 2017; Demlehner and Laumer
2020; Nam et al. 2020).
Compared to other IT innovations, AI is a general-purpose technology with ‘high
implementation complexity [. . .] which differentiates it from other digital technologies
that are typically easy-to-use and easy-to-deploy’ (Jöhnk, Weißert, and Wyrtki
2021, 6), such as social media use (Mergel and Bretschneider 2015). Furthermore,
the adoption of AI requires concerted and sustained efforts across different organizational
units or with external parties, especially between IT and expert units in the AI
application area, and significant changes in strategic direction, resources, knowledge,
culture, and data (Jöhnk, Weißert, and Wyrtki 2021), highlighting the need for
a theoretical framework that considers not only technological but also organizational
and environmental factors.

2.3 The TOE framework


Contrary to other technology adoption frameworks that view adoption from an indivi­
dual point of view (e.g. TAM or UTAUT), TOE views technology adoption from an
organizational perspective (Al Hadwer et al. 2021). It postulates that an organization’s
technological, organizational, and environmental context influences the technology
adoption processes (Baker 2012) while not specifying particular influence factors
(Aboelmaged 2014). Therefore, relevant factors for any specific research question must
be defined based on previous studies and theoretical implications since ‘[d]ifferent types
of innovations have different factors that influence their adoption’ (Baker 2012, 236).
Several studies in public administration have used the TOE framework to study AI
adoption (Chen, Ling, and Chen 2021; Desouza, Dawson, and Chenok 2020; Mikalef
et al. 2021). Many other studies discussed above have investigated factors that can be
assigned to technological, organizational, and environmental dimensions. The relative
popularity of the TOE framework over other approaches might lie in the explicit
emphasis on organizational and environmental factors – alongside the technological
ones that tend to dominate in most other frameworks – and its focus on organizational
rather than individual technology adoption.
Pumplun, Tauchert, and Heidt (2019) present an adaption of the TOE framework
specifically geared towards AI that is grounded in earlier research, as the factors selected
are reflected in many other studies (see e.g. Stenberg and Nilsson 2020). Their framework
includes the technological items relative advantage of the AI solution over the conventional
technology and compatibility with existing business processes and the business case.
While a relative advantage implies improvement potential and increases the chances of
adopting a new technology (Greenhalgh et al. 2004), the mechanisms behind the factor
compatibility mainly pertain to complications in the interplay with existing systems,
whereby lack of compatibility leads to hesitation regarding a new technology (Alsheibani
et al. 2020). In the organizational dimension, the framework includes culture (namely top
management support), change management, and innovative culture, organizational size,
financial and human resources, data availability and quality, and organizational structure.
AI adoption often needs far-reaching changes in organizational structures and
culture for employees and clients to accept the innovation and significant organizational
resources (e.g. skills and quality data) to develop AI solutions in cross-functional teams
(Jöhnk, Weißert, and Wyrtki 2021). The fact that public organizations frequently struggle with
radical organizational and cultural changes underscores their importance (Mergel,
Ganapati, and Whitford 2020). The framework further includes the environmental
items competitive pressure, government regulations (GDPR and employee councils), industry
requirements, and customer readiness. Government regulations2 and other public
sector-specific requirements tend to be important in public organizations and their
consideration may hinder the adoption of new technologies. While there is usually less
competitive pressure to adopt new technologies in the public sector, customer readiness
and citizen expectations may still create pressure on public organizations.
Based on our own experience working with public sector organizations using AI
technologies and on frameworks by Jöhnk, Weißert, and Wyrtki (2021) and Schaefer et al.
(2021), we added the items AI strategy, collaboration, and origin of project initiation to
the organizational factors of the framework (see Table 1). The availability of an AI
strategy is proposed by Jöhnk, Weißert, and Wyrtki (2021) to influence AI adoption, and in
AI projects, it is common for organizations to work together with external partners
(Chatterjee et al. 2021). Therefore, factors like collaboration and initiation need to be
considered. To simplify matters, we removed the sub-dimensions of the government
regulations (GDPR and employee council) in the environmental factors as they seemed
too specific and of limited relevance in the Swiss context. At the time of data collection,
the Swiss equivalent of the GDPR had not yet entered into force (Guirguis et al. 2021),
while employee councils are not as widespread in Switzerland as in other countries
(Ziltener and Gabathuler 2018).
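As a quick reference, the extended factor set from Table 1 (including the added items AI strategy, collaboration, and initiation) can be written down as a simple data structure. The sketch below is purely illustrative; the dictionary layout and the variable name are editorial choices, not part of the TOE framework or of Pumplun, Tauchert, and Heidt (2019).

```python
# Illustrative sketch only: the extended TOE factor catalogue from Table 1,
# encoded as a plain dictionary (e.g. as the starting point for a deductive
# coding scheme). Naming and structure are illustrative assumptions.
EXTENDED_TOE_FACTORS = {
    "technology": [
        "relative advantage",
        "compatibility with business processes and business case",
    ],
    "organization": [
        "top management support",
        "AI strategy",        # added in this study
        "change management",
        "innovative culture",
        "organizational size",
        "budget",
        "employees",
        "data",
        "project structure",
        "collaboration",      # added in this study
        "initiation",         # added in this study
    ],
    "environment": [
        "competitive pressure",
        "government regulations",
        "industry requirements",
        "customer readiness",
    ],
}

if __name__ == "__main__":
    for dimension, factors in EXTENDED_TOE_FACTORS.items():
        print(f"{dimension}: {len(factors)} factors")
```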

2.4 Assessing the AI maturity level


To assess the AI maturity levels in our study, we draw on Alsheiabni, Cheung, and
Messom (2019), who integrated different levels of AI adoption in an organization (see
Table 2). IT innovation adoption rarely refers to one single point in time but is
a process (Hameed, Counsell, and Swift 2012). Therefore, introducing a time dimension
to measure the degree of innovation adoption is necessary to assess whether an
innovation can be integrated into daily practice (Hameed, Counsell, and Swift 2012).
Alsheiabni, Cheung, and Messom (2019) differentiated between five levels. At the
initial level, minimal functions based on AI exist, and there are no detailed plans to
use AI. At the assessing level, experimentation with AI technologies has begun, and the
organization is looking for possible applications. At the determined level, some
advanced AI projects have moved beyond the experimental phase, and the infrastructure
requirements for larger-scale implementations are identified. At the managed
level, the necessary processes for organization-wide, large-scale AI applications are
defined. Finally, at the optimize level, the organization has the infrastructure and
architecture suitable for large-scale AI applications.
Table 1. The TOE factors investigated in this study based on Pumplun, Tauchert, and Heidt (2019).

Dimension | Factors | Explanation
Technology | Relative advantage | The advantage of AI technology compared to conventional technology.
Technology | Compatibility: with business processes and business case | Compatibility of the AI solution with existing business processes and the underlying business case.
Organization | Culture: top management support, strategy, change management, and innovative culture | Cultural aspects that influence AI adoption, like management support, change management efforts, and a general innovative culture within the organization.
Organization | Organizational size | An organization’s size.
Organization | Resources: budget, employees, and data | Availability of financial and human resources and high-quality data as a basis for AI solutions.
Organization | Organizational structure: project structure, collaboration, and initiation | An organization’s structure regarding the project, collaboration with internal and external partners, and the question of who initiates an AI project (internal or external initiation).
Environment | Competitive pressure | The external pressure on an organization to launch AI projects.
Environment | Government regulations | Government rules and regulations influencing AI adoption (e.g. data protection legislation).
Environment | Industry requirements | The requirements of an industry that influence AI adoption – in our case, public sector requirements.
Environment | Customer readiness | Readiness of customers and citizens for AI solutions by public organizations.

Table 2. Maturity levels and according AI function based on Alsheiabni, Cheung, and Messom (2019, 51).
Level | AI functions
Initial | Very limited or no AI function, and the organization has no plans to use AI.
Assessing | Discovery of AI technology.
Determined | AI project is at an advanced stage; determination of infrastructure needed to further implement AI.
Managed | Certain AI processes are defined throughout the organization. Preparation of large-scale AI application.
Optimize | Full AI infrastructure is ready for large-scale AI application.
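To illustrate how the five maturity levels in Table 2 can be operationalized, the following sketch encodes them as an ordered enumeration together with a toy classification rule. The rule and its parameters (number of projects, pilot status, and so on) are simplified assumptions for illustration, not the instrument used in this study.

```python
# Minimal sketch (assumptions, not the authors' instrument): the maturity
# levels of Alsheiabni, Cheung, and Messom (2019) as an ordered enum, plus a
# toy rule mapping coarse case descriptors onto a level.
from enum import IntEnum


class AIMaturity(IntEnum):
    INITIAL = 1      # little or no AI, no plans to use it
    ASSESSING = 2    # discovery of / experimentation with AI technology
    DETERMINED = 3   # advanced project(s); infrastructure needs identified
    MANAGED = 4      # organization-wide AI processes defined
    OPTIMIZE = 5     # full infrastructure for large-scale AI in place


def classify(num_projects: int, beyond_pilot: bool,
             org_wide_processes: bool, full_infrastructure: bool) -> AIMaturity:
    """Toy classification following the level definitions in Table 2."""
    if num_projects == 0:
        return AIMaturity.INITIAL
    if full_infrastructure:
        return AIMaturity.OPTIMIZE
    if org_wide_processes:
        return AIMaturity.MANAGED
    if beyond_pilot:
        return AIMaturity.DETERMINED
    return AIMaturity.ASSESSING


# Example: a case with a single pilot-stage chatbot lands at the assessing level.
print(classify(num_projects=1, beyond_pilot=False,
               org_wide_processes=False, full_infrastructure=False).name)
```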

3 Methodology
This study uses a qualitative multiple case study research design, suitable for cases
where previous research findings are insufficient for formulating concrete hypotheses
and where more general research questions guide the investigation (Yin 2018).
Furthermore, analysing multiple cases produces more robust results (Yin 2018).

3.1 Case selection and data collection


The cases (organizations) identified help answer our research question about identify­
ing technological, organizational, and environmental factors that facilitate or hamper
AI adoption in public organizations. We limited our case selection to Swiss public
organizations for several reasons. First, Switzerland ranks about average among developed
countries in the Government AI Readiness Index – an index based on 33 indicators
across 10 dimensions (vision, governance and ethics, digital capacity, adaptability, size,
innovation capacity, human capital, infrastructure, data availability, and data representativeness)
ranking governments on how ready they are to implement AI in the
delivery of public services (Oxford Insights 2020, 4). It is also close to average in the
latest European country benchmark of how many (automated) public services are
offered online (European Commission 2020). This suggests that Switzerland is broadly
representative of countries and public organizations in other developed countries.
Second, we strove to keep factors outside the organizations – such as national policies –
as constant as possible since we are interested in organizational AI adoption processes.
Three main criteria guided our case selection. First, we included cases based on
their organizational type (as innovation adoption in the public sector is usually
associated with organizational characteristics) (Melitski, Gavin, and Gavin 2010),
and considered including ministries, public agencies and state-owned enterprises.
Second, we sought to include cases from different tiers of Swiss government (local,
regional, and national) to capture the specific conditions in each tier. While federal
government organizations are generally more centralized, local government in
Switzerland possesses a high level of autonomy, allowing for decentralized innovation
(Mueller 2011). State-owned companies, however, balance state-ownership with
autonomy (Rentsch and Finger 2015). In case selection, we sought an equilibrium
between the different state levels and legal structures and as a final requirement, chose
organizations working on at least one AI-based project.
Eight cases fulfilled all the criteria, and 17 interview partners were identified based
on their affiliation with the respective AI project (see Table 3). Where possible, we
interviewed multiple individuals per case to triangulate perspectives.

Table 3. Case description.


Case | State level | Legal structure | Market situation | Employees | No. of AI projects studied | Type of task performed with AI | No. of interviews | Roles of interviewees
A | National | State-owned enterprise | Partial market environment | >30,000 | 3 | Optimization of resource disposition and scheduling of operations assets | 3 | (a) internal project lead; (b) internal project lead; (c) internal project lead
B | National | State-owned enterprise | Partial market environment | >60,000 | 2 | Optimization of allocation in an internal service and voice recognition for customer service | 2 | (a) internal innovation manager; (b) internal project lead
C | National | Ministry | Monopoly environment | >1,000 | 1 | Optimization of people allocation | 2 | (a) internal project lead; (b) internal government modernization expert
D | National | Ministry | Monopoly environment | >4,000 | 3 | Digitalization of services, business process optimization, and solutions for specific business tasks | 3 | (a) internal programme lead; (b) internal project lead; (c) external project lead (public organization)
E | Regional | Agency | Monopoly environment | >400 | 2 | Service delivery through conversational agent and automatization of customer service provision | 2 | (a) internal programme lead; (b) internal project lead
F | Regional | Agency | Monopoly environment | >20 | 1 | Service delivery through conversational agent | 3 | (a) internal project lead; (b) external project lead (private company); (c) internal IT expert
G | Local | Municipal administration | Monopoly environment | >5,000 | 1 | Service delivery through conversational agent | 1 | (a) internal project lead
H | Local | Municipal administration | Monopoly environment | >200 | 1 | Service delivery through conversational agent | 1 | (a) external project lead (private company)

Data collection through qualitative semi-structured interviews took place between
August 2020 and July 2021. The video call-based interviews lasted about one hour and were structured
using a theory-based questionnaire with open-ended questions to gain explorative
insights (see 3.3).

3.2 Case description


Our cases differ in the state level, legal structure, market situation, and size (see
Table 3). Cases A and B are state-owned enterprises at the national tier of
government in a partial market environment with tens of thousands of employees.
Cases C and D represent ministries at the national tier of government with over
1,000 (Case C) and over 4,000 employees (Case D). In Cases E and F, the
organizations are agencies at the regional level with over 400 and over 20 employ­
ees, respectively. Cases G and H are local municipal administrations, G being
a larger municipality with over 5,000 employees while H has 200. Cases C to
H operate in a monopoly environment.
The cases differ in the number of AI projects considered in this study, ranging
from one (Cases C, F, G, & H), over two (Cases B & E) to three projects (Cases
A & D). Furthermore, the cases vary regarding the task that is AI-assisted. In six
projects of the four cases (A, B, C, & D), the task performed is an optimization
task. In four other projects and cases (E, F, G, & H), the task is service delivery
through a conversational agent (chatbot). In the remaining projects, the tasks are
voice recognition for customer service (Case B), digitalization of services and
solutions for specific business tasks (Case D), and automatization of customer
service delivery (Case E).
We included both the internal perspective of organizational representatives and the
external perspective of project partners to gain a more profound view on the cases we
studied, and all interviewees were directly involved in their respective AI projects.
Overall, we interviewed nine internal project leads, three external project leads, two
programme leads, one expert in government modernization (internal), and one IT
expert (internal).

3.3 Questionnaire and operationalization


The questionnaire contained six sections (see Appendix A in supplementary). First, we
introduced the interviewees to the subject and study context without revealing any
information that could have influenced their answers. To understand the AI projects
and assess the degree of AI adoption in the cases, the second block contained questions
regarding the AI project. Blocks three to five were dedicated to the TOE framework
dimensions as outlined in Section 2.3 (see Table 1 for the dimensions and factors),
translated into open-ended questions. We enquired about the relative advantage of AI
technology compared to conventional technology and compatibility with existing
business processes (technological factors). Culture, organizational size, resources,
and organizational structure were the concepts of interest regarding the organizational
factors. We also studied competitive pressure, government regulations, industry
requirements, and customer readiness (environmental factors). The questionnaire
ended with an outro section.
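For readers who want a compact view of the interview guide's structure (the full guide is in Appendix A in the supplementary material), the six blocks can be summarized as follows; the block descriptions below are paraphrased summaries, not the verbatim questionnaire.

```python
# Illustrative outline only; see Appendix A (supplementary) for the actual
# questionnaire. Descriptions are paraphrased summaries of the six blocks.
INTERVIEW_GUIDE = [
    ("1 Introduction", "Subject and study context, without priming the answers"),
    ("2 AI project", "Project background and degree of AI adoption"),
    ("3 Technological factors", "Relative advantage, compatibility"),
    ("4 Organizational factors", "Culture, organizational size, resources, structure"),
    ("5 Environmental factors",
     "Competitive pressure, government regulations, industry requirements, customer readiness"),
    ("6 Outro", "Closing questions"),
]

for title, scope in INTERVIEW_GUIDE:
    print(f"{title}: {scope}")
```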

3.4 Data coding and analytical method


The interviews were recorded, transcribed, and coded using maxQDA qualitative
analysis software; coding followed the deductive category assignment method
(Mayring 2014). The category system was theoretically deduced from the extended
TOE framework by Pumplun, Tauchert, and Heidt (2019) (see Appendix B in supplementary:
coding scheme, incl. anchor samples). In the coding process, we included an
inductive component to the analysis by adding further codes for recurrent patterns in
the data. In total, we defined 24 codes and 505 codings. For consistency, coding was
conducted by one researcher and cross-checked by another.
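The logic of deductive category assignment with an inductive extension can be sketched as follows; the code names and coded segments in the example are hypothetical stand-ins, not the study's 24 codes or 505 codings.

```python
# Minimal sketch of the coding logic (hypothetical data, not the maxQDA
# project): deductive codes come from the extended TOE scheme, while inductive
# codes are added when recurrent patterns appear in the transcripts.
from collections import Counter

codings = [
    ("top management support", "Case A, interview a"),   # deductive code
    ("collaboration", "Case F, interview m"),             # deductive code
    ("intrinsic motivation", "Case D, interview h"),      # inductively added code
]

counts = Counter(code for code, _segment in codings)
print(f"{len(counts)} codes, {sum(counts.values())} codings")
```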

4 Results
4.1 AI maturity level
First, we assessed the degree of AI adoption (see Table 4) by asking the interviewees
about the starting point of their AI projects. The earliest project was launched in 2012
(Case A). Some projects started in 2017 (Cases A, B, & C), some in 2018 (Cases E & F),
but most projects began in 2019 (Cases B, D, F, & H). Considering that the projects are
comparatively young, it is unsurprising that in five cases, it is unclear if the projects will
be able to reach their goals. Two projects have achieved their goals and two have not.
The number of AI projects per case also differed. Cases A and B had a comparably high
number of projects (Case A: ~50, B: ~100) ranging from early proofs of concept to
fully operational projects. In contrast, the projects studied in Cases C, F, G, and H were
the only AI projects in their respective organizations. In Cases C and H, the projects
were still in their pilot phases, while in Cases F and G, the solutions were already
productive. Thus far, Case E has two projects with an AI component (both productive),
while Case D has implemented around ten AI projects and pilot projects.
This information allowed us to align the cases along the levels proposed by
Alsheiabni, Cheung, and Messom (2019, 51) introduced in Section 2.4 above. Cases
C, F, G, and H are in the AI technology’s discovery stage and belong to the assessing
level. Cases D and E are characterized by at least one AI project at an advanced stage
with the determination of infrastructure needed to implement AI further, representing
the determined level. Cases A and B were assigned to the managed level, as they
displayed defined AI processes throughout the organization (see Table 4).
The degree of AI adoption somewhat coincides with the organizational form, state
level, and organization size. Large state-owned companies constitute the managed level,
while one national ministry and a cantonal agency are assigned to the determined level.
The assessing level consists of the local administrations together with one national
ministry and one cantonal agency. On the assessing level, three organizations
described the introduction of a conversational agent, while the organizations on the
managed level tackle more complex optimization problems.

4.2 Technological factors


Following the TOE framework, we assessed the role of technological factors for AI
adoption (see Table C1 in supplementary: structured overview incl. anchor samples).
Table 4. Degree of AI adoption.
Case A B C D E F G H
Project start 2012; 2017 2017; 2019 2017 2019 2018 2019 2018 2019
Goal attainment n.a. Yes; n.a. n.a. n.a. Yes No No n.a.
No of AI projects ~50 ~100 1 ~10 2 1 1 1
Status Full range Full range Pilot phase Full range Productive Productive Productive Pilot phase
Level+ Managed Managed Assessing Determined Determined Assessing Assessing Assessing
Note: +Levels according to Alsheiabni, Cheung, and Messom (2019, 51)

The first factor we examined was the relative advantage of AI compared to conventional
technology. Our findings reveal two ways for public organizations to
approach AI solutions – top-down through strategic initiatives or bottom-up for
technological reasons, the latter being more frequent in our cases. Usually, AI technologies
are chosen because conventional technologies are not suited to solving
existing problems:
‘[N]o one has solved this problem yet. [. . .] It then turned out that [it] was more
complex to solve than assumed. [. . .]. That’s when it occurred to us that deep learning
could be helpful because the scalability is different with neural networks’. (Case A,
Interview a)
For many of the projects, there was no initial intention of solving the problem
with AI:
‘At first, we did not start with the intention of using AI [. . .]. The intelligent
component was only added in the course of the project when we could no longer achieve
our goal with conventional technology’. (Case B, interview d)
In some of the analysed cases, however, the public organizations actively prepared
for a future enhanced by AI technologies (e.g. hiring specialists and aligning the data
infrastructure and the data strategy to this goal), representing a top-down approach.
When asked if the AI solution impacted existing business processes, results were
mixed. While interference with current processes was actively avoided in some cases,
most felt no impact on existing processes.
‘At the moment, I’m not interested in internal processes. [. . .]’ (Case A, interview c)
In some cases, organizations actively prepared for the adaptation of processes. Here, integration
into existing processes was a critical success factor for AI adoption:
‘Implementing something into existing processes is not easy. Implementing AI requires
different prerequisites: [. . .] high-quality data, the right infrastructure, [. . .] the right
APIs in place, etc. This can easily kill the business case of any AI component’. (Case B,
interview e)

4.3 Organizational factors


According to the TOE framework, organizational culture is a crucial factor (see
Table C2 in supplementary). When asked about top management support, all but
one respondent emphasized its importance for AI adoption (e.g. through guaranteed
funding, internal support, and clearing resistances).
When asked about active change management measures (e.g. actively addressing
fears about AI), some interviewees reported that these were important for overcoming
resistance from various stakeholders like management, employees, and end-users.
Resistance can stem from a lack of understandability and explainability of the AI
solution (e.g. workers not understanding how an AI prioritizes their work, which they
would like to know). Since AI projects often disrupt daily routines, it seems essential to
actively address these concerns.
‘We are in the middle of a big transformation process. [. . .] From a leadership point of
view, we address this process with a great focus on our employees. Only if the employees
are happy can customers be satisfied’. (Case E, interview k)
As another cultural dimension, innovation culture also plays a role in AI adoption,
and most interviewees stated that agile project management methods and a culture that
tolerates some failure would support AI adoption.

As a last cultural dimension, interviewees were asked if their organization possessed
an AI strategy. While some organizations had strategic documents promoting and
regulating the use of AI, others did not – and the AI projects emerged from
technological rather than strategic considerations.
We also noticed additional cultural aspects beyond the pre-defined coding scheme.
First, while operating with publicly funded mandates, risk-avoiding behaviour might
pose a challenge to AI adoption. Due to their novel character, AI projects are often
associated with risks:
‘We are operating in an administrative context [and a politically sensitive terrain]
where one is concerned with limiting risks’. (Case C, interview f)
Other organizational factors like organizational size were considered next. The largest
organizations in our study were also the most mature regarding AI adoption.
However, we cannot substantiate the general assumption that the larger the organization,
the greater its maturity. We also assessed available resources regarding the
budgets, employees, data, and the remaining organizational dimensions (project
structure, collaboration, and initiation) (see Table C3 in supplementary).
Unsurprisingly, interviewees stated that lack of funds could hinder AI projects,
although some met these economic challenges by seeking other internal or external
funding sources. Financing was also reported to influence the form of collaboration
with external partners. Where the organization made a clear investment tied to an
expected outcome, collaboration was closer; where the partner was not funded,
collaboration tended to be more fluid.
Access to data was not generally reported as challenging. Despite significant
variations among the cases, we could not identify an ideal project size in terms of
employee numbers. What was striking was the importance of mutual understanding
between employees and external partners. Similarly, in one case, technological
knowledge was promoted through internal events where employees presented their
work to interested colleagues. Each case we examined had at least one partner and
emphasized the importance of some technological understanding among employees
involved in the projects. Partners provided knowledge that was not otherwise present.
The initiative to collaborate could come from a project partner or the organization.
Partners were either public or private service providers or universities, and while
collaboration with academic partners was relatively informal, working with service
providers was usually regulated by contracts. The external service providers were
mainly small and highly specialized companies (with a few exceptions). Despite this,
collaboration was no guarantee for success, as one case reported. Generally, our
respondents said it was helpful to have a lean project organization that allowed goal-
orientated evolution:
‘Fortunately, there was no need to set up a large project organization [. . .]. Otherwise,
the project would probably not have succeeded so quickly’. (Case F, interview m)
During coding, we identified further patterns in the data – the communication and
intrinsic motivation of project members and the interviewees’ organizational affiliation
and proximity to the units affected by a solution seem important for AI adoption:
‘Usually, acceptance increases the further away you are from the affected units.
These units might not accept the solution, although it is supported by management’.
(Case A, interview c)

4.4 Environmental factors


To assess competitive pressure, we asked whether there were similar projects in
comparable organizations (see Table C4 in supplementary); in some cases, the solution
was unique, while in others similar solutions already existed elsewhere. Although
public organizations are not in competitive market environments, one organization
was actively preparing for potential future scenarios:
‘We expect the pressure to increase, although we are not currently in a competitive
situation’. (Case E, interview k)
Data protection was frequently mentioned as a regulatory challenge since any
potential threat might put a project on hold. Another factor was the unclear application
of regulations. As multiple interviewees stated, particularly in digital matters,
federal or cantonal law often leaves room for interpretation, making AI project
compliance challenging.
We also asked how operating in the public context influenced AI projects. Access to
financial and human resources was often cited as an issue. Budgeting processes in the
public context are usually rigid, and planned budgets restrict innovative and spontaneous
projects. Both access to existing IT personnel and recruiting new employees
were reported to be challenging. Recruiting can be difficult because public organizations have
the reputation of not being particularly innovative. Existing resources are often
unavailable long-term and cannot be used for unplanned AI projects:
‘Most of the IT resources have already been planned for years for the digitalization of
our core processes. This does not leave many resources for [. . .] innovation projects,
which inhibits our innovative capacity’. (Case C, interview f)
A further characteristic of the public sector seems to be the project management
method. While many digital projects use agile practices, public organizations often
insist on traditional project management that lacks the necessary trial-and-error cycles
for AI solutions. As one interviewee explained:
‘In the beginning, the difficulties were that our agile approach was rejected, arguing
that this was not possible within the federal government and that we had to work with [a
traditional project methodology] instead’. (Case D, interview h)
The interviewees did not name any other industry particularities, and some did not
feel like there were any at all, suggesting that industry requirements were of little relevance
for AI adoption in the cases.
Lastly, we asked our interviewees about customer readiness. Customer feedback was
not evident in some cases, which makes customer readiness hard to judge there; still,
none of the interviewees perceived customer readiness as an issue, although some
emphasized its importance.

4.5 Aggregation of the findings by AI maturity level


When contrasting the findings against the different maturity levels described in
Section 4.1, differences in the importance of the individual factors depending on
the organization’s maturity level became apparent (see Table 5). While for organizations
on the assessing level, technological factors are generally of medium
importance, they are more critical for organizations on the determined and
managed levels. The exception is ‘business processes’, which is of low relevance
for organizations at all levels.

Table 5. Findings aggregated by AI maturity level.


Dimension | Factor | Assessing level (Cases C, F, G, & H) | Determined level (Cases D & E) | Managed level (Cases A & B)
Technological factors | Relative advantage | Medium | High | High
Technological factors | Business case | Medium | High | High
Technological factors | Business processes | Low | Low | Low
Organizational factors | Top management support | Medium | High | High
Organizational factors | Change management | Low | High | Low
Organizational factors | Innovation capacity | Low | Medium | Medium
Organizational factors | Strategic alignment | Low | High | Low
Organizational factors | Organization size | Low | Low | Low
Organizational factors | Budget | Low | High | Medium
Organizational factors | Employees | Low | High | Low
Organizational factors | Data | Medium | Medium | Medium
Organizational factors | Project structure | High | Medium | Medium
Organizational factors | Collaboration | High | High | High
Organizational factors | Initiation | Medium | Low | Medium
Organizational factors | Communication | Low | Medium | Medium
Organizational factors | Organizational affiliation | n.a. | n.a. | High
Organizational factors | Intrinsic motivation | High | Medium | Medium
Environmental factors | Competitive pressure | Medium | Low | Low
Environmental factors | Government regulations | Low | Medium | Low
Environmental factors | Industry requirements | Low | Low | Low
Environmental factors | Customer readiness | Low | High | Low

For organizations at the assessing level, ‘project structure’, ‘collaboration’, and
‘intrinsic motivation’ are critical organizational factors. Twice as many factors were
rated as particularly important in organizations at the determined level – ‘top management
support’, ‘change management’, ‘strategic alignment’, ‘budget’, ‘employees’, and
‘collaboration’, while organizations at the managed level emphasized ‘top management
support’, ‘collaboration’, and ‘organizational affiliation’.
Overall, we found none of the environmental factors to be highly relevant. Only the
organizations at the determined level reported an influence of customer readiness on
AI adoption.
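The aggregation step behind Table 5 can be illustrated with a short sketch: per-case factor ratings are grouped by each case's maturity level and condensed into one label per level. The ratings below are invented placeholders and the median rule is an assumption for illustration; the study's actual aggregation was a qualitative judgement based on the coded material.

```python
# Illustrative sketch with invented placeholder ratings; not the study's data
# or its exact aggregation rule.
from collections import defaultdict
from statistics import median

LEVEL_BY_CASE = {"C": "assessing", "F": "assessing", "G": "assessing", "H": "assessing",
                 "D": "determined", "E": "determined",
                 "A": "managed", "B": "managed"}
SCALE = {"low": 1, "medium": 2, "high": 3}
LABEL = {1: "low", 2: "medium", 3: "high"}

# (case, factor) -> rating; placeholder values for two factors only.
ratings = {
    ("C", "collaboration"): "high", ("F", "collaboration"): "high",
    ("D", "collaboration"): "high", ("A", "collaboration"): "high",
    ("C", "budget"): "low", ("D", "budget"): "high", ("A", "budget"): "medium",
}

# Group numeric ratings by (maturity level, factor) and report the median label.
by_level = defaultdict(list)
for (case, factor), rating in ratings.items():
    by_level[(LEVEL_BY_CASE[case], factor)].append(SCALE[rating])

for (level, factor), values in sorted(by_level.items()):
    print(f"{level:10s} {factor:14s} {LABEL[round(median(values))]}")
```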

5 Discussion
This study explores factors that facilitate or hinder the adoption of AI projects in public
organizations. Our analysis is structured according to an AI-specific adaptation of the
TOE framework (Pumplun, Tauchert, and Heidt 2019). By considering its dimensions
separately for different levels of AI maturity (Alsheiabni, Cheung, and Messom 2019),
we have expanded this framework, which is the essential theoretical contribution of
this study. As illustrated above, this enables us to provide more nuanced insights by
capturing shifts in the importance of various factors of the TOE framework across
different levels of experience with AI technology in public organizations.
For organizations with low AI maturity (on the assessing level), a pattern emerges
across all cases, indicating that these organizations are mainly concerned with administrative
issues, such as finding the best way to launch the projects and attracting
intrinsically motivated staff and the right partners. Through the lens of the resource-
based view theory (Barney 2001), this can be explained: At this early stage of an area
that may be of future strategic importance, the organization seeks to acquire the
necessary initial resources and capabilities and creates an appropriate organizational
structure to deploy them (Kraaijenbrink, Spender, and Groen 2010). Despite their lack
of experience, three out of four cases successfully implemented AI-based conversational
agents with the help of external partners, confirming that despite low AI
maturity, successful adoption of stand-alone and comparatively simple AI solutions
is possible. The fourth case is still in the process of implementing a more complex
people allocation AI project assisted by an external partner. This underscores the
importance of finding partners possessing the resources and skills the public organization
lacks (Desouza, Dawson, and Chenok 2020).
However, not all forms of collaboration may be equally likely to succeed in public
settings. In the smart city context and drawing on agency and stewardship theory,
Neumann et al. (2019) found that collaborations that are based on stewardship and
aligned interests, and that are mostly voluntary, are more likely to produce public value-oriented results
than profit-oriented, mandate-based, agency-type collaborations. However, in three
out of four cases at the assessing level, the cooperation was based on mandates,
indicating a risk that external partners may be more interested in financial reward
than outcome (Davis, Schoorman, and Donaldson 1997). The need to rally intrinsically
motivated staff behind the projects is also crucial at this level, as it is often individual
innovative forerunners who initiate the projects, indicating the importance of public
service motivation (Ritz, Brewer, and Neumann 2016). These findings lead us to the
following theoretical proposition:

Proposition 1: Organizations that are relatively inexperienced in AI technologies depend
on motivated staff and external partners to implement the AI project. Less complex AI
applications such as conversational agents often serve as an exploratory application of AI
in such organizations.

For organizations with intermediate AI maturity (at the determined level), we
observe a shift in the pattern of relevant factors compared to organizations with low
AI maturity. Technological factors, in particular, become more relevant. The AI
projects at this level address key challenges within the organizations (e.g. automating
processes, optimizing workflows), increasing the importance of the relative advantage
of AI and its relevance to the business case (Hofmann et al. 2020). However, this
simultaneously increases complexity and requires more profound internal knowledge,
which is presumably why we observe a tendency towards less dependency on external
partners and more insourcing or back-sourcing (Moe et al. 2014) in this group.
Within the organizational factors, importance shifts towards cultural and
resource-related factors, including top management support, change management,
strategic alignment, budgeting, and employees – all of which are elements of
strategic management (Ansoff et al. 2019) – while collaboration remains an important
but less decisive factor. Therefore, our results support previous studies emphasizing
the importance of strategic management in AI adoption, particularly at the
determined level, since management can provide resources and deal with resistance
to change (e.g. Alsheiabni, Cheung, and Messom 2019; Pumplun, Tauchert, and
Heidt 2019). This finding is in line with resource-based view theory in the public
sector context, which postulates that for initiatives with the potential to improve the
organization’s performance and enhance public value, it is vital for management to
allocate the necessary resources (Bryson, Ackermann, and Eden 2007).
Environmental factors remain relatively unimportant, except for customer readiness.
In line with Carrasco et al. (2019), our results for organizations on this level
illustrate that the customer perspective is relevant for AI adoption. Success will
depend on whether internal or external customers are willing or able to use the new
AI-enabled services: ‘public services are best conceptualised as service systems in
which users co-produce and co-design’ as ‘public services are subject to public
scrutiny’ (Laitinen, Kinder, and Stenvall 2018, 58). This is also underscored by the agile practices observed in the adoption processes of the analysed cases, which emphasize customer centricity (Mergel, Ganapati, and Whitford 2020), as agility allows organizations to respond faster to citizen needs (Chatfield and Reddick 2018).
Another important factor influencing citizen acceptance is the understandability and explainability of AI systems: explainable and understandable AI systems help to create transparency, decipher causalities, strengthen trust in unbiased and fair systems, and increase security (Hagras 2018).
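To make the notion of explainability more concrete, the following minimal sketch illustrates one common technique, permutation feature importance, which reports how much a model's predictive accuracy drops when each input is shuffled and can thus help administrators communicate which factors drive an AI-supported decision. This is a sketch for illustration only and not part of the studied cases; the data, model, and feature labels are synthetic and hypothetical.

# Minimal, hypothetical illustration of one explainability technique:
# permutation importance computed for a simple model on synthetic data.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; no real administrative data is used here.
X, y = make_classification(n_samples=500, n_features=4, n_informative=2, random_state=42)
feature_names = ["income", "household_size", "prior_claims", "region_code"]  # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_train, y_train)

# Shuffle each feature in turn and measure the average drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=42)
for name, drop in sorted(zip(feature_names, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: mean accuracy drop {drop:.3f}")

Feature-level explanations of this kind are only one building block of understandable AI, but they show how transparency expectations can be addressed technically rather than treated as an afterthought.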
Our findings for organizations on the determined level lead us to the following
theoretical proposition:

Proposition 2: Organizations with intermediate experience with AI technologies require substantial strategic management support to allocate the key resources needed to move beyond the exploratory stage. As AI applications begin to address core business functions and complexity rises, a greater share of the implementation is done internally, and the customer perspective gains importance.

Compared to organizations at the determined level, we notice no shift in the
importance of technological factors in organizations with higher AI maturity (at the
managed level). As the number of AI projects and their complexity increase, we
observe a gradual intra-organizational diffusion of AI technology (cf. de Vries,
Tummers, and Bekkers 2018). Interestingly, both cases on the managed level are
larger state-owned enterprises that partially operate in market environments, suggest­
ing that such companies may be frontrunners within the public sector (cf. Neumann
et al. 2019). Technological innovations typically happen in stages (Mergel and
Bretschneider 2013), with innovators leading the way (Rogers 1983). If early adopters
introduce AI projects, other public organizations might mimic their behaviour
(March and Olsen 1989). The relatively low relevance of existing processes is surprising and contrary to earlier findings (Alsheibani et al. 2020). Our respondents were possibly aware of the potential influence of existing processes but were able to avoid complex integration (Hasselbring 2000); alternatively, this aspect may only gain importance at the optimized level of AI maturity (Alsheiabni, Cheung, and Messom 2019).
Organizational factors appear to be less critical at this level, possibly because the required resource allocation and organization take place in earlier stages; exceptions are top management support and collaboration, which remain vital. In these cases,
significant internal resources are available to develop AI solutions, even if partner­
ships are still used for complex challenges. Additionally, organizational affiliation –
which refers to potential conflicts between the organizational units developing and
using the AI solutions – is more important. As AI becomes more widespread, issues
such as the ‘not-invented-here syndrome’ (Antons and Piller 2015) and technology
acceptance issues (Marangunić and Granić 2015) may become more relevant. Again,
we find little evidence of the relevance of environmental factors at this level, leading
us to the following theoretical proposition:

Proposition 3: More AI-experienced organizations may be viewed by other public organizations as inspiring early adopters. State-owned enterprises may play a significant role in this, as they often possess more innovation resources than other public organizations and can develop complex AI solutions in-house. However, with the intra-organizational diffusion of AI, resistance may increase.

While ethical aspects of AI have recently received much scholarly attention, surprisingly, none of our respondents mentioned ethics in the unstructured parts of the interviews. Given the inherent dangers of the reinforcement of inequalities (Eubanks
2017) and threats to democracy (O’Neil 2016), we consider it imperative for public
administrators to proactively ensure their AI applications safeguard public values such
as efficiency, fairness, accountability, transparency, and human responsiveness (Schiff,
Jackson Schiff, and Pierson 2021). Specific ethical concerns such as biases in AI-driven
decision-making are already noticeable in the political arena (Manyika, Silberg, and
Presten 2019); however, ‘[g]overnance of emerging technologies is a highly complex
endeavour’ (Ulnicane et al. 2021, 85). We anticipate that regulations and a closer
monitoring of AI initiatives in the public sphere will be introduced soon (Desouza,
Dawson, and Chenok 2020; Sun and Medaglia 2019; Wirtz, Weyerer, and Geyer 2019),
leading to an increased influence of these factors on AI adoption, as has been the case
with social media regulations (Mergel 2015).

Strengths and Limitations of the Study


One strength of this study is its use of exploratory qualitative methods to deepen our understanding of the nascent and important topic of AI adoption in the public sector and to offer insights into the factors at play when public organizations adopt AI. A second strength is that it treats AI adoption as a process, distinguishing between different adoption stages. Our interdisciplinary approach, connecting public
administration and information systems research, is the third strength. Lastly, as this
study focuses on how best to implement AI, our findings, theoretical contributions,
and propositions support practitioners by answering relevant questions about the use
of AI in public organizations, heeding the call by Pencheva, Esteve, and Jankin
Mikhaylov (2020).
This study also has its limitations. For example, all cases were Swiss and we did not
distinguish between different types of AI applications in the public sector, thereby
impeding broader generalization, especially beyond Switzerland and other developed
countries. Future studies on AI adoption should focus more on differences between organizations in various nations (see Mikalef et al. 2021) and between whole nations, as in the case of e-government (Lee, Chang, and Stokes Berry 2011), and should consider AI adoption from an individual citizen's perspective (see also Wirtz, Langer, and Fenner 2021). Furthermore, our results reflect the views of those involved in the projects and
therefore constitute a form of self-assessment.
Subsequent research might focus on long-term evaluations involving more stake­
holders and striving for more generalizable results. Since our study deliberately focuses
on the adoption factors of AI in the public sector, it does not further consider the application risks within the public sector (Eubanks 2017; Janssen and Kuk 2016;
Maciejewski 2017; O’Neil 2016), nor does it discuss the question of how public
organizations might deal with algorithmic transparency (Giest et al. 2020).

6 Conclusion
AI could be described as a double-edged sword for the public sector. It has great potential to improve the inner workings of public organizations as well as key outcomes such as the quality of public services and public value creation. Conversely, AI implementation is more complex than other IT innovations, and many public organizations face sector-specific obstacles. Against this backdrop, the present study supports the call for more research on the drivers of and barriers to AI adoption by shedding light on the relevant factors and extending the TOE framework with a time dimension that captures different stages of organizational AI maturity.
At the same time, public organizations should never lose sight of the broader implications of AI technology, such as fairness and accountability. Lastly, AI should be
adopted for the right reasons, as one of our interviewees succinctly summarized:
‘AI is a means to solve previously unsolved problems, not for solving problems you first
have to create’. (Case A, interview c)

Notes
1. See Pencheva, Esteve, and Jankin Mikhaylov (2020) for a review of literature about big data in
the public sector.
2. In the case of public organizations, government regulations could also be seen as an organizational factor. To be in line with previous research, we chose to keep this classification as an environmental factor.
3. The citations from the interviews are labelled a–h according to the interview in which they originated, as listed in Table 3 (Case description).

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors
Oliver Neumann is an Assistant Professor in the Swiss Graduate School of Public Administration
(IDHEAP) at the University of Lausanne (Switzerland). He received his PhD in Management at the
University of Bern, where he also worked as a postdoctoral researcher in Information Systems. His
current research interests include public sector innovation, strategy, organization, behavioural public
administration, and digital transformation.
Katharina Guirguis is a research associate at the Institute of Public Management at the ZHAW School
of Management and Law in Winterthur (Switzerland) and an external doctoral student at the Swiss
Graduate School of Public Administration (IDHEAP) at the University of Lausanne. Her main
research interests include the adoption processes of AI in public organizations as well as typical use cases for AI in the public sector.
Reto Steiner is Dean and Professor for Public Management at ZHAW School of Management and Law
in Winterthur (Switzerland). He received his PhD in Management at the University of Bern, where he
also worked as a Professor. He has been Visiting Professor at the University of Rome Tor Vergata, at
the Lee Kuan Yew School of Public Policy in Singapore and at the University of Hong Kong. His
current research interests are the organizational design of the public sector, public corporate govern­
ance, and local and regional governance.

ORCID
Oliver Neumann http://orcid.org/0000-0002-0988-9729
Katharina Guirguis http://orcid.org/0000-0003-3250-007X
Reto Steiner http://orcid.org/0000-0003-0260-3094

References
Aboelmaged, Mohamed Gamal. 2014. “Predicting E-readiness at firm-level: an analysis of technolo­
gical, organizational and environmental (TOE) effects on e-maintenance readiness in manufactur­
ing firms.” International Journal of Information Management 34 (5): 639–651. doi:10.1016/j.
ijinfomgt.2014.05.002.
Agarwal, P. K. 2018. "Public Administration Challenges in the World of AI and Bots." Public Administration Review 78 (6): 917–921. doi:10.1111/puar.12979.
Ahonen, Pertti, and Tero Erkkilä. 2020. “Transparency in algorithmic decision-making: ideational
tensions and conceptual shifts in Finland.” Information Polity 25 (4): 419–432. doi:10.3233/IP-
200259. Edited by Sarah Giest and Stephan Grimmelikhuijsen.
Alon-Barkat, Saar, and Madalina Busuioc. 2022. "Human-AI interactions in public sector decision-making: 'Automation Bias' and 'Selective Adherence' to algorithmic advice." Journal of Public Administration Research and Theory.
Alsheibani, Sulaiman, Yen Cheung, and Chris Messom. 2020. "Re-thinking the competitive landscape of artificial intelligence." Proceedings of the 53rd Hawaii International Conference on System Sciences, Hawaii, 10.
Alsheibani, Sulaiman Abdallah, Yen Cheung, Chris Messom, and Mazoon Alhosni. 2020. "Winning AI Strategy: Six-Steps to Create Value from Artificial Intelligence." AMCIS 11. https://aisel.aisnet.org/amcis2020/adv_info_systems_research/adv_info_systems_research/1
Androutsopoulou, Aggeliki, Nikos Karacapilidis, Euripidis Loukis, and Yannis Charalabidis. 2019. "Transforming the communication between citizens and government through AI-guided chatbots." Government Information Quarterly 36 (2): 358–367. doi:10.1016/j.giq.2018.10.001.
Ansoff, H. Igor, Rick Ansoff, Roxanne Helm-Stevens, Daniel Kipley, and A. O. Lewis. 2019. Implanting strategic management. 3rd ed. Cham: Springer International Publishing / Palgrave Macmillan. doi:10.1007/978-3-319-99599-1.
Antons, David, and Frank T. Piller. 2015. “Opening the black box of “not invented here”: attitudes,
decision biases, and behavioral consequences.” Academy of Management Perspectives 29 (2):
193–217. doi:10.5465/amp.2013.0091.
Baker, J. 2012. "The Technology–Organization–Environment Framework." In Information Systems Theory, Integrated Series in Information Systems 28, edited by Y. Dwivedi, M. Wade, and S. Schneberger, 231–245. New York: Springer. doi:10.1007/978-1-4419-6108-2_12.
Bannister, Frank, and Regina Connolly. 2020. “Administration by Algorithm: a risk management
framework.” Information Polity 25 (4): 471–490. doi:10.3233/IP-200249. Edited by Sarah Giest and
Stephan Grimmelikhuijsen.
Barney, Jay B. 2001. “Resource-based theories of competitive advantage: a ten-year retrospective
on the resource-based view.” Journal of Management 27 (6): 643–650. doi:10.1177/
014920630102700602.
Bason, Christian. 2018. Leading public sector innovation: co-creating for a better society. 1st ed. Bristol:
Bristol University Press. doi:10.2307/j.ctt9qgnsd.
Boer, Noortje de, and Nadine Raaphorst. 2021. “Automation and discretion: explaining the effect of
automation on how street-level bureaucrats enforce.” Public Management Review 1–21.
doi:10.1080/14719037.2021.1937684.
Bovens, Mark, and Stavros Zouridis. 2002. “From street-level to system-level bureaucracies: how
information and communication technology is transforming administrative discretion and con­
stitutional control.” Public Administration Review 62 (2): 174–184. doi:10.1111/0033-3352.00168.
Bryson, John M., Fran Ackermann, and Colin Eden. 2007. “Putting the resource-based view of strategy
and distinctive competencies to work in public organizations.” Public Administration Review
67 (4): 702–717. doi:10.1111/j.1540-6210.2007.00754.x.
Bullock, Justin B. 2019. “Artificial intelligence, discretion, and bureaucracy.” The American Review of
Public Administration 49 (7): 751–761. doi:10.1177/0275074019856123.
Bundy, Alan. 2017. “Preparing for the future of artificial intelligence.” AI & SOCIETY 32 (2): 285–287.
doi:10.1007/s00146-016-0685-0.
Campion, Averill, Mila Gasco-Hernandez, Slava Jankin Mikhaylov, and Marc Esteve. 2020. "Overcoming the challenges of collaboratively adopting artificial intelligence in the public sector." Social Science Computer Review, December. doi:10.1177/0894439320979953.
Carrasco, Miguel, Steven Mills, Adam Whybrew, and Adam Jura. 2019. “The citizen’s perspective on
the use of AI in government.” BCG Global. Accessed on 02 March 2021. https://www.bcg.com/
publications/2019/citizen-perspective-use-artificial-intelligence-government-digital-
benchmarking
Chatfield, Akemi Takeoka, and Christopher G. Reddick. 2018. “Customer agility and responsiveness
through big data analytics for public value creation: a case study of Houston 311 on-demand
services.” Government Information Quarterly 35 (2): 336–347. doi:10.1016/j.giq.2017.11.002.
Chatterjee, Sheshadri, Nripendra P. Rana, Yogesh K. Dwivedi, and Abdullah M. Baabdullah. 2021. "Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model." Technological Forecasting and Social Change 170 (September): 120880. doi:10.1016/j.techfore.2021.120880.
Chen, Hong, Li Ling, and Yong Chen. 2021. “Explore success factors that impact artificial intelligence
adoption on telecom industry in China.” Journal of Management Analytics 8 (1): 1–33. doi:10.1080/
23270012.2020.1852895.
Chen, Jiyao, Richard M. Walker, and Mohanbir Sawhney. 2020. “Public service innovation: a typology.”
Public Management Review 22 (11): 1674–1695. doi:10.1080/14719037.2019.1645874.
Cheng, Jianxin, Haoming Luo, Wenyi Lin, and Guopeng Hu. 2021. "Pros and cons of artificial intelligence—lessons from E-government services in the COVID-19 Pandemic." In 2021 2nd International Conference on Artificial Intelligence and Education (ICAIE), 167–173. Dali, China: IEEE. doi:10.1109/ICAIE53562.2021.00042.
Criado, J. Ignacio, and J. Ramon Gil-Garcia. 2019. “Creating Public Value Through Smart Technologies
And Strategies: From Digital Services To Artificial Intelligence And Beyond.” International Journal of
Public Sector Management 32 (5): 438–450. doi:10.1108/IJPSM-07-2019-0178.
Criado, J. Ignacio, Rodrigo Sandoval-Almazan, David Valle-Cruz, and Edgar A. Ruvalcaba-Gómez. 2020. "Chief information officers' perceptions about artificial intelligence." First Monday. doi:10.5210/fm.v26i1.10648.
Criado, J. Ignacio, Julián Valero, and Julián Villodre. 2020. "Algorithmic transparency and bureaucratic discretion: the case of SALER early warning system." Information Polity 25 (4): 449–470. doi:10.3233/IP-200260. Edited by Sarah Giest and Stephan Grimmelikhuijsen.
Damanpour, F., and M. Schneider. 2009. “Characteristics of innovation and innovation adoption in
public organizations: assessing the role of managers.” Journal of Public Administration Research
and Theory 19 (3): 495–522. doi:10.1093/jopart/mun021.
Davis, Fred D. 1989. “Perceived usefulness, perceived ease of use, and user acceptance of information
technology.” MIS Quarterly 13 (3): 319. doi:10.2307/249008.
Davis, James H., David F. Schoorman, and Lex Donaldson. 1997. “Toward a stewardship theory of
management.” Academy of Management Review 22 (1): 20–47. doi:10.5465/amr.1997.9707180258.
De la Garza, Alejandro. 2020. “States’ automated systems are trapping citizens in bureaucratic night­
mares with their lives on the line.” TIME. 28 May 2020. https://time.com/5840609/algorithm-
unemployment/
Demlehner, Quirin, and Sven Laumer. 2020.”Shall we use it or not? explaining the adoption of
artificial intelligence for car manufacturing purposes.” In Proceedings of the 28th European
Conference on Information Systems (ECIS). An Online AIS Conference, June 15–17. https://aisel.
aisnet.org/ecis2020_rp/177
Desouza, Kevin C., Gregory S. Dawson, and Daniel Chenok. 2020. “Designing, developing, and
deploying artificial intelligence systems: lessons from and for the public sector.” Business
Horizons 63 (2): 205–213. doi:10.1016/j.bushor.2019.11.004.
Eubanks, Virginia. 2017. Automating inequality: how high-tech tools profile, police, and punish the
poor. First ed. New York, NY: St. Martin’s Press.
European Commission. 2020. “Egovernment Benchmark.” Accessed on 23 August 2020. https://
digital-strategy.ec.europa.eu/en/library/egovernment-benchmark-2020-egovernment-works-
people
Fosso Wamba, Samuel, Ransome Epie Bawack, Cameron Guthrie, Maciel M. Queiroz, and Kevin
Daniel André Carillo. 2021. “Are we preparing for a good AI Society? a bibliometric review and
research Agenda.” Technological Forecasting and Social Change 164 (March): 120482. doi:10.1016/j.
techfore.2020.120482.
Gartner. 2021. "Artificial Intelligence (AI)." Gartner Information Technology Glossary. Accessed on 15 February 2021. https://www.gartner.com/en/information-technology/glossary/artificial-intelligence
Giest, Sarah, and Stephan Grimmelikhuijsen. 2020. "Introduction to special issue algorithmic transparency in government: towards a multi-level perspective." Information Polity 25 (4): 409–417. doi:10.3233/IP-200010.
Greenhalgh, Trisha, Glenn Robert, Fraser Macfarlane, Paul Bate, and Olivia Kyriakidou. 2004.
“Diffusion of Innovations In Service Organizations: Systematic Review And Recommendations.”
The Milbank Quarterly 82 (4): 581–629. doi:10.1111/j.0887-378X.2004.00325.x.
Grimmelikhuijsen, S G., and M K. Feeney. 2017. “Developing and testing an integrative framework for
open government adoption in local governments.” Public Administration Review 77 (4): 579–590.
doi:10.1111/puar.12689.
Guirguis, Katharina, L E. Pleger, Simone Dietrich, Alexander Mertes, and Caroline Brüesch. 2021.
“Datenschutz in der Schweiz – Eine quantitative analyse der gesellschaftlichen bedenken und
erwartungen an den staat.” Yearbook of Swiss Administrative Sciences 12 (1): 16. doi:10.5334/
ssas.153.
Hofmann, Peter, Jan Jöhnk, Dominik Protschky, and Nils Urbach. 2020. “Developing Purposeful AI
Use Cases–a Structured Method and Its Application in Project Management.” In WI2020 zentrale
tracks, 33–49. Potsdam, Germany: GITO Verlag. doi:10.30844/wi_2020_a3-hofmann.
Hadwer, Ali Al, Madjid Tavana, Dan Gillis, and Davar Rezania. 2021. "A systematic review of organizational factors impacting cloud-based technology adoption using technology-organization-environment framework." Internet of Things 15 (September): 100407. doi:10.1016/j.iot.2021.100407.
Hagras, Hani. 2018. “Toward Human-Understandable, Explainable AI.” Computer 51 (9): 28–36.
doi:10.1109/MC.2018.3620965.
Hameed, M A., Steve Counsell, and Stephen Swift. 2012. “A conceptual model for the process of it
innovation adoption in organizations.” Journal of Engineering and Technology Management 29 (3):
358–390. doi:10.1016/j.jengtecman.2012.03.007.
Hasselbring, Wilhelm. 2000. “Information system integration.” Communications of the ACM 43 (6): 32–38.
Hitz-Gamper, B S., Oliver Neumann, and Matthias Stürmer. 2019. “Balancing control, usability and
visibility of linked open government data to create public value.” International Journal of Public
Sector Management 32 (5): 451–466. doi:10.1108/IJPSM-02-2018-0062.
Janssen, Marijn, and George Kuk. 2016. “The challenges and limits of big data algorithms in techno­
cratic governance.” Government Information Quarterly 33 (3): 371–377. doi:10.1016/j.
giq.2016.08.011.
Jöhnk, Jan, Malte Weißert, and Katrin Wyrtki. 2021. "Ready or not, AI comes – an interview study of organizational AI readiness factors." Business & Information Systems Engineering 63 (1): 5–20. doi:10.1007/s12599-020-00676-7.
Bullock, Justin B., Matthew M. Young, and Yi-Fan Wang. 2020. "Artificial intelligence, bureaucratic form, and discretion in public service." Information Polity 25 (4): 491–506. doi:10.3233/IP-200223. Edited by Sarah Giest and Stephan Grimmelikhuijsen.
Kankanhalli, Atreyi, Yannis Charalabidis, and Sehl Mellouli. 2019. “IoT and AI for smart government:
a research Agenda.” Government Information Quarterly 36 (2): 304–309. doi:10.1016/j.
giq.2019.02.003.
Kaplan, Andreas, and Michael Haenlein. 2019. “Siri, Siri, in my hand: who’s the fairest in the land? on
the interpretations, illustrations, and implications of artificial intelligence.” Business Horizons
62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004.
Kernaghan, Kenneth. 2014. "The rights and wrongs of robotics: ethics and robots in public organizations." Canadian Public Administration 57 (4): 485–506. doi:10.1111/capa.12093.
Kraaijenbrink, Jeroen, J.-C. Spender, and A J. Groen. 2010. “The Resource-based view: a review and
assessment of its critiques.” Journal of Management 36 (1): 349–372. doi:10.1177/
0149206309350775.
Lai, Pc. 2017. “The literature review of technology adoption models and theories for the novelty
technology.” Journal of Information Systems and Technology Management 14 (1). doi:10.4301/
S1807-17752017000100002.
Laitinen, Ilpo, Tony Kinder, and Jari Stenvall. 2018. “Co-Design and action learning in local public
services.” Journal of Adult and Continuing Education 24 (1): 58–80. doi:10.1177/
1477971417725344.
Lee, C-P., Kaiju Chang, and F S. Stokes Berry. 2011. “Testing the development and diffusion of
E-Government and E-Democracy: A Global Perspective.” Public Administration Review 71 (3):
444–454. doi:10.1111/j.1540-6210.2011.02228.x.
Ma, L. 2014. “Diffusion and assimilation of government microblogging: evidence from Chinese cities.”
Public Management Review 16 (2): 274–295. doi:10.1080/14719037.2012.725763.
Maciejewski, Mariusz. 2017. “To do more, better, faster and more cheaply: using big data in public
administration.” International Review of Administrative Sciences 83 (1_suppl): 120–135.
doi:10.1177/0020852316640058.
Manyika, James, Jake Silberg, and Brittany Presten. 2019. “What do we do about the biases in AI?.” Harvard
Business Review, 25 October 2019. https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
Marangunić, Nikola, and Andrina Granić. 2015. “Technology acceptance model: a literature review
from 1986 to 2013.” Universal Access in the Information Society 14 (1): 81–95. doi:10.1007/s10209-
014-0348-1.
March, James G., and Johan P. Olsen. 1989. Rediscovering institutions: the organizational basis of politics. 1st ed. New York: Free Press.
Margetts, Helen, and Cosmina Dorobantu. 2019. “Rethink Government with AI.” Nature 568 (7751):
163–165. doi:10.1038/d41586-019-01099-5.
Martens, Bertin. 2018. “The importance of data access regimes for artificial intelligence and machine
learning.” SSRN Electronic Journal. Journal. doi:10.2139/ssrn.3357652.
Mayring, Philipp. 2014. Qualitative Content Analysis: Theoretical Foundation, Basic Procedures and
Software Solution. Klagenfurt. https://nbn-resolving.org/urn:nbn:de:0168-ssoar-395173
Medaglia, Rony, J. Ramon Gil-Garcia, and T. A. Pardo. 2021. "Artificial intelligence in government: taking stock and moving forward." Social Science Computer Review. doi:10.1177/08944393211034087.
Mehr, Hila. 2017. "Artificial Intelligence for Citizen Services and Government." Cambridge, MA: Ash Center for Democratic Governance and Innovation, Harvard Kennedy School, August, 1–12.
Meijer, Albert, Lukas Lorenz, and Martijn Wessels. 2021. "Algorithmization of bureaucratic organizations: using a practice lens to study how context shapes predictive policing systems." Public Administration Review, June. doi:10.1111/puar.13391.
Meijer, Albert, and Martijn Wessels. 2019. “Predictive policing: review of benefits and drawbacks.”
International Journal of Public Administration 42 (12): 1031–1039. doi:10.1080/
01900692.2019.1575664.
Melitski, J., D. Gavin, and J. Gavin. 2010. "Technology adoption and organizational culture in public organizations." International Journal of Organization Theory & Behavior 13 (4): 546–568. doi:10.1108/IJOTB-13-04-2010-B005.
Mergel, Ines. 2015. "Designing Social Media Strategies and Policies." In Handbook of Public Administration, edited by James L. Perry and Robert K. Christensen, 456–468. Jossey-Bass/Wiley.
Mergel, I., and S I. Bretschneider. 2013. “A three-stage adoption process for social media use in
government.” Public Administration Review 73 (3): 390–400. doi:10.1111/puar.12021.
Mergel, Ines, Sukumar Ganapati, and Andrew B. Whitford. 2020. “Agile: a new way of governing.”
Public Administration Review. doi:10.1111/puar.13202.
Mikalef, Patrick, Kristina Lemmer, Cindy Schaefer, Maija Ylinen, Siw Olsen Fjørtoft, Hans Yngvar Torvatn, Manjul Gupta, and Bjoern Niehaves. 2021. "Enabling AI capabilities in government agencies: a study of determinants for European municipalities." Government Information Quarterly, June, 101596. doi:10.1016/j.giq.2021.101596.
Moe, Nils Brede, Darja Šmite, Geir Kjetil Hanssen, and Hamish Barney. 2014. "From offshore outsourcing to insourcing and partnerships: four failed outsourcing attempts." Empirical Software Engineering 19 (5): 1225–1258. doi:10.1007/s10664-013-9272-x.
Mueller, Sean. 2011. “The politics of local autonomy: measuring cantonal (de)centralisation in
Switzerland.” Space and Polity 15 (3): 213–239. doi:10.1080/13562576.2011.692579.
Nam, Kichan, Christopher S. Dutt, Prakash Chathoth, Abdelkader Daghfous, and M. Sajid Khan.
2020.“The adoption of artificial intelligence and robotics in the hotel industry: prospects and
challenges.” Electronic Markets, October. doi:10.1007/s12525-020-00442-3.
Neumann, Oliver, Christian Matt, Benedikt Simon Hitz-Gamper, Lisa Schmidthuber, and Matthias Stürmer. 2019. "Joining forces for public value creation? exploring collaborative innovation in smart city initiatives." Government Information Quarterly 36 (4): 101411. doi:10.1016/j.giq.2019.101411.
Newman, Joshua, Michael Mintrom, and Deirdre O’Neill. 2022. “Digital technologies, artificial
intelligence, and bureaucratic transformation.” Futures 136 (February): 102886. doi:10.1016/j.
futures.2021.102886.
O’Neil, Cathy. 2016. Weapons of math destruction: how big data increases inequality and threatens democracy. First ed. New York: Crown.
Oliveira, Tiago, and Maria Fraga Martins. 2011. “Literature Review of Information Technology Adoption
Models at Firm Level.” The Electronic Journal Information Systems Evaluation 14 (1): 110–121.
Oxford Insights. 2020. “Government AI Readiness Index 2020.” Accessed on 23 August 2020. https://
www.oxfordinsights.com/government-ai-readiness-index-2020
Peeters, Rik. 2020. "The agency of algorithms: understanding human-algorithm interaction in administrative decision-making." Information Polity 25 (4): 507–522. doi:10.3233/IP-200253. Edited by Sarah Giest and Stephan Grimmelikhuijsen.
Peeters, Rik, and Marc Schuilenburg. 2018. “Machine justice: governing security through the bureau­
cracy of Algorithms.” Information Polity 23 (3): 267–280. doi:10.3233/IP-180074.
Pencheva, Irina, Marc Esteve, and S J. Jankin Mikhaylov. 2020. “Big data and AI – A Transformational
Shift For Government: So, What Next For Research?” Public Policy and Administration 35 (1):
24–44. doi:10.1177/0952076718780537.
Pumplun, Luisa, Christoph Tauchert, and Margareta Heidt. 2019. "A new organizational chassis for artificial intelligence – exploring organizational readiness factors." In Proceedings of the 27th European Conference on Information Systems (ECIS), Stockholm and Uppsala, Sweden, June 8–14, 2019. https://aisel.aisnet.org/ecis2019_rp/106
Rentsch, Carole, and Matthias Finger. 2015. "Yes, no, maybe: the ambiguous relationships between state-owned enterprises and the state." Annals of Public and Cooperative Economics 86 (4): 617–640. doi:10.1111/apce.12096.
Ritz, A., G A. Brewer, and Oliver Neumann. 2016. “Public service motivation: a systematic literature
review and outlook.” Public Administration Review 76 (3): 414–426. doi:10.1111/puar.12505.
Rogers, Everett M. 1983. Diffusion of Innovations. 3rd ed. New York : London: Free Press; Collier
Macmillan.
Rogers, Everett M. 1995. “Diffusion of Innovations: Modifications of a Model for
Telecommunications.” In Die diffusion von innovationen in der telekommunikation, edited by
Matthias-W. Stoetzer and Alwin Mahler, 25–38. Berlin, Heidelberg: Springer. doi:10.1007/978-
3-642-79868-9_2.
Russell, Stuart J., and Peter Norvig. 2021. Artificial Intelligence: A Modern Approach. Fourth ed. Pearson Series in Artificial Intelligence. Hoboken: Pearson.
Schaefer, Cindy, Kristina Lemmer, Samy Kret, Maija Ylinen, Patrick Mikalef, and Bjoern Niehaves. 2021. "Truth or Dare? – How Can We Influence the Adoption of Artificial Intelligence in Municipalities?" In Proceedings of the 54th Hawaii International Conference on System Sciences, 2347–2356. Maui, Hawaii.
Schiff, Daniel S., Kaylyn Jackson Schiff, and Patrick Pierson. 2021. "Assessing public value failure in government adoption of artificial intelligence." Public Administration, April. doi:10.1111/padm.12742.
Sousa, Weslei Gomes de, Elis Regina Pereira de Melo, Paulo Henrique De Souza Bermejo,
Rafael Araújo Sousa Farias, and A O. Oliveira Gomes. 2019. “How and where is artificial intelli­
gence in the public sector going? a literature review and research Agenda.” Government
Information Quarterly 36 (4): 101392. doi:10.1016/j.giq.2019.07.004.
Stenberg, L., and S. Nilsson. 2020. "Factors influencing readiness of adopting AI: A qualitative study of how the TOE framework applies to AI adoption in governmental authorities." Dissertation. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279583
Alsheiabni, Sulaiman, Yen Cheung, and Chris Messom. 2019. "Towards an Artificial Intelligence Maturity Model: From Science Fiction to Business Facts." In Proceedings of the Pacific Asia Conference on Information Systems (PACIS 2019), Xi'an, China, July 8–12, Paper 46.
Sun, T Q, and Rony Medaglia. 2019. “Mapping the challenges of artificial intelligence in the public
sector: evidence from public healthcare.” Government Information Quarterly 36 (2): 368–383.
doi:10.1016/j.giq.2018.09.008.
Thong, J.Y.L, and C.S. Yap. 1995. “CEO Characteristics, Organizational Characteristics and
Information Technology Adoption in Small Businesses.” Omega 23 (4): 429–442. doi:10.1016/
0305-0483(95)00017-I.
Tornatzky, Louis G., Mitchell Fleischer, and Alok K. Chakrabarti. 1990. The Processes of Technological Innovation. Issues in Organization and Management Series. Lexington, MA: Lexington Books.
Ulnicane, Inga, Damian Okaibedi Eke, William Knight, George Ogoh, and Bernd Carsten Stahl. 2021. "Good governance as a response to discontents? Déjà vu, or lessons for AI from other emerging technologies." Interdisciplinary Science Reviews 46 (1–2): 71–93. doi:10.1080/03080188.2020.1840220.
Venkatesh, Viswanath, Michael G. Morris, Gordon B. Davis, and Fred D. Davis. 2003. "User Acceptance of Information Technology: Toward a Unified View." MIS Quarterly 27 (3): 425–478. doi:10.2307/30036540.
Vries, Hanna de, Lars Tummers, and Victor Bekkers. 2018. “The diffusion and adoption of public
sector innovations: a meta-synthesis of the literature.” Perspectives on Public Management and
Governance 1 (3): 159–176. doi:10.1093/ppmgov/gvy001.
Walker, R. M. 2007. “An empirical evaluation of innovation types and organizational and environ­
mental characteristics: towards a configuration framework.” Journal of Public Administration
Research and Theory 18 (4): 591–615. doi:10.1093/jopart/mum026.
Wang, C, Thompson S.H. Teo, and Marijn Janssen. 2021. “Public and private value creation using
artificial intelligence: an empirical study of AI voice robot users in Chinese public sector.”
International Journal of Information Management 61 (December): 102401. doi:10.1016/j.
ijinfomgt.2021.102401.
Wang, Youkui, Nan Zhang, and Xuejiao Zhao. 2020. "Understanding the determinants in the different government AI adoption stages: evidence of local government chatbots in China." Social Science Computer Review, December. doi:10.1177/0894439320980132.
Wirtz, B W, P F. Langer, and Carolina Fenner. 2021. “Artificial intelligence in the public sector -
a research Agenda.” International Journal of Public Administration 44 (13): 1103–1128.
doi:10.1080/01900692.2021.1947319.
Wirtz, B W., and W M. Müller. 2019. “An integrated artificial intelligence framework for public
management.” Public Management Review 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268.
Wirtz, B W, J C. Weyerer, and Carolin Geyer. 2019. “Artificial intelligence and the public sector—
applications and challenges.” International Journal of Public Administration 42 (7): 596–615.
doi:10.1080/01900692.2018.1498103.
Yin, Robert K. 2018. Case Study Research and Applications: Design and Methods. Sixth ed. Los Angeles:
SAGE.
Young, Matthew M., Justin B. Bullock, and Jesse D. Lecy. 2019. "Artificial discretion as a tool of governance: a framework for understanding the impact of artificial intelligence on public administration." Perspectives on Public Management and Governance, October, gvz014. doi:10.1093/ppmgov/gvz014.
Ziltener, Patrick, and Heinz Gabathuler. 2018. “Betriebliche mitwirkung in der Schweiz – Eine
untersuchung der Bestimmungen in Gesamtarbeitsverträgen.” Industrielle Beziehungen.
Zeitschrift für Arbeit, Organisation und Management 25 (1): 5–26. doi:10.3224/indbez.v25i1.01.
