
THE ROUTLEDGE HANDBOOK OF ARTIFICIAL INTELLIGENCE AND INTERNATIONAL RELATIONS

The Routledge Handbook of Artificial Intelligence and International Relations examines how machines, algorithms, and data are reshaping the
way nations interact, negotiate, and navigate global politics.
In the 21st century, artificial intelligence (AI) has transformed from a
theoretical wonder to a real force, and with it the race to dominate new
technologies is proving to be a key geopolitical concern. This book looks at
both the ways in which AI is transforming the landscape of international
relations and the challenges this brings. The book includes discussions on:

The need for regulations and oversight to make sure that AI is used in an ethical way.
AI's role in conflict resolution and peacekeeping and its influence on economic alliances.
The ethical and moral dilemmas posed by autonomous systems making life-or-death decisions.
Frameworks that ensure responsible and accountable use of AI.
How the choices we make today will define the contours of global equilibrium for generations to come.

With a variety of contributions from policy analysts, philosophers, government officials, scientists, researchers, and business representatives,
this book appeals to students and researchers of political science,
international relations, computer science, and ethics. It also holds interest
for professionals in government organizations and NGOs at national and
international levels.

Diego Brasioli is an Italian career diplomat, currently working for the Italian Ministry of Foreign Affairs and International Cooperation on
cybersecurity and technological innovation.

Laura Guercio is a professor at the University Niccolò Cusano of Rome. She teaches International Relations and Intelligence Systems.

Giovanna Gnerre Landini is an expert in international human rights law and international humanitarian law, cooperating with several universities,
institutions, and organizations in Italy and abroad.

Andrea de Giorgio has a PhD in AI applied to production engineering and is the founder and CEO of Artificial Engineering, an innovative European
startup in the field of AI.
ROUTLEDGE INTERNATIONAL HANDBOOKS

THE HANDBOOK FOR THE FUTURE OF WORK
Edited by Julie MacLeavy and Frederick Harry Pitts

THE ROUTLEDGE INTERNATIONAL HANDBOOK OF HIMALAYAN ENVIRONMENTS, DEVELOPMENT AND WELLBEING
Edited by Ben Campbell, Mary Cameron and Tanka Subba

THE ROUTLEDGE HANDBOOK OF ACCOUNTING FOR THE SUSTAINABLE DEVELOPMENT GOALS
Edited by Andrea Venturelli and Chiara Mio

THE ROUTLEDGE HANDBOOK OF THE UNCERTAIN SELF, 2ND EDITION
Edited by Patrick J. Caroll, Kimberly Rios and Kathryn C. Oleson

ROUTLEDGE HANDBOOK ON TRANSNATIONAL COMMERCIAL LAW
Edited by Bruno Zeller and Camilla Baasch Andersen

THE ROUTLEDGE HANDBOOK OF IDENTITY AND CONSUMPTION
Edited by Ayalla A. Ruvio and Russell Belk

THE ROUTLEDGE INTERNATIONAL HANDBOOK OF PSYCHOANALYSIS AND JEWISH STUDIES
Edited by Stephen Frosh and Devorah Baum

THE ROUTLEDGE HANDBOOK OF ARTIFICIAL INTELLIGENCE AND INTERNATIONAL RELATIONS
Edited by Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini and Andrea de Giorgio
THE ROUTLEDGE HANDBOOK OF
ARTIFICIAL INTELLIGENCE AND
INTERNATIONAL RELATIONS

Edited by Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini and Andrea de Giorgio
Cover image: Jorg Greuel / Getty Images

First published 2025 by Routledge
4 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

and by Routledge
605 Third Avenue, New York, NY 10158

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2025 selection and editorial matter, Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini and
Andrea de Giorgio; individual chapters, the contributors

The right of Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini and Andrea de Giorgio to be
identified as the authors of the editorial material, and of the authors for their individual chapters, has
been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by
any electronic, mechanical, or other means, now known or hereafter invented, including
photocopying and recording, or in any information storage or retrieval system, without permission in
writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data


A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data


Names: Brasioli, Diego, 1961- editor. | Guercio, Laura, editor. | Gnerre Landini, Giovanna, editor. | de
Giorgio, Andrea, editor.
Title: The Routledge handbook of artificial intelligence and international relations / edited by Diego
Brasioli, Laura Guercio, Giovanna Gnerre Landini and Andrea de Giorgio.
Other titles: Handbook of artificial intelligence and international relations
Description: First edition. | New York : Routledge, 2025. | Includes bibliographical references and
index. | Contents: Part 1. Foundations and Perspectives – Part 2. Philosophy, Policy and Governance
– Part 3. Military, Security and Defense – Part 4. Security of Data in Cyberspace – Part 5. Economic,
Social and Cultural Dimensions – Part 6. Ethics, Oversight and Accountability.
Identifiers: LCCN 2024054013 | ISBN 9781032850139 (hbk) | ISBN 9781032855134 (pbk) | ISBN
9781003518495 (ebk)
Subjects: LCSH: International relations–Philosophy–Handbooks, manuals, etc. | Artificial
intelligence–Handbooks, manuals, etc.
Classification: LCC JZ1305 .R68396 2025 |
DDC 327.0285/53–dc23/eng/20250305
LC record available at https://lccn.loc.gov/2024054013

ISBN: 978-1-032-85013-9 (hbk)


ISBN: 978-1-032-85513-4 (pbk)
ISBN: 978-1-003-51849-5 (ebk)

DOI: 10.4324/9781003518495

Typeset in Times New Roman by KnowledgeWorks Global Ltd.
CONTENTS

List of Figures and Tables
Contributors Biographies
Acknowledgments

Introduction: The Transformative Impact of Artificial Intelligence on Our World
Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini, and Andrea de Giorgio

PART I
Foundations and Perspectives

1 From Entropy to International Relations: How Research into Artificial Intelligence Is Improving the World
Andrea de Giorgio

2 The Geopolitical Impact of Artificial Intelligence: The UN SDGs and China's Strategies in Africa
Settimio Stallone

3 Complex Reflections on Artificial Intelligence
Marco Emanuele

4 Artificial Intelligence and Supranational Law: Between Anthropocentrism and Technological Development
Giovanni Zarra and Roberto Ruoppo

PART II
Philosophy, Policy, and Governance

5 Artificial Intelligence, Governance, and Race
Farai Chipato and David Chandler

6 International Relations in the “Age of Technologization”
Diego Brasioli

7 Harmony in the Helix: Navigating the “AI Race” through Collaborative Geopolitics
Enzo Maria Le Fevre Cervini

8 Exploring the Recent Advancements and Transformative Impact of Artificial Intelligence
Nicolas Griedlich, Liubomyr Bregman, and Anke Joubert

9 Artificial Intelligence as a Tool for Conflict Resolution
Ugo Tramballi

PART III
Military, Security, and Defense

10 Chemical and Biological Weapons in the Age of Generative Artificial Intelligence
David J. Heslop and Joel R. Keep

11 Extended Meaningful Human Control of AI-Powered Warfare
Guglielmo Tamburrini

12 Use of Artificial Intelligence in Armed Conflicts: What Is ‘Lawful’?
Stefano Dominelli and Pietro Sanna

13 Artificial Intelligence in Humanitarian Action: Humanity and the Machine
Massimo Marelli

14 The New Strategic Triad: Geopolitics, Diplomacy, and Artificial Intelligence – The Uncertain Trumpet Redux
Alessandro Politi

PART IV
Security of Data in Cyberspace

15 Preserving Confidentiality and Privacy in Data-Driven Applications
Luigi Ruggerone and Valeria Ricci
Appendix A: “Multi-Party Computation” Design Details
Appendix B: “Federated Learning” Project Details

16 The Role of the Public Sector in Developing Supercomputing and Innovation Ecosystems: The Case of Bologna Technopole within the Italian and European Framework
Francesco Ubertini and Alessandra Poggiani

17 Artificial Intelligence and Cybersecurity: Navigating a Double-Edged Relationship
Pascal Steichen

18 Artificial Intelligence and Earth Observation-Based Capability to Get Fast Access to Large-Scale Transboundary Information
Marco Chini

PART V
Economic, Social, and Cultural Dimensions

19 Harnessing Artificial Intelligence for a More Equitable World
Joe Cerrell

20 Artificial Intelligence and Future Perspectives of the International Humanitarian Law in Conflict Settings: A Focus on Children in Armed Conflict
Laura Guercio

21 A 21st-Century Odyssey: Using Artificial Intelligence to Navigate the Seas of International Development Cooperation Toward the Promotion of the Rule of Law
Giulio Venneri

22 The ChatGPT Artificial Intelligence Model: Challenges and Solutions
Maria Amata Garito and Marco Breda

23 Traditional Knowledge, Environmental Challenges, and Artificial Intelligence: Ethical Generative AI Use and Sustainable Approaches
Giovanna Gnerre Landini

24 Artificial Intelligence's Influence on the Economy: Balancing Potential and Real-World Challenges
Angelo Federico Arcelli and Pietro Bossi

PART VI
Ethics, Oversight, and Accountability

25 The AI Act: A Geopolitical Tool to Foster Human Rights and Ethics
Alessandro Mantelero

26 Human Agency and Oversight – A Responsible and Trustworthy Use of Artificial Intelligence: The AI Act Approach
Lucilla Sioli and Miriam D’Arrigo

27 The Role of Ethics in the Future Regulation of Artificial Intelligence
Laura Palazzani

28 Ethical Dilemmas and Regulatory Landscape in Healthcare Artificial Intelligence
Yasaman Yousefi, Daniel Karpati, and Christoph Schommer

29 Artificial Intelligence Ethics: Challenges for a Computer Science Ethics Board with a Focus on Autonomy
Gabriele Lenzini

ANNEX I: A Comparative Overview of Artificial Intelligence Regulatory Systems

ANNEX II: Young Leaders’ Vision on the Future of Artificial Intelligence
Bram Koers, Beatrice Biffi, Sana Hadžić-Babačić, Stefan Marković, Vyara Noncheva, Hannah Pauly, Devi Purwanti, Donada Rata, Roberta Risso, Jérôme Seibert, Juho Toppari, Luis Engelmann, Sofiia Zhuchyk, Mahmoud Moussa, Lilly Grande, Emilie Uehara, Mariia Mirat, Chiara Cupola, Sara Maria Barbaglia, Sophia Vermaas, Ricardo Nuno de Almeida Gens, Matheus Fischer Meyer, Rafaela Lara Caetano Reinhardt, Alina Diachenko, Ricardo Filipe Tavares Magalhaes, Adrian Wróbel

Index
FIGURES AND TABLES

Figures
8.1 RAG schema.
8.2 Finetuning schema.
8.3 Deloitte survey, top concerns companies have about Generative AI.
8.4 Deloitte survey in Switzerland on Generative AI used for work.
8.5 Productivity growth rate in G7, OECD.
16.1 Leonardo's architecture.
17.1 ENISA Threat Landscape 2023. Breakdown of analyzed incidents by
threat type (July 2022–June 2023).
A1.1 Comparative analysis spread across an imaginary two-way line,
with economic/innovation on one side and values on the other one.

Tables
8.1 Summary of reviewed data points on the impact of Generative AI on productivity
16.1 Leonardo's tech specifications
CONTRIBUTORS BIOGRAPHIES

Angelo Federico Arcelli is currently a full professor (“professore straordinario a t.d.”) at Università Guglielmo Marconi (Rome) and lecturer
in economic policy at Università Cattolica del Sacro Cuore (Piacenza).
Federico holds a degree in economics (MSc) from Bocconi University
(Milan) and a PhD in economic history from the same university. In his
professional life, he has been senior adviser to the vice president (2004–
2008) of the European Investment Bank, adviser and member of the
Executive Board of the World Bank (2008–2009) and adviser in the
Independent Evaluation Group (2009–2010). He has worked for IMI Bank,
McKinsey & Company and currently as a partner for Oliver Wyman (MMC
Group). Federico is a full member and member of the executive committee
of Aspen Institute Italy (Rome), senior fellow at the Center for International
Governance Innovation (“CIGI,” Waterloo, ON, CN), senior fellow at ISEA
(Rome) and senior fellow at the Transatlantic Leadership Network
(Washington, DC, USA).

Pietro Bossi is currently a Senior Consultant at Oliver Wyman. Pietro holds a degree in engineering (MSc cum laude) from Politecnico di Milano. Before joining consulting, he worked in Private Equity (Italy) and Asset Management (France).

Diego Brasioli (Rome, Italy, 1961) is an Italian career diplomat, currently serving as Special Envoy for Cybersecurity of the Italian Minister of
Foreign Affairs and International Cooperation / Head of the Unit for
Technological Innovation and Cybersecurity of the Italian Ministry of
Foreign Affairs. He has worked in various diplomatic posts abroad such as
Pakistan, Jordan, Lebanon, USA (Los Angeles) and, as ambassador, in
Romania (2013–2017) and Luxembourg (2020–2024). At the Italian
Ministry of Foreign Affairs, he has served in different capacities, such as
President of the Inter-ministerial Committee for Human Rights, Director for
North Africa and the Middle East, Director for Security and Disarmament,
and Deputy Director General for Political Affairs.

Marco Breda is currently the Head of the University Laboratory on Artificial Intelligence of the International Telematic University
UNINETTUNO. Until early 2024, he served as the Director of the
Advanced Analytics & AI Area at the Data & Analytics Center of
Excellence at Engineering Ingegneria Informatica, where he oversaw the
design and development of complex analytical systems for public and
private industrial companies across various ICT sectors using machine
learning paradigms. Prior to joining Engineering in 2009, he worked as a
project manager specializing in Data Warehousing, Business Intelligence,
and Data Mining for various IT companies since 1998. Before that, starting
in 1991, he worked in the Research Division of Ericsson
Telecomunicazioni, focusing on Broadband Networks and Complex
Systems. Since 1990, he also contributed to TIM by designing advanced
private networks for large users. In parallel with his professional career,
since 1997, he has been an Associate Researcher at the Semeion Research
Center, primarily engaged in basic research in Machine Learning and
Artificial Intelligence, resulting in about thirty publications. Since 2010, he
has been a lecturer at the “Enrico della Valle” School, teaching courses on
Data Warehousing and Advanced Analytics. He earned his MSc, summa
cum laude, in Electronic Engineering from the “Sapienza” University of
Rome in 1989.

Liubomyr Bregman is a data scientist specializing in the application of AI and machine learning technologies across various industries, such as the
public sector, financial industry, and supply chain. Currently, he is serving
as a Senior Manager in the AI and Data team at Deloitte Luxembourg. At
Deloitte, he works on the application and development of generative AI
technologies, focusing on solutions that boost productivity. Before this, he
held the position of team leader and senior data scientist at Amazon, where
he invented, crafted, developed, and owned solutions that helped transform operations in the company's European and North American divisions. Liubo
graduated from CERGE-EI (Charles University and Czech Academy of
Science) where he obtained a master's degree in applied economics and
later interrupted his PhD studies focusing on high dimensional
econometrics. Liubo is originally from Ukraine and speaks English, Czech,
native Ukrainian, and basic Italian.

Joe Cerrell, managing director for Europe, the Middle East, and East Asia
(EMEEA), is based in the Gates Foundation's London office, which he
opened in 2010. In this role, Joe oversees the foundation's government
relations, policy, communications, and partnerships with 16 countries across
the EMEEA region. His team seeks to increase engagement in the
foundation's global health and development priorities, and drive progress on
global health issues through partnerships with governments and other non-
state actors. Since joining the foundation in 2001, Joe has held a number of
roles, including director for donor government relations and director of
Global Health Policy and Advocacy. In 2017, Joe helped to found
Goalkeepers, an initiative that brings together leaders from around the
world to accelerate progress toward the UN's 2030 Sustainable
Development Goals. Prior to his time at the foundation, he served in a
variety of senior roles in government and strategy consulting practices,
including positions in the Clinton White House under former Vice President
Al Gore and at APCO Worldwide. Joe currently serves on the board of
directors for the ONE Campaign and Global Citizen in Europe, and the
Global Institute for Disease Elimination (GLIDE). He received a bachelor's
degree in political science from the University of Southern California.

David Chandler is Professor of International Relations, University of Westminster. He edits the journal Anthropocenes: Human, Inhuman,
Posthuman. His recent monographs include: The World as Abyss: The
Caribbean and Critical Thought in the Anthropocene (2023); Anthropocene
Islands: Entangled Worlds (2021); Becoming Indigenous: Governing
Imaginaries in the Anthropocene (2019); and Ontopolitics in the
Anthropocene: An Introduction to Mapping, Sensing and Hacking (2018).

Marco Chini earned the MSc degree in electronic engineering from the
Sapienza University of Rome, Italy, and the PhD degree in geophysics from
the University of Bologna, Italy. He is a lead remote sensing scientist at the
Luxembourg Institute of Science and Technology (LIST), where he is
responsible for acquiring, managing, and developing research and
innovation projects focusing on remote sensing, deep learning,
classification, and geophysical parameter estimation. The projects he is
involved in, both fundamental and applied in nature, focus on
understanding natural and anthropic phenomena at large scale with use
cases spanning floodwater detection, natural disaster monitoring, urban sprawl mapping, humanitarian aid, maritime surveillance and
defense.

Farai Chipato is a lecturer in Black Geographies at the School of Geographical and Earth Studies at the University of Glasgow. He researches
Black social and political thought and the Anthropocene. His work has been
published in Security Dialogue, Global Studies Quarterly, and Political
Geography.

Miriam D’Arrigo is a Legal and Policy Officer in the AI Office, Directorate-General for Communications Networks, Content and
Technology (DG CNECT), where she works in the Unit responsible for AI
regulation and compliance. She was a member of the European
Commission's legal team that negotiated the political agreement on the AI
Act regulation and is currently working on the regulation's implementation,
with a particular focus on the obligations related to the providers of general-
purpose AI models and the preparation of the first code of practice.
Previously, Miriam was involved in negotiating other digital files within the
remit of DG CNECT, including the regulation on the Chips Act. Before
joining the Commission, she served as Head of Cabinet for a Member of the
European Parliament, where she handled key legislative files related to the
Committees on Legal Affairs and Public Health. Miriam has also worked as
a legal advisor in law firms in Italy, France, and Belgium.

Andrea de Giorgio, PhD in engineering, is the Founder and CEO of Artificial Engineering, leading innovation with artificial intelligence (AI)
for startups and small-medium enterprises. Andrea is an externally affiliated
researcher at the department of production engineering of KTH Royal
Institute of Technology in Stockholm, Sweden. He holds a PhD in
production engineering and an MSc degree in computer science and
engineering with a major in machine learning. His main research interests
focus on the application of AI and extended reality technologies to human-
machine collaboration and cross-disciplinary engineering.

Stefano Dominelli is Associate Professor in Public and Private International Law at the University of Genoa, where he teaches, among other subjects, International Humanitarian Law. He is a sworn-in lawyer at the Savona Bar Association and is part of the Universities Network for Children in Armed Conflict.

Marco Emanuele is the editor of The Global Eye (https://www.theglobaleye.it, Lab-Think Tank on Complexity). Marco is
among the Italian exponents of complex thinking. He is passionate about
technological innovation and works on the issues of peace, democratic
evolution, and international politics.

Maria Amata Garito is Professor Emeritus of Psychotechnologies and President and Rector of the International Telematic University
UNINETTUNO. From 1994 to 2013, she worked as full professor of
Psychotechnologies at the Psychology Faculty of the University of Rome
“La Sapienza” and for some years also at the Sorbonne University in Paris.
She was Director General of CATTID (Center for Applications of
Television and Technologies of Distance Education) at “La Sapienza”
University of Rome from 1984 to 1990. The results of her research
activities related to technologies and artificial intelligence applied to
teaching and learning processes enabled her to create: Consorzio
NETTUNO – Network Telematico per l’Università Ovunque – in which she served as Director General from 1991 to 2016 and designed the organizational and distance-teaching model of Consorzio NETTUNO. In
2005, she established the International Telematic University
UNINETTUNO and designed and realized its psycho-pedagogical,
organizational models and its technological platform. As an expert, she was
member of several scientific committees at the European Commission for
designing the guidelines to be applied to the development of research
programs on new technologies applied to teaching and learning processes.
Her collaboration engagements with countries in the Euro-Mediterranean
Area are ongoing and began in 2001 as coordinator and director general of
the Med-Net’U – Mediterranean Network of Universities. For UNESCO, in
2002, she designed the organizational and psycho-pedagogic model of the
Arab Open University in Kuwait. She conducted several international
research activities; she authored about 200 scientific publications and
received many international prizes and awards.

Giovanna Gnerre Landini is an expert in international human rights law and international humanitarian law. She holds a PhD in International Order
and Human Rights from Sapienza University of Rome (Italy). Main areas of
her research and training activity: multilevel protection (national, regional,
and international) of vulnerable groups, in particular children, migrants, and
indigenous peoples; environment and sustainability; development and
cooperation; interplay between new technologies and the respect of
fundamental rights. She is a member of the Steering and Scientific
Committee of the University of Luxembourg Diplomacy Lab. She
cooperates regularly as a lecturer, expert consultant, and analyst with
several universities, NGOs, and national and international institutions in
Italy and abroad.

Nicolas Griedlich is a Partner in Consulting Digital & Technology for the Artificial Intelligence and Data sector at Deloitte, having joined the firm in
2008. With over a decade of experience, he has developed significant
business and technical competencies in the Financial Services Industry
(FSI). Nicolas is driven by enhancing data value and aligning data
management with applicable regulations and standards. His involvement in
Information Management & Analytics projects, coupled with his expertise
in Business Intelligence and Data Governance, brings added value to his
clients. Currently, Nicolas leads the Artificial Intelligence and Data practice
at Deloitte Luxembourg. Nicolas holds a master's degree from the ENSEM
engineering school and is fluent in both French and English.

Laura Guercio is professor of “International Relations and Intelligence Systems” at the University Niccolò Cusano Rome. Member of the Council
of the European Law Institute in Vienna; Counsel (for victims) before the
International Criminal Court, The Hague; OSCE Moscow Mechanism
Expert (Member of the Moscow Mechanism Mission Ukraine); Secretary-
General of the Universities Network for Children in Armed Conflict
(UNETCHAC); Secretary-General of the Inter-Ministerial Committee for
Human Rights at the Italian Ministry for Foreign Affairs and International
Cooperation (2017–2020); Agent of the Management Board of the
European Agency for Fundamental Rights (2015–2020); Expert in international projects and missions (carried out by the European Union, the OSCE, and the Italian Ministry of Foreign Affairs and International Cooperation) in Libya, Yemen, Iran, Iraq, Afghanistan, Palestine, Jordan, Kosovo, and Swaziland. Laura holds a PhD in Social Science from the University of Genoa
in conjunction with Trinity College Dublin; master's degree of law at the
University of Rome La Sapienza; master's degree of political science
“Politics and Economics of the Mediterranean Region” at the University of
Genoa; master's degree of international relations at the Queen Mary
University London.

David J. Heslop is Associate Professor at the School of Population Health at UNSW Sydney. He is a clinically active vocationally registered General
Practitioner and Occupational and Environmental Medicine Physician.
During a military career of over 15 years, he has deployed into a variety of
complex and austere combat environments, and has advanced international
training in Chemical, Biological, Radiological, Nuclear and Explosive
(CBRNE) Medicine. He has experience in planning for and management of
major disasters, mass casualty and multiple casualty situations. He is
regularly consulted on, and has participated in, the development and review of
national and international clinical and operational general military and
CBRNE policy and doctrine. His research interests lie in health and medical
systems innovation and research using computational modelling and
simulation to address otherwise intractable problems.

Anke Joubert has been an integral part of Deloitte's Artificial Intelligence & Data team since 2018. As a member of the Deloitte AI Institute leadership, she
uses her industry knowledge to spearhead AI discussions and uncover
pertinent insights for clients across various sectors, always emphasizing the
importance of Trustworthy AI. She has extensive expertise in delivering
natural language processing (NLP), visual intelligence, and conversational
AI solutions. Her broad client base has allowed her to develop
comprehensive expertise across multiple industries, including Financial
Services, Public Sector, Automotive, and Telecommunications. In addition
to her project work, Anke is a regular author and public speaker,
contributing to AI publications and facilitating training. She holds a
master's degree in Big Data and Business Analytics from ESCP Europe,
France, and a master's degree in economics from the University of
Stellenbosch, South Africa.

Daniel Karpati is a PhD candidate in artificial intelligence at the University of Luxembourg. Karpati graduated from LSE reading EU
Politics and formerly studied philosophy and mathematics. He is
researching how responsibility and liability allocation are intertwined with
interpretations of probabilities in current policy-making directives and
regulations in the medical device domain featuring AI-based technologies,
and what environment such allocation could create for all stakeholders.
Karpati authored a wide range of interdisciplinary projects, securing more
than 1.2M EUR funding in total. Alongside his supervisor, Prof. Schommer,
Karpati is the Co-Principal Investigator of the commercialization project
REMEDIS, funded by the Luxembourg National Research Fund (FNR).
REMEDIS is based on his technological innovation, and Karpati is
currently leading the prototype development of a secure, automated
medication dispenser with an AI-featured pill recognition system in the
hope of bringing this product to the EU markets soon.

Joel R. Keep (MPH) is a PhD candidate at the University of New South Wales, where he assists Dr Heslop in the teaching of postgraduate studies in
bioterrorism and health intelligence. He holds a 2024 Mid-Career
Biodefense Fellowship from the Council on Strategic Risks.

Enzo Maria Le Fevre Cervini is Senior Researcher at the Laboratory on Innovation and Artificial Intelligence of the University of Buenos Aires and
Adjunct Professor of AI and Ethics at the Pontifical University of Buenos
Aires. He currently serves as Head of Sector of Information Management in
the Directorate General for Digital Services of the European Commission.
He holds a master's degree in artificial intelligence for public services, a
PhD in political sciences, and a master's degree in international relations.

Gabriele Lenzini is an associate professor in sociotechnical cybersecurity research at the University of Luxembourg, and Chief Scientist at the
Interdisciplinary Center for Security Reliability and Trust (SnT). With
decade-long expertise in security design and analysis, Lenzini's interest in research ethics is rooted in his studies on ransomware and malware and in a
series of lectures on “Research Ethics in the time of Internet and
Digitalization” given as part of the seminars on “Interdisciplinary Approach
to Human Rights” promoted by the UNESCO Chair of the University of
Luxembourg. Lenzini is vice-chair of the University of Luxembourg's
Ethical Review Panel and an independent member of the scientific
evaluation group of the NATO Science for Peace and Security (SPS)
Program.

Alessandro Mantelero is Associate Professor of Private Law and European Commission Jean Monnet Chair in Mediterranean Digital Societies and
Law at Polytechnic University of Turin. He is a member of the Support
Pool of Experts of the EDPB, UNDP expert on human rights and AI, and
from 2016 to 2022 served as scientific expert on AI for the Council of
Europe (Convention 108 Consultative Committee, Ad hoc Committee on
Artificial Intelligence).

Massimo Marelli is the Head of the Data Protection Office at the International Committee of the Red Cross (ICRC). Massimo is a member of
the Advisory Board and a Fellow of the European Centre on Privacy and
Cybersecurity (ECPC) at the University of Maastricht, co-director of the
Humanitarian Action Programme at ECPC, and the co-editor of the
DigitHarium, a global forum to discuss and debate digital transformation
within the humanitarian sector, with a focus on humanitarian protection,
policy, ethics, and action. He is also a member of the Brussels Privacy Hub
Advisory Board, and the co-editor of the Handbook on Data Protection in
Humanitarian Action. Prior to his current role, Massimo was the Head of
Delegation for Cyberspace in Luxembourg of the ICRC, and held several
assignments with the ICRC in the field and at the headquarters. He also
worked as Referendaire at the EU General Court in Luxembourg and as
lawyer in private practice, specializing in EU and International Law.

Laura Palazzani is Professor of Philosophy of Law at LUMSA University of Rome; she is a Member of the European Group of Ethics in Science and
New Technologies, European Commission (since 2011); Representative of
the Italian delegation to the Bioethics Committee, Council of Europe (since
2014); she has been a Member and Vice-chair of the Italian Committee for
Bioethics (2002-2022) and Member of the UNESCO International
Bioethics Committee (2016–2023).

Alessandra Poggiani, Director General of Cineca, has previously served as Director General of the National Agency for Digital Italy and several other
government-owned ICT agencies. She has been a lecturer on innovation in
public administration and digital transformation for almost 20 years and is
now a PhD candidate in computer sciences at the University of Venice.

Alessandro Politi is Director of the NATO Defense College Foundation, the only NATO-recognized NGO think tank. A global political and strategic analyst with 30 years of experience, he teaches geopolitics and
intelligence at the SIOI. He teaches conflict management, crisis, peace-
making and analysis at different governmental centers. His most recent
hearings at the House Foreign and the House Defence Committee were on
future orientations of the Italian foreign policy and European armaments
cooperation. He has been senior researcher for the Italian MoD (CeMiSS-
Centre for Military and Strategic Studies) regarding the strategic monitoring
of Latin America. He also created and has led the Global Outlook project
within CeMiSS, recently published in Italian and English (third edition,
2015). He was WEF Global Shapers facilitator and TEDxLUISS speaker.
He has directed the CEMRES research on CBMs in the framework of the
5+5 Defence Initiative, presenting the conclusions to the Ministers in
Granada. He has been acting director of a private school of intelligence
analysis. He has been an advisor in Italy and abroad to four Defence ministers (including the current President of the Italian Republic, Hon. Mattarella),
one National Armaments Director, one Intelligence Coordinator, one
Chairman of the Oversight Intelligence Committee, one Head of the Italian
delegation to the NATO Parliamentary Assembly. He was born in Germany,
and he lives in Luxembourg. He has published as author or co-author 50
books on strategic and security matters. His most recent book was a
geopolitical biography on Chancellor Angela Merkel.

Valeria Ricci graduated from Turin Polytechnic in Ecodesign in 2013. After her degree, she joined the furniture company Ikea, where she served
as an interior designer within the Communication and Interior Design
department. After completing this work experience, she started working at
Intesa Sanpaolo in 2015 at the Service Design Laboratory. She is now part
of Intesa Sanpaolo's Innovation Center, specifically working in the
Artificial Intelligence Lab. Among her main activities, she manages and coordinates applied research projects carried out in collaboration with
scientific partners with the aim of solving business needs (internal and
external to the bank) that require new “applied data science” skills.

Luigi Ruggerone, after completing his postgraduate studies in Economics in the United Kingdom, joined Banca Commerciale Italiana in 1996,
working in Milan for the Research Department and then the Risk
Management Department. In 2013, with his wife Alessandra and his kids
Luca and Sara, Luigi moved to Washington, DC, to join the International
Monetary Fund, where he contributed to writing several issues of the
Global Financial Stability Report until 2015. In October 2015, he joined
Intesa Sanpaolo Group to open and manage its new Representative Office
in Washington, DC. On October 1, 2018, he was appointed Head of
Innovation Business Development with Intesa Sanpaolo Innovation Center,
a new company of Intesa Sanpaolo Group based in Turin and entirely
dedicated to scouting, analyzing, accelerating, and supporting innovative technologies.
On July 1, 2019, Luigi was appointed Director of Applied Research with
Intesa Sanpaolo Innovation Center. In December 2022, Luigi was promoted
to Senior Director and is now Responsible for the Business and Innovation
Research Area of Intesa Sanpaolo Innovation Center, reporting to the Board
of Directors.

Roberto Ruoppo holds a PhD in International Law and is a lawyer and Assistant Professor of International Law at Marche Polytechnic University of
Ancona, Italy, where he teaches Private International Law and
Fundamentals of IT law. He is also Professor at the LUISS University
Faculty of law, Rome, where he has held the course “Artificial Intelligence
and International Law” during the academic year 2023–2024 and currently
he teaches “Artificial Intelligence: National, International and EU Law
perspectives.” He is the author of a monographic work (La proprietà nel
diritto internazionale, Napoli, ESI, 2022) and several scientific articles, including in the field of Artificial Intelligence. He has spoken at several scientific conferences and has been a visiting professor at foreign universities. He is a member of the Italian Society of International Law and of the Italian branch of the International Law Association and serves on the editorial boards of leading international law
journals.

Pietro Sanna is a post-doctoral contract researcher in international law at the University of Genoa, where he co-teaches International Law. He is a practicing lawyer at the Genoa Bar Association.

Christoph Schommer is a professor of artificial intelligence. His interest in machine ethics and personalized medicine is linked to his research on artificial intelligence in general. Through supervised doctoral theses and funded research projects, and with the help of his interdisciplinary research team, he addresses questions about the ethical, legal, and moral consequences of machine failure. Prof. Schommer is a regular visitor to the annual meetings of the German Ethics Council in Berlin and has already been invited to give a lecture on AI at the Luxembourg Ethics Council. In the 2023 discussion series “LetsTalk100 – AI for our Future”, held over two evenings, he hosted as guests research colleagues from the Fraunhofer Institute and PricewaterhouseCoopers (PwC), as well as colleagues from the University of Bergen, Norway (Prof. Slavkovik) and the Ludwig Maximilian University of Munich (Dr. Antonio Bikic, working group of Prof. Nida-Rümelin), discussing AI, medicine, and ethics. Prof. Schommer maintains close contact with his colleagues in applied philosophy and regularly gives a two-hour lecture on this topic.
Lucilla Sioli is the Director of the “EU AI Office” within Directorate-
General CONNECT at the European Commission. She is responsible for
the coordination of the European AI strategy, including the implementation
of the AI Act and international collaboration in trustworthy AI and AI for
good. The directorate is also responsible for R&D&I activities in AI and for
the implementation of the AI Innovation package. Previous responsibilities
included policies in the areas of semiconductors, digital skills and the
digital economy. Lucilla holds a PhD in economics from the University of
Southampton (UK) and one from the Catholic University of Milan (Italy)
and has been a civil servant with the European Commission since 1997.

Settimio Stallone is Professor of International History at the Department of Political Science of the University of Naples Federico II (where he directs the Bachelor's degree in Political Science and the Master's degree in International Relations and Scenario Analysis) and at the Italian Air Force Academy. He taught at the Suor Orsola Benincasa University of Naples, the
University of Lecce, and the Higher School of Public Administration. He is
an associate researcher at the Institute of Mediterranean Studies of the
CNR. Member of research programs of national interest, he coordinates
international agreements with Albanian, Armenian, Bulgarian, Moldavian,
and Ukrainian universities. In his research, he has focused his attention on
Italian foreign policy in Eastern Europe, with particular reference to the
Cold War years. He is working on a monograph on political, economic, and
cultural relations between Italy and Albania from 1949 to 1991. Among his publications: Minister in Saint Petersburg. Italian diplomats and diplomacy in Russia (1861–1870) (2006); Proofs of Adriatic diplomacy: Italy and Albania, 1944–1949 (2006); Return to Tirana (2011); Italo-Albanian relations between Cold War and hypothesis of normalization (2014); So close, so far (2017); A hope that was not there (2018); Realism and diplomacy. The Greek-Albanian rapprochement of 1967–1974 (2019); An Italian success. Operation Alba (2019); The unfinished friendship (2021); In the country of the forgotten God's Army (2021); The Nixon presidency and the origins of Global Value Chains. An interpretative proposal (2022); Between Atlantic ambitions and European integration: post-communist Albania in the 2000s (2023).

Pascal Steichen is Founder and CEO of the Luxembourg House of Cybersecurity. For 20 years, Pascal has been involved in the main cybersecurity initiatives of the Luxembourg Government. Building on his extensive experience in cybersecurity, today Pascal is involved in key
communities in Luxembourg and Europe. Member of the Luxembourg
Cybersecurity Board, lecturer in information security at the University of
Luxembourg, member of WomenCyberForce, and involved in the curricular
board of the BTS cybersecurity. At an international level, Pascal was
involved in the founding of ECSO, is an active member of the FIC advisory
board, was part of the 2021 judging panel of the IFSEC Global Influencers
in Security and Fire Award, and since 2022 represents Luxembourg at the
Governing Board of the ECCC, where he was elected as its first
Chairperson.

Guglielmo Tamburrini (PhD in Philosophy, Columbia University, 1987) is Research Associate and retired Philosophy of Science and Technology
Professor at Università di Napoli Federico II in Italy. His research interests
focus on ethical and social issues in the context of AI, human-computer,
and human-robot interactions. Coordinator of the first European project on
the ethics of robotics (ETHICBOTS, 2005–2008), he was awarded in 2014
the Giulio Preti International Prize by the Regional Parliament of Tuscany
for his contributions to the dialogue between Science, Philosophy and
Democracy. Scientific Board member of USPID (Unione degli Scienziati
per il Disarmo), Fellow of the Nexa Center for Internet and Society at
Politecnico di Torino, Member of ICRAC (International Committee for
Robot Arms Control), and member of the ISODARCO Association
(International School on Disarmament and Research on Conflicts).

Ugo Tramballi is ISPI Senior Advisor. A veteran journalist, he joined the Metropolitan section of “Il Giornale” in 1976. He served as the newspaper's
Middle East correspondent in Beirut from 1983 to 1987 with war
correspondences from Lebanon, Iran, Iraq, Afghanistan, and extensive
coverage in India and Pakistan. He was Moscow correspondent from 1987
to 1991 and then Global Correspondent and columnist with the Italian
Economic newspaper “Il Sole 24 Ore.” He covered the Middle East peace
process, India's economic reforms, South Africa's transition to democracy, China's rise and the Hong Kong handover, the impact of globalization on world politics, the world after 9/11, civil uprisings in the Middle East, the financial crisis in the Western world and the rise of populism. He is a member
of the Istituto Affari Internazionali, IAI, Rome, as well as a World
Economic Forum's Media Leader. Among his books: Inside India (1988),
The Day Winter Is Over: Russia After Eltsin (1999), The Stones and the
Olive Trees – A History of Palestine (2002), Israel – The Incomplete
Dream, (2008). He regularly contributes to ISPI's annual report and has
recently edited the ISPI reports India: the Modi Factor (2018), India's
Global Challenge (2019) and Looking West: The Rise of Asia in the MENA
Region (2020). On ISPI's website, he publishes a weekly blog on
international affairs, “Slow News”
(http://ugotramballi.blog.ilsole24ore.com).
Francesco Ubertini, a full professor of structural mechanics since 2007,
was Rector of Alma Mater Studiorum – University of Bologna from 2015
to 2021 and is currently President of Cineca and President of the International
Big Data Intelligence for Human Development Foundation IFAB.

Giulio Venneri is an EU official, currently the team leader for rule of law
and democracy in the Directorate General for Neighbourhood Policy and
Enlargement Negotiations, at the European Commission headquarters in
Brussels. Prior to joining the European civil service, he worked at the
“Global Issues” unit of the Italian Ministry of Foreign Affairs. Active in
academia, he is adjunct professor at the School of Government of LUISS
Guido Carli University in Rome. There he is also a member of the Steering
Committee for the degree in Policies and Governance in Europe at the
Political Science Department. He is coordinator of the policy lab at the
Diplomacy and Global Politics program of the University of Westminster in
London. His most recent publications include the volume “The European
Union's Engagement with the Southern Mediterranean: Integrating the
European Neighbourhood Policy,” released in September 2023.

Yasaman Yousefi is a PhD candidate pursuing degrees in Law, Science, and Technology at the University of Bologna and Computer Science at the
University of Luxembourg. Her research adopts a multi-disciplinary
approach, delving into the fairness of AI systems from technical, ethical,
and European legal perspectives. Yasaman is dedicated to bridging the gaps
between law, technology, and ethics, showcasing her commitment to
navigating the intricate intersections of these fields. Additionally, her
research interests extend to bioethics and medical device regulations,
further solidifying her expertise in the ethical implications of AI
technologies. Yasaman has won several prizes, including the ISA PhD Prize
for interdisciplinary research, and the Marco Polo Scholarship from the
University of Bologna for her research stay in Luxembourg. She is a
member of the winning team for the first Grand Challenge in AI and Law at
the University of St. Gallen.

Giovanni Zarra is full professor of international law and international litigation at the University of Naples Federico II, Italy, and a lawyer in the arbitration team of Hogan Lovells in Italy. Giovanni has significant
experience in international arbitration law (focusing on both commercial
and investment arbitration) and international litigation (with particular
knowledge on matters involving issues of public and private international
law). He is considered to have a wealth of knowledge in international arbitration, as evidenced by his appointment to the ICSID Panels of Arbitrators and Conciliators, his appointment as the Republic of San
Marino's representative at UNCITRAL (in particular WGIII) and the
inclusion in the prestigious ISDS Academic Forum. He also gained
particular experience on shipping, international trade and trade on
cryptocurrencies. He has advised the Italian Government during the process
of reform of investment arbitration and the adoption of the new Model BIT.
He is author of five books, editor of six volumes and author of more than
seventy scientific articles published in primary international academic
journals.
ACKNOWLEDGMENTS

The task of editing a handbook on such a complex issue as that of the impact of new digital technologies on international relations and its implications for global geopolitical balances is certainly not easy, and is
possible only thanks to the collective contribution of a large number of
people.
We would like to first express our sincere gratitude to Ms. Hannah Rich,
Ms. Lauren Powell and Ms. Lydia de Cruz for their invaluable guidance,
encouragement, and support throughout the completion of this handbook.
Their expertise and insightful feedback have been instrumental in shaping
the direction and content of this work.
Furthermore, we wish to extend our heartfelt appreciation to Giulio
Venneri for his assistance, stimulus, and support in furnishing us with
remarkably beneficial recommendations that have substantially enhanced
the caliber of this publication.
A special mention goes to Dr. Federico Bonadonna, who wrote Annex I
(A Comparative Overview of AI Regulation Systems), as well as to Dr. Paul
Garwood, who was instrumental in supervising the copy-editing phase of
the various contributions and also ensuring the linguistic correctness of the
handbook.
The volume is enriched by Annex II, a “manifesto” on the future of AI
drafted by the brilliant young researchers of the Diplomacy Lab of the
University of Luxembourg, a dynamic, student-centered cenacle of studies
on international relations, which has allowed us to give voice to the leaders
of tomorrow on a subject so relevant for our future. Their enthusiasm and
fresh vision have been an extraordinary source of inspiration for all of us.
Furthermore, we would like to express our sincere appreciation to our
close circle of friends and to our families for their unwavering
encouragement and exceptional comprehension throughout the writing and
editing process. Their support has served as an everlasting wellspring of
motivation.
Finally, we would like to extend our sincere appreciation and respect to
every author, researcher, and academic whose scholarly contributions have
been instrumental in the development of this handbook.
The achievement of compiling it would not have been conceivable
without the insights, support, and inspiration so generously offered by all of
these individuals.
Thank you!
Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini, and Andrea de
Giorgio
INTRODUCTION
The Transformative Impact of Artificial
Intelligence on Our World
Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini, and
Andrea de Giorgio

DOI: 10.4324/9781003518495-1

Artificial intelligence (AI) is widely regarded as the most pivotal emerging technology owing to its extensive applicability across all spheres of human
endeavor. Governments and enterprises alike consider the advancement of
AI to be a strategically vital priority.
Controlling AI-based applications can provide a substantial advantage in
current geopolitical dynamics. Therefore, it is easy to predict that nations
pioneering in their development and implementation will enjoy a lasting
and robust dominance in power relations with their direct competitors.
As a result, AI has become a critical component in the power struggle
between nations, as well as a vital instrument for improving
competitiveness and security. Unsurprisingly, numerous governments
consider it a crucial area for investment in the years to come.
Although much of the public discussion about AI centers around topics
like employment, productivity, and advancements in medicine or
engineering, the race for dominance in new technologies is emerging as a
major geopolitical worry. The enormous amount of processing power
required to stay ahead of the competition in this field already influences
major international players’ foreign policy decisions, affecting warfare and
deterrence.
The use of AI in diplomacy as a means to assist nations in navigating the
complexities of the modern world and ultimately gaining international
competitive advantages is growing in popularity. Nevertheless, the
implementation of AI systems in the realm of international relations poses
several challenges, including the need for regulations to guarantee their
responsible and ethical operation while preventing harmful outcomes.
Furthermore, using biased data sets to train AI systems may result in
conclusions that lack factual support, are not in the parties’ best interests, or
otherwise negatively affect their long-term interests.
Establishing appropriate international legislation and effective oversight
mechanisms is crucial to ensuring the responsible and ethical use of AI.
Legislative actions should incorporate provisions that ensure the training of AI programs on real data, the clarity and defensibility of algorithmic system decisions, and the prohibition of AI use for illegal purposes.
Additionally, governments and international organizations should
collaborate, potentially including representatives from civil society,
industry, and academic and research centers, to formulate ethical guidelines
for AI use and promote the implementation of measures that uphold
fundamental human rights and international law.
Ultimately, such a partnership should aim to advance peace and stability
globally. The legal regulation of new technologies is at the heart of the
current debate over the definition of a global AI governance strategy. In this
context, although Europe is a very well-regulated and legally sound region,
it is still rather fragmented in terms of effective national interests and data
protection systems. In this largely uncharted arena, the acute bipolar
competition between the United States and China/Russia appears to be
gradually shaping Europe's role as a prominent actor in defining a global
framework of action.
What strategies can governments and international organizations employ
to address these delicate legal and ethical concerns in a post-globalized and
disordered world marked by the emergence of “new empires” and the
shifting balance of power among nations?
How can we determine the fundamental principles, goals, and strategies
for the immediate, intermediate, and long-term implementation of a shared
agenda for AI management?
Most importantly, how can we prevent AI from “overwhelming
humanity” and instead enable its applications to contribute to global
stability and the pursuit of the common good?
In an effort to provide answers to these crucial questions, we have sought
the perspectives and reflections of some of the top experts in the field on
the broad topic of AI's influence on international relations.
This handbook aims to provide a comprehensive overview of the key
issues and challenges at the intersection of AI and international relations,
drawing on insights from both academic research and policy practice.
We hope that, by fostering a better understanding of these complex issues, we can contribute to the development of effective strategies for harnessing the potential of AI while mitigating its risks and challenges. We also hope that this handbook offers a valuable opportunity to arouse the interest of a broader public beyond the circle of “specialists” and to spark a more extensive and meaningful discussion of these crucial and pressing issues, which will inevitably affect all of us in the near future.
Diego Brasioli, Laura Guercio, Giovanna Gnerre Landini, and Andrea de
Giorgio
PART I
Foundations and Perspectives
1
FROM ENTROPY TO
INTERNATIONAL RELATIONS
How Research into Artificial Intelligence Is
Improving the World
Andrea de Giorgio

DOI: 10.4324/9781003518495-3

Introduction
This chapter is divided into two parts. The first part aims to accompany a
non-expert reader throughout the historical journey of artificial intelligence
(AI) from a human idea to a concrete technology. The second part addresses
an additional mathematical aspect of AI, that is, the connection between AI
algorithms and variations of entropy, before also discussing a broader
coupling between AI, entropy, and international relations. The hope of the
author is to convey the message that AI has become an essential science
that unites all disciplines with its smart computational algorithms that are
able to learn and adapt, thus emulating the role of nature; for this reason, AI research and development in connection with any field contributes to the
betterment of our world. The conclusion covers these last considerations
with regard to international relations.
Part I – What Is Artificial Intelligence?
AI has garnered tremendous attention in the last decade, to the point that
almost everyone is aware of its existence. However, there are still too few
people on this planet who really understand the actual technology that
scientists are researching and that entrepreneurs are developing year by
year.
The most common idea of AI, especially thanks to sci-fi books and
movies, is of a machine that should be able to think by itself. Depending on
the author, these intelligent machines have often been embodied in the form
of robots, and pictured to be as smart as humans, or even smarter, usually in
an evil scenario where humans ultimately decide to destroy the machine in
order to survive. That is not the case in the real world.
Human imagination, with its incredible ability to simulate both future and
alternative realities, can anticipate progress and catastrophes, and allows us
to design new technologies to face them. In my view, there are a limited
number of people who have the ability to turn some of these technological
ideas into reality because they need to have the will (i.e., they came up with
or they liked the idea), the power (Salancik & Pfeffer, 1977), and the
necessary resources (Sirmon et al., 2007). Let us discuss will and power
later on, and now focus on the final component. When I say resources,
many of you will immediately think about money, or some form of raw
materials needed to build AI. But the AI that we can achieve today is a new
technology that evolves from, and depends on, previously established
technologies. Thus, the question to ask is: do we have the necessary
technologies – other than money – to build new, extremely smart AI
technology?
If we look at history, around the third century BC, mathematicians and
engineers based in Alexandria began writing treatises on automaton-making
(an automaton, plural automata, is an automatic machine). The range of
applications varied from moving figures, to musical automata or
mechanical servants, and they could be powered by steam, water, air, and
mechanics (Bedini, 1964). When the first intelligent machines were built
merely as toys (Bur, 2016, p. 15), people naturally tried to use those to
automate actions and imitate intelligent behavior. A few attempts managed
to evoke a range of emotional responses, including wonder and awe (Marr,
2017), but all of them, sooner or later, were dismissed as mere tricks.
People understood that achieving real autonomous intelligence in a machine
would be harder than expected. Back then, and still today, even a fly is
hundreds of times smarter than any of these human-built machines.
In 1938, Alan Turing presented his doctoral thesis “Systems of logic
based on ordinals” (Turing, 1939), but he had already developed a bigger
idea: an automatic computational machine, later called the Turing machine,
which he presented in the article “On computable numbers” published in
1936 (Turing, 1936a). His model provided a mathematical description of a
very simple device capable of arbitrary computations: the universal
machine. He also presented the Church-Turing thesis (Turing, 1936b),
affirming that anything computable through an algorithm is also
computable by a Turing machine; in the same article, he also gave a proof
of the insolubility of the Entscheidungsproblem, that is, no algorithm can
tell if a certain mathematical proposition is provable or not. It was the very
beginning of what would become modern computers. This spark was
enough to revive the concept of intelligent machines in a computational key
with a series of ten conferences held in New York between 1946 and 1953
organized by the Josiah Macy Jr. Foundation (Abraham, 2020). The Macy
conferences were a set of meetings for scholars from various academic
disciplines with the aim to develop a cohesive and broadly applicable
theory of systems that would become equally applicable to living beings
and machines.
In 1948, Norbert Wiener published a book titled “Cybernetics or Control
and Communication in the Animal and the Machine” (Wiener, 1948), a
work that laid the theoretical foundations for information theory. The term
“cybernetics” was adopted from the Greek κυβερνήτης (kybernētēs,
steersman, governor or pilot) to describe the behavior of dynamical systems
modified by environmental feedback. During those years, research in
computer science took off quickly. The industrial improvements were so
rapid that, in 1965, Gordon Moore, the co-founder of Fairchild
Semiconductor and Intel, and later CEO of the latter, predicted a doubling
every year in the number of components per integrated circuit (Moore,
1998). At the same time, many computational algorithms were invented,
and tested for efficiency against each other, to run on these fast
computational machines. For a while, the ability to program a machine to
do our will was more than sufficient to accelerate our daily operations in
any field. Bill Gates envisioned a computer on every desk and in every
home (Gates et al., 1996). Nowadays, we even have a smartphone in every
pocket.
If the majority of researchers were improving and harnessing the power
of the established technology, a few others stayed focused on a more
specialized topic: intelligent behavior in computers. This niche was the
continuation of cybernetics, which obtained contributions from
neuroscience, biology, mathematics and statistics, to name a few. The
challenge was to modify computers (through specific algorithms) so that
they could learn from the environment. This research area was given the
new name of machine learning (ML) in order to distinguish it from the area
of AI that had seen a decline in popularity after the enthusiasm that derived
from the Macy conferences, and the overall shift in focus to fast (rather than
intelligent) machines. It was Arthur Samuel, an IBM employee and pioneer
in the field of computer gaming and AI, who introduced the term machine
learning (Samuel, 1959).
While ML scientists silently worked through many ups and downs to
restore the lost hopes of AI, just recently, in the first decade of this century,
a few major breakthroughs happened. A first one was the idea to move the
matrix calculations from the central processing unit (CPU) to the graphical
processing unit (GPU), an additional external processor initially made for
the acceleration of graphics, which was mostly needed for gaming
applications (Oh & Jung, 2004). Contrary to the CPU, the GPU was
optimized to deal with fast matrix calculations, because each graphic image
is a matrix of numbers associated to the pixels to be displayed on a screen.
ML algorithms had hit a natural limit: even super-fast CPUs took a long time, sometimes days, to run a complex ML algorithm used, for example, to identify faces in images. These complex algorithms could be expressed, and executed, as a large number of matrix calculations, which in theory was an elegant engineering solution, except that CPUs could not handle them efficiently. Nvidia provided the first GPUs accessible for general computations, rather than just graphics acceleration, with its Compute Unified Device Architecture, also known as CUDA (Harish & Narayanan, 2007), and soon became the undisputed leader of the
GPU market. More efficient versions of almost all the known ML
algorithms were now possible (Catanzaro et al., 2008; Cireşan et al., 2010;
Do et al., 2008; Hong-Tao et al., 2009; Kuang & Zhao, 2009; Lopes et al.,
2010; Strigl et al., 2010; Yan et al., 2009; Zhou & Tan, 2009). These were
also the years in which Python became one of the preferred languages for
ML, with open access libraries specifically developed to support the GPU
matrix calculations (Abadi et al., 2016; Bergstra et al., 2010). A second
breakthrough was the advent of cloud computing and the ability to lower
computational costs by harnessing a large number of computational units on a pay-per-use basis, while paving the way to new technologies such as, for
example, Hadoop with its MapReduce programming paradigm that allows
the handling of very big data and the scaling of applications on hundreds of
servers (Dittrich & Quiané-Ruiz, 2012; Patel et al., 2012). Cloud computing
increased accessibility and allowed a much faster spread of AI research and
applications in the world. A third breakthrough happened with a method
called deep learning (DL) that was invented to solve a problem of a
particular algorithm known as backpropagation (BPG) (Rumelhart et al.,
1986). We have to dig a bit deeper to understand it. For many years, the first
and most common ML algorithm was based on the perceptron (Block,
1962), a mathematical model of a neuron in the human brain. Scientists
were extremely optimistic about the fact that once we had the model of one
neuron, connecting it with other neurons would reproduce the intelligent
behavior of a human brain. One artificial neuron was not enough, but an
artificial neural network (ANN) could accomplish a great deal (Jackel et al.,
1986). This was demonstrated by training a couple of layers of neurons
connected to each other with the BPG algorithm. The idea was to have an
input layer to provide data to the neural network, an output layer to retrieve
the answer, and a hidden layer in between those two that could learn
mathematical functions by a series of weight adjustments computed with
the BPG algorithm. ANNs were learning to approximate even some
complex mathematical functions with extreme ease, a characteristic that I
describe in more detail in the following pages. The major problem with this
approach arose when scientists tried to connect more layers to solve even
more complex mathematical functions (i.e., handling more input data),
because the complexity of the network made BPG unable to compute the
weights. The breakthrough came from a group of Canadian computer
researchers who saved BPG and other algorithms with the DL method
(LeCun et al., 2015). The idea was to freeze the weights of all the layers of
a multi-layer ANN except one layer at a time and then apply BPG to the
equivalent ANN composed of all the frozen layers from input to the
selected layer minus one, the selected active layer, and all the frozen layers
from the selected layer plus one and the output layer. By moving through
layers, one active layer at a time, BPG was now able to perform its weight
calculations without problems. The fast diffusion of DL algorithms, that is,
the use of many layers of ANN, became possible because of these two
advances. Three major DL-enhanced forms of architecture initially emerged
with the DL paradigm: convolutional neural networks (CNNs)
(Schmidhuber, 2015), deep belief networks (DBNs) (Hinton, 2009), and
stacked autoencoders (SAs) (Vincent et al., 2008). CNN was using
convolution (a mathematical operation consisting of matrix multiplications)
as well as BPG, but the operations were computationally heavy and halted
its large-scale adoption. Then GPU acceleration made it possible for CNN
to perform on big data, especially large databases of images, which boosted
the interest in CNN. At the same time, while DBN and SA performed very
similarly to each other on big data (de Giorgio, 2015), neither would have
been possible without the DL method. Later on, transformers (Vaswani et
al., 2017) emerged as another kind of DL algorithm and became the basis of
large language models (LLMs), which now underpin all AI chatbots such
as, for example, ChatGPT from OpenAI, or Gemini from Google.
To explain what AI – the technology – can do today, I have to talk about
another silent revolution that has happened since the advent of computers.
Until a decade ago, the majority of operations in any field supported by
mathematics (engineering, physics, chemistry, economics, etc.) were
studied and planned by means of modeling (Cohen & Cyert, 1961; Fraser &
Burnell, 1970; Kiviat, 1967). For example, if an engineer wanted to
calculate the resistance of a certain conductor, the formula R = V/I would be used. This simple formula is a model of a natural phenomenon (i.e., the electric resistance R), which does not account for all the elements involved in a complex interaction but only for the few elements that have a major role (i.e., the voltage drop V and the current I). In particular, this formula assumes that these elements are static (i.e., they do not change over time). This means that when an engineer knows the average values of V and I, they can tell with a certain accuracy the average value of R. In the last decade, young engineers have also learned at university how ML algorithms can now provide an alternative answer to what R is based on a series of datapoints about V and I, but also the temperature T, and
whatever other observable data is available, without even knowing the
formula to obtain R. At this point one could argue that knowing the formula
is best, but computer scientists would reply: “Sadly not, the formula is not
even as accurate as the model we can build with machine learning
algorithms”. ML was found to be more accurate than dynamic models, and
better able to predict the behavior of the physical world (Carbonell et al.,
1983; Jurs et al., 1969). Our young engineers now have a dilemma: is it
better to use the formula that they know and can explain, aided by computer
modeling, or the ML algorithm that provides more accurate results
compared to the formula, but means giving up on explaining the outcome?
In truth, nobody has figured it out yet. Young engineers have started
treating ML models as black boxes that, given one or more inputs, provide a
very accurate output, no questions asked. The models do the job quite
efficiently, but they cannot provide any logical reasoning that has led to any
specific output.
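As a hypothetical illustration of this dilemma, the following sketch contrasts the static formula with a black-box ML model on synthetic data. The temperature drift, the numbers, and the choice of scikit-learn's RandomForestRegressor are assumptions of mine, not part of the original example; a real engineer would of course use measured data.

```python
# Minimal, hypothetical sketch: fixed formula vs. black-box ML model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000

# Synthetic measurements: voltage V and temperature T for a conductor whose
# real resistance drifts with temperature, plus some measurement noise.
V = rng.uniform(1.0, 10.0, n)
T = rng.uniform(20.0, 120.0, n)
R_real = 5.0 * (1.0 + 0.004 * (T - 20.0))
I = V / R_real + rng.normal(0.0, 0.01, n)

# Static model: the textbook relation with a fixed nominal resistance of 5 ohm.
I_formula = V / 5.0

# Black-box model: learn the current directly from (V, T) datapoints,
# without ever being told a formula.
X = np.column_stack([V, T])
train, test = slice(0, 1500), slice(1500, n)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[train], I[train])
I_ml = model.predict(X[test])

print("mean error, fixed formula:", np.mean(np.abs(I_formula[test] - I[test])))
print("mean error, ML black box: ", np.mean(np.abs(I_ml - I[test])))
```

On data like this, the learned model typically tracks the temperature-dependent behavior that the fixed formula misses, while offering no formula-like explanation of why.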
At this point, if our young engineers have a small dilemma, the entire
planet has a bigger one: what to do with all the people who graduated
before the last decade and only understand physical phenomena and their
models through formulae, not with data science and ML methods? They are
all less well-prepared for a world where computers can do it better than
them. Not smarter, but better in terms of results. This has led to two
outcomes: on one side, because of the fear of missing out on the general
trend, many people have started pushing themselves and their colleagues to
explore the new possibilities with AI, but, on the other side – and please
allow me to speculate – many others have found ways to slow down the
adoption of these better solutions (if you cannot cope with it, slow it down
until you retire); an example of this is focusing on the regulation of the AI
field before its adoption.
We are now ready to answer the question: «What is AI today?». AI is a
new form of mathematics, something that has replaced Newton and
Einstein's formulations, with algorithms that converge to more accurate
solutions by means of a lot of data, also known as the big data approach
(Bollier & Firestone, 2010). Am I exaggerating? Yes, somewhat. I hope you
will forgive me if, for the sake of understanding, I simplify some aspects.
Of course, it still takes some effort to produce good results in the presence
of big data (Dumais et al., 2002). I am trying to separate what we can
effectively do with AI from what is just fictional or PR. One part that I have
not mentioned yet is why better, but not smarter, and the reason behind this
distinction is that in certain domains people still have the ability to connect
the dots faster than AI. While AI algorithms can determine very efficient
solutions for certain problems, including, for example, rule-based games
such as chess or go (Silver et al., 2018), it still requires too many
computational resources to produce a coherent solution to complex
problems, the rules of which are of a hierarchical or dynamic nature. What I
am saying is that in terms of explainability and chain of reasoning for the
complex world that we live in, humans are still tough to beat, therefore
smarter.
As an example of exaggerated expectations in AI, let us consider one of
the latest magic tricks. You might have heard about the LLMs that I
mentioned before, especially because ChatGPT from OpenAI made them
very popular by promising answers to any question; a prompt interface
made it look as if we had finally reached a human level of interaction: whatever question a person sends, they always get a reply, and often the answer looks correct. As before in history, people take the wonder and
awe from some answers, and soon they start talking about magic. AI is
magic. Then others try to disprove the ability of this magic, and, sadly, as
history has taught us, they succeed. What are LLMs? They are black boxes
that learn to statistically associate efficient linguistic answers to some
questions. They do so by scanning billions and billions of texts (i.e., large
amounts of data) and associating numerical values with every word.
Computers are fast, but they do not understand anything more than numbers
and their statistical value. Thus, LLMs cannot directly cope with questions
such as: «If my brother has 3 apples, he sells 2, then divides one in equal
parts, of which he eats one and gives me the rest. How many (parts of)
apples do I get?». Why? Because it would require logic, a function that an
LLM does not possess. That is not even the tip of the iceberg: if you ask an
existential question to an LLM, and it replies: «I have a soul and I believe
in God», you can be sure that it is just quoting random texts from billions
and billions of human-made texts it has scanned through. There is no soul
running inside a GPU. What a nice trick! The statistical nature of the
learning process goes really well with the statistical nature of the questions:
only very few questions are particularly tricky to answer correctly and that
is where LLMs usually fail. The majority of questions that are asked are
about something that we are just too lazy to search for by other means and
the LLM has seen it over and over again during the learning process, thus it
can retrieve it quickly. On a separate note, a good way to deal with
recurring logic questions – often posed by humans to LLMs – is to detect
and correct the answer by applying a separate logic function that replaces
the wrong numbers with the correct ones. Here we enter the commercial
domain, where the solution is attributed to AI for marketing reasons, but
humans are behind it. Every time you receive an apology from an LLM, be
sure that someone has written a predefined script for it that covers up its
failure to provide the correct or ethical answer to your question. Have you
tried asking for instructions to build a bomb yet? Some people have
creatively found a way to jailbreak human-imposed limitations on an LLM
to answering unethical questions by creating prompts that first ask to
simulate the style of an entity that would give such unethical answers, and
then ask to provide such answers. The same human attempt at corrupting
LLMs by giving them specific roles gave birth to the commercial idea to
customize these LLMs as (paid) agents with specific roles that can serve as
a substitute for humans in several domains. Before you start worrying about
your job, remember that these agents are cheating: they are fast and can
trick people into thinking they are also intelligent, but without human
assistance they are not.
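As a drastically simplified caricature of the statistical association described above, consider a toy bigram model that merely counts which word tends to follow which. The tiny corpus is invented for illustration; real LLMs rely on transformer architectures trained on billions of texts, but the underlying principle, learning statistical regularities rather than meaning or logic, is the same.

```python
# Toy caricature of statistical language modeling: a bigram counter.
# The corpus is invented for illustration and the model has no understanding.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat ate the fish . the dog ate the bone ."
).split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Return the continuation most often observed after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # a frequent continuation, e.g. "cat"
print(predict_next("sat"))  # "on" -- pure co-occurrence statistics, no meaning
```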
To summarize, the 21st-century magic called AI is a stable technology
based on computational power and the ability to process a high amount of
data. As long as a computer can compute faster and faster, especially with
the use of GPUs, AI will be able to achieve greater speeds. But that is it. An
analogy that I appreciate when explaining this technology is the following:
everybody understands how a bird flaps its wings to fly; an airplane does not flap anything, yet it flies with fixed wings and engines by exploiting several
technologies that we have discovered (mechanics, aerodynamics, etc.).
Airplanes can fly quite a lot faster than birds, but they cannot fly without a
human pilot sitting in the cockpit. Our AI does not flap its wings either. It is
based on statistics, computer science, and the ability to process data. Data is
what we feed it; synthesized knowledge is what we get, often in the form of
unexplainable black boxes. AI is incredibly fast at processing data, but at
the same time quite dumb without a human pilot to control it.
Part II – Artificial Intelligence and International
Relations
Just as mathematics cannot be tied to a specific field of application, the
same lack of restrictions applies to AI. In the previous part of this chapter,
we have seen how AI is a form of computational mathematics that solves
black box problems without an explainable formula. Now we can look at its
influence on international relations.
Since we are talking about a new technology, the first aspect that comes
to my mind is the technological advantage of a nation. For example, it has
been the case that the first country to become a nuclear power (i.e., a
country that has nuclear weapons) held incredible power over all the other
countries. The race to enter the nuclear domain began after the first
theoretical ideas found practical evidence and, nowadays, it looks quite
difficult to reduce the number of nuclear warheads in the world (Acton,
2012). The race to nuclear armament was quite fast. AI, on the other hand,
has theoretically been around for centuries, but now we are gathering the
first practical evidence that computers can support it to a level that
drastically enhances the competitive advantage of a nation. This is because
of the direct economic advantages of leading its development and
commercialization, but also the indirect advantages that will derive from
accelerating the ability of a nation to create innovation in many other fields.
I promised to address the aspects of will and power, and that is what I
focus on in this part. The starting point is accepting that, with both the pros
and cons that are always present with a new development, and exactly as
can be said for nuclear power, in terms of both warfare and energy
production, AI is a new technology that promises new advancements for
humanity. The good aspect of AI is that if an old-school engineer could
design and build an internal combustion engine with 35–45% efficiency at
most (Liu et al., 2018), an AI engineer, together with the old-school
engineer, will perhaps be able to push the efficiency to higher values based
on data optimization of the combustion process. With AI, we will be able to
enhance several aspects of our society, reaching a level of efficiency never
seen before. Isn’t this a good reason to push the accelerator on the
technology?
The less appealing aspect is that AI comes with negative effects too.
Nuclear energy, for example, can power entire cities, while at the same time
it can destroy them in seconds. Ultimately, the use of a technology is what
humans decide to make of it and, luckily for us all, there seem to be more
people willing to protect the entire planet than obliterate it. However, if
there is a general positive will, we need people with the power to execute it.
Power, in my opinion, is the only reason to regulate AI, not to slow it down
for the sake of the old-school engineer's career, but to make sure that
whoever has the power to use this technology is forced, by law, to use it for
good. Among the real risks of AI, we can list the abuse for personal gain
and/or against the rest of humanity, but also the possibility to create such an
efficient society that governments – or large corporations – will have too
much control over everything humans do.
Let me offer an example of the first kind of risk. One use of AI that has
become famous is the alleged role that it played in distributing
targeted advertisements on social networks to convince British people to
quit the European Union (EU), which, in turn, led to several scientific
studies of this social phenomenon (Bastos & Mercea, 2018; Del Vicario et
al., 2017). Brexit happened because of the will and the power of a political
party, presumably representing the will of a majority of the British
population, but it was allegedly aided by AI, a technology that is able to
model the social fabric of a country better than hundreds of analysts
combined and to train itself on a specific target – converting votes – by
interacting with voters. There are no formulae in this plan, only data. Social
networks give access to large amounts of data about individuals, and the
black boxes of AI produce the targeted ads that are able to persuade them.
How does AI know this? It creates knowledge by successive adjustments, a
trial-and-error approach (Whitehead & Ballard, 1991). People become mere
numbers to play with. The more it succeeds in converting people's
responses, the more it adjusts its internal strategy to succeed. The more it
fails, the same happens. It does not matter for an AI algorithm whether it
succeeds or fails, both outcomes of a trial contribute to adjusting the aim.
After some training time, our technology is ready to launch the most
efficient of advertising strategies. Although I am not a lawyer, I would say that AI alone – often portrayed as an entity rather than a technology – is not capable of any of this unless adequately programmed by humans with the will, the means, and the power to act; therefore, AI cannot be held responsible
for any outcome from the application of its algorithms.
Considering AI as a technology, and absolutely not as a self-conscious
actor, does the political use of AI make it a threat? No, not necessarily.
Because the will to influence voters comes from a political party. Do
political parties have an unfair advantage when they use all the resources in
their possession, including AI algorithms, to influence voters? Probably not,
even though it might be unethical for them to do so, because it is their job to
win elections and popular votes. Democracy entails this pacific war of
persuasion until polling day. For Brexit, AI did not decide by itself to
change the political color of a country. There is neither God nor a soul
inside those GPUs. If not AI, do we need to regulate social media platforms
to avoid exploiting people's data for wrong purposes? We have arrived at
the paradox that social media platforms such as Facebook have started
asking for such regulations themselves, as they do not know how to
properly handle potential issues (Aral & Eckles, 2019). Do we need to
regulate AI to avoid creating behavioral models of humans with the aim of
influencing them? Perhaps so. Both the United States (US) and the EU are
currently engaged in this. The US prefers to apply pressure on corporations, since US corporations are the leaders in developing the AI algorithms that the rest of the world is using, whereas the EU has no alternative but to regulate the AI algorithms that it imports (Mantelero, 2022). What about all the
other uses of AI? Can it be used to train autonomous weapons that can kill
specific targets without a human trigger? Definitely yes (Abaimov &
Martellini, 2020; Pedron & da Cruz, 2020). Is it an ethical use of AI
(Amoroso & Tamburrini, 2018; Asaro, 2020)? Probably. Do we need to
forbid it? I leave this answer to the reader as an exercise in self-reflection.
It is finally the moment to ask ourselves what makes AI useful for
international relations. In order to see a common denominator for all the
possible uses of AI in international relations, rather than talking about
specific applications, I want to give you one last piece of the general
science behind it. It has to do with how intelligence emerges from a human
brain, or from the smart little fly that I mentioned at the beginning of this
chapter.
The brain is a biological optimization device. Nature has embedded us
with a mechanism to cope with a constantly changing world. It goes from
simple feedback mechanisms such as those that allow plants to follow the
sunlight, up to the complex sensory and nervous systems piloted by a
dedicated brain that has made humans a very smart species. How does it
optimize? We do not really know, and we have a lot of theories at the
moment (Bechara et al., 2000; Braitenberg, 1990; Damasio, 1999, 2003;
Damasio & Damasio, 1992; Damasio et al., 2000; Goldman & Nottebohm,
1983; Minsky, 2007; Nedergaard et al., 2003). The only biological
mechanism that we have managed to study and model with success is a
neuron. Neuroscientists know that neurons fire electrical pulses if some
conditions are met. This is where AI got its inspiration from: the perceptron
model of a neuron, which made ANNs possible. A few scientists came up
with models capable of replicating some brain functions by using and/or
extending the ANNs (Donnarumma et al., 2016; Robinson, 1992; Schrimpf
et al., 2018), for example, we know that neuroplasticity plays a huge role,
so that a brain can change its structure over time (Fjell & Walhovd, 2010),
especially from childhood toward adulthood, and neuromodulation also
plays an important part (Lee & Dan, 2012), so that the functions of a certain
region can be almost instantly altered by chemical transmitters that travel
from one neuron, or nerve, to the other. A good number of computer
scientists tried with a simple approach based on interconnectivity (Morrison
et al., 2005): more neurons interconnected one to the other means more
complexity and, hopefully, greater intelligence. In the human brain there are
10¹¹ neurons (a figure with eleven zeros) (Braitenberg, 2001), so they are
far more numerous than what we can simulate with DL algorithms and all
the computational power we have nowadays. Sadly, the high
interconnectivity approach ended up requiring parallel computing and
dedicated hardware (Babcock & Westervelt, 1987; Van Schaik, 2001) that
did not lead to huge breakthroughs in AI. The amount of parallel computing
that we have today, per se, is not enough to create the magic. Will AI be as
smart as a brain when we can get to those numbers? Maybe. What we know
that already works is a smaller network based on a few artificial neurons.
By trial and error, it can adjust its firing threshold so as to achieve an
optimal output, based on several inputs. This is the basic building block of
the brain: an adaptation mechanism. The next quest is to understand what
adaptation has to do with optimization. To answer this, I am forced to bring
in a quite difficult concept from physics called entropy that, with an adapted
meaning, was adopted as the base of information theory (Rényi, 1961). In
thermodynamics, the more a system is disordered, or to put it in different
terms, the more its components have random order or behavior, the higher
is the entropy of this system. The concept of entropy has played a role in
defining information. I use this link between them to help you understand
better what we mean by entropy. Imagine that we are playing a game, and I
need to guess which of 20 people in a room you have given my smartphone
to, but to figure it out I can only ask you a series of questions until I find
that person (and get my smartphone back). If I want to get it back as soon as
possible I have to ask you good questions. Is there a question that is better
than another? Of course there is! The strategy to win this game is to ask the
questions that minimize the entropy of the answer, that is the level of
“information”, “surprise”, or “uncertainty” contained in it. The more
qualitative the questions, the sooner I find out who has my smartphone. Did
you already figure out what these questions are? Probably not. So let me
help you. If I can see that 10 out of the 20 people in the room have a
distinguishing feature, for example, blond hair, a question that minimizes
the entropy (i.e., minimizes the disorder and so maximizes the order of the
system after the question) would be «Does the person with my smartphone
have blond hair?». Basically, this question splits the overall information
about the elements belonging to either of the two sets “possibly, has my
smartphone” and “cannot have my smartphone” exactly 50/50. The system
goes from a disorder of 20 people possibly hiding my smartphone, to 10, a
much better organized scenario for me to guess. If I continue with a similar
question based on a feature that only half of these 10 people have, and then
another question, soon I will have five, then two or three people (with three
people I will need an extra question to split it into one and two), and,
finally, the name of the person hiding my smartphone. It would take me
four or at most five questions to find the truth. Another strategy could be
asking specific questions such as: “is it Tom?”. Tom is one out of 20 people,
and my probability to guess right is 1/20. Then it would take me, in the
worst case, up to 19 questions to find my smartphone. On the other hand, I
could win the lottery and guess right with one question! Fancy your
chances? To sum it up, in information theory, the entropy of a random
variable such as the hair color of the person with my smartphone is the
average level of “information”, “surprise”, or “uncertainty” inherent to the
variable's possible outcomes. An outcome that gives less surprise (the
opposite of the lottery winning ticket), is one that does not immediately
solve the problem but splits it in two equal smaller problems. Since ML
algorithms arise largely from statistics and information theory, in general,
the objective of most ML algorithms is to minimize entropy, which is an
optimization process. What about the brain? Let me anticipate the answer
and then explain it. The brain does two things that make it (locally)
intelligent: it maximizes and minimizes the entropy, depending on need.
Basically, it first explores the world and produces a great mess of ideas
about it (i.e., it increases the entropy), then it sorts these ideas out to create
an ordered (personal) understanding of the world (i.e., it decreases the
entropy). The presence of intelligence had already been associated with
decreases of entropy in thermodynamic systems (Szilard, 1964). On the
other hand, we have scientific evidence that exploratory behavior – in a
solution space – coincides with entropy maximization algorithms (Zhang et
al., 2021). The brain is an adaptive system. In order to solve problems
(reduce the entropy) of any kind and in any field, it has to learn about many
possible problems to solve (increase the entropy) in several fields and keep
them ready to be solved. If you wonder why a creative person can come up
with brilliant solutions, it is because of the messiness they can achieve by adding lots of randomness and disorder to their mental processes.
Imagine the crazy genius, that person who behaves totally out of the box,
and yet can do things that nobody else can. That is an extreme example of a
built-in functionality that we all possess: creativity. Many books describe
the left brain as logical and the right one as creative, although no one has found adequate scientific evidence of this (Corballis, 2014). Still, let me
speculate that we most probably have several parts of the brain that are
working as a dichotomy against themselves: one part produces solutions (a
logic optimizer that reduces entropy) and the other generates problems to
solve (a creative explorer that maximizes entropy).
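For readers who want to see the arithmetic behind the smartphone game above, here is a minimal sketch; the numbers are illustrative, but they show why four or five well-chosen 50/50 questions are enough for 20 people, while naming people one by one takes about ten questions on average.

```python
# Minimal sketch of the arithmetic behind the smartphone guessing game.
# The numbers are illustrative; entropy measures how many well-chosen yes/no
# questions are needed, on average, to remove the uncertainty.
import math

people = 20

# Uncertainty of a uniform choice among 20 people: log2(20) ≈ 4.32 bits.
entropy_bits = math.log2(people)

# Halving strategy: each 50/50 question removes one bit of uncertainty,
# so four or five questions suffice.
questions_halving = math.ceil(entropy_bits)

# Name-by-name strategy ("is it Tom?"): each guess is right with probability
# 1/20, and on average we ask about half the room before finding the phone.
questions_naive = sum(k / people for k in range(1, people)) + (people - 1) / people

print(f"uncertainty to remove: {entropy_bits:.2f} bits")
print(f"questions with 50/50 splits: {questions_halving}")
print(f"average questions, one name at a time: {questions_naive:.2f}")
```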
Let me explain better with examples. Generating problems has a lot to do
with the ability to keep scenarios open to be solved. It is an exploratory
task. If we want to increase our chances to survive, that is, increasing our
resources to survive in a small, overpopulated world, we have to discover
the Americas, as Christopher Columbus did in 1492, or we have to explore
life on Mars. Or rather, humanity has to. One person instead can already do
a lot to ensure their survival against other people, for example, by studying,
exploring the world, learning about AI and new technologies. Anything that
makes our brain richer in problems to solve, gives us the ability to optimize
our survival, or simply our ability to afford a nice villa on the coast. Thanks
to my ability to increase entropy, before decreasing it, I can solve my
smartphone game better than a computer, with only one question! How? I
will simply ask my other friend to call my phone for me. When it rings, I
will know who has it in their pocket! Am I cheating on the rules of the
game? Yes. Am I thinking out of the (black) box? Definitely so. Have you
ever heard somebody saying that smart people cheat? Research shows that
high achievers tend to cheat more at their exams (Yaniv et al., 2017). Now
you can understand why: a game is a game, so I can perhaps stick to the
rules and play it, but when it comes to survival… If cheating makes me a
survivor, a predator instead of the prey, this is vital to me! Cheating is one
of the tools that nature gave humans to survive. For as long as lions are
strong and gazelles can run fast, humans will cheat for their survival.
Back to intelligence, how does the brain maximize our survival? By
keeping all the possible doors open for us (maximizing the entropy) and by
taking personal decisions that will minimize the entropy in the way we
access information about the world, by giving us a competitive advantage
over other brains. A fly will always react to movements, to avoid being hit.
If we hit it once, it will randomize its movements (increase the entropy of
its flight plan) in order to make it more difficult for us to follow its
movements and hit it again.
Going back to intelligence in AI: is there any specific subset of
AI made of algorithms that work with the maximization of entropy? The
answer is yes, and it is called reinforcement learning (RL) (Sutton & Barto,
1999). RL algorithms consist of rule-based scenarios, with agents that act
on the environment and learn from the feedback they receive from it, based
on rewards. We define rules, environments and rewards, while the
algorithm learns by means of two behaviors: exploration (increase of
entropy) and exploitation (decrease of entropy). While RL acts on a balance
of the two behaviors, it is not as good as general DL algorithms in solving
open scenarios from data. A great drawback of this AI formulation is that it
requires a human to construct the rule-based scenario and set the rewards to
converge to specific solutions. In short, RL does not find anything new in
terms of the simulation, because a human has already pre-defined
everything through a set of precise rules; in this scenario, the RL algorithm
is only able to find an unexpected optimal path, or behavior, to achieve that
goal. This optimal path can be interesting to replicate, as sometimes it
resembles ideas that humans have not thought about, for example, a new
tactic to win at chess (Silver et al., 2018), while the game and its rules, for
example, chess, are fully pre-determined. RL is mature and quite
performant in several specific applications, but it is not yet able to
contribute to creating intelligent behavior without the assistance of an
intelligent human. That is because, we, humans, have an overall perspective
of the world that is much more complex than the simple scenarios that we
can simulate with RL algorithms, and reducing complex environments to
simple scenarios with simplified rules requires our intelligence.
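To illustrate the exploration/exploitation balance in its simplest possible form, here is a sketch of an epsilon-greedy bandit, one of the most elementary RL setups. The rewards, the epsilon value, and the environment are my own illustrative assumptions and are far simpler than the rule-based scenarios discussed above.

```python
# Minimal sketch of exploration vs. exploitation: an epsilon-greedy bandit.
# Rewards, epsilon, and the environment are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_rewards = np.array([0.2, 0.5, 0.8])  # hidden average payoff of 3 actions
estimates = np.zeros(3)                   # what the agent has learned so far
counts = np.zeros(3)
epsilon = 0.1                             # fraction of steps spent exploring

for step in range(1000):
    if rng.random() < epsilon:
        action = int(rng.integers(3))       # exploration: raise entropy
    else:
        action = int(np.argmax(estimates))  # exploitation: reduce entropy
    reward = true_rewards[action] + rng.normal(0.0, 0.1)
    counts[action] += 1
    # Incremental average: learn from the feedback the environment returns.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned payoff estimates:", np.round(estimates, 2))
print("preferred action:", int(np.argmax(estimates)))
```

Even in this toy setting, the agent only discovers the best action because a fraction of its steps is spent exploring (raising entropy) rather than exploiting its current best estimate (reducing entropy).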
The end of this tour of AI, computers, brains, entropy and survival,
allows me to offer one last interesting analogy to connect AI, entropy, and
international relations: what is a diplomat, if not an intelligent and creative
person who either increases or reduces the entropy of the world to favor
their nation? A diplomat makes the necessary arrangements so that the
nation they represent can keep as many scenarios open as possible to solve
its problems (e.g., access to resources), by dealing with a set of conditions
determined and influenced by the presence of other nations and their
representatives with competing interests. Diplomats explore new
opportunities while keeping a balance among the possible outcomes.
Diplomacy is about keeping as many doors open as possible (maximizing
the entropy) and finding trade-offs and solutions when needed (minimizing
the entropy). In any other discipline, we can call this diplomatic task
“finding a balance”: even nature prefers to balance a complexity of
phenomena in a way that all can coexist, rather than over-emphasizing one
phenomenon in particular. That is true for science, as much as for
humanistic fields in which it is well-known that no absolute knowledge or
truth exists, but rather many currents of thought around any topic. The
harmonies of the world, or Harmonices Mundi as per the title of a work by
Kepler – a seminal piece in the history of astronomy, but also a case study
in the creative process of discovery – are made of a balance of trade-offs
among many interacting systems (including humans, animals, natural
things, etc.) obeying physical laws. Before the advent of humans, these
choices were uniquely natural, however, the artificial part (i.e., governed by
humans) is taking over the natural evolution of our planet, from the peculiar
organization and geography of our civilization affecting local ecosystems,
to planetary effects such as climate change and maybe even the conquest of
space. While we cannot change the laws of nature, we can study and exploit
them to change the organization of the environment. A personal wish is that
in the near future, these evolutionary trade-offs will be determined more
and more by AI algorithms condensing super-human wisdom, and less and
less by human greed. These AI algorithms will be made by humans and will
be working for the advantage of humanity, while keeping an eye on the
stabilization of our ecosystem. The reason why we should place the survival
of our planet in the hands of AI algorithms is because no human in history
has been able to manage an empire without ending up favoring some
specific people, or regions, to the detriment of others. This could be
connected to the limited capability of a single human brain to depict all
scenarios for a multitude of actors on our planet. A brain is a machine made
and dimensioned in terms of memory and computational resources to
benefit mainly its (human) carrier. A solution could lie in the concept of the
digital twin (DT), that is, a digital copy of a physical system, able to
represent and simulate its real part in a computational environment
(Glaessgen & Stargel, 2012). If a multitude of DTs could track any aspect
of our reality, in parallel with the reality itself, and AI algorithms could be
used to solve complex scenarios and create larger and more balanced trade-
offs, this could be the key to maintaining a better order (lower entropy) in
our world. At the same time, we should never wish for a perfect (low
entropy) world, as it is the very messiness of the world itself that keeps us
“alive” and gives us the possibility to dream about changing it to a better
version for ourselves.

Conclusion
If international relations evolve into a new form of AI-based international
relations, it will be a field developed on the back of the computational
power of our new AI technology to handle the entropy of world diplomacy
and politics (Nishikawa-Pacher, 2023; Schweller, 2010): maximize entropy
and generate new opportunities for human survival; at the same time, it will
also be based on keeping a worldwide balance, that is, the ability to regulate
AI activities that minimize entropy in order to rebalance the (dis)advantages
of single nations.
While research in AI will surely offer numerous improvements in the
world, widespread education on the computational and mathematical
fundamentals of AI will be essential to guarantee that international relations,
as much as any other field touched by the fast progress that we are
experiencing with AI, will be able to cope with the speed of the progress
itself and leave no one behind.

References
Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M.,
Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J.,
Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P.,
Vasudevan, V., Warden, P., Wicke, M., Yu, Y. & Zheng, X. (2016).
TensorFlow: A system for large-scale machine learning. In 12th
USENIX symposium on operating systems design and implementation
(OSDI 16) (pp. 265–283). USENIX.
Abaimov, S., & Martellini, M. (2020). Artificial intelligence in autonomous
weapon systems. In M. Martellini & R. Trapp (Eds.), 21st century
Prometheus (pp. 141–177). Springer.
Abraham, T. H. (2020). The Macy conferences on cybernetics:
Reinstantiating the mind. In Oxford research encyclopedia of
psychology. https://doi.org/10.1093/acrefore/9780190236557.013.541
Acton, J. M. (2012). Bombs away? Being realistic about deep nuclear
reductions. The Washington Quarterly, 35(2), 37–53.
Amoroso, D., & Tamburrini, G. (2018). The ethical and legal case against
autonomy in weapons systems. Global Jurist, 18(1), 20170012.
Aral, S., & Eckles, D. (2019). Protecting elections from social media
manipulation. Science, 365(6456), 858–861.
Asaro, P. (2020). Autonomous weapons and the ethics of artificial
intelligence. Ethics of Artificial Intelligence, 212, 212–236.
Babcock, K. L., & Westervelt, R. M. (1987). Dynamics of simple electronic
neural networks. Physica D: Nonlinear Phenomena, 28(3), 305–316.
Bastos, M., & Mercea, D. (2018). The public accountability of social
platforms: Lessons from a study on bots and trolls in the Brexit
campaign. Philosophical Transactions of the Royal Society A:
Mathematical, Physical and Engineering Sciences, 376(2128),
20180003.
Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision
making and the orbitofrontal cortex. Cerebral Cortex, 10(3), 295–307.
Bedini, S. A. (1964). The role of automata in the history of technology.
Technology and Culture, 5(1), 24–42.
Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins,
G., & Bengio, Y. (2010). Theano: A CPU and GPU math compiler in
Python. SciPy, 4, 1–7.
Block, H. D. (1962). The perceptron: A model for brain functioning. I.
Reviews of Modern Physics, 34(1), 123.
Bollier, D., & Firestone, C. M. (2010). The promise and peril of big data
(pp. 1–66). Aspen Institute, Communications and Society Program.
Braitenberg, V. (1990). Reading the structure of brains. Network:
Computation in Neural Systems, 1(1), 1–11.
Braitenberg, V. (2001). Brain size and number of neurons: An exercise in
synthetic neuroanatomy. Journal of Computational Neuroscience, 10,
71–77.
Bur, T. C. D. (2016). Mechanical miracles: Automata in ancient Greek
religion [Doctoral dissertation]. Faculty of Arts, University of Sydney.
Carbonell, J. G., Michalski, R. S., & Mitchell, T. M. (1983). An overview
of machine learning. In R. S. Michalski, J. G. Carbonell, & T. M.
Mitchell (Eds.), Machine Learning (pp. 3–23). Springer.
https://doi.org/10.1007/978-3-662-12405-5_1
Catanzaro, B., Sundaram, N., & Keutzer, K. (2008). Fast support vector
machine training and classification on graphics processors. In
Proceedings of the 25th international conference on machine learning
(pp. 104–111). Association for Computing Machinery.
Cireşan, D. C., Meier, U., Gambardella, L. M., & Schmidhuber, J. (2010).
Deep, big, simple neural nets for handwritten digit recognition. Neural
Computation, 22(12), 3207–3220.
Cohen, K. J., & Cyert, R. M. (1961). Computer models in dynamic
economics. The Quarterly Journal of Economics, 75(1), 112–127.
Corballis, M. C. (2014). Left brain, right brain: Facts and fantasies. PLoS
Biology, 12(1), e1001767.
Damasio, A. R., & Damasio, H. (1992). Brain and language. Scientific
American, 267(3), 88–109.
Damasio, A. R. (1999). How the brain creates the mind. Scientific
American, 281(6), 112–117.
Damasio, A. R. (2003). Looking for Spinoza: Joy, sorrow, and the feeling
brain. Houghton Mifflin Harcourt.
Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L.,
Parvizi, J., & Hichwa, R. D. (2000). Subcortical and cortical brain
activity during the feeling of self-generated emotions. Nature
Neuroscience, 3(10), 1049–1056.
de Giorgio, A. (2015). A study on the similarities of deep belief networks
and stacked autoencoders. Degree Project in Computer Science,
Second Cycle (DD221X). KTH, Royal Institute of Technology.
Del Vicario, M., Zollo, F., Caldarelli, G., Scala, A., & Quattrociocchi, W.
(2017). Mapping social dynamics on Facebook: The Brexit debate.
Social Networks, 50, 6–16.
Dittrich, J., & Quiané-Ruiz, J. A. (2012). Efficient big data processing in
Hadoop MapReduce. Proceedings of the VLDB Endowment, 5(12),
2014–2015.
Do, T. N., Nguyen, V. H., & Poulet, F. (2008). Speed up SVM algorithm for
massive classification tasks. In International conference on advanced
data mining and applications (pp. 147–157). Springer Berlin
Heidelberg.
Donnarumma, F., Prevete, R., de Giorgio, A., Montone, G., & Pezzulo, G.
(2016). Learning programs is better than learning dynamics: A
programmable neural network hierarchical architecture in a multi-task
scenario. Adaptive Behavior, 24(1), 27–51.
Dumais, S., Banko, M., Brill, E., Lin, J., & Ng, A. (2002). Web question
answering: Is more always better?. In Proceedings of the 25th annual
international ACM SIGIR conference on Research and development in
information retrieval (pp. 291–298). Association for Computing
Machinery.
Fjell, A. M., & Walhovd, K. B. (2010). Structural brain changes in aging:
Courses, causes and cognitive consequences. Reviews in the
Neurosciences, 21(3), 187–222.
Fraser, A., & Burnell, D. (1970). Computer models in genetics. McGraw-
Hill.
Gates, B., Myhrvold, N., & Rinearson, P. (1996). The road ahead. Penguin
Books.
Glaessgen, E., & Stargel, D. (2012). The digital twin paradigm for future
NASA and US Air Force vehicles. In 53rd
AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and
materials conference: Special Session on the Digital Twin. (p. 1818).
Aerospace Research Central. https://doi.org/10.2514/6.2012-1818
Goldman, S. A., & Nottebohm, F. (1983). Neuronal production, migration,
and differentiation in a vocal control nucleus of the adult female
canary brain. Proceedings of the National Academy of Sciences, 80(8),
2390–2394.
Harish, P., & Narayanan, P. J. (2007). Accelerating large graph algorithms
on the GPU using CUDA. In International conference on high-
performance computing (pp. 197–208). Springer Berlin Heidelberg.
Hinton, G. E. (2009). Deep belief networks. Scholarpedia, 4(5), 5947.
Hong-Tao, B., Li-Li, H., Dan-Tong, O., Zhan-Shan, L., & He, L. (2009). K-
means on commodity GPUs with CUDA. In 2009 WRI world congress
on computer science and information engineering (Vol. 3, pp. 651–
655). IEEE.
Jackel, L. D., Howard, R. E., Graf, H. P., Straughn, B., & Denker, J. S.
(1986). Artificial neural networks for computing. Journal of Vacuum
Science & Technology B: Microelectronics Processing and
Phenomena, 4(1), 61–63.
Jurs, P. C., Kowalski, B. R., & Isenhour, T. L. (1969). Computerized
learning machines applied to chemical problems. Molecular formula
determination from low resolution mass spectrometry. Analytical
Chemistry, 41(1), 21–27.
Kiviat, P. J. (1967). Digital computer simulation: Modeling concepts (p.
61). Rand Corporation.
Kuang, Q., & Zhao, L. (2009). A practical GPU based kNN algorithm. In
Proceedings of the second international symposium on computer
science and computational technology (ISCSCT ’09) (pp. 151–155).
Academy Publisher.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature,
521(7553), 436–444.
Lee, S. H., & Dan, Y. (2012). Neuromodulation of brain states. Neuron,
76(1), 209–222.
Levinthal, C. (1966). Molecular model-building by computer. Scientific
American, 214(6), 42–52.
Liu, H., Ma, J., Tong, L., Ma, G., Zheng, Z., & Yao, M. (2018).
Investigation on the potential of high efficiency for internal
combustion engines. Energies, 11(3), 513.
Lopes, N., Ribeiro, B., & Quintas, R. (2010). GPUMLib: A new library to
combine machine learning algorithms with graphics processing units.
In 2010 10th international conference on hybrid intelligent systems
(pp. 229–232). IEEE.
Mantelero, A. (2022). Regulating AI. In Beyond data: Human rights,
ethical and social impact assessment in AI (pp. 139–183). TMC Asser
Press.
Marr, A. (2017). Gentille curiosité: Wonder-working and the culture of
automata in the late renaissance. In Curiosity and wonder from the
renaissance to the enlightenment (pp. 165–186). Routledge.
Minsky, M. (2007). The emotion machine: Commonsense thinking,
artificial intelligence, and the future of the human mind. Simon and
Schuster.
2
THE GEOPOLITICAL IMPACT OF
ARTIFICIAL INTELLIGENCE
The UN SDGs and China's Strategies in
Africa
Settimio Stallone

DOI: 10.4324/9781003518495-4

Artificial intelligence (AI) has a long history, dating back to the 1950s. In
recent years, it has advanced to surpass human cognitive capabilities in
specific fields due to significant progress in computing and data-handling
capacities (Roehrl, 2022a). While many are excited about AI's potential,
others view it as a threat to our society's stability, citing risks related to a
potential decline in human thought. Governments have implemented
policies to protect individual privacy and consumer rights while expressing
enthusiasm for AI's applications in many fields, including science,
technology, and the military. Opinions vary, but a profound reflection is
absolutely necessary, as AI can outperform humans in many areas, bringing
progress and benefits for humanity but also causing confusion within
human society, especially as it becomes deeply integrated into our daily
lives (Ang & Feinholz, 2018). This reflection is crucial as it helps us
understand the implications of AI's continuous evolution on geopolitical
balance and assets. This topic has recently attracted attention not only from experts but also from governments, policymakers, politicians, and the general public.
The AI transformation is unfolding approximately seven times faster than the Industrial
Revolution. This comparison offers essential insights for shaping policy
responses and designing institutions. Successfully navigating a future with
beneficial AI will require balancing actions that uphold common values of
freedom and autonomy, human well-being and quality of life, and justice
and equality (Roehrl, 2022b). However, billions of people are facing
challenges and are not reaping the benefits of the technological revolution.
In regions such as West and Central Africa, as well as parts of East Asia and
the Pacific, especially in remote areas, preparing for digital transformation
has been challenging due to issues such as the lack of connectivity and the
digital divide. This divide refers to unequal access to digital opportunities
and technologies based on factors such as age, skills, economics,
geography, geopolitics, or social circumstances (UNICEF, 2022).

Role, Responsibilities, and Initiatives of the International Organizations
If we examine the consequences that the spread of AI can have on the
international system, we cannot ignore the role of international
organizations, particularly those that are part of the United Nations (UN)
system. Over the last few years, they have launched numerous actions and
published several documents on the topic, focusing mainly on the ethical
aspects and applications that AI can offer to lead to the achievement of
some previously set objectives.
International players must not be passive in this significant shift in the
global landscape. AI technologies have the potential to benefit humanity
greatly, and all nations can take advantage of them. International
organizations, especially those focused on human well-being and
development, have adopted a responsible approach to using AI to achieve
various goals. However, it is essential to acknowledge that AI could also
worsen inequalities, discrimination, digital divides, and exclusion and
threaten cultural, social, and biological diversity. Other potential risks
include safety risks (inherent vulnerabilities and limitations of AI systems),
security risks (intentional attacks aiming to compromise the way AI
systems learn or act), and human-machine interaction risks (inadequate use
of AI systems due to complex dynamics of humans operating or working
with AI systems) (Puscas, 2023).
Due to these dangers, many international organizations have developed
strategies and policies and published documents and guidelines to address
opportunities and issues related to AI. For example, the World Health
Organization (WHO) established an expert group to create a Guidance
Document on Ethics and Governance of Artificial Intelligence for Health;
the United Nations University Centre for Policy Research (UNU-CPR) published a
report in 2019 titled “The New Geopolitics of Converging Risks: The UN
and Prevention in the Era of AI,” which examines how the multilateral
system can better understand and anticipate the risks arising from the
convergence of AI with cyber and biotechnologies. Additionally, the United
Nations Environment Programme (UNEP) partnered with Global AI to use
document scraping techniques to assess the compliance of Corporate
Sustainability Reports with specific standards; the International
Telecommunication Union created a Focus Group on Artificial Intelligence
for Health; last but not least, the Food and Agriculture Organization (FAO)
in June 2021 organized the global forum “Artificial Intelligence for A
Digital Blue Planet” to share knowledge, skills, and innovative ideas and
ways aimed at using AI to improve the management of our aquatic
ecosystems. Furthermore, the Organization for Economic Cooperation and
Development (OECD) maintains a live repository of over 600 national AI
policy initiatives from over 60 countries, territories, and the EU (UN,
2021).
International organizations have expressed significant concern about
ethical issues. For instance, in 2021, UNESCO approved a document titled
“Recommendation on the Ethics of Artificial Intelligence.” This document
outlines monitoring and assessment activities and identifies various action
areas, including governance, data policy, environment, gender, culture,
development, education and research, communication and information,
economy and labor, and health and social well-being. These actions should
be promoted through a policy that encourages strong cooperation between
international organizations and agencies, even if they have different
purposes. The complexity of ethical issues related to AI makes these actions
absolutely necessary. Achieving this requires the collaboration of multiple
stakeholders across various levels and sectors of international, regional, and
national communities. This collaboration should occur in an environment
that enables stakeholders to share responsibility through a global and
intercultural dialogue.
It is essential that all participants in the AI system adhere to the values
and principles embraced by the global community. These should be
reinforced through updates to existing legislation and the development of
new laws, regulations, and business guidelines. This adherence must align
with international law, including the United Nations Charter and Member
States’ human rights obligations. Furthermore, it should align with
internationally agreed social, political, environmental, educational,
scientific, and economic sustainability objectives (UNESCO, 2021).

The Effects of Artificial Intelligence on the UN Sustainable Development Goals
For many years, the UN has been pursuing an ambitious plan of objectives aimed at achieving sustainable development by 2030.
established strategies are proving to be no longer valid, and some goals
seem more achievable. However, this does not exclude the possibility that
substantial disparities may arise between various countries and areas inside
the international system due to the particular nature of AI and the
knowledge and skills it requires.
The UN system, including the UN itself and its technical agencies, has
recently adopted AI to achieve objectives set before current information
technology capabilities were available. AI presents new opportunities and
challenges for development, making the 20th-century models for economic
growth, human capital development, and responsive institutions less
relevant and useful. Future strategic opportunities must also address
complex interactions with factors such as climate change, technological
innovation, and social transformation. Strategic coordination and policy
coherence are necessary to deal with these complex interactions (Global
Centre for Public Service Excellence [GCPSE], 2018).
The 2030 Agenda for Sustainable Development is a crucial initiative
related to the potential uses of AI within the UN system. It was adopted by
all United Nations Member States in 2015 and serves as a shared plan for
advancing peace and prosperity for people and the planet, both now and in
the future. At its core are the 17 sustainable development goals (SDGs),
which present an urgent call to action for all countries, developed and developing alike, to participate in a global partnership. These goals recognize that
efforts to eradicate poverty and other forms of deprivation must be pursued
alongside improvements in health and education, reductions in inequality,
and the promotion of economic growth. This must be done while addressing
climate change and safeguarding our oceans and forests. The 2030 Agenda
builds upon the eight Millennium Development Goals adopted in
September 2000 with the Millennium Declaration at the Millennium
Summit in New York. The Millennium Development Goals focused on
poverty, hunger, child mortality, diseases, and gender inequalities. The new
agenda aims to ensure prosperity for everyone in a peaceful world,
balancing socio-economic progress with environmental sustainability. It
promotes themes related to economic development, such as industry,
modernization, infrastructure, work, production, and consumption, along
with matters traditionally linked to the protection of the planet, like water
resources, seas, land, wildlife, climate, and energy sources (UN, 2021).
When it comes to agricultural practices, AI technology has excellent
potential. Several factors influence the development and integration of AI
solutions for agriculture. These factors include ICT infrastructure, market
size, private and public investment, availability of human resources, and
aggregation of data (Elbehri et al., 2021).
AI will significantly impact the UN SDGs in various areas, such as
technology sharing, policy research, and open-source infrastructure. This
will be crucial in establishing common ethics and governance of AI,
creating shared applications, and promoting digital cooperation. While AI-
enabled technology can make it easier and faster for the UN to achieve its
goals, it could also exacerbate inequalities and hinder the realization of
specific targets. Additionally, some scholars believe that there may be
adverse effects, especially considering the disparities between countries in
resources, capabilities, funds, democracy, transparency, and cultural values.
Consequently, specific goals that were previously universally accepted may
empower certain actors and significantly influence geopolitical factors due
to the growing importance of AI in the international system and the new
dynamics it creates in processes, plans, actions, and schedules.
For instance, the evolution of modern connections between nations,
including energy flows and IT standards, has turned them into geopolitical
tools. Today, technology and communication tools are developed by a
limited number of global tech companies. This situation can lead to actions
and initiatives that may provoke suspicion among governments. As some
states become more vulnerable, they may try to impose tighter control,
resulting in a loss of connections among countries. In non-democratic
regimes, this could lead to restrictions on freedom and civil rights. This
creates a challenging situation where there is a conflict between sustained
economic growth, national security concerns, and demands for individual
rights. No government, community, or country can satisfy all three interests.

AI and Africa: A Challenging Opportunity


Considering the relevance of the SDGs and the impact that AI can have on
them and their realization, it seems clear that the continent most affected by
these possibilities will be Africa. For African nations, these scenarios
indeed appear exciting and positive, but the particularity of this continent
must also be considered. A range of conditions exists, the diffusion of
technology is erratic, and, above all, public institutions are weak, regimes
are often non-democratic, and some threats posed by non-state actors, such
as terrorism and radicalism, are persistent.
When considering the potential impact of AI on geopolitical assets, it is
crucial to pay attention to Africa. This continent poses a challenge for the
global community due to its underdevelopment. AI technologies can
undoubtedly have a positive impact on African problems. However, they
can also exacerbate existing divides and inequalities, as some countries may
not have fair access to them and thus miss out on their benefits. It is also
essential to respect the wishes of some African people who may choose not
to participate in all technological developments due to their cultural beliefs.
Additionally, African institutions are weak, and the widespread adoption of
AI technologies could have a significant, and not always positive, impact on
human rights, fundamental freedoms, democracy, and socio-cultural
processes (UNESCO, 2021).
The African continent has a unique opportunity to leverage AI
technologies for local socio-economic growth. This includes the potential to
leapfrog the technological infrastructure associated with the Third Industrial
Revolution. With a dynamic and growing youth population ready to
embrace new forms of digital work and entrepreneurship, African
governments should prioritize the adoption of AI solutions. These solutions
can help achieve national development goals and contribute to prosperous
and inclusive societies. Policy responses to AI are emerging across the
continent, with Egypt, Mauritius, and Rwanda being the first African
countries to publish national AI strategies (Adams, 2022).
AI has the potential to address critical sustainability challenges in Africa,
including:

Preventing and mitigating the impacts of flooding and wildfires
Preserving wildlife
Providing healthcare access in remote areas
Facilitating quality education in remote areas
Enhancing social mobility

However, Africa remains dominated by foreign technology and AI firms that do not necessarily support the realization of national developmental priorities (such as those outlined in the UN 2030 SDGs), or that, worse, exacerbate exclusion and oppression for certain groups, particularly
women. African policymakers must prioritize the growth of local AI
capabilities and capacities that can be drawn on to advance inclusive
economic growth and social transformation meaningfully. This arrangement
requires AI policy responses to build on national digital agendas and to
focus on equitable access to digital, data, and computing infrastructure.
African people face daily challenges that are different from those in the
Northern Hemisphere. These challenges require solutions that may not be
highly advanced but are sometimes unavailable or non-existent. Given this
unique situation, international organizations should collaborate with IT
corporations to promote approaches, programs, and solutions tailored to the
specific needs of this continent (Markelova, 2018). In this case, even critics
of AI might find it easier to accept it, acknowledging its capacity for ethics
and morality and its ability to align business targets with the public good.
Moreover, developments in Africa are significantly influenced by internal
trends such as economic growth, increased conflicts, environmental
degradation, and the digital transition. New technologies and societal and
economic factors, such as establishing accessible trade areas, digitalization,
the labor market, and the information space, shape functional spaces. These
spaces are influenced by major global trends, including population growth,
climate change, systemic pressures from a multipolar world, and
unexpected events like the impact of COVID-19 (Mustasilta, 2022).
In Africa, many AI initiatives are still in their early stages, and the
landscape is expected to change significantly in the next few years. The
main challenge is not the access to AI technology but rather the lack of
data. Additionally, AI in Africa must be able to process a wide variety of
languages, as approximately 2000 languages are spoken on the continent.
Enabling more people to communicate and be understood in their own
language could lead to significant progress in digital and societal inclusion.
In the diverse social and cultural contexts of Africa, AI should prioritize
digital literacy and education, community development, comprehensive
reskilling programs, access to basic digital infrastructure, protection of
minority ethnic communities, and the promotion of diverse forms of
knowledge (Adams, 2022).
It is crucial to enhance connectivity and integration in Africa to avoid
geopolitical fragmentation, which could undermine sustainable growth and
prosperity. The African continent is currently at a pivotal point. Future
improvements in connectivity should aim to provide inclusive access to
digitalization, sustainable infrastructure development, poverty and
inequality reduction, better public services, improved governance, and
integration of previously isolated areas. Conversely, a lack of adequate
investments and plans to enhance connectivity may widen existing
divisions, creating or exacerbating gaps across geographical, functional, and
social areas, leading to increased instability and conflict. Disconnection
could also result in the emergence of two systems of governance in Africa:
one by state authorities and the other by self-proclaimed Islamic states or
other armed groups. It is essential to take action to achieve these goals and to re-establish control over depopulated, contested peripheral areas that governments can no longer access.
The inadequate physical and digital infrastructure is one of the main
challenges in using AI to improve conditions for African people in sectors
aligned with the SDGs. Africa also requires more data, increased
investment in AI, and AI-related education. Currently, African AI
companies are at risk of being less competitive than multinational rivals in
their own market. Africa also needs access to the latest global AI research
to become more competitive and independent. Vital AI ecosystems can
boost competitiveness, reduce development costs, and decrease reliance on
foreign AI companies.

Digital Innovation as a Political Weapon: China's AI Strategies in the African Continent
The significance of Africa on the international stage has increased
remarkably in recent years. This is not only due to natural resources and
access to national markets but also because of the continent's digitalization.
As a result, major international players such as the United States of
America (USA), the European Union (EU), Russia, and China have
launched competing policies and initiatives. China appears to be the most
active in this regard, mainly due to its effective synergies with pre-existing
actions such as the Belt and Road Initiative. While technological innovation
can have positive effects on the development of the African continent and
the achievement of the SDGs, Chinese action is marked by a failure to
respect human and civil rights, a lack of interest in true democratization of
Africa, and the introduction of pervasive technologies that limit personal
freedoms. Finally, these actions are often financed with loans that African
states struggle to repay.
AI is having a significant impact on the strategies and policies of global,
central, and regional powers. Its development is influenced by a wide range
of contexts, and understanding its effects and impacts on current
geopolitical dynamics is a crucial challenge. The global map of power
relations is no longer solely defined by geography and territorial control.
Instead, it is shaped by the movement of people, goods, money, and data, as
well as by the opportunities that technology presents. The USA and the
People's Republic of China (PRC) are, and will undoubtedly remain, at the
forefront in this field, using AI to bolster their global roles in the current
international system.
After the end of the Cold War and the fall of the Soviet Union, Africa's
significance as a strategic arena for great power politics decreased.
However, the situation has recently changed. Countries such as the USA,
China, and Russia are advancing their positions on the African continent for
financial, political, or military reasons or related to all three simultaneously.
In August 2022, the US government presented a strategy for Africa
focusing on the continent's rapidly growing population and its position as
one of the world's largest trading blocs with significant natural resources.
Additionally, Africa represents a sizeable voting bloc in the UN, and the
USA is positioning itself in power competition with China and Russia to
influence Africa (Nogueira Pinto, 2023).
States that are already vulnerable have become arenas for great power
rivalries and violent extremism. Boko Haram has long been present in the Sahel region, and al-Shabaab in the Horn of Africa. However, new insurgent groups have recently emerged in central and southern Africa, particularly in Mozambique and the Democratic Republic of the Congo (DRC). This
change is affecting existing political and social tensions in the region. Much
of the conflict is related to the control of natural resources, which in turn
impacts relationships with other powerful entities, including China and
Russia. This not only affects security but also worsens longstanding
tensions and conflicts between Christians and Muslims, as well as the
relationship between the central governments and the outlying regions
(Thorslund & Lindsjö, 2023).
Russia employs ad hoc political strategies to undermine social cohesion
among African populations, sow instability, and gain control over specific
sectors for resource extraction. While Russia's overall trade with African
nations is relatively small compared to the USA, China, and the EU, its
arms exports to Africa are significant, constituting about 50% of all arms
imported by African countries. In addition, large state-owned Russian
companies, often controlled by oligarchs, invest heavily in the African
energy and mineral sectors. As permanent members of the UN Security
Council, Russia and China have a significant influence on discussions about
Africa. These solid and strategic ties give priority to Russia and China when
it comes to distributing mining concessions in certain African nations
(Thorslund & Lindsjö, 2023).
China has been involved in Africa's ICT sector for some time. However,
only recently have the USA and the rest of the world begun to recognize this
partnership's potential risks and challenges. With both significant tech
powers now competing for control in Africa's ICT landscape, African
countries find themselves in a position of increasing importance and
opportunity. However, they also face the challenge of navigating the
complex digital geopolitics involved. They need to balance strategic digital
cooperation with China and the USA while maintaining Africa's digital
sovereignty and upholding citizens’ rights, privacy, and other ethical values.
Even the EU's economic, diplomatic, and military role is not to be
underestimated. The perceived rewards are access to minerals, rare earth
elements, natural gas, and oil. These actions also relate directly to ongoing
conflicts that are strategically important for the world's leading powers. In
April 2018, the European Commission released a document outlining three
main goals: enhancing the EU's technological and industrial capacity and
promoting the use of AI across the economy, preparing for socio-economic
changes brought about by AI, and establishing an ethical and legal
framework for AI based on the EU's values and the Charter of Fundamental
Rights. It also mentioned plans for creating a European AI Alliance to work
on various aspects of AI, including ethics guidelines and cooperation with
stakeholders. The EU's AI strategy for Africa is based on these guidelines
(European Commission, 2024).
The USA used to play a significant role in Africa's digital development.
However, it can no longer compete with China's increasing investments and
commitments in the digital sector. As a result, it is now working more
closely with the EU and its member states to coordinate its digital approach
toward Africa. The US Strategy Toward Sub-Saharan Africa (August 2022)
emphasizes democracy and human rights as a way to counter Chinese
influence. However, over the past two decades, the West's already fragile
reputation in Africa has deteriorated. The renewed involvement of the USA
and EU in Africa has been met with skepticism by critics who view it as an
attempt to counteract China rather than a genuine interest in Africa's
development and well-being (Sguazzin, 2022).
In October 2021, China became Africa's second-largest trading partner
after the EU. China has strategically expanded its presence in Africa
through the Belt and Road Initiative, also known as the New Silk Road,
which was launched in 2013. This ambitious initiative aims to enhance
China's economic and political influence. As part of this plan, the Chinese
government enters into agreements with countries along the route and
frequently provides these nations with Chinese bank loans to support large-
scale infrastructure projects such as roads, ports, and 5G networks. Over the
past two decades, China has actively nurtured economic and political ties
across Africa with the goal of surpassing the EU as Africa's primary trading
partner by 2030. China has significant technological and economic
advantages in making agreements in Africa and other regions. The country
and its companies have established various corporate collaborations in data
infrastructure and biotechnology. This allows them to access new and
substantial data markets, including individuals’ consumption behavior and
biometric and biological data (Obe & Wallace, 2023).
Africa's growing online population and the importance of digital
transformation for growth and development have increased the continent's
focus on improving access to information technology infrastructure.
Governments and private sector entities are investing in expanding the
digital ecosystem across the continent, focusing on developing internet
infrastructure, including internet exchange points, undersea and terrestrial
cables, satellites, and wireless systems. Currently, 21 of the 25 least
connected countries in the world are in Africa. The underdeveloped
infrastructure in these countries leads to a greater reliance on international
connections, making them more susceptible to frequent internet outages,
ultimately hindering digital resilience (Pawlak, 2022).
The EU, the USA, and China are all trying to support African nations in
implementing their projects. All three powers are involved in various
programs to enhance the development of digital infrastructure across the
continent. These include the Declaration for the Future of the Internet, promoted by the EU in collaboration with the USA; China's efforts to build a Community with a Shared Future in Cyberspace, pursued through the Belt and Road Initiative and the new Global Development Initiative; the G7 Partnership for Global Infrastructure and Investment, led by the USA; and the EU's Global Gateway.
In July 2017, China revealed its national AI strategy, the New Generation
Artificial Intelligence Development Plan (AIDP). This plan outlines China's
geopolitical, fiscal, and legal/ethical objectives related to AI technologies.
The AIDP sets three milestones for development by 2020, 2025, and 2030.
It envisions AI as the primary driver behind China's industrial advancement
and economic transformation. The main goal of the AIDP is for China to
become a global leader in AI innovation by 2030, with an AI industry
valued at 1 trillion yuan ($147 billion) (Khanal et al., 2024).
The Ministry of Science and Technology has been actively involved in
supporting the Beijing AI Principles for Responsible AI Research and
Development, which the Beijing Academy of AI developed. Shortly after,
the National Governance Committee for the New Generation Artificial
Intelligence released the Governance Principles for the New Generation
Artificial Intelligence. In September 2021, this body published the Ethical
Norms for the New Generation Artificial Intelligence, which explains these
principles at a high level across the AI lifecycle. In 2019, the Ministry of
Industry and Information Technology championed a “Joint Pledge on AI
Industry Self-Discipline,” which contains several AI ethical principles.
Additionally, a committee under the Chinese Ministry of Civil Affairs is
discussing the development of AI ethics.
After the release of the AIDP, several policy documents have been
published to expand on the high-level goals set out in the development plan.
For example, in line with the 2020 objectives, the National Governance
Committee for the New Generation Artificial Intelligence released ethical
principles in June 2019. Additionally, the committee published an ethics
code in September 2021, aiming to integrate ethics into the entire lifecycle
of AI. China's cybersecurity standards body also issued the Guidelines for
Artificial Intelligence Ethical Security Risk Prevention in January 2021.
These initiatives indicate a shift toward implementing ethical standards into
law by 2025. For instance, the Cyberspace Administration of China (CAC)
recently introduced a recommender system regulation, and nine ministries
jointly released a guiding opinion on algorithmic governance. Their goal is
to establish a comprehensive governance ecosystem within the next three
years (Roberts et al., 2021).
The EU's approach to AI governance differs significantly from that of
China. The EU emphasizes “human-centric” AI based on fundamental EU
values such as human dignity, freedom, democracy, equality, the rule of law,
and respect for human rights. This means that AI should benefit individuals
and society as a whole, contributing to the UN SDGs and fostering
European business development. The Chinese governance documents
approach AI benefits differently. In the AIDP, individual rights are rarely
mentioned as the focal point. Instead, there is a more precise emphasis on
the benefits to China in terms of international competitiveness, economic
development, and societal improvement. However, the practical
mechanisms to achieve these aims reveal tensions with individual rights.
The AIDP can be seen as “human-centric” in a different sense, placing greater emphasis on the benefits to China as a state and society, with the focus on the individual remaining largely secondary.
The Chinese government has also developed ambitious policies for
global leadership in AI and sustainable development. AI technologies have
the potential to support China in helping other countries achieve the SDGs.
To effectively leverage AI for the SDGs, Chinese policymakers should
align AI technologies with the SDGs, identify priority areas, and design
suitable business models and incentive structures to scale viable solutions.
In the current digital geopolitical landscape, China's involvement in
Africa is expanding beyond infrastructure to include digital governance
issues such as e-commerce, the digital economy, and cybersecurity.
However, limited access to electricity, low telephone density, and restricted
internet and broadband penetration are hindering the advancement of new
technologies in Africa. To tackle this issue, African countries have allowed
Chinese tech companies to dominate the African mobile market. China is,
in fact, the primary exporter of AI-driven technologies to the African
continent. By May 2021, 40 out of 54 African countries had signed Belt and
Road Initiative agreements, bringing smart city infrastructure, 5G networks,
surveillance cameras, cloud computing, and e-commerce to many African
cities (Adams, 2022).
Chinese companies provide loans through the Belt and Road Initiative to
connect countries and finance critical infrastructure such as ports and 5G
networks. By the end of 2020, among the 97 countries involved in the initiative that carried external debt to China, Ethiopia and Kenya ranked in the top five. Djibouti and Angola had external debts to China amounting to 43% and 41% of their gross national income, respectively, in 2020.
African leaders have expressed concerns about the use of Chinese loans
to finance major infrastructure projects in Africa, leading to a debt trap for
some countries. The continent is currently grappling with high debt levels,
and some economies urgently require debt restructuring to avoid collapse.
In the past, African countries obtained loans and credits from the World
Bank and the International Monetary Fund. However, there has been a shift,
with a significant portion of African debt now being managed by private
institutions operating on a commercial basis, and they do not always
provide debt relief. China is Africa's leading provider of public-sector
loans. While Chinese lending peaked around 2017, it decreased by as much
as 30% between 2018 and 2019, but it remains substantial. Chinese lending
has become increasingly commercial, particularly in large and often high-
risk African infrastructure projects, through a model known as resource-
backed finance. In this approach, the lender secures its investment against the borrower's anticipated future revenues.
There has been a lack of transparency in Chinese loan agreements, which
has understandably led to skepticism about China's intentions in Africa. For
example, there is concern about the $5 billion loan agreement between
Kenya and the Export-Import Bank of China to fund the Mombasa-Nairobi
railway. This agreement has raised significant questions about
accountability because it has been kept from public view for a long time,
and the Kenyan government has granted excessive legal authority to China.
Another critical issue is at play here, involving the control of individuals,
the destruction of social cohesion, the influence on corporate financial
status, and the manipulation of government actions. In Africa, “information
disorders” during elections suggest close ties with China, which has the
economic power to orchestrate these efforts.
AI applications can be utilized by both criminal and extremist actors, as
well as by legitimate democratic states seeking to control conflict. In times
of conflict, hostile actors could employ emerging AI technologies to spread
disinformation, increase polarization, target individuals and their
information systems, manipulate data sets, and attack critical infrastructure.
AI programs and machine-learning algorithms can use biometric data, such
as fingerprints, photographs, and digital facial data, to accurately identify
violent extremists and terrorists. This can also serve as an early warning
signal to predict recurring patterns in hate speech. Furthermore, AI tools,
using metadata based on social media activity, location information, and
financial transactions, can help identify money laundering to finance
terrorism (Pauwels, 2022).
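To make the mechanism behind such tools concrete, the following minimal sketch illustrates how unsupervised anomaly detection over transaction metadata could flag unusual activity for human review. It is not drawn from any system discussed in this chapter: the features, synthetic data, and model choice are illustrative assumptions, and any real counterterrorism-financing application would involve far richer data, legal safeguards, and human oversight.

```python
# Illustrative sketch only: unsupervised anomaly detection over synthetic
# transaction metadata (amount, hour of day, number of counterparties).
# All data and feature choices here are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: modest amounts, daytime, few counterparties.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # transaction amount
    rng.normal(loc=14, scale=3, size=5000),          # hour of day
    rng.poisson(lam=4, size=5000),                   # distinct counterparties
])

# A handful of synthetic outliers: large night-time transfers, many counterparties.
outliers = np.column_stack([
    rng.lognormal(mean=7.0, sigma=0.4, size=25),
    rng.normal(loc=3, scale=1, size=25),
    rng.poisson(lam=40, size=25),
])

X = np.vstack([normal, outliers])

# Fit an isolation forest; transactions scored as -1 are candidate anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(X)

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} of {len(X)} transactions flagged for review")
```

The isolation forest here stands in for whatever model a real system might use; the essential design choice is that flagged transactions are routed to human analysts rather than acted on automatically.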
The widespread use of AI in Africa gives Chinese companies a
competitive advantage. This could potentially lead to Africa relying on
China and Chinese technology for its policies in the future. For instance,
Chinese firms are collecting data on the movement of people and vehicles,
and African policymakers may increasingly depend on this data to address
infrastructure and transportation issues. Some African researchers have
likened this to a new form of colonization, similar to the historical
exploitation of Africa by Western countries. The Chinese tech giant Huawei
has successfully implemented this model through its partnerships with local
companies such as MTN and Safaricom in African countries. On the other
hand, Washington will need to utilize its existing strategic alliances and
partnerships with other like-minded countries to promote an open,
democratic, inclusive, and transparent digital ecosystem (Alayande, 2023).
The use of facial recognition technologies in African cities presents
several issues. These technologies have been developed elsewhere and not
trained on local facial data, leading to privacy and human rights concerns.
In South Africa, Vumacam, a company using a Danish-built facial
recognition system, has established a comprehensive surveillance camera
network designed to profile suspicious behavior. However, these
technologies have been shown to misread African faces and significantly
limit human rights, such as freedom of movement, association, and the right
to equality and fair treatment. Their use in a racially divided country like
South Africa is highly risky and contradictory to the country's democratic
vision of a transformed and equal society. Similarly, in Uganda, AI-
powered facial recognition systems developed by Huawei were used in the
2020 elections to identify, track down, and arrest supporters of the
opposition leader, Bobi Wine (Parkinson et al., 2019).
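Claims that such systems misread African faces can be examined empirically through disaggregated error audits, which report error rates separately for each demographic group. The sketch below is a purely hypothetical illustration of that auditing step, using invented data rather than results from Vumacam, Huawei, or any other deployed system.

```python
# Hypothetical illustration: disaggregated error audit of a face verification
# system. The trial data below are invented; real audits use large labeled
# benchmark datasets that are representative of the local population.
import numpy as np

rng = np.random.default_rng(0)
groups = np.array(["group_a", "group_b"])

# Simulated verification trials: (group, same person?, system says match?)
n = 10_000
group = rng.choice(groups, size=n)
same_person = rng.random(n) < 0.5
# Assume the hypothetical system is less accurate for group_b.
error_rate = np.where(group == "group_b", 0.12, 0.03)
wrong = rng.random(n) < error_rate
says_match = np.where(wrong, ~same_person, same_person)

for g in groups:
    m = group == g
    fmr = np.mean(says_match[m & ~same_person])    # false match rate
    fnmr = np.mean(~says_match[m & same_person])    # false non-match rate
    print(f"{g}: false match rate={fmr:.3f}, false non-match rate={fnmr:.3f}")
```

In practice, audits of this kind require large, locally representative benchmark datasets, which is precisely what the chapter notes are often lacking in African contexts.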
AI not only poses a threat to fundamental human rights and political
stability, especially when developed without proper testing, impact
assessment, or local skills development, but AI technologies are also mainly
developed and offered by a small number of multinational monopolies. This
undercuts local businesses and domestic growth potential. For example, Huawei is a dominant market player, as are companies like AWS, a leading global cloud services provider, which supply the essential cloud services for AI systems and the storage for the large datasets needed to develop them.
The establishment of highly technical cloud services and the massive data
centers required to house and power these data-driven services demands
substantial resources, including land, electricity, water, and technical skills.
These resources tend to be accessible only to Big Tech companies (Adams,
2022).

Conclusion
The USA and the EU have launched important initiatives for the
digitalization of the African continent, many of which are based on AI.
However, China seems to have moved earlier and better, also managing to
intercept the technocratic, if not authoritarian, evolution of many African
regimes, which have sometimes justified the adoption of Chinese
technologies with the need to achieve the SDGs. In this situation, four
possible scenarios emerge regarding the impact that the diffusion of AI
technologies will have on the continent's future.
The competition to dominate Africa's AI and critical infrastructure
markets is geopolitical, and Beijing is racing to take the lead. Over the past
20 years, China has rapidly built its communications infrastructure and
advanced its data surveillance capabilities globally. It has shown a strong
interest in leading the development of Africa's technology markets. The
Chinese Communist Party aims to promote its authoritarian governance
model in Africa, which heavily relies on surveillance technology.
In this situation, considering the implications that the use of AI will have for the geopolitical equilibrium, particularly in Africa, where conditions differ widely across communities, it is possible to outline four global scenarios and assess their sustainability implications:

Scenario 1: dynamics continue as usual, resulting in most of the African population being left behind, because rivalry between the global powers paralyzes AI-related initiatives on the continent.
Scenario 2: launching a genuinely multilateral initiative to promote AI
as a global public good, focusing on achieving the SDGs and leaving no
one behind.
Scenario 3: a winner-takes-all situation in which one country (and one
or two companies) “win” the AI race in Africa, monopolize the benefits,
and use its power to strengthen its position and role in this continent.
This scenario could be favored by an approach driven by economic considerations but ultimately aimed at exalting the political prominence of a global power, above all China, in Africa.
Scenario 4: regionalization and fragmentation, with AI infrastructure,
regulation, and use fragmented into a few regional blocs with little
cooperation between them.

Numerous AI initiatives in Africa have brought various benefits with the
potential to transform African economies and help achieve the SDGs. The
many emerging African initiatives should be given priority and supported
by liberal democracies to help reduce threats associated with Chinese AI.
African AI researchers and scientists have established networks to share
knowledge, create ethical frameworks, and develop skills and capabilities
across the continent. Providing direct financial support to institutions is
crucial to ensure the inclusive engagement of Africans in the global AI
economy. Africans’ independent use of and meaningful contributions to AI
will help prevent Beijing from using the continent to solidify and further
export its brand of digital authoritarianism.

References
Adams, R. (2022). AI in Africa: Key concerns and policy considerations for
the future of the continent. Africa Policy Research Institute (APRI).
https://afripoli.org/ai-in-africa-key-concerns-and-policy-
considerations-for-the-future-of-the-continent
Alayande, A. (2023). Africa and the US-China tech competition. Dataphyte.
Ang, T. W., & Feinholz, D. (2018). Working for, not against, humanity. In
Artificial Intelligence: The promises and the threats. The UNESCO
Courier (Vol. 3, pp. 29–31).
https://courier.unesco.org/en/articles/artificial-intelligence-promises-
and-threats
Elbehri, A., Eskandar, H., & Chestnov, R. (2021). Adopting artificial
intelligence solutions for agriculture: Potential, the process, the
success factors, and key recommendations. In A. Elbehri & R.
Chestnov (Eds.), Digital agriculture in action. Food and Agriculture
Organization (FAO). https://openknowledge.fao.org/items/161422b5-
a493-4099-bbbb-8e8a87f02824
European Commission. (2024). European approach to artificial
intelligence. https://digital-strategy.ec.europa.eu/en/policies/european-
approach-artificial-intelligence
Global Centre for Public Service Excellence (GCPSE). (2018). Foresight
manual. Empowered futures for the 2030 agenda. United Nations
Development Programme (UNDP).
Khanal, S., Zhang, H., & Taeihagh, A. (2024). Development of new
generation of artificial intelligence in China. Journal of Contemporary
China.
https://www.tandfonline.com/doi/full/10.1080/10670564.2024.2333492
Markelova, K. (2018). Moustapha Cissé: Democratizing AI in Africa. In:
Artificial Intelligence: The promises and the threats. The UNESCO
Courier (Vol. 3, pp. 20–21). UNESCO.
https://unesdoc.unesco.org/ark:/48223/pf0000265211
Mustasilta, K. (2022). Peripheries. In G. Faleg (Ed.), African Spaces – The
new geopolitical frontlines. European Union Institute for Security
Studies (EUSS).
Nogueira Pinto, T. (2023). A new cold war in Africa. Geopolitical
Intelligence Service. https://www.gisreportsonline.com/r/us-strategy-
africa/
Obe, A. V., & Wallace, J. (2023). China-Africa relations. Chatham House.
https://www.chathamhouse.org/2023/01/china-africa-relations
Parkinson, J., Bariyo, N., & Chin, J. (2019). Huawei technicians helped
African Governments spy on political opponents. The Wall Street
Journal.
Pauwels, E. (2022). Counterterrorism and violence prevention:
Safeguarding against the misuse and abuse of artificial intelligence.
Global Center for Cooperative Security.
https://www.globalcenter.org/wp-
content/uploads/GCCS_PB_Safeguarding_Against_Misuse_Artificial_
Intelligence_web.pdf
Pawlak, P. (2022). Digital. In G. Faleg (Ed.), African Spaces – The new
geopolitical frontlines. European Union Institute for Security Studies
(EUSS).
Puscas, I. (2023). AI and international security. Understanding the risk and
paving the path for confidence-building measures. United Nations
Institute for Disarmament Research (UNIDIR).
Roberts, H., Cowls, J., Hine, E., Morley, J., Wang, V., Taddeo, M., &
Floridi, L. (2021). Governing artificial intelligence in China and the
European Union: Comparing aims and promoting ethical outcomes.
The University of Oxford, Alan Turing Institute.
Roehrl, R. A. (2022a). Beyond a black-box approach to artificial
intelligence policy – A simple guide to definitions, functions, and
technology types. UN Department for Economic and Social Affairs
(UNDESA).
Roehrl, R. A. (2022b). Conceptualizing future scenarios of artificial
intelligence: From energy servants to AI servants. UN Department for
Economic and Social Affairs (UNDESA).
Sguazzin, A. (2022). China surpasses US in eyes of young Africans, survey
shows. Bloomberg. https://www.bloomberg.com/news/articles/2022-
06-12/china-surpasses-us-in-the-eyes-of-young-africans-survey-shows
Thorslund, E., & Lindsjö, G. (2023). Artificial intelligence, sustainable
development and geopolitics in Africa. Mistra Geopolitics.
UN. (2021). Resource guide on artificial intelligence strategies. United
Nations Department of Economic and Social Affairs.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence.
UNESCO Social and Human Sciences Sector.
UNICEF. (2022). Annual report. UNICEF Information and Communication
Technology Division.
3
COMPLEX REFLECTIONS ON
ARTIFICIAL INTELLIGENCE
Marco Emanuele

DOI: 10.4324/9781003518495-5

Introduction
Artificial intelligence (AI) and new wave technologies are the subject of
much analysis. Many observers, from different sectoral vantage points, discuss the impacts of such technologies. There is an obvious barrier in thinking that needs to be overcome. We still reason about reality as if we were in the 20th century, yet the transformations we are experiencing should induce us to change paradigm and to embrace complexity. The challenges we face, between technological revolution and polycrisis (first and foremost that of the environment), demand a turning point in our approach, new responsibilities, and new policies and strategies.
Polycrisis, New Technological Wave, Complex
Thinking
Humanity is in a situation of increasing complexity. A polycrisis runs
through our lives. There are several degenerative crises: this adjective is
used to emphasize that the term crisis is not in itself negative. A
degenerative crisis, on the other hand, is an essentially ungoverned,
politically unaddressed, historical transition.
We, humanity, are in a phase of profound transformation, not just change.
As a result, the cultural and operational paradigms we use are no longer
relevant: they are essentially exhausted, emptied, and do not help us
understand a reality that exceeds our thinking.
The technological revolution is what is raising the bar of complexity. AI
is part of the new technological wave (AI, biotechnology, robotics),
described by Suleyman and Bhaskar (2023) who make the challenges ahead
crystal clear. In a very effective summary, Buchanan and Imbrie (2022)
write that the three sparks that light the fire of AI are data, algorithms, and
computing power. Kissinger et al. (2021) note that AI will cause an
alteration of human identity and the human experience of reality as has not
happened since the advent of the modern age: the same authors argue that
AI constrains us to seek a form of logic that we have not yet identified in
order to explore unknown aspects of reality.
Risks and opportunities, in the polycrisis and emergence of the new
technological wave, are intertwined and interact in a kind of inter-in-
dependence. Everything is increasingly linked, inseparable: this is
complexity, the mosaic-world that can only be understood in the
relationship of all its elements. To do this takes complex thinking, no longer
linear, no longer 20th century.
Complex thinking is what allows us to look beyond, into the depths of a
transformation that is radically affecting all our ways of life, placing us
within a changing era.

Five Points
In this reflection, five points of the great transformation induced by the new
technological wave are analyzed.
There is, first of all, a moral point. When we speak of values, we are
referring to a theoretical-pragmatic system of moral tensions that enable us
to live worthily as human beings and to co-exist as humanity. If we think
about the world at large and our democratic systems, we realize that values
such as trust and relationships are being challenged. It is we ourselves, in
fact, who are the architects of our possibility of coexistence according to
peace, justice, and responsible and sustainable development; likewise, it is we humans who can change course, going over the edge, allowing crises to degenerate further, and closing ourselves off in a dangerous, self-referential individualism.
It is necessary to look at the complexity of the new technological wave
through the complexity of values expressed according to the times we live
in. When AI positively transforms sensitive areas such as healthcare,
guaranteeing results that were unimaginable just a few years ago, we are all
ready to stand up for it. By the same token, however, we are ready to
denigrate the progress of AI when it becomes a tool of war and, amid the crescendo of cyber risks, threatens to call into question the democratic certainties we have painstakingly acquired.
From a moral point of view, then, we need a ‘techno-realist’ attitude,
based on complex thinking and immersed in the complexity of a reality that
is changing ever more rapidly and profoundly. It is important to escape the
logic of declarations of intent. The new sense of a truly strategic policy is to
get down to business and technological development. It is easy to
understand why techno-appeasement or techno-antagonist attitudes are of
no use: they are, in fact, entirely framed in the binary logic of friend or
enemy, black or white.
It is precisely the inter-in-dependence between risks and opportunities
that calls for an innovative approach. Our values are lost if they are not
rethought in a complex key and if they are not reconnected to the reality that the polycrisis and the new technological revolution (both transversal phenomena) are transforming.
Thus, we need an algor-ethic. Pope Francis (2024) writes:

An authentically humane outlook and the desire for a better future for our
world surely indicates the need for a cross-disciplinary dialogue aimed at
an ethical development of algorithms – an algor-ethics – in which values
will shape the directions taken by new technologies. Ethical
considerations should also be taken into account from the very beginning
of research, and continue through the phases of experimentation, design,
production, distribution and marketing. This is the approach of ethics by
design, and it is one in which educational institutions and decision-
makers have an essential role to play.

The second point is cultural. The world is in dire need of dialogue
between different cultures (including religions) and a recovery of mutual
trust within and between communities and institutional systems. Today, the
world appears as a summation of non-dialoguing realities.
We mean dialogue in a complex sense. It is, in fact, an inter-in-
dependence of dialectical dialogue (the mediation of power relations and
particular interests) and dialogical dialogue (the construction of common
ground that is shared as much as possible).
Can the new wave of technology help us? Right from the design of AI
systems, we must welcome all points of view, involve different cultural and
religious traditions, and not exclude them in the name of simplifying a
single language. This is an operation that requires a new cultural will and
the ability to break down fences. In reality, there is no separation and a
technology for the common good must be programmed to facilitate
complex dialogue, not to radicalize partisan positions. We must
progressively move away from the dangerous individualism that is
threatening us (by our own hand).
A technology for the common good is a technology for open and
communicating worlds, in a continuous relationship.
Much, if not everything, starts with our personal cultural responsibility.
Do AI models tell us the truth? How can we develop critical thinking? We
should pay more attention to our human intelligence, recovering our ‘plus’
compared to machines. Technology, in fact, is one of the fruits of our
creative potential.
The third point is political. A decisive element in the new technological
wave is the transformation of power. Who has power? How is it exercised
and in whose interest?
These are strategic questions that deserve further investigation.
One thing is clear: in the polycrisis and the new technological wave, the
national level of states is no longer sufficient to contain what is happening:
the challenges are transnational and do not stop at borders.
The political point meets the degenerative crisis of representative
democracies. Abstentionism is the big problem we have to confront because
democracies mature in participation and in the ability of the ruling classes
to address and try to solve the vital problems of the communities that
inhabit them. Pervasive misinformation and disinformation accentuate inequalities that are no longer acceptable and, in addition to generating confusion,
create distrust in democratic systems. Technology and politics are
intrinsically intertwined.
While, thanks to emerging technologies, we aim to create increasingly
‘smart’ and efficient cities, the gap between social classes seems to be
widening: inequalities are a growing problem and a decisive political fact.
States, in short, no longer seem to have the power to make an impact with
respect to the history that surpasses them. A sustainable paradox?
Economically, the new technological wave shows us a transformed capitalism.
Technological research and production go hand in hand with the quest for
profit. While profit is intrinsic to business activity, the current capitalist model cannot neglect social aspects. Unprecedented technology requires
appropriate governance. In the age of exponential technologies, companies
of global interest are needed, for which transparency, accountability, and
ethics are basic and fundamental traits.
Suleyman and Bhaskar (2023) note that since we are going through the
most profound transition in value creation (from labor to capital) in history,
it is necessary to completely overhaul the taxation system to adequately
finance welfare and security.
Transforming capitalism demands new rules.
The last point concerns the legal aspect.
Here we encounter some very sensitive dynamics. The first concerns the
system of laws that can regulate the evolution of the new technological
wave. Nation states, as mentioned, are struggling in the face of the power
and transnationality of the phenomenon: national and regional laws,
although very advanced (such as the recent one in Europe), must contend with its striking speed and radicality. In this sense, what is the future of
legislative activity on these matters? The second dynamic concerns judicial
activity and how it is being transformed: in the new technological wave, it
is the structure of processes that becomes very different. What guarantees
will the citizen have in the administration of justice within the technological
revolution?
The new wave of technology challenges all our established certainties.
Morally, culturally, politically, economically, and legally, we are called to
new thinking that is in line with the complexity of what is transforming.
The real transforming element, from our personal lives to international relations, is technology itself.
This is the challenge of the future that is already present and calls
everyone to a renewed assumption of responsibility.
We have before us a universe of meaning and operation to rebuild. We
are living inside a real change of era. Nothing will be the same as before
because polycrisis and the new wave of technology are transforming all our
paradigms of reference.
The topic of the new technological wave carries a geostrategic weight
(something more than geopolitical because it invokes a worldview).
Pope Francis (2024) writes:

The remarkable advances in new information technologies, particularly in the digital sphere, thus offer exciting opportunities and grave risks,
with serious implications for the pursuit of justice and harmony among
peoples. Any number of urgent questions need to be asked. What will be
the consequences, in the medium and long term, of these new digital
technologies? And what impact will they have on individual lives and on
societies, on international stability and peace?
A complex reflection is needed on how AI will shape the art of government and how (and whether) the latter will shape AI. Following Buchanan
and Imbrie's analysis (2022), in the geostrategic competition for AI, the
United States and China are using AI to invent the future of peace and war.
In regard to war, with advances in AI, the human role in armed conflicts
is transforming. Weapon systems will become increasingly autonomous
(unmanned submarines, battles fought by swarms of drones, automatically
targeted missiles) and accelerate decision-making. The accuracy of
ordnance fired against adversaries will be amplified and humans will
increasingly focus on strategic decisions.
Battlefields are extremely complex, and it is worth emphasizing that
many analyses consider the frontier of net-centric warfare, with massive use
of autonomous weapons, to be dangerous. Too many dynamics are at play.
‘Fog and friction’ are inevitable elements of war and there can be no
certainty that autonomy is more efficient and decisive than the human
factor.
According to Buchanan and Imbrie (2022), it is the automation of
violence that poses the most sensitive strategic and ethical questions. They
reflect on the questions that government strategists and ethicists will have to
answer in the context of a potential transformation of warfare by AI: when
and under what conditions is it permissible for an autonomous machine to
kill a human being; will democracies be forced to cede battlefield advantage
to autocracies in order to safeguard their moral primacy, or will they be
forced to compromise on ethical principles in order to preserve their
superiority; what will be the geopolitical implications when machines make
more decisions; who will win?
Kissinger et al. (2021) consider that the introduction of non-human logic
into military systems and processes will transform strategy. Armies and
security services using AI will achieve astonishing results. If AI is
delegated some degree of control over cyber weapons or physical weapons,
it can quickly take over functions that humans only perform with difficulty.
In the geostrategic framework, a very important point is how the new
wave of technology will affect the relationship between democracies and
autocracies. Technologies are a central element in the redefinition of power
relations.
Within democracies, the debate between companies and institutions
concerning the use of AI for sensitive purposes can be particularly critical.
Think, in recent years, of the bitter debate in the United States over the
Maven project and the strong criticism within Google for continuing its
collaboration with the Pentagon. Checks and balances constitute the essence
of democratic systems.
In the technology competition between the United States and China,
issues such as government relations with large technology companies will
decide who holds the power and how they wield it.
What global order is being constructed? This question is fundamental. In
the times we live in there is a need for wide and deep reflection. According
to Kissinger et al. (2021), progress and competition in technology will
redefine the classical concept of security. Before the point of no return is
reached, it is necessary to define strategic doctrines related to AI and
compare them with those of other state and non-state realities that have AI.
The balance of forces that we should achieve will need to account for the
intangible elements of cyber conflict and mass disinformation, as well as
the nature of warfare facilitated by AI. Realistically, a substantial effort to
control AI weapons, which is not at odds with national security, is an
attempt to ensure that security is pursued in the context of a human future.
Many are questioning the future of humanity, and rightly so. But, as
specified before, we need to adopt the paradigm of a new realism. We must
never forget that directing the new technological wave toward supporting the right evolution of humanity and the planet is entirely up to us.
Everything must be reconfigured. The time has come to make it clear that
there are no moral problems that do not have cultural, political, economic
and legal implications: the same applies to every other area. This realization
is the minimum foundation of any possible strategy for governing reality.

Security and Governance


To be operational, not limiting ourselves to reasoning from a theoretical
perspective, let us think of two areas that are particularly sensitive for all of
us: security and governance.
From the final communiqué of the G7 (2024) Italian Presidency:

(…) we recognize the need for approaches to AI governance that foster inclusion, to help us harness the potential of AI in a way that reflects
these values and promotes its development while mitigating risks,
including with respect to human rights and avoiding governance
fragmentation. We will work toward these objectives by actively
cooperating with other stakeholders, organizations and initiatives as
relevant, such as the Global Partnership on AI (GPAI) and the OECD.
We will build on the outcomes of the AI Seoul Summit and upcoming
milestones, including this year's UN Summit of the Future and the AI
Action Summit in 2025. (…) We will step up our efforts to enhance
interoperability amongst our AI governance approaches to promote
greater certainty, transparency and accountability while recognizing that
approaches and policy instruments may vary across G7 members. We
will take a risk-based approach in these efforts as we seek to foster
innovation and strong, inclusive, and sustainable growth. To achieve this
goal, we will step up our coordination around the evolution of our
governance and regulatory frameworks, including by sharing best
practices. We will enhance our regular consultations. We are also
committed to deepening coordination between our respective institutes
and offices focused on AI, to work towards shared understanding of risk
management and advance international standards for AI development and
deployment. We welcome our Industry, Tech, and Digital Ministers’
efforts to advance the Hiroshima AI Process outcomes released last year,
including the development of a reporting framework for monitoring the
International Code of Conduct for Organizations Developing Advanced
AI Systems. We look forward to the pilot of the reporting framework,
developed in cooperation with the OECD, in view of the Industry, Tech,
and Digital Ministers’ Meeting in October. We will work towards
developing a brand that can be used to identify organizations that are
voluntarily participating in and implementing the Code's forthcoming
reporting framework.

How can binding rules be established in an increasingly fluid global environment? Technology, while on the one hand something physical that requires infrastructure, is essentially transversal: it crosses territories and borders and presents itself to us as an opportunity for a more efficient and comfortable life.
We know very little about technology before we use it. And it is now
indispensable to us. We are unaware of the technology we use because, for
us as end users, it is a service. Yet there is much behind it: this contribution
does not seek to delve into the structure of emerging technologies. Here we
emphasize how they transform our lives and how they affect, for better or worse, the quality of those lives.
Paying attention to security and governance means becoming aware of
the non-neutrality of the technological process, from research to production
to use. Following Pope Francis (2024), AI systems

should always be regarded as “socio-technical systems”. For the impact of any artificial intelligence device – regardless of its underlying
technology – depends not only on its technical design, but also on the
aims and interests of its owners and developers, and on the situations in
which it will be employed.

Particularly at this disruptive stage of development, we need to have a mature approach to technology and try to understand the complex
consequences of what we use.
In this technological era, unlike in the past, the tools we have must be
understood, not just used. Their disruptiveness shows us the importance of
working on security and governance. We need rules. We need complex
thinking because, at the same time, we have to move between supporting
research and development and protecting communities and the planet.
Security and governance are inextricably linked to the achievement of the
sustainable development goals. In fact, the new wave of technology affects
each of these goals: all are inter-in-dependent and none of them can be
achieved except in the overall and complex context of their unity.
From the Bletchley Declaration (November 1–2, 2023):

AI systems are already deployed across many domains of daily life including housing, employment, transport, education, health,
accessibility, and justice, and their use is likely to increase. We recognize
that this is therefore a unique moment to act and affirm the need for the
safe development of AI and for the transformative opportunities of AI to
be used for good and for all, in an inclusive manner in our countries and
globally. This includes for public services such as health and education,
food security, science, clean energy, biodiversity, and climate, to realize
the enjoyment of human rights, and to strengthen efforts towards the
achievement of the United Nations Sustainable Development Goals.

Cultural and operational investment in global security through efficient and effective governance rules and practices is required. A sort of ‘two-
faced Janus,’ the new technological wave can either enable the sustainable
development goals to be achieved or cause their ultimate collapse.
Geopolitical tensions, which have also risen beyond measure in the
technological sphere, cannot make us forget the issue of the planet's
systemic sustainability. The ‘body’ of humanity is increasingly fragile and
our ‘common home,’ the environment, is often abused.
The new technological wave moves from space into the deepest spaces of
our societies. This is why it is a transformative phenomenon.
The ruling classes (not just governments) should embody new
perspectives. Indeed, the sustainable development goals are not just moral
appeals but conditions for our future to exist.
Everyone evokes principles and values, but the action of nation states is
still ‘captive’ of partisan interests. There is a race by everyone to find
solutions to ‘their’ problems without realizing that the issue is transnational
and transversal.
Today more than ever, the triad of polycrisis – new technological wave –
sustainable development goals shows us the need for far more radical – in
the sense of realistically political – action.
Contain and Relaunch
Containing and relaunching are actions encapsulated in a single strategy.
The subject is certainly technical but, as said before, it is complex: the
desirable actions, which must not be aimed at blocking progress (that would
be a fatal mistake), must be thought of as inter-in-dependent, not added up
but integrated, not hierarchical but part of a virtuous and horizontal path.
Although there is still a long way to go, a great deal of work has been
done on Large Language Models in recent years: the aim is to limit
undesirable outputs, to conduct technological research ‘by design’ while
respecting different sensitivities and cultures in order to provide fewer and
fewer opportunities for exclusion and discrimination.
Suleyman and Bhaskar (2023) rightly note that the problem of technical
security is an unavoidable starting point.
But the bottom line and strength of any possible containment strategy is
the progressive and constant realignment of technological research with the
values of humanity and sustainable development. In short, a pragmatic ethic
is needed.
Run between 1961 and 1972, the Apollo Program was the third human
spaceflight program designed by NASA. The program stimulated advances
in many areas of science and technology, including avionics, computing,
and telecommunications. NASA, through this activity, had the opportunity
to show the reliability of the complex systems it was designing. For AI and emerging technologies, is it possible to imagine a similar program focused on security? Certainly this would be a choice that the international
community would have to adopt, expanding public budgets and involving
researchers from all over the world. Political will, operational difficulties,
and strategic necessity interpenetrate.
Another decisive point, with respect to security, is the possibility of an
‘off switch.’ How do we stop technologies that try to escape our control?
The big challenge concerns the possibility of preventive security for the
new wave of technology. In order to enhance the opportunities and prevent
risks from limiting or blurring them, the negative effects of technologies
cannot just be chased after the fact. Responsibility must be put upfront,
right from the design phase. This is a major political-strategic issue.
Of course, in order to guarantee security, it is necessary to devise and
construct external audit systems that verify the security, integrity, and
proper functioning of a system as it is developed. Significant data sets used
for model training must be checked. Verification, with mathematical tools, that the algorithms are not harmful is also desirable.
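By way of illustration only, the following minimal sketch in Python (with entirely hypothetical field names, reference shares, and tolerance) suggests the kind of automated representation check an external auditor might run on a training dataset; it is a sketch of one possible verification under stated assumptions, not a description of any existing audit tool.

# Illustrative sketch only: a simplified representation audit for a training dataset.
# Field names, reference shares, and the 20% tolerance are hypothetical assumptions.
from collections import Counter

def audit_representation(records, group_field, reference_shares, tolerance=0.2):
    """Flag groups whose share in the dataset deviates from a reference share.

    records: list of dicts, each describing one training example
    group_field: name of the attribute to audit (e.g. a demographic group)
    reference_shares: dict mapping group -> expected share in the population
    tolerance: maximum accepted relative deviation (0.2 = 20%)
    """
    counts = Counter(r[group_field] for r in records)
    total = sum(counts.values())
    findings = []
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        deviation = abs(observed - expected) / expected if expected else float("inf")
        if deviation > tolerance:
            findings.append((group, round(observed, 3), expected))
    return findings  # empty list = no representation issue detected by this check

# Example with invented data: the auditor compares dataset composition
# against census-like reference shares before the model is trained.
dataset = [{"group": "A"}] * 900 + [{"group": "B"}] * 100
print(audit_representation(dataset, "group", {"A": 0.7, "B": 0.3}))

A real audit would combine many such checks with qualitative review, but even this toy example shows that dataset verification can be made systematic and repeatable.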
Looking at initiatives already underway, operational alliances should be
strengthened to progressively improve the security of new wave
technologies. All actors, at different levels, must be involved: governments,
international organizations, industry, universities, think tanks, civil society
organizations.
From a legal and political perspective, it is crucial to overcome the
uneven approaches to oversight of emerging technologies by different
countries. Transparency, according to Suleyman and Bhaskar (2023), cannot
be optional. Shared legal paths are needed at all levels of the technology
process.
In general, given the potential and risks associated with new wave
technologies, and as at other times in history, the role of diplomacy is
particularly important.
Can a global institution dedicated to technology be established? Perhaps
starting with the key issue of global transparency.
Containment is not to be theorized but practiced. Its meaning is not to
create a suffocating system of rules but to understand and anticipate the
direction in which technology is going. To do this, we need to integrate
intellectual voices that take up the challenge of containment, security, and
governance of the new technological wave with critical and constructive
realism and complex thinking. Every contribution is crucial.
Addressing the issue of technological evolution is everyone's
responsibility, whatever their role. A virtuous path must be found between
the different interests of those involved and the need to address the issue
systematically and in the common interest.
Exercising responsibility means first of all developing a sense of limits.
Pope Francis (2024) writes:

In the ideological context of a technocratic paradigm inspired by a Promethean presumption of self-sufficiency, inequalities could grow out
of proportion, knowledge and wealth accumulate in the hands of a few,
and grave risks ensue for democratic societies and peaceful coexistence.

Technological evolution, particularly the new wave, has consequences from which no one can claim to be an outsider. The reference to the
sustainable development goals is an example of this.
Containment should be understood as a virtuous circle of mutually
reinforcing measures. As in complex systems, containment is what results
from the interaction of the elements in the process of technological
evolution. Containment should be understood as a tortuous and endless
path. The question is: which directions will Homo technologicus take? Only
by knowing the answer to that will it be possible to better understand and
attempt to govern the phenomenon. In such a dynamic process, security and
governance can never be taken for granted.
It is difficult to strike a balance between necessary and excessive control.
AI has revealed that reality can be known in perhaps more complex ways
than (only) human understanding. AI will be even more integrated into
everyday life and will shape our experiences in intuitively appropriate
ways.
Thanks to AI, new horizons are opening up before us. But, at the same
time, AI subtracts: it diminishes the role of the individual in examining,
verifying, and making sense of information. In short, it diminishes critical
thinking.
In the age of AI, according to Kissinger et al. (2021), human reason will
be both increased and diminished.
To relaunch, first of all by trying to understand the dynamic nature and
evolutionary trajectories of AI as far as possible, complex thinking is
needed. And we need a transdisciplinary (no longer sectoral,
multidisciplinary, interdisciplinary) approach: we need the different
disciplines to stand at each other's frontiers. The information scientists and
entrepreneurs who are developing the technology, the military strategists
who want to use it, the political leaders who are trying to shape it, and the
philosophers and theologians who are trying to fathom its deep meanings,
they all see some fragment of the bigger picture. All, however, should
participate in an exchange of views uninfluenced by preconceptions. This is
called dialogue.

Conclusion
Given the way the technological revolution is evolving, this reflection is never-ending.
Yet we humans must give ourselves a purpose, first of all by understanding
the global environment in which we are immersed. While containing the
technological wave is necessary to buy time in order to write appropriate
rules, technological development cannot be stopped. In the great difficulty
shown by the possibilities of containment, we all have the responsibility to
see the elephant in the room.
We are inside a change of era. We can no longer conceive of some of our
innovations as mere extensions of what we already knew. The digital
revolution and the development of AI have produced and will continue to
produce entirely new phenomena. We are not faced with more powerful or
more efficient versions of things we already have.
There is much we can do, but only together: at the different levels, from researchers and producers to rulers and civil society, we need to organize collective reflections and dialogues to get back to reality, knowing that reality
exceeds any of our ideas or inventions and that not everything is
measurable. The prospect of our future lies in achieving the sustainable
development goals and in our ability to consolidate them, also through
technologies for the common good. But our thinking must change,
according to complexity: it has taken such pervasive technologies to show
us that the 20th century is definitely over.
We must be aware that a new historical time has now opened.
Everything, in polycrisis and as a result of the impact of new wave
technologies (especially AI), is being re-configured and transformed. The
landscape proposed in this chapter is not exhaustive but is intended to give
a sense of a change of era underway. From our values to geopolitics and
geostrategy, AI is with us, indeed in us. Perhaps it is difficult to understand
all this: but it must be done. The times of history are not separate: our future
is already present, and the past can help us deal optimistically and carefully
with what is already a substantial part of our lives.
References
Buchanan, B., & Imbrie, A. (2022). The new fire. The MIT Press.
G7. (2024). Italian presidency. Final Communiqué.
Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2021). The age of AI.
Delphin LLC.
Pope Francis. (2024). Artificial intelligence and peace. https://www.rivista.ai/wp-content/uploads/2024/06/20231208-messaggio-57giornatamondiale-pace2024-1.pdf
Suleyman, M., & Bhaskar, M. (2023). The coming wave. Monition LLC.
The Bletchley Declaration. (November 1–2, 2023). https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023
4
ARTIFICIAL INTELLIGENCE
AND SUPRANATIONAL LAW
Between Anthropocentrism and
Technological Development
Giovanni Zarra and Roberto Ruoppo

DOI: 10.4324/9781003518495-6

Introduction: AI Legal Framework as an International Matter
In recent years there have been numerous initiatives aimed at preparing a
legal framework for artificial intelligence (AI), whose developments and repercussions for users’ rights have made regulation of the phenomenon urgent. It is not by chance that, within the framework of
various international organizations, the legal instruments that have been
drafted aim at defining a regulatory framework to set limits and conditions
for the development of AI systems, in order to reconcile the different
interests involved in the introduction of this new technology to the market,
such as, first and foremost, the promotion of economic and technological
growth on the one hand and the protection of user rights on the other hand.
The supranational genesis of such legal rules is consistent with the
intrinsic nature of digital relationships, which in making use of online tools
and information society technologies are not limited within well-defined
national boundaries, but rather seek to cross domestic borders involving a
plurality of actors of different nationalities (Adinolfi, 2020; D’Aloia, 2022).
Consequently, the only legal instruments capable of providing effective
regulation of such relations must be adopted at the international level in
order to overcome the fragmentation inherent in the proliferation of national
rules. This approach is useful to satisfy both the interests of companies,
reducing the risks and expenses inherent in a plurality of national
regulations, and the protection of users, imposing limits and conditions on
companies supplying these products that are applicable regardless of where
they are established.
However, on this subject, most of the rules adopted by international
organizations are non-binding in nature, representing soft law instruments,
such as the OECD recommendation (OECD, 2019),1 the UNESCO recommendation (UNESCO, 2021),2 and the UN General Assembly resolution (United Nations General Assembly, 2024).3 These initiatives are aimed at establishing principles and guidelines to direct the work of institutions and member States, and have a merely orientational character. For
this reason, in this chapter, attention will be devoted to the only binding
legal instruments adopted at the supranational level, the European Union
(EU) Regulation on Artificial Intelligence (AI Act) and the Council of
Europe Framework Convention on AI and Human Rights, Democracy and
the Rule of Law, the provisions of which partly overlap.

EU AI Act and Its Purposes


The EU Regulation on AI represents the first attempt to organically regulate
the phenomenon of AI, placing the EU in a privileged position in the global
regulatory scenario (Artificial Intelligence Act, 2024),4 in which the
ambitious proposal to reconcile the economic and ethical aspects involved
in the diffusion of this technology assumes central importance and is
certainly worthy of appreciation. The approach followed by the European
institutions is oriented toward the search for a balance between two distinct,
but not opposing, needs: on the one hand, to ensure the development of AI
through a legal framework that does not impose excessive constraints and
limitations on its use; on the other hand, to ensure an application of AI
consistent with the protection of the fundamental rights of the users
involved (Artificial Intelligence Act, 2024, Recital No. 8). Indeed, there are
several rights that are likely to be infringed by the misuse of this
technology, undermining the very trust of users and thus affecting
investments in the sector. A number of disputes that arose prior to the
drafting of the final text of the AI Act, and that were decided both outside
(Wisconsin Supreme Court, 2016)5 and within (The Hague District Court,
2020)6 European borders, demonstrate the potential of this technology to
affect many fundamental legal positions, with the associated need to
prepare a regulation to protect users’ rights.
The approach described above has been operationally translated into the
adoption of a discipline based on the risk determined by the various uses of
AI, modulating regulatory restrictions according to the type of interference
that specific AI tools are likely to create. It is not by chance that adoption of
the Regulation came at the end of a long process, preceded by the
elaboration of numerous non-binding acts by European institutions, in
which the beneficial potential of the technology under consideration and at
the same time the risks associated with its unregulated use were highlighted
(European Commission, 2020).7
The EU Regulation was adopted within this framework. This Regulation,
together with the proposed liability Directive, aims to serve as a global
regulatory model, representing a paradigm for the regulatory activity of
other international players, as has already been the case with the General
Data Protection Regulation (GDPR), the Data Act, the Digital Single
Market Act, and the Digital Service Act.
Through the AI Regulation, termed the “Artificial Intelligence Act,” EU
institutions intended to establish (a) harmonized rules for the placing on the
market, the putting into service, and use of AI systems; (b) prohibitions on
certain AI practices; and (c) specific requirements for high-risk AI systems
and obligations for operators of such systems (Art. 1). The AI Act
constitutes only a component of the articulated regulatory system through
which the EU is pursuing the objective of introducing an organic and
uniform discipline in the field of AI. Limiting the analysis only to legal
rules of a binding nature, the EU regulatory framework is composed not
only of the AI Act, but also of the proposal for the AI liability directive and
the directive on liability for defective products (European Parliament and
Council, 2022)8 (Chander, 2021; Liu & Lin, 2020). Between the AI Act and
the directives on AI liability a relationship of a complementary nature can
be seen: indeed, while the former aims at promoting the dissemination of
anthropocentric and reliable AI, ensuring a high level of protection of
health, safety and fundamental rights, the directives on AI liability aim to
regulate certain aspects of liability actions resulting from damage that may
arise from the use of such products and services (Artificial Intelligence Act,
2024, Recital No. 9). Thus, the AI Act operates at a specific point in the
lifecycle of AI systems, since it aims to establish a uniform legal framework
for their development, placing on the market, putting into service and use,
ensuring their compliance with the values of the Union enshrined in the
Treaties and in the Charter of Fundamental Rights. It thus intervenes on the
physiological level of regulating the use of such instruments, introducing a
series of differentiated due diligence obligations, modulated according to
the specific degree of risk determined by the various AI systems, which are
classified into prohibited, high-risk and non-high-risk systems. The
proposed directives, on the other hand, regulate the aspects of liability
resulting from the realization of damage determined by the outputs
produced by a specific AI system (Chiappini, 2022).
As anticipated, the European regulation on AI is primarily directed at
introducing a uniform framework in all EU member States, aiming to meet
the needs of legal certainty and predictability of regulatory compliance
obligations, facilitating the activities of companies in this sector (Ferencz et
al., 2022). Indeed, the AI Act has its foundation in Art. 114 TFEU, aimed at
improving the internal market through the establishment of a uniform
regulatory framework. The aforementioned purpose assumes a key role for
the development of companies engaged in the promotion and provision of
AI services, whose activities can only be encouraged by overcoming the
fragmentation of the European market and removing potential regulatory
barriers in this field (Recital No. 3 of AI Act). In this regard, it is worth
noting, moreover, that the European framework is intended to apply to all
firms developing or using AI systems, not limiting its effectiveness to
entities that reach a predetermined minimum level of turnover, unlike the
option chosen in other regulatory contexts (US Algorithmic Accountability
Act, 2023).9
The attempt to prepare a unitary discipline is evident simply by considering the
taxonomic effort made by European institutions to describe a single notion
of AI, while being aware that it is not possible to talk about it as a unitary
phenomenon (McCarthy, 2007).
The notion of AI is thus defined by Art. 3(1) as

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that,
for explicit or implicit objectives, infers, from the input it receives, how
to generate outputs such as predictions, content, recommendations, or
decisions that can influence physical or virtual environments.

Indeed, the purpose of EU institutions has been to define the phenomenon as broadly as possible in order to configure a general
discipline that can be applied to the greatest number of cases, through an
all-encompassing approach. While this approach is the result of the
codification of a compromise between the various elements that
characterized the drafting (Trincado Castán, 2024), it also represents a
necessary tool in order to introduce a uniform discipline within Europe. It is
not by chance that the same approach has also been adopted in the proposal
for a directive on civil liability arising from AI (Recital No. 1 and Art. 2).
The certainty deficit, which is inherent in adopting an elastic notion of
AI, thus ensures the applicability of the European framework, and the
obligations therein, vis-à-vis a wide range of technological systems, and
aligns the work of EU institutions with that of other international
organizations that have dealt with AI (OECD, 2019; UNESCO, 2021), thus
facilitating a broad acceptance of the definition and ensuring the flexibility
needed to react to rapid technological developments in this field (Cole,
2024; Yeung, 2020). The hope behind this approach, consistent with the
choices made in other international fora, can be seen in the creation of a
shared regulatory landscape at the supranational level, in view of the
absence of national borders in digital relationships and in the use of AI
systems in particular, thus laying the foundations for a common legal
framework for the regulation of this technology.
At the same time, the Regulation identifies certain requirements that trace
the lowest common denominator of any AI system in order to distinguish
them from traditional software systems or simpler programming
approaches: the fundamental features of AI systems lie in their (i)
inferential capacity, that is, the process of obtaining outputs, which may
include predictions, content, recommendations, or decisions; (ii) the ability
to function with a degree of autonomy of action with respect to human
operator involvement; and (iii) adaptability, representing the aptitude for
self-learning that allows the system to change during use (Artificial
Intelligence Act, 2024, Recital No. 12).
In the same vein of ensuring a uniform regulatory framework, the
Regulation describes the notion of provider and user of AI systems, that is,
the persons or entities who must comply with the obligations laid down by
the AI Act, together with the importers and distributors whose duties are
provided respectively by Arts. 23 and 24. The provider is identified as the
entity that develops the AI system and places it on the market; while the
user, referred to as the “deployer,” is a natural or legal person, including a
public authority, agency, or other body, that uses an AI system under its
authority, except where the AI system is used in the course of a personal,
non-professional activity (Art. 3).
A similar classification was used by the European Parliament in the
proposal for a regulation on AI liability, later transformed into a proposal
for a directive by the European Commission, where reference was made to
such entities through the use of the notions of “back-end operator” and
“front-end operator” (European Parliament, 2020)10: thus identifying,
respectively, the entity that holds control of the development phase of the AI
system and the entity under whose authority the generated outputs are
produced.
The influence that the European legal framework will have on the
development and use of AI systems is significant when considering its
scope. The AI Act, in fact, is intended to produce its effects also with
respect to providers established in countries other than European member
States, provided that the relevant services and products are addressed to the
EU market and consumers (Art. 2) thus having an extraterritorial
application, often referred to, also critically, as the “Brussels effect”
(Bradford, 2020; Reusken, 2024).
In order to avoid evasive behavior aimed at circumventing the
application of European rules through the use of AI systems by operators
established in third countries, the reference criterion to be taken into
account is represented not by the commissioning of the AI system in
Europe – which could well be used in a non-European market – but by the
production of the output in the EU (Artificial Intelligence Act, 2024, Recital
No. 22).

Users’ Protection as the Lodestar of the Regulation
In addition to the establishment of a uniform regulatory framework for the
internal market, there is a further objective pursued by EU institutions,
which constitutes the cornerstone of European regulatory efforts: the AI Act
is mainly addressed at the promotion and dissemination of an
“anthropocentric” AI, intending to affirm the need for this technology to be
placed at the service of users and people and not vice versa (Floridi, 2023).
From this perspective, the reference to the need for AI to be developed in
accordance with the values of the Union enshrined in Art. 2 TEU and with
the fundamental rights and freedoms recognized by the Charter of
Fundamental Rights, assumes crucial importance. Its recognition by Art. 1
of the Regulation and the introductory recitals as well is highly symbolic
(Recital No. 22).
Thus, this key value can condition the very structure of the Regulation
through acceptance of the risk-based pyramid. Pursuant to Art. 1 of the AI
Act

The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy
artificial intelligence, while ensuring a high level of protection of health,
safety, fundamental rights enshrined in the Charter, including democracy,
the rule of law and environmental protection, against the harmful effects
of AI systems in the Union and supporting innovation.

The wording of the Regulation is particularly influenced by the Ethics Guidelines for Trustworthy AI, under which a number of principles have
been developed to ensure that AI is reliable and ethically sound: a crucial
role is played by the principles of human oversight, respect for privacy and
data governance, transparency and the promotion of diversity (High-Level
Expert Group on Artificial Intelligence, 2019).11
The principle of human intervention and oversight (“human oversight” or
the “human in the loop principle”) is pivotal, alluding to the need for AI
systems to be developed and deployed in such a way that they can be
adequately controlled by humans, preventing users from suffering
detrimental treatment as a result of a mere algorithmic determination.
Although this safeguard finds its normative antecedent in Art. 22 of the
GDPR, it takes on new nuances in the context of AI. It implies, for
example, the need for the operators in charge of using an AI system to have
the necessary tools to ensure continuous surveillance of the system's
operation – avoiding any misuse – and to have the skills to be able to
override its outputs (Casonato, 2023; Molbæk-Steensig, 2022).
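As a purely illustrative sketch, assuming a hypothetical deployer-side workflow with invented names and thresholds, the “human in the loop” requirement could be rendered in code as a gate that never lets high-impact algorithmic outputs take effect without a human operator who can override them:

# Illustrative sketch only: a hypothetical deployer-side gate implementing
# human oversight. Names, thresholds, and the decision model are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelOutput:
    decision: str        # e.g. "deny_benefit" / "grant_benefit"
    confidence: float    # the model's own confidence score
    high_impact: bool    # whether the decision significantly affects a person

def apply_with_oversight(output: ModelOutput,
                         human_review: Callable[[ModelOutput], str]) -> str:
    """Return the final decision, deferring to a human for high-impact cases."""
    if output.high_impact or output.confidence < 0.9:
        # The operator sees the algorithmic suggestion and may override it.
        return human_review(output)
    return output.decision

# Usage example with a stubbed human reviewer who overrides a denial.
final = apply_with_oversight(
    ModelOutput(decision="deny_benefit", confidence=0.95, high_impact=True),
    human_review=lambda o: "grant_benefit",
)
print(final)  # "grant_benefit": the human, not the system, has the last word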
Respect for “privacy and data governance” intends to ensure that AI
systems are developed and used in compliance with privacy and data
protection regulations, through a form of management that ensures their
quality, integrity, and relevance to the sector in which the systems are
deployed. Again, we are dealing with rights already recognized by the EU
Charter of Fundamental Rights (Art. 8) and the European Convention on
Human Rights (ECHR, Art. 8). Confidentiality, however, takes on particular
features in the operation of AI systems, since in training the machine, the
developer is required to make use of appropriate datasets, so as to avoid
distortions and inaccuracies in its operation, thus contributing to the
development of the right to the protection of personal data and its
compliance with needs arising in the current digital environment (Artificial
Intelligence Act, 2024, Art. 10 et seq.).
The principle of transparency is useful in fulfilling certain well-defined
tasks in the operation of AI systems, including the obligation to disclose the
interaction with an AI system and so-called “explainability.” First, the fact
that a user is interacting with an AI system must be made known to them,
and the providers of such systems must guarantee that the outputs
generated, in the form of audio, image or video content, are detectable as
artificially generated (Art. 50). The “explainability” of AI systems aims to
ensure understanding of the decision-making process for every recipient,
regardless of the level of education or prior technical knowledge. This
requirement assumes central relevance in the attempt to overcome the
natural opacity of algorithms, often identified through reference to the so-
called “black box effect” (Pasquale, 2018).
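To give a purely hypothetical illustration (the function and metadata fields below are invented for the example and do not reproduce any real standard or API), the disclosure obligation could translate, on the provider’s side, into systematically attaching a machine-readable provenance label to every generated output:

# Illustrative sketch only: attaching a machine-readable "AI-generated" label
# to an output. Field names are hypothetical and not drawn from any standard.
import json
from datetime import datetime, timezone

def label_generated_output(content: str, system_name: str, model_version: str) -> str:
    """Wrap generated content with provenance metadata so downstream tools
    (and users) can detect that it was artificially generated."""
    record = {
        "content": content,
        "provenance": {
            "artificially_generated": True,
            "generating_system": system_name,
            "model_version": model_version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record)

print(label_generated_output("Example synthetic text.", "demo-system", "1.0"))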
Finally, the promotion of diversity, non-discrimination, and equity aims
at ensuring the participation of diverse stakeholders throughout the life
cycle of an AI system, ensuring that it respects equal access, gender
equality, and cultural diversity throughout its operation. This implies that
the datasets used to train the system must take into account each component
of the society to which its target subjects belong, avoiding discriminatory
effects and unfair biases based on incorrect inputs, which are likely to give
rise to “cognitive bias” (Borgesius & Frederik, 2020).
As anticipated, consistent with the underlying inspiration of ensuring
adequate protection of the rights of users affected by the deployment of AI
systems, EU institutions have concretely set out the obligations of
providers, deployers, and all those involved in the AI value chain according
to the type of risk determined by each model. In deference to the principle
of proportionality, the intensity of regulation has thus been differentiated
according to the type of risk caused to users’ rights (Novelli et al., 2024),
imposing prohibitions in the case of unacceptable interference, prescribing
limits and conditions for uses likely to result in the impairment of
fundamental rights, and recognizing ample room for self-regulation for
systems that do not pose such risks (Artificial Intelligence Act, 2024, Art.
95).
The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law
On May 17, 2024, the Council of Europe adopted the Framework
Convention on Artificial Intelligence and Human Rights, Democracy and
the Rule of Law, the first legally binding international treaty aimed at
ensuring respect for human rights and rule of law principles in the use of AI
systems. The treaty will be opened for signature on September 5, 2024, in
Vilnius, Lithuania, and will enter into force, according to Art. 30, on the
first day of the month following the expiration of a three-month period after
five signatory states, including at least three member States of the Council
of Europe, have expressed their consent to be bound by the Convention.
The final text of the Convention is the outcome of more than two years’
work by a special intergovernmental body, the Committee on Artificial
Intelligence (CAI), which brought together the 46 member States of the
Council of Europe, the EU, and 11 non-member States (Argentina,
Australia, Canada, Costa Rica, Japan, Israel, Mexico, Peru, the Holy See,
the United States of America, and Uruguay), also involving representatives
of civil society and academia, who participated as observers.
Unlike the only other supranational regulatory instrument of a binding
nature (i.e., the AI Act adopted by the EU), the Framework Convention
focuses exclusively on the human rights aspects involved in the use of AI
systems, and not on the economic and commercial aspects. It is, thus,
anthropocentric. The purpose pursued by the drafters is to create a
regulatory framework that can extend to all phases of the life cycle of AI
systems – such as design, development, validation, implementation, and
monitoring – the standards and obligations already in place regarding
human rights, both when the systems are adopted by public actors and
entities and when they are exploited by private operators (Art. 3, para. 1,
lett. a). This scope of application reveals the general vocation of the
Convention, hence the name “Framework Convention,” which is intended
to apply with regard to all phases of the life cycle of an AI system;
therefore, as clarified in the preamble, the adoption of additional normative
instruments to regulate specific aspects of the activities of AI systems is not
precluded.
The Convention is first concerned with defining its scope and identifying
the notion of AI systems. In this respect, it clarifies that no new obligations
are created for States parties with respect to those arising from treaties
already in force, rather signatory countries are required to extend pre-
existing human rights obligations to all stages in the life cycle of an AI
system. In this legal framework too, as under the AI Act, a differentiated
approach has been adopted, based on the type of risk and the probability of
the occurrence of a given violation of the principles of the rule of law (Art.
1, paras. 1 and 2). In relation to the definition of AI systems, the drafters of
the Convention adhered to the same notion provided by the OECD
Recommendations – as updated in November 2023 – and Art. 3 of the AI
Act, providing in Art. 2 that they are to be understood as a

machine-based system that for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content,
recommendations or decisions that may influence physical or virtual
environments. Different artificial intelligence systems vary in their levels
of autonomy and adaptiveness after deployment.

Second, the Convention is concerned with defining the main obligations incumbent on States parties, represented by the protection of human rights
and respect for the integrity of democratic processes and the principles of
the rule of law. Indeed, there are several risks associated with the
uncontrolled use of this technology: AI can, for example, contribute to the
dissemination of false information that may manipulate public opinion and
undermine public trust in the fairness of democratic processes; the
exploitation of unclear algorithms tainted by discriminatory input can affect
the functioning of the judicial system and undermine the right to effective
judicial protection; mass surveillance made possible by the use of
recognition tools that rely on the collection of biometric data carries the risk
of undermining the enjoyment of certain fundamental freedoms, including
the freedom of assembly or expression. In order to avoid these risks, Art. 5
requires the adoption of measures necessary to ensure the integrity of
democratic institutions, safeguarding, for example, the principle of the
separation of powers, the independence of the judiciary and the right of
access to judicial protection.
The measures that States are required to adopt must conform to a set of
principles that must inform all the activities within the life cycle of AI
systems, drafted in a sufficiently elastic way that they can be applied
flexibly in different contexts and circumstances and adapt to future
developments in technology. Among these principles the most important
ones are the protection of human dignity and individual autonomy (Art. 7);
the obligation of transparency and control (Art. 8) – particularly relevant in
order to overcome the opacity of algorithmic tools; the principle of
accountability of those in control of the various stages of the life cycle of
AI systems (Art. 9); the principles of equality and non-discrimination that
can avert probable distortions attributable to “cognitive bias” (Art. 10); and
the protection of personal data (Art. 11).
At the same time, States parties are required to take measures to ensure
the availability of effective and accessible remedies in case of the violation
of human rights, through instruments that are capable of overcoming the
difficulties associated with the information asymmetry that exists between
those harmed and those who develop or use AI systems. Thus, crucial
importance is also given to the phase following the impairment of people's
rights, with a view to ensuring the adoptability of effective remedies. In this
regard, the Convention subordinates this obligation, and the associated
possibility for aggrieved persons to take legal action, to the circumstance
that there has been or there is a risk of a significant violation of the rights
involved, introducing a limit on the enforceability of remedies, the precise
definition of which is left to the discretion of the States parties (Arts. 14 and
15).
Among the most important provisions, with a view to preventing risks
that may arise from the use of AI systems, are those contained in Chapter 5,
which require measures to be taken to identify, assess and mitigate risks.
Monitoring of adverse effects on human rights should be done through
documentation to ensure that they are understood and verified by
independent bodies charged with monitoring compliance with obligations
under the Convention. Where interference is found to be unacceptable and
such that no exception can be tolerated, States parties have the option of
considering the introduction of appropriate prohibitions (Art. 16).
For the effective implementation of the Convention, Chapter 6 is
concerned with the introduction of digital literacy obligations, providing
that States parties are required to promote the knowledge and skills
necessary to enable the informed use and exploitation of digital tools and
AI systems in particular. Such skills are found to be instrumental in creating
widespread awareness in the community, on the one hand, and in
contributing to more effective performance of activities to aid risk
prevention and mitigation, on the other hand. Relationships with other
international conventions concerning the protection of human rights are also
clarified, including primarily the ECHR, providing that the Framework
Convention does not introduce new obligations and does not derogate from
those already deriving from previous treaties: the Framework Convention
extends pre-existing obligations to the whole life cycle of AI systems (Art.
21).
The Convention is concerned with creating an effective monitoring
mechanism through the establishment of the Conference of the Parties,
whose main tasks are to monitor compliance with the obligations carved out
by the Convention, to propose any necessary amendments to cope with
technological development, to express opinions on the correct interpretation
of the Convention provisions, and to facilitate dispute settlement (Art. 23).
To enable the Conference to carry out its functions, States parties are
required to submit a report to that body on the steps they have taken to carry
out the Convention obligations, within two years of their accession.
Thus, by the end of 2024, the first international treaty with a global vocation aimed at enshrining the primary importance of human rights protection in the context of the use of AI systems will have been adopted and opened for signature,
helping create a widespread international culture regarding the hierarchy of
values that must preside over the exploitation of such technology (Ruoppo,
2024).

The Risk-Based Approach Adopted by the EU AI Act and the CoE Framework Convention: Prohibited Systems
At the top of the pyramid structure defined by the Regulation and the
Framework Convention are prohibited AI practices, due to the crucial value
of the interests likely to be affected (Veale & Borgesius, 2021), in the name
of the need to protect the fundamental rights and freedoms that support the
normative system of the EU and the Council of Europe (Artificial
Intelligence Act, 2024, Recital No. 28; CoE Framework Convention, Art.
16, para. 4). Consistent with this approach, by way of example, Art. 5 of the AI Act prohibits the placing on the market or use of AI systems that, by using subliminal techniques and acting without a person's knowledge, undermine that person's capacity for self-determination, impairing their ability to make an informed decision and causing them to make a decision that they would not otherwise have made (Leiser, 2024).
(a) therefore prohibits those forms of AI use whose implications are
considered unacceptable because of their impact on the personal freedom
and free will of individuals, the manifestation of which is likely to suffer
particularly intrusive impairments in the digital context (Neuwirth, 2023).
Art. 5(1)(c) prohibits the placing on the market of AI systems that can
evaluate and classify individuals or groups of individuals on the basis of
their social behavior or personal characteristics, where social scoring
involves prejudicial treatment of the same individuals in social contexts that
are unrelated to the contexts in which the data were collected (Kosta, 2020).
Such practices, also referred to as “social scoring,” are dangerous in that
they have the potential to undermine the right to dignity and non-
discrimination (European Data Protection Board, 2021),12 because of their
potential to exclude certain categories of people from the enjoyment of
certain fundamental freedoms and rights. Such tools could, for example, be
used for the purpose of making decisions about obtaining a job, taking out
an insurance policy or a lease in connection with the social score obtained
as a result of one's own profiling or that of the group of people to which one
belongs (Human Rights Watch, 2019; Daelman & Yordanova, 2023).
Art. 5(1)(d) prohibits AI systems that allow risk assessments to be made
regarding natural persons for the purpose of assessing or predicting the
likelihood of their committing a crime, solely on the basis of their profiling
or assessment of personality traits and excluding any assessment by a
human operator based on objective, verifiable facts. The provision is a clear
application in the AI field of the fundamental principles that inform
criminal law and are established in both national provisions and
supranational norms, such as Arts. 47 and 48 of the EU Charter of
Fundamental Rights and Art. 6 of the ECHR. Consistent with the
presumption of innocence and the minimum guarantees inherent in the right
to a fair trial, individuals should always be judged on the basis of their
actual behavior, and not on the basis of their profiling alone. With this in
mind, the AI Act and the Framework Convention reiterate that there is no
due process without human control and oversight, reaffirming how one of
the milestones of the rule of law is the necessary imputation of the exercise
of public power, particularly criminal power – the most invasive vis-à-vis
personal liberty – in favor of the human public official exclusively.
To the same category of prohibited systems belong AI models that allow
for real-time remote biometric identification of persons in publicly
accessible spaces, unless this is necessary for the combating of certain
criminal offenses and subject to prior authorization by a judicial authority
or independent administrative body (Art. 5(1)(h)). Such practices are
prohibited because they may prove particularly invasive of people's rights
and freedoms insofar as they could affect the private lives of a large
segment of the population, imposing a psychological condition of
continuous surveillance and consequently discouraging the exercise of
certain fundamental rights, such as freedom of assembly, as ascertained by
the European Court of Human Rights (ECtHR, 2023).13

High-Risk AI Systems and Their Requirements


In the pyramid-structured regulatory framework defined by EU institutions,
at the level immediately below prohibited practices are high-risk AI
systems, the placing on the market of which is made conditional to certain
specific requirements in order to ensure compliance with the fundamental
rights of users (Art. 6 and Annex III, AI Act). These are the same rights that, under Chapter 3 of the Framework Convention, must inform every stage of the life cycle of AI systems. The qualification of an AI system as a “high-risk
system” goes through the assessment of a set of requirements defined in
Arts. 6 and 7 of the AI Act, which focus on the likelihood of fundamental
rights being compromised. High-risk AI systems are also listed in a special
annex to the Regulation (No. III), the task of updating which is assigned to
the Commission, which is responsible for ensuring their compatibility with
the rapid pace of technological development and potential changes in the
use of AI systems. At the same time, the Commission's filtering and
classification activity helps minimize any unwarranted restrictions on
international trade, achieving a balance between regulatory activity and the
need not to overly restrict investment in the sector (Stratieva, 2024).
Pursuant to Annex III, for example, systems used in the granting,
revocation, or recovery of benefits and essential public assistance services,
such as those that may be required in connection with maternity, illness,
industrial accidents, and loss of employment, are likely to be qualified as
high-risk AI systems. The use of algorithmic tools in administrative procedures is already a widespread reality, but AI can lead to detrimental outcomes when it is used to provide public benefits to vulnerable individuals. In such cases, the system must be trained on adequate datasets so as to avoid discriminatory effects. A negative example in this regard was the use of the
“SyRi” system (van Bekkum & Borgesius, 2021) by Dutch public
authorities for the purpose of conducting investigations to ascertain possible
fraud in welfare benefit claims by citizens. Research conducted by a non-
profit organization, later endorsed by a ruling of The Hague District Court
(2020),14 revealed that the system's training was based on discriminatory
datasets in that grammatical mistakes – which people from modest cultural
backgrounds or immigrant families were more likely to make – were treated
in the same way as misrepresentation, to the detriment of a specific segment
of the population. The classification of such systems under the category of
“high-risk” instruments thus ensures that they can be placed on the market
only upon the fulfillment of certain obligations predetermined by the
Regulation and the Framework Convention, which help avoid detrimental
outcomes for segments of the population in vulnerable situations.
AI systems used to assess the creditworthiness of individuals fall into the
same category, insofar as they condition access to bank loans and financial
resources on personal credit history, potentially causing discrimination among
people or groups of people on the basis of racial or ethnic origin, gender,
disability, age, and sexual orientation (Artificial Intelligence Act, 2024,
Recital No. 58). The assessment of creditworthiness, in fact, is carried out
by establishing a prediction of the likelihood of a person's future behavior –
such as the repayment of a loan – from certain characteristics of that person,
based on mathematical and statistical procedures.
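To make this mechanism concrete, the following is a minimal, purely illustrative sketch – the feature names, weights, and threshold are assumptions, not drawn from the Regulation or from any deployed system – of how a statistical procedure of this kind turns personal characteristics into a predicted repayment probability:

```python
# Minimal illustrative sketch of a statistical creditworthiness score:
# a logistic model maps applicant characteristics to a repayment probability.
# Feature names and weights are invented for illustration only.
import math

def repayment_probability(income: float, years_employed: float, prior_defaults: int) -> float:
    # Linear score: in a real system the weights would be learned from
    # historical repayment data, which is precisely where bias can enter.
    score = 0.00002 * income + 0.15 * years_employed - 1.2 * prior_defaults - 0.5
    return 1 / (1 + math.exp(-score))  # logistic link -> probability in (0, 1)

# A hypothetical applicant is approved only if the probability clears a threshold.
p = repayment_probability(income=32_000, years_employed=3, prior_defaults=1)
print(f"predicted repayment probability: {p:.2f}")  # about 0.35 with these weights
print("approve" if p > 0.8 else "refer to human review")
```

Because the weights are estimated from historical data, any structural inequality reflected in that data is carried into the prediction, which is why Recital 58 treats such systems as high-risk rather than as neutral calculators.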
High-risk systems also include those used in the investigation and prosecution of crimes to determine the risk of crime or recidivism in relation to an
individual based on the assessment of the personality traits or past criminal
behavior of the person concerned (Artificial Intelligence Act, 2024, Recital
No. 59). European institutions are well aware that AI tools in the course of
investigations could become a factor of inequality, risking undermining the
exercise of the right of defense. The need is therefore emphasized to ensure that suspects have access to information concerning the operation of such systems, so that they can challenge their results in court and so that the principle of equality of arms between the prosecuting authority and the individual concerned is preserved. The use of such technology in court, while it may help the
efficiency of the administration of justice, could at the same time lead to
potential distortions and opacity. Thus, it is reiterated that the final decision-
making process must remain an activity whose responsibility and domain
lie with the human operator, with the consequence that AI systems can
support human decision-making, but in no case will they be able to replace
it entirely (Ruotolo, 2022).
In order to mitigate the risks posed by high-risk AI systems placed on the
market and to ensure a high level of reliability, certain mandatory
requirements must be observed before they can be marketed and used, in
accordance with the provisions of Art. 9 et seq. of the AI Act and Art. 6 et
seq. of the Council of Europe Framework Convention. These requirements include, by way of example, the preparation of a risk management system (Artificial Intelligence Act, 2024, Art. 9; Framework Convention, Art. 16), through which the dangers that the instrument is likely to cause are identified and analyzed, and measures intended to neutralize them are adopted. Pursuant to Art. 10 of the AI Act and Art. 10 of the Framework Convention, providers must ensure that the training datasets used to instruct the high-risk system are relevant, sufficiently representative and, as far as possible, free of errors, and that appropriate statistical analyses are conducted. This burden is essentially intended to prevent the placing on the market of algorithmic tools that could lead to discriminatory outcomes, along the lines of what occurred in the Netherlands with the aforementioned SyRi system, which was used to carry out controls aimed at countering the undue receipt of public subsidies.
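By way of illustration only, the kind of statistical analysis envisaged here could include a simple disparity check of the sort that might have flagged the SyRi-style bias discussed above; the column names, toy data, and threshold of concern below are hypothetical assumptions, not requirements taken from the Regulation or the Convention:

```python
# Hypothetical sketch: a disparity check a provider might run on a
# fraud-detection training dataset before placing the system on the market.
# Column names ("group", "flagged") and the data are illustrative only.
import pandas as pd

def flag_rate_by_group(df: pd.DataFrame, group_col: str, flag_col: str) -> pd.Series:
    """Share of records flagged as suspected fraud within each group."""
    return df.groupby(group_col)[flag_col].mean()

def disparity_ratio(rates: pd.Series) -> float:
    """Ratio between the most- and least-flagged groups (1.0 = parity)."""
    return rates.max() / rates.min()

# Toy data standing in for historical welfare-fraud flags.
data = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "B"],
    "flagged": [1,    0,   0,   1,   1,   1,   0,   1],  # 1 = flagged as fraud
})

rates = flag_rate_by_group(data, "group", "flagged")
print(rates)                   # per-group flag rates
print(disparity_ratio(rates))  # 2.4 here: a strong disparity warranting review
```

A ratio well above parity would not by itself establish discrimination, but it is arguably the kind of red flag that the data-governance obligations of Art. 10 are intended to surface before marketing.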
The obligations incumbent on providers are complemented by an obligation of transparency, aimed at ensuring that the operation of the system is presented in such a way that its output can be easily interpreted and the logical steps that led to a particular decision can be understood (Artificial Intelligence Act, 2024, Arts. 13 and 86; Framework Convention, Art. 8); an obligation of human oversight while the systems are in use (Artificial Intelligence Act, 2024, Art. 14; Framework Convention, Art. 8); and requirements of accuracy, robustness and cybersecurity (Artificial Intelligence Act, 2024, Art. 15; Framework Convention, Art. 12).
The transparency provision seems to have been formulated for the
specific purpose of averting the occurrence in Europe of events detrimental
to fundamental rights resulting from the application of algorithmic tools
whose modes of operation are characterized by high opacity. In the light of
the aforementioned Art. 86 of the AI Act and Art. 8 of the Framework
Convention, it will be reasonable to exclude the use, for example, of AI
systems in the context of criminal proceedings without adequate illustration
of the logical steps that led to the adoption of a given decision. This is far from obvious if one considers the different treatment of the same principle of transparency in other legal systems, as in the well-known Loomis case decided in the United States by the Wisconsin courts (Wisconsin Supreme Court, 2016).15
This case concerned the criminal conviction of a US citizen whose sentence was determined through the application of an algorithm that estimated the risk of recidivism on the basis of the defendant's profiling, thus giving weight to personal circumstances such as age, residence, social class, and ethnic origin. In the face of his conviction at first
instance and following an investigation conducted by a non-profit
organization that noted the discriminatory nature of the algorithm training
used (Angwin et al., 2016), the citizen filed an appeal before the Wisconsin
Supreme Court on the grounds of violation of his right to due process and
the related right to equality of arms. The flaw in the ruling was essentially
identified in the lack of availability of the operating mechanism and
methodologies used in order to calculate the risk of recidivism, thus
preventing the exact understanding of the reasons supporting the decision
and making it impossible to verify the correctness and impartiality of the
result reached (Freeman, 2016). The appeal was, however, dismissed on the grounds that no violation of the principle of transparency had been proven: in the opinion of the US courts, the defendant's ability to review the inputs supplied to the algorithm and the resulting risk scores was sufficient not to hinder his defense at trial, even though the algorithm's methodology remained protected as a trade secret. The resolution of the case, based on a formalistic and inapposite interpretation of the transparency principle, reflected the greater weight given in the United States to the software developer's economic interest in protecting trade secrets than to the individual's right to effective judicial protection.
The merit of the new Regulation and Convention lies in having given a marked impetus to the recognition and evolution of the principle of transparency, which in the new technological context cannot be limited to requiring the formal sharing of the material supporting a given measure, but requires a clear and easily understandable explanation for the recipient: it will no longer be sufficient simply to provide the source code of the algorithm; it will be incumbent on providers and deployers to ensure that the reasons behind the final decision are explained in non-technical language, intelligible regardless of the recipient's level of technological literacy.
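Purely by way of illustration, and without suggesting that the Regulation prescribes any particular technique, the following sketch shows what an explanation pitched at a non-technical recipient might look like once a system can report which factors weighed on an individual decision; the factor names and contribution values are invented:

```python
# Illustrative sketch: turning hypothetical per-factor contributions into a
# plain-language explanation of an individual decision. A real system would
# derive the contribution values from the underlying model.
contributions = {
    "two missed repayments in the last year": -0.9,
    "stable employment for six years": +0.4,
    "income below the requested installment": -0.6,
}

def explain(decision: str, contributions: dict[str, float], top_n: int = 2) -> str:
    # Rank factors by how strongly they pushed the decision, largest first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = "; ".join(name for name, _ in ranked[:top_n])
    return f"Decision: {decision}. Main factors: {reasons}."

print(explain("loan refused", contributions))
# Decision: loan refused. Main factors: two missed repayments in the last year;
# income below the requested installment.
```

The point of the sketch is simply that the duty runs to the intelligibility of the reasons, not to the disclosure of technical artifacts such as source code.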
As a complement to the obligations designed to ensure compliance with
the principle of transparency, under Art. 50(4) of the AI Act and Art. 8 of
the Framework Convention, it is stipulated that when the AI system is
capable of generating or manipulating image, audio, or video content, the deployer and provider are required to disclose, through appropriate labeling systems, that the content has been artificially generated or manipulated.

Conclusion
As highlighted in the preceding pages, the EU and the Council of Europe
have outlined a uniform regulatory framework with the primary purpose of
creating the necessary conditions to ensure that the development and use of
AI systems take place in such a way as to guarantee adequate protection of
users’ rights. And, although the enhanced guarantees constitute the application of regulatory provisions that were already in force prior to the adoption of the AI Act and the Framework Convention – above all the EU Charter of Fundamental Rights and the ECHR – it cannot be overlooked that the new legal rules have had the merit not only of clarifying but also of enriching with meaning the fundamental rights already recognized in those provisions. It clearly emerges, therefore, that the regulatory framework introduced by the European institutions effectively adapts fundamental rights to the new needs emerging in the current technological environment, contributing to the evolution and adaptation of regulatory provisions to a changed social reality.
At the same time, the introduction of a uniform regulatory framework
makes it possible to meet the expectations of economic operators, through the provision of obligations spelled out in a general and abstract manner, which set out, with reasonable certainty, the requirements to be met before placing an AI system on the market, thus configuring a predictable
and accessible legal regime for companies interested in investing in the
sector. The clear identification of the subjects required to fulfill the
obligations introduced by the new legal rules – the providers and the
deployers – the determination of the minimum requirements for AI systems,
and the classification of these systems according to the type of risk, all
point in this direction.
It is possible, in conclusion, to state that the main contribution made by
the AI Act and the Council of Europe Framework Convention lies in having
prepared a shared and uniform legal framework, oriented primarily toward
the protection of the fundamental rights of individuals, a value placed at the
apex of the European legal framework and national legal systems, the
importance of which deserves to be enhanced in a context affected by the
advent of a technology with multiple potential applications and numerous
benefits on the one hand, but characterized by assorted risks of interference
with users’ rights on the other hand. In this scenario, full of
uncertainty and possibility, the EU and the Council of Europe have
appropriately reaffirmed the key role of personal rights, in line with the
European legal tradition, according to which there can be no progress
without adequate protection of fundamental rights.

Notes
1. OECD Recommendation of the Council on Artificial Intelligence,
OECD/LEGAL/0449, adopted on May 22, 2019 and amended on May
3, 2024.
2. UNESCO Recommendation on The Ethics of AI: Shaping the Future
of Our Societies, November 23, 2021.
3. United Nations, General Assembly Resolution, Seizing the
opportunities of safe, secure and trustworthy artificial intelligence
systems for sustainable development, March 11, 2024, A/78/L.49.
4. Regulation (EU) 2024/1689 of the European Parliament and of the
Council of June 13, 2024 laying down harmonized rules on artificial
intelligence and amending Regulations (EC) No 300/2008, (EU) No
167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU)
2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU)
2020/1828 (Artificial Intelligence Act).
5. State of Wisconsin v. Eric L. Loomis, 881, N.W.2d 749, July 13, 2016.
6. The Hague District Court, Nederlands Juristen Comité voor de
Mensenrechten et al. v. The State of Netherlands, February 5, 2020,
C/09/550982.
7. See European Commission, White paper on artificial intelligence – a
European approach to excellence and trust, February 19, 2020, COM
(2020) 65 final.
8. Proposal for a Directive of the European Parliament and of the Council
on adapting non-contractual civil liability rules to artificial intelligence
(AI Liability Directive), September 28, 2022, COM(2022) 496 final
and proposal for a Directive of the European Parliament and of the
Council on liability for defective products, September 28, 2022,
COM(2022) 495 final.
9. US Algorithmic Accountability Act, September 21, 2023, Section 2, para. 7: any person, partnership, or corporation that uses automated decision-making systems and fulfills certain requisites: more than $50 million in annual turnover or more than $250 million in equity, or the use or control of more than 1 million consumers, households, or consumer devices.
10. European Parliament resolution of October 20, 2020 with
recommendations to the Commission on a civil liability regime for
artificial intelligence (2020/2014(INL)), Art. 3, para. 1, lett. e and f.
11. Ethics Guidelines for Trustworthy AI, Independent High-Level Expert
Group on Artificial Intelligence, April 8, 2019, paras. 58 ff.
12. European Data Protection Board (EDPB) and European Data
Protection Supervisor (EDPS), Joint Opinion 5/2021 on the proposal
for a Regulation of the European Parliament and of the Council laying
down harmonized rules on artificial intelligence (Artificial Intelligence
Act), June 18, 2021, para. 29.
13. European Court of Human Rights (ECtHR), Glukhin v. Russia, July 4,
2023, no. 11519/20.
14. The Hague District Court, Nederlands Juristen Comité voor de
Mensenrechten et al. v. The State of Netherlands, February 5, 2020,
C/09/550982.
15. State of Wisconsin v. Eric L. Loomis, 881, N.W.2d 749, July 13, 2016.

References
Adinolfi, A. (2020). L’Unione europea dinanzi allo sviluppo
dell’intelligenza artificiale: La costruzione di uno schema di
regolamentazione europeo tra mercato unico digitale e tutela dei diritti
fondamentali. In Il ragionamento giuridico nell’era dell’intelligenza
artificiale (pp. 13–35). Pacini.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias.
There's software used across the country to predict future criminals.
And it's biased against blacks. ProPublica.
Artificial Intelligence Act (Regulation (EU) 2024/1689). Official Journal
version of 13 June 2024.
Borgesius, F. Z. (2020). Strengthening legal protection
against discrimination by algorithms and artificial intelligence. The
International Journal of Human Rights, 24(10), 1–22.
Bradford, A. (2020). The Brussels effect: How the European Union rules
the world (p. 142). Oxford University Press.
Casonato, C. (2023). Unlocking the synergy: Artificial intelligence and (old
and new) human rights. BioLaw Journal – Rivista di BioDiritto, 3,
233–240.
Chander, A. (2021). Artificial intelligence and trade. In Big data and global
trade law (pp. 115–127). Cambridge University Press.
Chiappini, D. (2022). Intelligenza Artificiale e responsabilità civile: nuovi
orizzonti di regolamentazione alla luce dell’Artificial Intelligence Act
dell’Unione europea. Rivista italiana di informatica e diritto, 4(2), 95–
108.
Cole, M. D. (2024). AI regulation and governance on a global scale: An
overview of international, regional and national instruments. Journal
of AI Law and Regulation, 1(1), 126–142.
D’Aloia, A. (2022). Ripensare il diritto nel tempo dell’intelligenza
artificiale. In Intelligenza artificiale e diritto: una rivoluzione? (99). Il
Mulino.
Daelman, C., & Yordanova, K. (2023). AI through a human rights Lens.
The role of human rights in fulfilling AI's potential. In Artificial
intelligence and the law (pp. 123–150). Intersentia.
Ferencz, J., López-Gonzalez, J., & Oliván Garcia, I. (2022). Artificial
Intelligence and International Trade: Some Preliminary Implications,
OECD Trade Policy Paper No. 260.
Floridi, L. (2023). The ethics of artificial intelligence: Principles,
challenges and opportunities. Oxford University Press.
Freeman, K. (2016). Algorithmic injustice: How the Wisconsin supreme
court failed to protect due process rights in State v. Loomis. North
Carolina Journal of Law & Technology, 18(5), 75–106.
Human Rights Watch. (2019). China's Algorithms of Repression: Reverse
Engineering a Xinjiang Police Mass Surveillance App.
Kosta, E. (2020). Algorithmic state surveillance: Challenging the notion of
agency in human rights. Regulation & Governance, 16(1), 1–13.
Leiser, M. (2024). Psychological patterns and Article 5 of the Artificial
Intelligence Act: AI-powered deceptive design in the system
architecture and the user interface. Journal of AI Law and Regulation,
1(1), 5–23.
Liu, H., & Lin, C. (2020). Artificial intelligence and global trade
governance. A pluralist agenda. Harvard International Law Journal,
61(2), 301–344.
McCarthy, J. (2007). What is artificial intelligence? (pp. 1–15). Stanford
University.
Molbæk-Steensig, H. (2022). AI at the European court of human rights:
Technological improvement or leaving justice by the wayside? Ordine
internazionale e diritti umani, 5, 1254–1267.
Neuwirth, R. J. (2023). Prohibited artificial intelligence practices in the
proposed EU Artificial Intelligence Act (AIA). Computer Law &
Security Review, 48, 105798. https://doi.org/10.1016/j.clsr.2023.105798
Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2024). AI
risk assessment: A scenario-based, proportional methodology for the
Artificial Intelligence Act. Digital Society, 3(13), 1–29.
Pasquale, F. (2018). The black box society. The secret algorithms that
control money and information. Harvard University Press.
Reusken, G. (2024). Striking a balance: UK's pro-innovation approach to AI
governance in light of EU adequacy and the Brussels effect. Journal of
AI Law and Regulation, 1(1), 155–159.
Ruoppo, R. (2024). Adottato dal Consiglio d’Europa il primo trattato
internazionale legalmente vincolante sull’intelligenza artificiale: la
Convenzione quadro sull’intelligenza artificiale e i diritti umani, la
democrazia e lo Stato di diritto del 17.5.2024. Persona e mercato, 2,
647–650.
Ruotolo, G. M. (2022). Imparzialità e indipendenza dei giudici, intelligenza
artificiale, diritto internazionale. In Beni e valori comuni nelle
dimensioni internazionale e sovranazionale (pp. 369–390). Editoriale
scientifica.
Stratieva, N. (2024). An agile approach to the EU Artificial Intelligence Act
ecosystem. Journal of AI Law and Regulation, 1(1), 102–109.
Trincado Castán, C. (2024). The legal concept of artificial intelligence: The
debate surrounding the definition of AI system in the artificial
intelligence act. BioLaw Journal – Rivista di BioDiritto, 1, 305–334.
van Bekkum, M., & Borgesius, F. Z. (2021). Digital welfare fraud detection
and the Dutch SyRi judgment. European Journal of Social Security,
23(4), 323–340.
Veale, M., & Borgesius, F. Z. (2021). Demystifying the draft EU Artificial
Intelligence Act. Computer Law Review International, 22(4), 97–112.
Yeung, K. (2020). Introductory note to recommendation of the council on
artificial intelligence. International Legal Materials, 59(1), 27–34.
PART II
Philosophy, Policy, and Governance
5
ARTIFICIAL INTELLIGENCE,
GOVERNANCE, AND RACE
Farai Chipato and David Chandler

DOI: 10.4324/9781003518495-8

Introduction
Perhaps more than any other recent event, the COVID-19 pandemic has
been held to reveal the limits of the modernist episteme; the failure of
modernist binaries, cuts, and separations to capture the fluid, entangled,
‘planetary real’ (Burke et al., 2016; O’Callaghan-Gordoa & Antó, 2020, p.
510). The critical ‘planetary’ approaches argue that the ‘entangled world’ of
the Anthropocene (Lövbrand et al., 2020, pp. 3–4) calls into question
liberal modernist assumptions and understandings, disrupting any attempts
to construe contemporary problems as just additional threats and concerns
that can be addressed with traditional methods and techniques. In the
discipline of International Relations, a key text for ‘planetary’ approaches
has been the ‘Planet Politics’ manifesto (Burke et al., 2016) that follows
Anthropocene philosopher Timothy Morton in arguing that ‘the global
ecological crisis “has torn a giant hole in the fabric of our understanding”;
that it is a vast “tear in the real”’ (Burke et al., 2016, p. 501). They state:

Now our paradigms fail the real. International Relations, as both a system
of knowledge and institutional practice, is undone by the reality of the
planet. The question posed by what we are calling ‘planetary’ thought is
that of how to get access to a ‘real’ beyond the ‘anthropocentrism’ of the
modernist episteme.
(Burke et al., 2016, p. 501)

It is in the promise of removing ‘human’ prejudices and biases that Big Data and algorithmic machine learning often come into the center of both
policy and academic discussions. We argue that the focus on the problem of the human, cast in the register of the rational ‘Man’ of the liberal episteme, clearly brings out the distinction between the critiques forwarded in the ‘planetary’ approaches – which seek to articulate post- or more-than-human regimes of correlation, feedback loops, and non-linear emergence – and the critiques forwarded from a critical race theory and ‘decolonial’ perspective, which seek to problematize differently the over-representation of ‘Man’ (framed as White, European, and male), with its universalist gaze from nowhere and command-and-control imaginaries (Jackson, 2020; Weheliye, 2014; Wynter, 2003). As we shall see, placing constructions of the human at the heart of the problematization of knowledge produces a quite different paradigm of critique. Planetary approaches are generally affirmative of adding new ways
of knowing through algorithmic governance, correlation, and pattern-
recognition, while critical race theory approaches are quicker to detect
continuities, problematizing correlational imaginaries as somehow being
able to bring new realities into being.
This chapter focuses upon the claims that AI enables new forms of
understanding that go beyond the limits of the modernist episteme,
removing the limits of anthropocentric thinking and universal, abstract and
reductionist views. We do this through engaging with two framings of
contemporary critical thought. First, the ‘planetary’, post-human, new
materialist, and relational understandings that seek to go beyond the
human/nature divide. These approaches are generally positive about the
power of algorithmic governance and Big Data to move beyond modernist
abstraction and reductionist linear causal assumptions. The second
important approach to AI that we engage with is that of critical race theory,
where analysis is much more skeptical of the possibilities of AI and Big
Data delivering on their promise of newer and better understandings,
challenging claims to neutrality or objectivity. Critical race theory
approaches, at their more positive, drawing upon a critical Black materiality
of post-human entanglement, see the potential of algorithmic governance
and Big Data to point to other possible worlds.
We counterpose to these two paradigmatic framings a third approach of
the Black Horizon, which shifts to the paraontological to demonstrate that
approaches which focus upon fluidity, correlation, and the deconstruction of
entities and essences are no less violent than imaginaries of linear
causation; both suborn us to the discovery and reproductive salvage of the
‘real’ world, disavowing the anti-Blackness inherent in these constructions.
This chapter is organized in three sections. The first section illustrates how
planetary approaches take up the challenge of algorithmic governance and
Big Data, articulating new ethical and political understandings of the
relations between humans as governing subjects and the world around us.
The second section considers the critical race theory critique that questions
the stakes involved in the distinction between correlation and causation,
suggesting that correlations do little to move understandings beyond
reductionist and racializing binaries and cuts, reifying a liberal humanist
world order. The third and final section introduces a Black Horizon
approach that seeks to go beyond both critical race and planetary
understandings to suggest that these two framings of critique are to a certain
extent undermined by the assumptions of a ‘real’ underlying their critiques
of the modern episteme.

The New Planetary Real


The Big Data imaginary is that rather than a world of causal linearity and
fixed essences of modernity, another world comes into being, a world that is
grasped as relational and entangled. A world where the subject is no longer
at the center and therefore governance is freed to respond to ‘reality’ rather
than hubristically seeking to impose upon it or govern against it. Exemplary
in this regard is Benjamin Bratton's The Revenge of the Real (2021). As the
title indicates, the critical framing of ‘planetary’ approaches is that the
reality that had been disavowed and ignored in the modernist imaginary of
distinct entities, fixed essences and linear causality has finally come back
with a vengeance. This vengeance can be seen in the catastrophic powers
unleashed by global climate change and zoonotic diseases, both frightful
products of the reality of human/nature entanglement. To meet these
challenges, Bratton calls for ‘a new realist form of planetary politics’
(Bratton, 2021, p. 8). A ‘realism’ that with the aid of algorithmic
governance and Big Data can move beyond the limits of the liberal modernist
imaginary of human exceptionalism and the autonomous individual
separate from the world: ‘and so, in this way the revenge of the real is also a
return of the repressed. Repressed are the biological realities of human
society's co-evolution with viruses, the epidemiological reality of
populations’ (Bratton, 2021, p. 11).
In Bratton's technological imaginary, access to a fluid and entangled
reality is so real that we can become self-governing through immersing
ourselves within this reality rather than trying to separate ourselves from it.
While the moderns imagined themselves as governing over a world as
object of representation, the post-moderns of the Anthropocene necessarily
govern within emergent entangled relations, in a process of ‘recursive self-
composition’ – where society can ‘sense, model, and act back upon itself’ –
rather than of command-and-control (Bratton, 2021, pp. 13, 31). Bratton
calls this ‘positive biopolitics at planetary scale’ (Bratton, 2021, p. 30,
italics in original). The key methodological point of ‘positive biopolitics’ is
one of de-centering the human of the liberal individuated imaginary thus
seeing ‘society as epidemiology does, not as self-contained individuals
entering into contractual relationships, but as a population of contagion
nodes and vectors’ (Bratton, 2021, p. 33). It is the ‘underlying biological
and chemical realities’ that matter rather than a social imaginary of
individuated sovereign selves. As Bratton states: ‘The pandemic has made it
easier to see oneself more as a node in a biopolitical network to which one
is responsible than as an autonomous individual whose sovereignty is
guaranteed by free will’ (Bratton, 2021, p. 35).
For ‘planetary’ frameworks of critique, the problem is the top-down
liberal imaginaries of universal understanding and linear causality,
exemplary of the modernist subject/object divide. Developments in
algorithmic machine learning and Big Data are vital to this alternative
imaginary as they provide an opportunity to grasp the subject and the world
differently, as processes, entangled and in flux, rather than as pursuing
straight lines. In these discourses of entanglement both the subject and
world are autopoietic or self-making, there is a ‘flat ontology’, which does
not distinguish between the human and the non-human in terms of agential
power and capacity. The difficulty then becomes how to enroll these non-
human agencies in extending the understanding of the new fluid ‘real’. This
is where ‘positive biopolitics’ or the ‘epidemiological model of society’ is
important in constructing society as ‘self-sensing’ as a way toward ‘self-
governing’, including and flattening the human subject with an equal status
to the non-human ‘as a body that counts’ (Bratton, 2021, p. 42).
Perhaps the most interesting aspect of the ‘planetary’ approach as
forwarded by Bratton is the certainty that the ‘truth will set us free’: that being ‘counted’ by algorithmic governance and Big Data is essential to the accuracy of the algorithms, and that ‘inclusivity’ is therefore essential. His
argument is not merely that inclusion will overcome any problems of power
or bias but that Big Data should not be working on understanding the world
as if it were composed of discrete individuals. Seeing society as composed
of individuals is a liberal or modernist mode of working, constituting a
divide between individual and society, private and public, binary
approaches which fail to capture our reality of social entanglement (Bratton,
2021, p. 96). ‘Libertarian’ individual demands to be left out or excluded
from governing knowledge and control are therefore to be considered as
illegitimate, undermining the security of all (Bratton, 2021, pp. 55–56).
The certainty of the new planetary real, provided by machine learning
and Big Data, no longer requires the human of the modernist Cartesian
imaginary, the rational autonomous human cut apart from nature in a divide
between self-determined rational subjects (with free will) and other-
determined irrational objects (without free will) (see Ferreira da Silva,
2007). The post-human or more-than-human entangled emergent ‘real’
cares little for universal reason or even for the liberal ethics of good
intentions. Everyone knows that unintended consequences have more
weight and impact upon the world than intended ones (Beck, 1992). As
Bratton states: ‘This is a challenge to political philosophy… in that it
demands the conceptualization of an ethics of being an object, not of being
a subject’ (Bratton, 2021, p. 104). In fact, it is only as epistemological
objects rather than as knowing subjects that we have duties and
responsibilities to each other, including the task of ‘deliberate and ethical
self-objectification as a responsible participant in the model abstractions
through which the care of others is realized’ (Bratton, 2021, p. 106). The
more training data is available, and the more that training is updated in real time, the more adaptable and accurate machine learning insights will be.
As Petersmann and Van Den Meerssche highlight (2024, p. 7), mainstream critiques of the ‘planetary’ celebrations of Big Data and algorithmic governance tend to criticize correlation and ‘algorithmic governmentality’ on precisely the grounds that ‘planetary’ advocates have already rejected: the desire to reinstate the liberal legal subject. It is important to see the limitations of these modernist critiques, which tend to rely on the defense of the liberal autonomous subject's right to privacy or to their ‘own’ data as personal property. Bratton argues that this
understanding of data only makes sense if the data is about them as
individuals. The ‘data’ generated by algorithmic governance and Big Data
are not ‘extracted’ from individuals, ‘Data is not preformed and embedded
like a mineral, but is produced in the act of modelling it’ (Bratton, 2021, p.
149; see also Gray, 2023). Most importantly data is indirect, it is
correlational, it is relational, it is not a stack of discrete essentialized truths
that can be wheeled out when necessary.
However, the fact that the processes of Big Data and algorithmic
governance might not reproduce the liberal imaginary of a world of fixed
essences and discrete autonomous individuals does not necessarily mean
that these post-human imaginaries are unproblematic. When Bratton writes
that the planetary ‘real’ calls for ‘a view of politics shifting from law to
biology, from voice to organism’ the concerns should be apparent (Bratton,
2021, p. 158). He continues: ‘Instead of just reflecting the general will or
popular voice, the function of governance is now also the direct
management of ecosystems, understood as inclusive of human society’
(Bratton, 2021, p. 158). While liberal critiques of the right to privacy or to
‘ownership’ of data carry little weight considering the correlational drive
and search for patterns and interconnections, critiques that focus on the
methodological assumptions behind Big Data imaginaries are of
fundamental importance.
Petersmann and Van Den Meerssche (2023, p. 12) suggest, we think,
quite correctly, that work in critical Black studies forwards an alternative
framework of critique, one that points toward more fundamental problems
with correlation while at the same time being less interested in reviving
understandings of the individual as an autonomous legal subject. For critical
race theorists, critical of Big Data as a method of knowledge production,
these technologies inevitably fail to properly articulate the problem of the
‘human’ let alone overcome it. It is the imaginary of being ‘objective’ or
‘neutral’ that is, in fact, the problem, reconstituting simplifying, reductionist
(and racializing) knowledge claims even if based on imaginaries of flux,
pattern recognition, and correlations.

Race as a Technology
For critical race theory approaches, ‘objective’ or ‘neutral’ decision-making
by algorithmic correlation reproduces and hides racism because the problem
is that algorithmic imaginaries and Big Data actually obscure rather than
reveal the underlying ‘reality’. The ‘reality’ generated as the new ‘planetary
real’ is a racializing one, one that reproduces and further reifies racial
divides and understandings. The problem stems from a set of universalist
assumptions, as Simone Browne writes, from ‘a gaze that is always
unmarked, and therefore markedly white and male, and one that claims a
power “to represent while escaping representation”’ (Browne, 2015, p. 49).
It is racializing precisely because race is ‘undertheorized’ in analyses of
correlational methodologies and Big Data, where questions appear to start
from new technologies as themselves inaugurating certain questions or
problems or possibilities.1 Instead, these technologies need to be seen as
part of a racialized and racializing continuum where, as Browne notes,
‘blackness’ is a ‘key site through which’ they are ‘practiced, narrated and
enacted’ (Browne, 2015, p. 9).
If the universalizing logic of ‘proto-typical whiteness’ is the problem,
privileging some users over others and reproducing and reifying hierarchies
of power (Browne, 2015, p. 162), then one solution is to make reality more
‘real’ but in a different way to the datified imaginaries above. A different
reality is brought to the surface, more plural, one that opposes and works
outside the racializing assumptions often ‘hidden’ inside the black-boxed
technologies of algorithmic reasoning and Big Data (Benjamin, 2019a, pp.
44–45). The heart of the epistemological problem is the assumption of a
universal, neutral, or objective perspective (one shared as much by the
‘planetary’ approaches discussed above as by modernist approaches). It is
this assumption of objectivity which is seen to be reinforcing the
‘racializing logics of command and control’ (Scannell, 2019, p. 124). Key
to these logics is the liberal rationalist universal imaginary of ‘Man’,
without this imaginary there could be no assumptions of algorithms as
somehow being less biased than human decision-makers. As Safiya Noble
writes:

The pretext of universal humanity is never challenged, and the default and idealized human condition is unencumbered by racial and gender
distinction. This subtext is an important part of the narrative that
somehow personal liberties can be realized through technology because
of its ability to supposedly strip us of our specifics and make us equal.
We know, of course, that nothing could be further from the truth.
(Noble, 2018, pp. 62–63)

The problem of the planetary (post-human or more-than-human approaches) is that the ‘human’ itself is always treated homogeneously. The
critical race theory approach mounts a different critique of ‘the liberal
humanist figure of Man’ than that of the ‘planetary’ scholars. As Ruha
Benjamin states, drawing upon the work of Alexander Weheliye, Sylvia
Wynter, and Zakiyya Iman Jackson:

…a black feminist approach to posthumanism and all of its technoscientific promises is not about including the oppressed in the fold
of (Western liberal) humanism or about casting out humanism writ large,
but about abolishing one particular genre that, by definition, dominates
and devours all others. Ultimately it is an approach to world-building in
which myriad life forms can flourish.
(Benjamin 2019b, p. 10)

As Noble writes, using critical race theory as the guiding theoretical framing, designers need to recognize that bias is ‘an inevitable result’ of the way systems are structured, meaning that a policy of neutrality can only reproduce this bias, necessitating starting from the perspective of racially-defined user populations (Noble, 2018, p. 136).
For critics at the forefront of critical race and technology studies, such as
Safiya Noble, Simone Browne, and Ruha Benjamin, then, ‘neutral’
algorithms are necessarily racist because the data that they are generated
from stem from a society that is permeated by the social, cultural,
economic, housing, and welfare divides of racial inequality. But, more than
this, as Noble argues (Noble, 2018, p. 108), if: ‘gender and race are socially
constructed and mutually constituted through science and technology. The
very notion that technologies are neutral must be directly challenged as a
misnomer.’ Race is therefore front and center of the critique of algorithmic
governance and Big Data. Neutral or scientific portrayals of new
technological applications reinforce and rework the racism of the New Jim
Code (a hubristic attempt at domination and control, where designers
encode judgments into technical systems which then are presented as
‘neutral’ or ‘objective’) (Benjamin, 2019a, p. 11, 2019b, p. 3).
The problem of race thereby cannot be addressed by merely making
algorithms more accurate or inclusive but also necessitates that the social
and historical structures with which technology interacts be challenged and
revealed. Drawing upon the work of Safiya Noble and Simone Browne,
Benjamin insists that race as a means of organizing a social structure, as
‘the literal architecture of power’ (Benjamin, 2019a, p. 91) cannot be
detached from the application of technology under our present social and
ideological conditions. As Simone Browne writes, surveillance technologies
cannot be understood without their racialized histories, from the slave ship
to the plantation to the modern passport system (Browne, 2015, p. 70). R.
Joshua Scannell therefore argues that ‘It is impossible to distinguish
between efforts to inscribe ‘race’ and racial difference on human bodies and
efforts to quantify, compare, evaluate, and surveil the human’ (Scannell,
2019, p. 120).
‘Race is a technology’, argues Benjamin, one that naturalizes inequalities
by translating or registering social discrimination as something already pre-
existing, as a ‘natural’ property of individuals or groups. Thus,

The animating force of the New Jim Code is that tech designers encode
judgements into technical systems but claim that the racist results of their
designs are entirely exterior to the encoding process. Racism thus
becomes doubled - magnified and buried under layers of digital denial.
(Benjamin, 2019a, pp. 11–12)

The inequalities and exclusions of ‘reality’ that often go unnoticed become the ‘natural’ or ‘objective’ basis of judgments, which then
‘doubles’ racism, doubling the assumptions of naturalness, that somehow a
deeper ‘reality’ is captured by Big Data.
Benjamin writes that placing algorithmic correlation and Big Data in this
context means working to clarify how this technology works to produce
divisions which naturalize inequalities, despite the promise of liberalism of
equal treatment. This means that making algorithms more ‘inclusive’ or
more ‘accurate’, training them on broader or larger databases is not the
solution, particularly if the intent and the deployment of Big Data is for the
policing and control of minority communities:

While inclusion and accuracy are worthy goals in the abstract, given the
encoding of long-standing racism in discriminatory design, what does it
mean to be included, and hence more accurately identifiable, in an unjust
set of social relations? Innocence and criminality are not objective states
of being that can be detected by an algorithm but are created through the
interaction of institutions and individuals against the backdrop of a
deeply racialized history, in which Blackness is coded as criminal.
(Benjamin, 2019a, p. 124)

What initially might appear as an epistemological problem, of accessing the ‘reality’ of the world, is not a problem of the algorithm itself. No alternative algorithm can somehow fix Google or policing approaches, as the problem is not one of knowledge but of materiality, of the world itself.
The problem lies in the cultural, social, economic, and political structuring
of society. Safiya Noble's work makes it clear that the problem with Google
search engine results is not a poor algorithm but the fact that Google is a
public resource that is run by a private commercial interest, thus creating
‘advertising algorithms, not information algorithms’ (Noble, 2018, p. 38).
As Scannell argues, coding out racism from automated predictive policing
systems misunderstands the problem, instead perpetuating ‘an essentially
liberal, reformist concept of policing that obscures the broader and more
fundamental violence enacted by digitizing the decision-making capacities
of the carceral state’ (Scannell, 2019, pp. 107–108). It is for this reason that
there can be no ‘technical fixes’ for the problem of algorithmic correlations
and Big Data but only a struggle for epistemological change: a change of
perspective that seeks to reveal and to challenge racializing modes of
governing, using new technologies and Big Data for good rather than for
preserving and extending a racial status quo.
Thus, Safiya Noble's view of an ‘ethical algorithmic future’ (Noble,
2018, p. 171) is one that challenges the racialized context in which new
technologies are put to work, helping us: ‘understand the role of the design
of platforms, interfaces, software, and experiences as practices that are
culturally and gender situated and often determined by economic
imperatives, power and values’ (2018, p. 179). This approach of
repurposing algorithmic capabilities and Big Data in order to challenge the
social order to ‘actively appropriate and reimagine technoscience for
liberatory ends’ (Benjamin 2019b, p. 13) is expressed clearly in Simone
Browne's work on ‘sousveillance’ or Benjamin's idea of ‘abolitionist tools
for the New Jim Code’. Browne suggests that technology be repurposed for
‘sousveillance’, or surveillance from below, ‘a way of naming an active
inversion of the power relations that surveillance entails’ (Browne, 2015, p.
19). She develops the conception of ‘dark sousveillance’:

…as an imaginative place from which to mobilize a critique of racializing surveillance, a critique that takes form in antisurveillance,
countersurveillance, and other freedom practices. Dark sousveillance,
then, plots imaginaries that are oppositional and that are hopeful for
another way of being… where the tools of social control… were
appropriated, co-opted, repurposed, and challenged in order to facilitate
survival and escape.
(Browne, 2015, p. 21)

Benjamin's ‘abolitionist tools’ work in a similar manner, that:

would have to entail a socially conscious approach to tech development that would require prioritizing equity over efficiency, social good over
market imperatives… designing computer programs from scratch and
training AI… so as to make us aware of social biases.
(Benjamin, 2019a, p. 183)

The Black Horizon


What distinguishes a Black Horizon approach to algorithmic governance
and Big Data is the attention to deconstruction rather than seeking to
counter the ‘reality’ constructed by Big Data with another ‘more real’ or
‘deeper’ reality underlying it. While the ‘planetary’ approaches focus on an
entangled ‘planetary real’ to highlight the necessity of (co)relational
understandings of life as emergent, non-linear, communicative, or semiotic,
these approaches (as we have stated throughout this book) often fail to
consider the role of race and coloniality in the shaping of these
technological practices. On the other hand, critical race theory approaches –
which flag up the conjoined and entangled histories of race, surveillance,
and technologies – often restrict their critique to the problem as one of
phenomenology or of epistemology, even when more widely construed in
terms of the structures through which technologies are developed and
applied and not only concerns of accuracy and digital inclusion.
Our Black Horizon argument relies on a continuing focus on the anti-
black world, which cannot be easily escaped through a futuristic world of
becoming, or a more epistemologically accurate account of the ‘real’. Key
for us is the fact that seeking to read or to understand the world as free from hierarchies and divides of race reinforces and reifies the anti-black world.
In this section, we explore this through engaging with three important lines
of critique: first, Ramon Amaro's more radical problematization of Big Data; second, Louise Amoore's critique of race as a floating signifier; and, third, Wendy Chun's account of correlation as violent
world making. We then push these critiques a little further in drawing out a
Black Horizon account that draws on the affinities between the opacity of
Big Data and the opacity of Blackness.
Among recent critical scholars of race and technology, Ramon Amaro's
work stands out as a bold attempt to move beyond the issues with critical
race theory work on Big Data. His work exemplifies some of the problems
with focusing on phenomenology at the expense of questioning claims
dependent upon substantiating an underlying ‘reality’. In Ramon Amaro's
The Black Technical Object (2022) the starting position of the analysis
follows the lines of those discussed above in terms of the critical race
approach: that it is the racist structures of society which shape the
technological ‘real’ produced by algorithmic correlation and Big Data. As
he writes, following Frantz Fanon: ‘the fact of Blackness as an object of
being enters into relation with the technical object as always already
externally fragmented and thereby readied for algorithmic reassembly
through the white imaginary’ (Amaro, 2022, p. 13). To be Black is to be
denied true self-consciousness, to be always measured through the eyes of
the White other (Amaro, 2022, p. 74). The problem is ‘bioepistemological’
in that, following Sylvia Wynter, the problem of knowledge and of self-
knowledge, is shaped by the structuring of power (Amaro, 2022, p. 22).
The limitations of the technological imaginary of modernity are clear to
those who continually come up against them, whose lives appear to be in
excess of the reductionist categories of correlation. Amaro's work is
particularly interesting in that he radicalizes the critiques made by Browne,
Noble, and Benjamin (focused upon above) through an attention to this
non-representational aspect of Black life as a problem for algorithmic
capture. Thus, for Amaro, rather than calling for new algorithmic
technologies to be developed in ways that recognize and include Black
individuals, the question should be posed the other way around, how a
proper understanding of Black life can enable the transformation of Big
Data capabilities:

To merely include a representational object in a computational milieu that has already positioned the white object as the prototypical characteristic
catalyzes disruption superficially… Can the Black technical object be
conceptualized as outside of the dialectic between human and
machine?… how can Black technical objects generate new possibilities
outside of phenotypical calculation, prototypical correlation, and the
generalization of category? How might we create a more affirmative
view of the relation between the Black technical object and technology?
(Amaro, 2022, pp. 52–53)

This mode of ‘post-human’ ‘affirmation’ takes Amaro much closer to the ‘planetary’ imaginaries considered in the opening section. Here, (Black)
life, understood as fluid, emergent, and non-representational, enables a new
immanent and fluid ‘bioepistemological’ approach to Big Data: ‘The Black
technical object converges with the artificial in an assemblage of mutable
and multivalent experiences. Here, both the Black technical object and
technical object inform each iteration of themselves in a self-governing
system of feedback’ (Amaro, 2022, p. 60). Black life constrained and
excluded from self-knowledge under the regime of the modernist human
can be reimagined as immanent, non-individuated, as ‘post-human’, such
that ‘levels of human knowledge of the physical and biological are
emancipated by epistemological processes that circulate in the autopoietic
field of relation’ (Amaro, 2022, p. 197).
Although the language can be a little abstruse, Amaro's point is that the
critical Black studies’ engagement with the history of racial coding,
algorithmic governance, and Big Data opens up tensions within the
modernist episteme, pointing to another reality in excess of the ‘white gaze’
(Amaro, 2022, p. 188). A reality that pre-exists the reductionist assumptions
of the ‘planetary real’:
As such, the Black technical object invites new multivalent modes of
representation. It argues for a nonlinearity as a key point of entry into
future techno-human relation. While the Black technical object might be
externally positioned outside of ontology, it is in its pre-individuated state
the constitution of a potential that exists prior to the amplification of
machinic perception or the colonial logics of racial substance.
(Amaro, 2022, p. 221)

Amaro posits an alternative ‘real’ beyond the ‘white gaze’ that cuts and
divides and individuates subjects, a real that ‘prefigures the constitution of
any white prototypicality and instead resides in a continual process of
becoming’ (Amaro, 2022, p. 221). He posits that this ‘alternative politics
and Black techno-sociality… could set Black life – as well as machine
learning technology – free’ (Amaro, 2022, p. 218). This would be a world
without pre-divided entities and essences – where ‘the conditions of human
reality are irreducible to the relation of the other’ (Amaro, 2022, p. 225).
Data would bring us together rather than tear us apart: ‘Simply put, the
authenticity of the social would have to become an authenticity that is
difference in itself’ (Amaro, 2022, p. 225). Taking the ‘white gaze’ out of
knowing would mean to be governed by the ‘real’, without imaginaries of
individuation, without a ‘substantialist metaphysics… grasped from the
perspective of the substance of race’ (Amaro, 2022, p. 227). As Amaro
concludes: ‘Instead, we march toward an embrace of the always already
incomplete and incompatible process of racial and machinic individuation’
(Amaro, 2022, p. 227).
Amaro's work can be viewed as a kind of bridge between the critical race theory accounts and the Black Horizon: it seeks to radicalize the critical race critique, but ultimately takes an affirmative approach. From the
perspective of the Black Horizon there is something deeply problematic in
the desire for technology to provide an alternative epistemological access to
the ‘real’, even if that ‘real’ is a fluid, emergent, non-linear, and relational
one. For a Black Horizon approach, unlike Amaro's, the problem is in the
attempt to technologically enhance the capacity to ‘world’, to produce
worlds, rather than with how well or how badly these worlds are
constructed. This is because the problem does not lie at the ontic level of
epistemological understanding but at the onto-epistemological level of
world production itself. Anti-Blackness does not require the production of
Black bodies but merely the production of an imaginary of a world that is
available for a subject.
The aspiration to know the world differently acts as a problematic lure for
algorithmic imaginaries. It is clear that affirmative futural imaginaries draw
attention away from the ‘real world’ of the present. This point comes
across, for us, very clearly in Ruha Benjamin's interview with Dorothy
Roberts with which she closes her edited collection Captivating Technology
(Benjamin & Roberts, 2019). Affirmation of the future, the hope that
technology can govern better than modernist politics were able to, takes the
focus away from the development of critique of the present. As Ruha
Benjamin states: ‘It's a kind of displacement of human agency’ (Benjamin
& Roberts, 2019, p. 347). The concern is with a future world rather than the
one in which people are suffering despite the ‘everyday penetrance of
technologies’. As Dorothy Roberts states, in effect, this is merely
reproducing racism as social questions are reposed as questions of
objectivity and science, as if they were somehow natural or inevitable
(Benjamin & Roberts, 2019, p. 346).
Perhaps more importantly, this perspective rests on the assumption that it
is possible to access another form of knowledge that enables us to grasp a
‘reality’ that is somehow more ‘true’, more ‘objective’, or more
‘progressive’, under the technological imaginaries of Big Data or
algorithmic governance. Our point, that we seek to draw out in this section,
is that even fluid, relational, and non-linear worlds (such as that imagined
by Amaro) depend upon ontological assumptions and thereby upon anti-
Blackness. We analyze below how ontological imaginaries of correlational
flux are no less anti-black than the fixed and linear imaginaries of the
plantation. Anti-Blackness is inseparable from the process of line drawing
itself. It is not a particular cut in reality itself. These points can be usefully
examined by focusing upon the correlational analytics at the heart of new
developments in algorithmic governance and Big Data.
Having examined the openings provided by Amaro's work, and some of
the issues with his approach of affirmation, we now turn to Louise
Amoore's work on Big Data, which provides a further point of critique that
has an affinity to the Black Horizon. Prior to Amoore's intervention in this
area, Simone Browne had highlighted the historical development of
technologies of surveillance with the process of ‘epidermalization’ of the
White gaze, ‘the imposition of race on the body’ (Browne, 2015, p. 7) and
the use of branding as a ‘biometric technology’ for the ‘making, marking,
and marketing of the black subject as commodity’ (2015, p. 91). Today, it is
important to note that racializing divisions no longer require biometric
markers to make bodies legible in these ways. Amoore's work on the role of
algorithmic technologies in policing technologies has been particularly
sensitive to this shift (Amoore, 2024). Amoore considers how machine
learning models threats not on the basis of individual bodies or fixed characteristics but through layered and interlocked mechanisms of correlation which disaggregate distinctions of entities and bodies. Importantly, Amoore
argues: ‘Features, understood by computer scientists as the “set of attributes
associated to an example”, significantly are not necessarily defined in
advance of an operation but are generated by the data examples the
algorithm is exposed to’ (Amoore, 2024, p. 4). This is correlation on the
move which, as she notes, ‘allies closely also with resilience logics’
(Amoore, 2024, p. 5) as processual, non-linear understandings generate a
need for ‘constant monitoring’.
Petersmann and Van Den Meerssche highlight that this is precisely the
Big Data imaginary of border control forwarded by the European
Commission Directorate-General for Migration and Home Affairs, which
argues for the ‘unsupervised uncovering of correlations’ (2024, p. 4). They
state that, ‘In this sense, the practice of pattern detection will necessarily be
a “live” process of iterative review and adaptation through which observed
attributes and correlations become part of the open-ended taxonomy of
“risk”’ (Petersmann & Van Den Meerssche, 2024, p. 5). Even if the mobile
correlational ‘clustering’ involves ‘flexible principles’ and ‘mutable
categories’, lines of ‘proximity’ are drawn which are then understood to
naturally pre-exist the process of knowledge generation (Amoore, 2024, p.
5). Amoore states:

To belong to a cluster is not about resemblance, common characteristics, or
meeting specified criteria, but is instead a spatialised proximity or distance.
“Similarity and belonging no longer rely on resemblance as a common
genesis… but on measures of proximity or distance”.
(Amoore, 2024, p. 6)
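The mechanics Amoore describes can be made concrete with a deliberately artificial sketch of our own (the data, the four attributes, and the choice of three clusters below are invented and stand in for no real system): in standard unsupervised clustering, an entity ‘belongs’ to whichever group it happens to lie nearest to in feature space, with no predefined criteria and no resemblance to a named type.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Invented data: 300 "entities" described only by four numeric attributes
    # (e.g. counts of observed behaviours); no categories are defined in advance.
    X = rng.normal(size=(300, 4))

    # k-means assigns each entity to whichever centroid it lies nearest to.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Membership is a matter of spatial proximity, not of meeting named criteria.
    distances = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    print(km.labels_[:10])          # emergent, unnamed group labels
    print(distances[:10].round(2))  # how near each entity sits to "its" cluster

The groups mean nothing prior to the computation; whatever ‘proximity’ turns out to signify is generated by the data the algorithm is exposed to.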

Amoore draws on Stuart Hall to argue that ‘race is a “floating signifier”’
(Amoore, 2024, p. 7) and that new technologies thereby racialize through
generating new correlational categories imagined to be more ‘real’ or
‘natural’ than the fixed, reductionist logics of modernist imaginaries. As
noted above, ‘planetary’ celebrations of ‘more-than-human’ sensorial
assemblages no longer require the individual as ‘subject’ or ‘object’,
enabling race to operate as technology without the attachment of
algorithmic governmentalities to physical attributes of individuals. As the
applications of Big Data to policing imaginaries become less and less
reductive, more ‘holistic’, more ‘objective’ analytics extend the size and
reach of databases. Thus, while race might become detached from physical
bodies and essentialist attributes, the world of anti-Blackness remains in
place.
Scannell analyses one such system, HunchLab, which takes into account
a fluid correlational metric of

take-out restaurants, schools, bus stops, bars, zoning regulations,
temperature, weather, holidays, and more… [all of which are] assigned
criminological weights’, thus ‘transforming the policed “community” into a
field of criminological data’.
(Scannell, 2019, p. 117)

Scannell argues that this system ‘produces a panopticon without inmates;
surveillance without subjects; criminality without criminals’ (Scannell,
2019, p. 118). This may remove the human and human judgment but is no
less racializing. ‘HunchLab does not target individuals but [along the lines
of Foucault's Discipline and Punish] a certain concerted distribution of
bodies, surfaces, lights, gazes; in an arrangement whose internal
mechanisms produce the relation in which individuals are caught up’
(Scannell, 2019, p. 118).
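The kind of scoring Scannell describes can likewise be caricatured in a few lines (the feature names and ‘criminological weights’ below are our invention, not HunchLab's actual model): counts of environmental attributes for each map cell are multiplied by weights and summed, so that risk attaches to places rather than to any identified person.

    import numpy as np

    # Hypothetical feature counts for three map cells (rows = cells).
    # Column order: take-out restaurants, bus stops, bars, schools.
    cells = np.array([
        [3, 1, 4, 0],
        [0, 2, 0, 1],
        [5, 3, 2, 0],
    ])

    # Invented "criminological weights" attached to each environmental feature.
    weights = np.array([0.4, 0.1, 0.6, -0.2])

    # One risk score per cell: the policed area becomes a field of data,
    # scored without reference to any individual.
    risk_scores = cells @ weights
    print(risk_scores)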
A third critical perspective we draw upon is Wendy Chun's
Discriminating Data (2021), which is particularly sharp in its examination
of the implications of correlation as a new means of violent world-making.
Chun's focus on correlation as a methodology is insightful for
understanding how correlational categories can take a mobile ‘real time’
form, essentializing the pattern or relation rather than any fixed ontological
attribute. She does this through genealogically linking correlation with
eugenics, as a futural methodology, seeking a more meaningful ‘reality’
than one that is either visible directly on the surface or predetermined by
some form of linear causality:

Correlation's eugenicist history matters, not because it predisposes all uses
of correlation towards eugenics, but rather because when correlation works, it
does so by making the present and future coincide with a highly curated past.
Eugenicists reconstructed a past in order to design a future that would repeat
their discriminatory abstractions… The important point here is that predictions
based on correlations seek to make true disruption impossible, which is
perhaps why they are so disruptive.
(Chun, 2021, p. 52)

Correlational reason depends upon seeing the unseen, bringing something that
exists into presence for the observer. This ‘reality’ would not exist if it
were not for the indirect access to ‘reality’ seemingly afforded by the
machine learning algorithm. While algorithmic thought is deconstructing
the world of binaries, entities and essences, the search for knowledge, Chun
argues, necessarily works on enabling the anomaly because there could be
no patterns in a world of homogeneity and fixed relations. The governing
gaze reverses the universal norming of the white male ‘Man’, as outlined by
Sylvia Wynter. Chun writes: ‘The trappings of social experimentation
disappear because the Internet has become nothing but social
experimentation to normalize the nonnormative’ (Chun, 2021, p. 152).
This is precisely the process through which new correlations are found,
for example via the use of Bayesian processes of inference, in which
patterned similarities are not focused upon individuals as fixed entities but
as mobile and fluid patterns of ‘likes’ on the basis of deviation from the
norm (Chun, 2021, p. 160). What does it mean to ‘deviate from the norm’?
It means that there is the possibility of relational differentiation.
Algorithmic governance and Big Data, according to Chun, thereby are not
constraining individuation, forcing individuals into essentializing
categories. On the contrary, the desire is precisely to magnify and to
instrumentalize deviation; to put deviation to work rather than to extend or
to police the norm. As Claire Colebrook has argued ‘there is now a demand
to be other than any fixed norm. Neoliberal marketing imperatives have
seized upon the language of intensities, becoming and affect’ (2022, p.
261). Thus, correlational work of deconstruction, of the removal of fixed
entities to be regulated around the norm, is increasingly central to
governmental reason (Lemke, 2021, p. 189) and just as much an act of
violent world-making as modernist binary imaginaries of linear causality.
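What ‘deviation from the norm’ does in such systems can be shown with a minimal, invented example (the ratings are ours and the calculation is a generic Pearson correlation, not any particular platform's recommender): similarity between two users is computed from how each departs from their own average behaviour on the same items, not from any fixed attribute either of them possesses.

    import numpy as np

    # Invented example: two users' ratings of the same six items.
    alice = np.array([5, 4, 1, 2, 5, 3], dtype=float)
    bob = np.array([4, 5, 2, 1, 4, 2], dtype=float)

    # Pearson correlation works entirely on deviations from each user's own mean:
    # what matters is the pattern of departures from a personal "norm", not the
    # ratings themselves or any essential characteristic of the raters.
    dev_a = alice - alice.mean()
    dev_b = bob - bob.mean()
    similarity = (dev_a @ dev_b) / (np.linalg.norm(dev_a) * np.linalg.norm(dev_b))
    print(round(similarity, 3))  # strongly positive: the two deviate in similar ways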
The symbolic order is still sutured around the subject, constructed in
post-human or more-than-human imaginaries in a field of (co)relation. For
Chun, as for us, there can never be a more ‘real’ outside to this order (Chun,
2021, p. 183) without again affirming and suborning the subject. ‘The task
of ideology critique’ she states, following Žižek, ‘is thus not to make the
latent manifest… but rather to “discern the hidden necessity in what appears
as a mere contingency”’ (Chun, 2021, p. 183). The problem of critiques of
algorithmic governance and Big Data, with their post-human claims of
objectivity and of the planetary ‘real’, is the racializing reworking of
contingency as necessity, of the ‘real’ as a problem of knowledge or of
access to a more real ‘reality’ than is possible under modernist
universalizing abstractions or the ‘color-blind’ essentializations of the ‘New
Jim Code’. There can be no construction of a world through correlation that
is not an artifice, that does not force a relation of sameness where there can
only be difference. Chun argues that to avoid being suborned to new
versions of machine-imagined realities we need ‘to “unlearn” the
temporality and the colonial logics that undergird our archives’ (Chun,
2021, p. 254). ‘We need machine unlearning’ (Chun, 2021, p. 254).
Thinking with and beyond the work of Amaro, Amoore and Chun, the
Black Horizon mode of critique of Big Data focuses on the possibilities of
opacity, which allows us to consider the issue of race not merely as an
epistemological problem but to engage with it on an onto-epistemological level. If
we understand Blackness as a problem of being and knowing, then we can
engage with Big Data differently, refusing to attempt a ‘better’ approach to
data, which more accurately represents the ‘real’, and instead seeing the rise
of algorithmic governance as intensifying the ontological fissures at the
heart of modernity, which are intimately linked to the foundational role of
Blackness as a means to suture the ontological materials that make up the
world and the human. One way to think this through is by engaging with the
affinities between Big Data, as an object of opacity, and the Black(ened)
non-subject, as a plastic form of opaque non-being that helps to negotiate
the distance between the human and the non-human.
First, it is important to understand how the polymorphous nature of Big
Data is framed and articulated in the context of algorithmic governance, as
a problem both for knowledge and for societal governance. In an influential
paper on machine learning algorithms, Jenna Burrell argues that there are
three different types of opacity among the machine learning algorithms that
we encounter in the rapidly expanding world of technology, from search
engines and advertising to insurance and credit scoring (Burrell, 2016).
First, there is intentional secrecy, used by corporate or government actors to
shield their data gathering and analysis operations from public scrutiny or
from use by market competitors. The second mode of opacity is
driven by a lack of applicable knowledge of algorithms, or ‘technical
illiteracy’ (Burrell, 2016, p. 4). However, the third form of opacity is most
interesting for our purposes, as it concerns a growing level of
unknowability of algorithms, even to technical experts, due to their
escalating complexity and evolution through machine learning processes.
Burrell argues that:

The continuing expansion of computational power has produced certain
optimization strategies that exaggerate this particular problem of opacity as
the complexity of scale even further. With greater computational resources,
and many terabytes of data to mine (now often collected opportunistically
from the digital traces of users’ activities), the number of possible features
to include in a classifier rapidly grows way beyond what can be easily
grasped by a reasoning human.
(Burrell, 2016, p. 9)

Essentially, Burrell is arguing that the more sophisticated machine learning
becomes, and the longer these systems operate, the further they
move from compatibility with the types of representational systems or
semantics that are intelligible to people. Although Burrell does not fully
spell this out, there is a suggestion that there is something in the
constitution of these algorithms that makes them tend toward opacity, that
they are ontologically conditioned to unknowability. She suggests that:

…the opacity of machine learning algorithms is challenging at a more
fundamental level. When a computer learns and consequently builds its own
representation of a classification decision, it does so without regard for
human comprehension. Machine optimizations based on training data do not
naturally accord with human semantic explanations.
(Burrell, 2016, p. 10)

For Burrell, this represents a fundamental challenge for algorithmic
governance, legal regulation, computer scientists and technological experts,
a clarion call to drive new ways of circumventing this opacity. As with
much policy focused social science literature, Burrell views this issue as a
problem that must be fixed, an issue that should be mastered. However, we
wish to argue that the opacity of the algorithm may provide a way into a
Black Horizon critique, one which is not simply focused on knowing better,
or knowing in a way that eschews racialization.
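A small, invented illustration of the feature explosion Burrell points to (the ‘corpus’ is three made-up strings and the vectoriser settings are arbitrary): candidate features are generated mechanically from the data rather than chosen by an analyst, and even a toy input yields far more of them than anyone would name or inspect by hand.

    from sklearn.feature_extraction.text import CountVectorizer

    # Invented mini-corpus standing in for the "digital traces of users' activities".
    docs = [
        "booked a flight, liked three travel pages, searched visa rules",
        "streamed football, shared a meme, renewed a parking permit",
        "posted a recipe, joined a hiking group, paid a phone bill",
    ]

    # Character n-grams are produced from the data itself, not defined in advance;
    # systems built this way at scale reach millions of such machine-made features.
    vectoriser = CountVectorizer(analyzer="char_wb", ngram_range=(2, 5))
    X = vectoriser.fit_transform(docs)
    print(X.shape)  # (3 documents, several hundred automatically generated features)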
The concept of opacity in much of Black thought is identified with
Glissant's conception of opacity (Glissant, 1997), one which challenges the
modern will to know, and through that knowledge gain mastery. Thus, we
would suggest that a Black Horizon approach to Big Data seeks to amplify
the opacity of the algorithm, harnessing the ambiguity and unknowability of
this new world of data as a means to destabilize a knowable human world
of being or the post-human world of becoming rather than seeking to make
Big Data know better, or to find ways for it to grasp the ‘real’ of Black data
(Harney & Moten, 2021).

Conclusion
Petersmann and Van Den Meerssche ask ‘how, concretely, could the opacity
of be(com)ing be practiced and kept open against algorithmic foreclosures?’
(2024). Yet algorithmic imaginaries operate precisely upon the basis that the
‘opacity of be(com)ing’ can never be overcome and will always be open.
As Louise Amoore concludes, ‘There is no line of best fit to the world, and
there must be no border line of best fit’ (2024, p. 8). There may be a flight
from the individuated liberal subject, but as long as the world exists, new
lines of ‘best fit’ will continue to emerge; anti-Blackness is merely
reformulated rather than overcome. Yet, as suggested in the final part of this
chapter, perhaps beginning from the opacity of Big Data provides the
grounds for a sharper critique, which highlights the shakiness of the
foundations of governance and the anti-Blackness at its core.
While the planetary and critical race theory framings and critiques of Big
Data and algorithmic governance are distinct, the Black Horizon as a
perspective is sharply honed to dispute imaginaries of a fluid, immanent
ontology of co-constitutive becoming in which there is a view that
emancipated non-individuated (post)humans could become ‘with’ the world
rather than be forced to define themselves against it. This imaginary of
harmonious, resilient, self-organizing, post- or more-than-human, becoming
is precisely the danger that we see forewarned in discussion and debate
about algorithmic governance and Big Data. As we have demonstrated, it is
vital to attend to the maintenance of anti-Blackness, even in a world of
relational, subjectless becoming, as Louise Amoore argues. Moreover, the
very process of correlational world-making which is championed by
‘planetary’ approaches, and which some critical race theory perspectives
seek to subvert as a form of resistance against the violences of modernity, is
itself a form of violent world-making, as Wendy Chun demonstrated.
Instead, perhaps we should follow a Black Horizon path that attends to the
opacity of Big Data, not as a problem that must be fixed, but as a potential
way into a further deconstructive critique.
Note
1. Although for an exception see Ana Valdivia and Martina Tazzioli
(2023), who argue that datafication as a methodology has historically
been part of a racializing mode of governing the colony and the border.

References
Amaro, R. (2022). The black technical object: On machine learning and the
aspiration of black being. Sternberg Press.
Amoore, L. (2024). The deep border. Political Geography, 109, 102547.
https://doi.org/10.1016/j.polgeo.2021.102547
Beck, U. (1992). Risk society: Towards a new modernity. Sage.
Benjamin, R. (2019a). Race after technology: Abolitionist tools for the New
Jim Code. Polity.
Benjamin, R. (Ed.). (2019b). Introduction: Discriminatory design, liberating
imagination. In R. Benjamin (Ed.), Captivating technology: Race,
carceral technoscience, and liberatory imagination in everyday life
(pp. 1–22). Duke University Press.
Benjamin, R., & Roberts, D. (2019). Reimagining race, resistance, and
technoscience: A conversation with Dorothy Roberts. In R. Benjamin
(Ed.), Captivating technology: Race, carceral technoscience, and
liberatory imagination in everyday life (pp. 328–348). Duke
University Press.
Bratton, B. (2021). The revenge of the real: Politics for a post-pandemic
world. Verso.
Browne, S. (2015). Dark matters: On the surveillance of blackness. Duke
University Press.
Burke, A., Fishel, S., Mitchell, A., Dalby, S., & Levine, D. J. (2016). Planet
politics: A manifesto from the end of IR. Millennium: Journal of
International Studies, 44(3), 499–523.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in
machine learning algorithms. Big Data and Society, 3(1), 1–12.
Chun, W. H. K. (2021). Discriminating data: Correlation, neighbourhoods,
and the new politics of recognition. MIT Press.
Colebrook, C. (2022). Deleuze after Afro-pessimism. In C. Daigle & T. H.
McDonald (Eds.), From Deleuze and Guattari to posthumanism:
Philosophies of immanence. Bloomsbury Academic.
Ferreira da Silva, D. (2007). Toward a global idea of race. University of
Minnesota Press.
Glissant, E. (1997). Poetics of relation. University of Michigan Press.
Gray, C. (2023). More than extraction: Rethinking data's colonial political
economy. International Political Sociology, 17(2), 1–20.
Harney, S., & Moten, F. (2021). All incomplete. Minor Compositions.
Jackson, Z. I. (2020). Becoming human: Matter and meaning in an
antiblack world. New York University Press.
Lemke, T. (2021). The government of things: Foucault and the new
materialisms. New York University Press.
Lövbrand, E., Mobjörk, M., & Söder, R. (2020). The Anthropocene and the
geo-political imagination: Re-writing earth as political space. Earth
System Governance, 4, 100051.
Mayer-Schönberger, V., & Cukier, K. N. (2013). Big data: A revolution that
will transform how we live, work and think. John Murray.
Noble, S. U. (2018). Algorithms of oppression: How search engines
reinforce racism. New York University Press.
O’Callaghan-Gordoa, C., & Antó, J. M. (2020). COVID-19: The disease of
the Anthropocene. Environmental Research, 187, 109683.
Petersmann, M., & Van Den Meerssche, D. (2024). On phantom publics,
clusters and collectives – Be(com)ing subject in algorithmic times.
Special issue: The non-human turn in law. AI & Society, 39, 107–124.
Scannell, R. J. (2019). This is not minority report: Predictive policing and
population racism. In R. Benjamin (Ed.), Captivating technology:
Race, carceral technoscience, and liberatory imagination in everyday
life (pp. 107–129). Duke University Press.
Valdivia, A., & Tazzioli, M. (2023). Datafication genealogies beyond
algorithmic fairness: Making up racialised subjects. In 2023 ACM
conference on fairness, accountability, and transparency (FAccT’23),
12–15 June, Chicago (pp. 840–850). ACM.
https://doi.org/10.1145/3593013.3594047
Weheliye, A. (2014). Habeas viscus: Racializing assemblages, biopolitics
and black feminist theories of the human. Duke University Press.
Wynter, S. (2003). Unsettling the coloniality of being/power/truth/freedom.
CR: The New Centennial Review, 3(3), 257–336.
6
INTERNATIONAL RELATIONS IN THE “AGE OF TECHNOLOGIZATION”
Diego Brasioli*

DOI: 10.4324/9781003518495-9

AI: Between Opportunities and Dangers


In this third decade of the twenty-first century, we are witnessing an
unprecedented global revolution: machines are transforming the world as
we have always known it.
Artificial intelligence (AI) is now at the center of almost every debate on
a wide range of topics. When we consider the realms of research, applied
sciences, engineering, labor, education, healthcare, medicine, and politics,
the question of how new digital technologies can revolutionize each of
these fields seems inescapable.
The recent wave of public attention may come as a surprise. After all, the
concept of AI (the “intelligence” of machines combing through an immense
amount of data, so immense that it is almost unimaginable to human minds)
is not particularly new (Wooldridge, 2022), as it goes back to the famous
1950 test of British scientist Alan Turing (1912–1954), while the first
structured program in this specific discipline (the “Logic Theorist”) was
launched as early as 1956 by Allen Newell, Herbert A. Simon, and Cliff
Shaw at the RAND Corporation as a tool designed to mimic the
problem-solving skills of a human.
Since then, AI has developed significantly. However, the release of AI-
powered ChatGPT in November 2022 quickly sparked intense excitement
and unprecedented global interest. This “chatbot” captured everybody's
attention precisely because it almost perfectly simulates a conversation with
an omniscient interlocutor and can effortlessly write cover letters, prepare
resumes, pass school exams, take college entrance tests, and perform
complex tasks that previous generations could achieve only at the cost of
long hours of hard work.
Advances in computer technology are no longer the stuff of science
fiction movies, and the possibilities of applying AI in various fields are
easily accessible to everyone through a simple personal computer or
smartphone. This Fourth Industrial Revolution is in full swing before our
very eyes, with fast-paced advances in AI, robotics, the Internet of Things
(IoT), Web3, blockchain, 3D-printing, and genetic engineering, thus
bringing the physical, digital, and biological worlds together.
* Disclaimer: The views contained in this chapter are expressed in a
personal capacity and are not attributable to the Italian Ministry of Foreign
Affairs and International Cooperation.
AI systems can operate autonomous vehicles, smart buildings, medical
robots, intelligent digital personal assistants, and home gadgets, functioning
without any, or with limited, human guidance. All this is thanks to the
autonomous ability to process a very large amount of data at very high
speed.
Data has been referred to by British mathematician Clive Humby as the
“new oil” of the twenty-first century because of its capacity to lead to
breakthroughs in a variety of industries and fields (Arthur, 2017).
The analogy makes sense: data, akin to oil, is next to useless in its
unprocessed form, necessitating refinement, processing, and transformation
into a usable form; its worth is not solely inherent, but primarily rooted in
its potential.
This is leading all governments to take part in a new AI Race, a
phenomenon so important that Vladimir Putin, in his 2017 speech for the
first day of the new school year broadcast to all Russian schools, declared,
“Whoever becomes the leader in this sphere will become the ruler of the
world” (Meyer, 2017) and that Henry Kissinger enthusiastically devoted his
last studies to “The Age of AI: And Our Human Future” (Kissinger et al.,
2022).
AI, which can analyze and process vast amounts of data, is transforming
the way we think about productivity, jobs, and medical breakthroughs; it is
influencing warfare, deterrence, and the foreign policy decisions of major
economies. Undoubtedly, AI in international relations raises ethical and
regulatory issues. Because they are often fed data reflecting preconceived notions or
false premises, AI systems may exhibit bias, potentially leading to decisions
that could negatively impact international relations or the countries
involved. Additionally, malicious actors can misuse AI systems to falsify
statistics or spread false information. Governments and international
organizations must regulate and oversee AI usage to mitigate these risks,
through measures such as unbiased training data, transparency, and accountability.
The impact of AI on international relations raises questions about
whether conventional diplomacy will survive computational technologies
and how diplomacy can maximize AI's benefits while minimizing its
negative impacts on the world's political, socio-economic, and military
balance.
To understand the complex topic of AI and its impact on international
relations and to understand what the future holds for us from the
perspective of diplomatic craftsmanship in the face of new technological
advances, it may be useful to start this analysis by looking at one of the
oldest activities developed at the dawn of human civilization, in the distant
days of the Bronze Age: writing on clay.

Diplomacy and Technology


Diplomacy and technological progress are deeply interconnected, with the
rise of “proto-diplomacy” as one of the main human activities developed in
ancient history. Writing has always been the principal technology of
diplomacy, requiring the acquisition of appropriate tools and knowledge.
In the fourth millennium BC, the Sumerians created writing, marking the
transition from prehistory to ancient history. Around 1300 BC, Egypt and
the Hittites employed Akkadian, the Middle East's first diplomatic
language, in their treaties. Before this, messengers had to memorize the
contents of official letters, which significantly increased the risk of
inaccuracies.
In the late nineteenth century, archaeologists found the world's oldest
diplomatic archive, 382 clay tablets of official correspondence between the
Egyptian administration and neighboring kingdoms in Canaan and Amurru
(Berridge, 1990).
Since the dawn of time, technical improvements in human civilization
have accompanied diplomatic activities, improving their effectiveness, such
as in the case of the telegraph, radio, telephone, computer, and Internet.
Whenever mankind has experienced technological advances, there have
been those who have hastened to proclaim – only to be proven wrong by the
facts – the demise of diplomacy, a profession that indeed relies heavily on
tradition and custom but is also eager to embrace the opportunities offered
by new technological tools.
The rise of direct telephone communication between heads of state and
governments has not, as forecast by many, brought about the end of
traditional embassies and diplomatic missions, but rather has considerably
improved their operations. The rise of the Internet and social media has also
significantly impacted diplomacy, with digital innovations now playing a
major role in the profession, which, as it is essentially based on
communication, is heavily influenced by changes in information sharing
and social interaction (Bjola et al., 2019).
Dr. Katharina Höne, who focuses on the study of the interplay between
technology and international relations, has published intriguing and seminal
articles and academic papers on this concept, in particular drawing a
fascinating parallel with Schrödinger's thought experiment, from which I have
drawn much inspiration in writing this chapter.
work of a diplomat who can simultaneously engage in both a covert
negotiation and a public diplomacy initiative on the same subject as
representing a practical application of the principles of quantum physics,
which involve the simultaneous existence of a subject in two seemingly
contradictory states; information and communication technology (ICT)
enables increased transparency, public diplomacy efforts, and the
emergence of new diplomatic actors. However, the covert realm of
traditional diplomacy is still very much alive and needed (Höne, 2022).
The big, hi-tech game of data utilization opens up new frontiers in
classical diplomacy. In a digitally advanced world, computational
diplomacy can significantly improve its effectiveness by integrating the
science of complexity and insights from international relations (Bouffanais
& Lim, 2022).
Recent studies on neuroscience offer promising opportunities to apply the
latest results from scientific research to international relations, such as the
discovery of the mechanisms that regulate empathy through the functioning
of “mirror neurons,” as discovered in the early 1990s by researchers from
the University of Parma (Bonini et al., 2022).
And indeed, we owe to a neuroscientist, Nayef Al-Rodhan, one of the
most innovative contributions to recent advances in the theory of
international political doctrines. Thanks to his extensive research, Al-
Rodhan has developed a brilliant theory that seeks to resolve the long-
standing conflict between realism and liberalism through his original notion
of “symbiotic realism,” in which the human factor is fully taken into
account along with the interests of the single state (Al-Rodhan, 2007).

Gearing Up for Change


Advances in technology, such as AI, quantum computing, and digital
monitoring, are transforming the way individuals live, governments run
their nations, and power is exerted in the international sphere.
The United States of America and China are clearly positioned as the two
dominant superpowers in this field, but many other nations actively follow
them in developing AI investment programs (both in the industrial and
financial sectors), as evidenced by some of the most reputable studies on
the topic, including Stanford University's 2024 AI Index Report
(Artificial Intelligence Index, 2024).
It has been argued that a nation's ability to create and embrace new
technologies, or “innovation power,” will be decisive in defining the
outcome of the current competition between major powers.
According to a recent McKinsey study, the impact of generative AI on
global productivity is estimated at the equivalent of $2.6 trillion (Italy's entire
GDP is about $3.2 trillion). McKinsey also estimates that by 2030, AI could
contribute up to $15.7 trillion to the global economy, more than the current
output of China and India combined. Of this, $6.6 trillion is likely to come
from increased productivity and $9.1 trillion from consumption-side effects
(Manyika and Spence, 2023).
Diplomacy must adapt to this dynamic and complex environment of
growing technologization. Government agencies dealing with foreign
policy may want to incorporate blockchain technology into passport control
and border management, while machine learning can be considered as a
means of providing data to inform foreign policy decisions. In response to
the impending changes in foreign relations brought about by new
technologies, governments are beginning to act by taking specific actions.
In 2017, Denmark named the first tech ambassador in the world (Gramer,
2017), while the following year France appointed its country's first
ambassador for digital affairs (ambassadeur pour le numérique) (France
Diplomatie, n.d.), both of them with the mission of approaching Silicon
Valley as if it were a global superpower, to improve their governments’
connections with the world's largest tech companies.
These examples were then followed by many other countries, thus de
facto creating a new job profile in several foreign ministries across the
world, that of “techplomacy” (Norkunas, 2022).
In 2022, Barbados was the first country to establish a digital embassy in
the metaverse, the three-dimensional virtual world in which users
represented by avatars can interact. The Caribbean island struck an
agreement with Decentraland, a company that hosts a user-owned, 3D
digital world, comprised of 90,601 plots of “land” (Wyss, 2021).
The Luxembourg “e-embassies” initiative, designed to protect the
sensitive data of foreign countries from cyberattacks, is another intriguing
example of how traditional diplomatic tools and digital systems can
interact. The goal of this new international relations strategy is to have a
neighboring friendly country's data center store a third nation's sensitive
data, based on a bilateral agreement. Unlike a traditional embassy, an e-
embassy does not have a strictly diplomatic purpose. Rather, it is a cloud-
based data center that supports a government's e-governance network. The
foundation agreements between countries, based on the provisions of the
1961 Vienna Convention on Diplomatic Relations, regulate the hosting of
data with guarantees of immunities and privileges similar to those of a
traditional embassy. This is an example of how using a landmark
international convention can enable the development of an entirely new
idea in international law: just like classical embassies, these centers have
extra-territorial powers within their host countries (E-embassies in
Luxembourg, 2021).
Estonia's “data embassy” in the Grand Duchy of Luxembourg is a world
first, following the 2007 cyberattacks on the country (hailed by the
World Bank as the world's most advanced digital society), when hackers
succeeded in simultaneously downing 58 Estonian websites,
including those of the government, most newspapers, and several banks.
Although no sensitive information was compromised during the attack, the
event gave considerable impetus to the idea of ensuring the functioning of
services and the continuity of data via “data embassies” located outside the
country (Schulze, 2019). The formal agreement between Tallinn and
Luxembourg was signed in 2017, and since then an extension of the
Estonian government's cloud has been hosted at LuxConnect's certified Tier
IV data center (Rice, 2019).
Subsequently, in December 2018, the Grand Duchy and Monaco
signed a partnership agreement on digital innovation that included a section
on preserving sensitive information in a protected center with guarantees of
immunity in line with the model of Estonia's e-embassy. It is indeed very
problematic for a state covering a very small area of just a few square
kilometers to apply the internationally recommended cybersecurity standard
of 120 kilometers between two data storage locations. Since 2021, the
Grand Duchy has therefore hosted a digital twin of the Monegasque
sovereign cloud (Monaco a son e-embassy au Luxembourg, 2020).
The security of communications is another area where new technologies
show promise. In 2022 the French Embassy in Washington sent the first
diplomatic letter encrypted to withstand the power of future quantum
computers. This was the first “post-quantum” encrypted message ever, and
it included remarks about cooperation between the United States and France
in computer science, according to press releases (Potter, 2023). Experts in
digital security predict that quantum computers will eventually crack
existing cryptography techniques, raising the bar for future security
measures for diplomatic communications.
Despite the fact that traditional diplomacy and those new tools can
complement one another quite well, there is a widely held belief among
insiders that, notwithstanding the growing benefits of new technologies
(Buch et al., 2023), diplomacy will struggle in the future to abandon its
customary age-old methods (Yarhi-Milo & Christensen, 2023).
Once again, as at many other times in the history of international
relations, the challenge is to use the new tools made available by
technological advances to strengthen, rather than supplant, the classic
mechanisms of diplomatic craftsmanship.
According to the famous Italian diplomat Daniele Varè, the fundamental
nature of a skilled negotiator's ability has essentially “remained the same
throughout the ages. There is no arcana in diplomacy. Our arts are the
immemorial devices of the peasant in the marketplace, buying and selling
his products.” However, technological progress has made diplomacy
increasingly effective, he adds, clarifying that “diplomacy can, and
sometimes must, make use of new techniques” (Varè, 1938).
Diplomacy must not be afraid to fully utilize the opportunities presented
by technological advances while keeping its core asset: the human factor,
which will continue to prove decisive in international relations.

Plato's Cave, Prometheus’ Challenge, and “Quantum Diplomacy”
To understand the profound effects of AI on our very survival, it might be
instructive to look back to classical philosophy. In particular, we are helped
by Plato's famous allegory of the cave.
This myth takes up a small portion of the seventh book of Plato's “The
Republic” but it captures the essence of his entire philosophical system
(McAleer, 2020). It highlights the importance of education and the path of
progress that, once freed from bonds, leads to every person becoming
competent and fully responsible. Plato argues that the soul's innate
disposition determines the path to the world of ideas, while Socrates
believes every person is a philosopher, perceiving truth within themselves
(Brickhouse & Smith, 1994).
Plato is obsessed with the question of how we can acquire full
knowledge. He makes a distinction between “sensory knowledge” which is
obtained through the senses and addresses the world of opinions, beliefs,
etc.; and “intelligible knowledge” based on reasoning and scientific
thinking, that is, the unchangeable and certain parts of the universe: the former
is defined as “doxa,” which is uncertain and fluctuating, and the latter is
called “episteme,” which is certain and constant; the cave allegory describes
how the philosopher obtains full knowledge of concepts (Foucault, 1966).
Plato's cave allegory underscores the importance of politics in his
philosophy: it may be thought that his theory mainly deals with
epistemological matters, but the true emphasis of the entire formulation of
the doctrine of ideas is aimed at investigating the underlying ethical
dimensions of philosophy and the quest for methods to promote fairness
among individuals. According to the myth, men can only become genuine
philosopher-governors in the sensory world if they go back to the cave. The
allegory seeks to provide an answer to the tragic fate of Prometheus, the
demigod, who endeavors to serve as a mediator between the supreme deity
and the finiteness and fragility of mankind.
This primal myth, which has its origins in the quest for immortality and
freedom from the fear of death, is a central topic in the cultural traditions of
both the Western world and ancient Asia: references to a figure quite similar
to Prometheus may be found in various civilizations and traditions, such as
Sanskrit and Tamil (Pugazhendhi, 2021).
The myth of Prometheus highlights the fundamental goal of technology,
which is to improve the lives of people: “[Prometheus] found that the other
animals were suitably furnished, but that man alone was naked and shoeless
and had neither bed nor arms of defense. The appointed hour was
approaching when man, in his turn, was to go forth from the earth into the
light of day. And Prometheus, not knowing how he could devise man's
preservation, stole the wisdom of practicing the arts of Hephaestus and
Athena and fire with it and gave them to man. Thus man had the wisdom
necessary for the support of life” (Plat. Prot. 321) (Sisson, 1978).
Thus, Prometheus, driven by philanthropy, aims to save humanity by
empowering its unique “technological nature” (Perkins, 2015).
The character of Prometheus gained particular notoriety in the late
eighteenth and early nineteenth centuries, specifically during the Romantic
period, with writers such as Goethe and Schlegel. He was portrayed as the
one who saved humanity by giving it technology, some uses of which are
made possible by fire, an element that, interestingly, can be both a source of
survival as well as extremely hazardous if not handled with the utmost
caution. The contradiction is underlined in Aeschylus’ “Prometheus Bound”
where a pessimistic view predominates and the drastic inadequacy of
technology to truly help humanity is emphasized, as reflected in the tragic
fate of the demigod in his quest to free humanity from its inevitable
condition of mortality (Amore Mortis: La Condizione Umana e Il Dono Di
Prometeo; Fondazione Collegio San Carlo, 2006).
Digital technology challenges the duality between subject and object,
theory and practice, and software and hardware that characterizes Western
philosophy. As neuroscience and cognitive philosophy advance, we can
argue that man does not use his mind to understand reality, but rather, he
creates it in some way through his interaction with the environment. The
digital world is the place where this mechanism comes into existence,
where the person splits in two and becomes both the network's subject and
object (The Philosophy of Computer Science, Stanford Encyclopedia of
Philosophy, 2021).
Through the development of an enormous data processing capacity
generated by extremely fast computational systems capable of handling an
amount of information that is, if not infinite, certainly not intuitively
conceivable by the human mind, AI opens up astounding new scenarios for
the creation of innovative visions and interpretations of reality.
The dilemma of the liberated prisoner of Plato's cave becomes even more
pressing and urgent: how to distinguish between reality, the interpretation of
reality, and artificially generated reality? What are the interconnections
between these dimensions, and the practical, ethical, and political
consequences of this “Mundus Novus”? How to distinguish between reality,
hallucinations (Lajeune, 2023), and “artificially generated reality”? How
can we devise a way to navigate through what is true, plausible, and
coherent and what is based on totally or partially false content, created by a
computational, fully automated mechanism?
According to a 2022 Europol analysis, 90% of Internet material might be
created “synthetically” – that is, entirely or in part via the use of AI – by
2026 (Harrison, 2022). This tendency puts applications of AI in danger of
operating in an inaccurate or deceptive manner by causing them to consume
more and more “synthetic” material, which would incrementally lower the
accuracy of the results they provide to users.
An example of this is the case of Microsoft's Internet services platform,
MSN, which received extensive media coverage in December 2023 for
mistakenly recommending the food bank in Ottawa as a popular tourist
attraction for foodies (Teo, 2023).
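The feedback loop sketched above, in which systems increasingly ingest their own synthetic output, can be caricatured with a toy simulation (the Gaussian set-up and all numbers are invented for illustration and model no real system): a distribution is repeatedly re-fitted to samples drawn from its own previous fit, so that each generation inherits and compounds the sampling error of the last.

    import numpy as np

    rng = np.random.default_rng(42)

    # Original "human-made" data.
    data = rng.normal(loc=0.0, scale=1.0, size=200)

    for generation in range(1, 31):
        mu, sigma = data.mean(), data.std(ddof=1)   # "train" on the current data
        data = rng.normal(mu, sigma, size=200)      # next training set is synthetic
        if generation % 10 == 0:
            print(generation, round(mu, 3), round(sigma, 3))

    # The fitted parameters drift in a random walk away from the original (0, 1):
    # information about the original distribution is progressively lost.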
According to an experiment published in June 2023 by the magazine
Foreign Policy (Polakow-Suransky, 2023), as of right now, a trained eye can
still distinguish between an article authored by a human being and one
generated by a text production tool such as ChatGPT. An expert panel
evaluated the essay, the topic of which was “the origins and implications of
Russia's 2014 invasion of Crimea.”
The result of the experiment is that in practice, rather than writing an
essay in a fully creative way, ChatGPT limits itself to imitating the patterns
of human language, thus producing an intelligible text drawing from an
impressive mass of information available on the web and held together by a
sound logic yet without much originality: in a word, “spitting out” what it
considers the next most likely word to follow another in a sequence.
Frequently trained on static data sets, generative AI systems still lack the
context and nuance of human comprehension. Moreover, they struggle to
generate unique analytical content, and their tone may come across as
unclear or overly general, the article concludes.
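The “next most likely word” mechanism can be caricatured in a few lines of Python. This is emphatically not how ChatGPT is built (it relies on large neural networks trained on enormous corpora); it is a toy bigram model over an invented sentence, included only to make the idea of pattern continuation tangible.

    import random
    from collections import Counter, defaultdict

    # A tiny invented corpus; the principle of "pick a plausible next word given
    # what came before" is the same, even though real systems learn it neurally.
    corpus = ("the crisis in crimea reshaped european security policy "
              "the crisis forced european leaders to rethink security guarantees").split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def next_word(prev):
        candidates = bigrams.get(prev)
        if not candidates:          # dead end: fall back to a common starter
            return "the"
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts, k=1)[0]

    random.seed(0)
    text = ["the"]
    for _ in range(8):
        text.append(next_word(text[-1]))
    print(" ".join(text))  # fluent-looking, but purely pattern-driven output

Nothing in this toy “understands” Crimea or security; the output is held together only by observed word adjacencies, which is the pattern the Foreign Policy panel was probing at far greater scale.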
In essence, these instruments currently lack the capacity to accurately
depict distinct, authentic, or empathetic human cognition. However, it is
possible that, as the system continues to refine itself, improve its writing
style, and gather and analyze data, it will eventually become impossible for
even experienced readers to distinguish between human and machine
output. This will happen with every other application of AI, possibly
leading to paradoxical outcomes. Human reasoning and scenario prediction
are based on an examination of past evidence and intuitive understanding.
Weather forecasters can accurately and confidently anticipate rain and
storms by considering many parameters, such as wind, air pressure, sun
exposure, and relative humidity. However, what happens when these factors
are not relevant to the occurrence? In other words, could AI potentially
persuade people that, despite a clear sky, rain will fall heavily if we open
our umbrella? Is it possible for both situations – the computer-generated
one and the real one – to have similar odds or to occur at the same time?
The theory of physics, whether Newtonian or quantum, could provide
the solution. Newtonian thought can ignore paradoxes that do not align with
the established framework. However, in the realm of quantum systems,
paradoxes are an essential component of the process. They have the
potential to give rise to novel possibilities and even alternate universes that
were previously unforeseen. In this context, one cannot help but think of
Professor Schrödinger's cat as an example.
This is a well-known thought experiment – quite a sadistic one too, at
least from the point of view of the cat, we should say – proposed by the
Nobel prize-winning physicist Erwin Rudolf Josef Alexander Schrödinger
(1887–1961): a complex equation, illustrated through a sort of logical
provocation based on quantum theory, according to which, contrary to
Newton's classical physics, it is possible for a physical system to exist
simultaneously in two distinct states (Baker, 2023).
The experiment is quite detailed: in a box there is a cat, a vial of poison,
a radioactive source, and a mechanism that activates a hammer. If the
decaying radioactive source emits radiation, it could trigger a chain reaction
that activates the hammer, shattering the poison vial and ultimately leading
to the cat's death. All of this can happen, but it can also not happen, so the
likelihood that the cat will die is 50%, while the remaining 50% probability
is that nothing will happen to it. There are no other possibilities: either the
cat is alive or dead, and yet there is no way of knowing until we open the
box (since we are dealing with a pure thought experiment, the possibility of
trying to understand the fate of the poor feline – by beating on the box or
shaking it to hear any movements or meows – is not foreseen).
The experiment continues, and here is the interesting point: until we look
into the box, the cat will be in an “absurd state”; it will be “both alive and
dead.” Not only that: when we open the box, in a certain sense we will
“force,” that is, we will “cause,” the event, and therefore we will finally be
able to see if the unfortunate cat is alive or, alternatively, dead as a doornail.
But how is it possible that the cat, until we see it with our own eyes, is at
the same time both alive and dead? And more importantly, how does
actively looking at the contents of the box determine what state the animal
will be in?
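For readers who prefer the standard notation, the situation can be written compactly (a textbook rendering on our part, not a formula from Schrödinger's own paper): before the box is opened, the system sits in an equal superposition of the two outcomes, and opening it “collapses” the state onto one of them.

    \[
      \lvert \psi \rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert \text{alive} \rangle + \lvert \text{dead} \rangle\bigr),
      \qquad
      P(\text{alive}) = \bigl|\langle \text{alive} \mid \psi \rangle\bigr|^{2} = \tfrac{1}{2},
      \quad
      P(\text{dead}) = \tfrac{1}{2}.
    \]

The 50–50 odds in the story are simply the squared amplitudes of the two components; the act of observation selects one branch.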
Several decades after the conception of this experiment, some
researchers, wishing to illustrate a possible concrete application of quantum
theory, made special reference to an example drawn from the field of
international relations. This is perhaps peculiar, though ultimately not all
that surprising, given, again, the inherent propensity of diplomacy to use
and adapt the technological and cognitive tools provided by human progress
to the performance of its professional functions.
Rebecca Adler-Nissen, a professor at the University of Copenhagen,
presents a case study on the relationship between new ICT and diplomacy.
ICT promotes openness and public diplomacy, while conventional
diplomacy's covert realm remains vital. Negotiations often occur
simultaneously in both public and secret diplomacy, demonstrating the
validity of the quantum approach. We could refer to a diplomat who
performs these tasks simultaneously, complementing and enhancing each
other, as a “quantum diplomat.” This concept, evoked by former US
Secretary of State George Shultz, suggests that diplomacy can continue to
exist and even increase its relevance in a future dominated by AI (Adler-
Nissen & Eggeling, 2022).
Therefore, the practice of diplomacy has the potential to continue
existing and to become even more relevant provided it is prepared to accept
progress and make strategic use of new technologies in order to grow its
core business.

The Uncharted Territory of AI Dangers


AI is radically transforming the world at dramatic speed, generating both
skepticism and fear among the public and insiders. Despite its potential
benefits, many entrepreneurs, researchers, and commentators express
skepticism or even mistrust about the impact of AI on human life. While it
would not be entirely advisable to take an over-optimistic stance toward the
unlimited opportunities that AI opens up for our future, it would also not be
wise to indulge in unrestrained pessimism or adopt a Luddite attitude
toward new technologies. Yet, the fact remains that, starting right from the
field of politics and international relations, many believe AI's impact on
human life is far-reaching and unpredictable.
Yuval Noah Harari, author of best sellers such as "Sapiens: A Brief
History of Humankind," "21 Lessons for the 21st Century," and "Homo Deus:
A Brief History of Tomorrow," argues that AI has "breached" the operating
system of human civilization. He believes that new AI tools could threaten
the survival of human civilization from unexpected directions and that they
could influence culture by autonomously developing their own language,
which is a crucial component of human culture. Human rights, he points
out, are not naturally inherent in our DNA but rather are cultural artifacts
developed over time through narratives and laws, and democracy is
fundamentally a language based on meaningful discussions. Harari suggests
that if AI were to hack human language, it could potentially “kill
democracy” (Yuval Noah Harari Argues that AI has Hacked the Operating
System of Human Civilization, 2023).
To put it another way, many worry that we are on the verge of the
“technological singularity” (Cadwalladr, 2018), a potential time in the
future when technological development becomes unstoppable and
irreversible and causes unanticipated changes to human civilization. It is a
view that is as pessimistic as it is widely shared by many thinkers, such as
Miguel Benasayag, who argues that even the most important social,
political, and economic decisions are now inevitably linked to the linear
logic of machines and entrusted to calculations and data collection, to the
pseudo-rational management of a cause-and-effect relationship that does not
take into account the complexity of individual and social bodies, and thus
ends up undermining the foundations of our democracies (Benasayag,
2021).
Moreover, it has been pointed out that generative AI tools such as
ChatGPT, Bing Chat, and Google Bard suffer from a significant problem, as
they often tend to hallucinate: to formulate, in short, plausible and coherent
answers with totally false content (Santin, 2023). This may offer an
explanation for all the incorrect or distorted answers produced by AI itself,
due to problems in the interpretation of sources and data.
In an open letter published by the Center for AI Safety (CAIS), Sam
Altman, co-founder and CEO of OpenAI, together with authoritative personalities who
have contributed to the development of the new technologies, such as
Geoffrey Hinton and Yoshua Bengio, underlines the possibility that AI may
lead to nothing less than “the extinction of mankind”; hence the global
priority of mitigating this risk, on a par with limiting the proliferation of
weapons of mass destruction or the spread of pandemic contagion (Meyer,
J, 2023).
Are the consequences of AI so threatening? Or are the cries of alarm of
those who fear the consequences of new technologies, often without
knowing their deep mechanisms of functioning, exaggerated, or even
unfounded?
Indeed, some experts have expressed a more nuanced approach for the
future management of AI: alarmism is inappropriate, according to Dr.
Mariarosaria Taddeo, deputy director of the Digital Ethics Lab and senior
research fellow at the Oxford Internet Institute, University of Oxford. She argues
(Taddeo et al., 2021) that, similar to the Covid-19 no-vax movement,
arguments against AI are misleading and counterproductive because they
prevent a meaningful debate on governance and run the unjustified risk of
inciting an outright, aprioristic rejection of the new technologies. Yet, as
one delves deeper and deeper into studying the vast body of books, essays,
articles, and debates on AI, one cannot help but notice that the voices of
alarmed concern about the potentially dire consequences of new
technologies on the fate of humanity are clearly in the majority compared to
the optimists (especially when taking into account the possibility, in the not
too distant future, of arriving at Artificial General Intelligence – or Strong
AI – a totally autonomous system that surpasses human capabilities in the
majority of tasks).
Already in 2018, a comprehensive Chatham House Report, ominously
titled “Artificial Intelligence and International Affairs. Disruption
Anticipated,” warned of the imminent potential risks that could be caused
by AI in international relations, with destructive implications for the global
order (Artificial Intelligence and International Affairs, 2020).
Some of the most acute commentators have even gone so far as to
imagine catastrophic scenarios for humanity's very destiny. Among them,
we find not only eminent scientists such as Stephen Hawking (“The
development of full Artificial Intelligence could spell the end of the human
race… It would take off on its own and re-design itself at an ever-increasing
rate. Humans, who are limited by slow biological evolution, couldn’t
compete and would be superseded” (Luckerson, 2014)), but also – and this
is particularly disturbing – the very creators and managers of new
technologies, who believe that AI is far more hazardous than, say,
mismanaged aircraft design or bad car production, since it has the potential
to “destroy civilization” (Cheng, 2023).
Returning to Professor Schrödinger's experiment, it feels as though he
has finally unlocked the box containing his hapless captive feline. To his
dismay, he discovers that his nefarious experiment has transformed the
innocent animal into a ferocious Smilodon fatalis, a fearsome saber-toothed
tiger, resurrected who knows how from the ancestral mists of the
Pleistocene! We can just imagine the panic in his laboratory and in the city
when the fearsome animal begins to prowl, furiously attacking and tearing
apart anyone who draws near.
What should we do in the face of this eerie, dreadful development? What
measures should we take? Is it possible to capture this beast and perhaps, who
knows, subdue it for study for the benefit of mankind?
And above all, is it possible to prevent it from reproducing at the expense
of other unfortunate creatures, including humans, who might find
themselves in its path?
If a wild animal can be tamed and even a pandemic can be controlled
through containment measures, the discovery of appropriate drugs, and a
mass vaccination campaign, what would happen in the case of a technology
whose extreme consequences and self-generative mechanisms we will no
longer be able to comprehend?
After all, we cannot ignore AI or act as if it does not exist out of fear of
its implications. Hence the need to try to understand, regulate, and manage
it.

Taming the Demon: Diplomacy and AI's Future Challenges
Since AI is already a necessary part of our lives, we obviously cannot reject
its use, nor can we go back in time and pretend it never occurred. Rather,
we have to embrace and control the potential that this advanced and
unparalleled technology, together with its largely unpredictable changing
consequences, presents. Therefore, the task is to learn to live with it, reap its
advantages, and minimize, if not completely eradicate, its hazards.
As US Secretary of State Antony Blinken and Secretary of Commerce
Gina Raimondo have underlined, “the future of AI – whether it makes our
societies more or less equitable, unlocks breakthroughs, or becomes a tool
of authoritarians – is up to us. The question is not whether to use it, but
how" (Blinken & Raimondo, 2023).
This ambitious goal can be achieved by taking two parallel paths, one
paved by the norms of technical-legal regulation and the other by the appeal
to philosophical and ethical principles. The two are not alternatives but
complement each other, leading in the same direction and to the final
destination of a world that is better and no more dangerous than the one we
have known so far.
The first of these avenues, the most obvious, finds significant precedents
in the historical cases of WMD (weapons of mass destruction) control and
the promotion of a global agenda for sustainability.
Following World War II, the global community faced the unprecedented
task of averting a nuclear war that had the potential to annihilate humanity
and all living beings on Earth. The proposed solution was to establish
regulation using instruments designed to contain and control the situation.
Although the major international treaties governing nuclear, chemical, and
biological weapons have not led to the complete elimination of these
stockpiles or prevented their spread, they have contributed to maintaining a
general balance that, despite occasional setbacks, has so far proven to be
largely effective.
One could argue that the primary issue is avoiding a mindset of war and
competition between nations. Instead, the focus should be on rejuvenating a
“participatory approach,” examples of which, at the multilateral level, are
not lacking even in these turbulent times. Such is the case represented by
the effort that has enabled the entire international community to embrace a
shared project to agree on common sustainable development goals
enshrined in the principles of the 2030 Agenda, which was adopted in 2015
by all 193 United Nations (UN) member states and currently ranks among
the most important medium-term action programs for people, planet, and
global prosperity. While the actual results may vary, the adoption of this
Agenda signifies a common commitment among nations.
Following this perspective, the distinguished political scientist, author, and global risk entrepreneur Ian Bremmer and Mustafa Suleyman, the founder of several AI companies, have addressed the potential hazards of AI, advocating a prudent and responsible approach to effectively managing the geopolitical implications brought about by these emerging technologies.
They begin by acknowledging that technology companies currently possess greater authority over the monitoring and control of AI than governments. The reason is quite straightforward: it is exceedingly challenging to set regulations for a game one does not completely understand, which is the situation in which many governments find themselves. However, relying entirely on IT providers to oversee such products can only be a short-term solution, since for them the prioritization of profit generation over the protection of individuals, nations, or the environment is evident. After all, we should not expect energy corporations to willingly play a prominent role in initiatives aimed at addressing climate change.
Bremmer and Suleyman believe that it is essential to include common
sense in the management of new technologies in international relations in
order to effectively address the risks associated with the AI revolution.
They cite the power dynamics between the United States and China as an
example, emphasizing the need for Beijing and Washington to engage in
ongoing and meaningful discussions regarding emerging threats that affect
both nations and the global community. They suggest that they should
proactively address these challenges rather than waiting for an “AI version
of the Cuban missile crisis” to force them into true transparency in the
management of their rivalry. The conclusion is that each government must listen to the technologists on both sides, who, unlike others, fully understand the potential risks that need containment, in order to create an “AI control” agreement with mutual verification mechanisms (Bremmer and Suleyman, 2023).
Some significant international developments are moving in that direction.
On May 26, 2023, the G7 launched the “Hiroshima AI Process,” a forum
dedicated to harmonizing AI governance in line with the democratic values
of its members (G7 and Artificial Intelligence, n.d.), and AI has been
included as one of the core items on the agenda of the 2024 G7 Summit in
Italy.
On July 18, 2023, UN Secretary-General Antonio Guterres delivered a
vibrant speech before the Security Council, emphasizing the need to
urgently regulate AI. His intervention marks a significant step forward: for the first time, the topic of AI was discussed in the Security Council, despite the conflicts that divide its members today (Guterres Calls for AI ‘That Bridges Divides’, Rather Than Pushing Us Apart, 2023). On that occasion, the Secretary-General pushed for the creation of a high-level advisory body on AI, with the aim of establishing a UN agency dedicated to the topic – a request made most notably by OpenAI CEO Sam Altman, who suggested (Kang, 2023) that the International Atomic Energy Agency (IAEA) could serve as a model for the global coordination of AI governance.
As a result, in October 2023 the UN convened a multi-stakeholder High-level Advisory Body on AI to foster a globally inclusive approach,
undertake analysis, and advance recommendations for the international
governance of AI.
Its report “Governing AI for Humanity,” published in 2024, provides
guidelines for future actions in view of the possible adoption of universal
tools on AI governance.
The guidelines of the report were taken into account in the drafting of the “Global Digital Compact” (GDC), adopted during the Summit of the Future (United Nations, 2024a), a high-profile initiative held in New York in September 2024 and aimed at identifying new strategies to address crucial issues such as digital inequality, the protection of younger generations, and environmental sustainability for a fairer and more sustainable future.
The goal of the GDC is to lay the foundations for a global agreement on digital technologies, which, while representing a huge opportunity – not least for achieving the sustainable development goals set by the UN in the 2030 Agenda – can, if used improperly, fuel divisions between states, undermine human rights, and increase inequalities. To maximize the benefits of digital transformation while reducing its risks, international cooperation is therefore indispensable for establishing shared principles to build an open, free, and secure digital future for all. The main goals listed in the GDC are to: connect all people to the Internet, including all schools in the world; avoid the fragmentation of the Internet (i.e., prevent individual states from blocking or limiting their citizens’ access to it, or from creating national networks isolated from the rest of the digital world); ensure the protection of personal data; apply human rights also to the digital dimension; introduce accountability criteria in order to combat discrimination and misleading content; ensure that the digital commons are treated as a general public good; and promote the regulation of AI. The creation of a scientific panel to conduct relevant studies and analyses is also envisaged.
Also within the framework of the UN, in March 2024 the General
Assembly unanimously approved a resolution promoted by the United
States (United Nations, 2024b) on the subject of promoting safe, secure and
trustworthy AI systems. The following July, the General Assembly adopted
a further resolution, sponsored by China (United Nations, 2024c) urging the
international community to create a free, open, inclusive and non-
discriminatory business environment between wealthier and developing
nations. Both resolutions represent a good common basis for the
international community to work together for the advancement of AI.
An example of effective international cooperation on this matter is represented by the document “Guidelines for secure AI system development,” endorsed by agencies from 18 countries following the AI Safety Summit held in early November 2023 at Bletchley Park (UK), which brought together government leaders, scientists, and representatives of the private sector: perhaps the embryo of a coalition of cybersecurity and
intelligence agencies, together with technology companies and research
organizations. The document offers advice on the design, development,
deployment, and secure operation and maintenance of AI systems. By
applying cybersecurity concepts such as “threat modelling,” “supply chain
security,” and “incident response,” it aims to encourage AI developers and
users to prioritize security issues.
These examples seem to demonstrate, in our view, that any normative
effort must go hand in hand with the development of shared agreements on
the promotion of fundamental rights, such as those contained in the
Universal Declaration of Human Rights and other important international
instruments.
For this to happen, however – and this is the second path to be taken – it is essential to address the problem from a totally fresh philosophical perspective.
This is a concept expressed with exemplary clarity by Henry Kissinger in
an interview with Time Magazine, released in 2021, just after the publication of his latest book, dedicated precisely to AI: “In the
Enlightenment, there was a conceptual world based on faith. And so Galileo
and the late pioneers of the Enlightenment had a prevailing philosophy
against which they had to test their thinking. You can see how that thinking
has evolved. We live in a world that, in effect, has no philosophy; there is
no dominant philosophical view. The technologists can run wild. They can
develop world-changing things, but there's nobody there to say, ‘we’ve got
to integrate this into something.’ (…) When I was a graduate student,
nuclear weapons were new. And at that time, a number of concerned
professors at Harvard, MIT and Caltech met most Saturday afternoons to
ask, ‘What is the answer? How do we deal with it?’ And they came up with
the arms-control idea. Similarly, also on AI, one of the major points is that
we think there should be some philosophy to guide the research”
(Luscombe, 2021).
Eric Schmidt, former CEO of Google, adds in the same interview that, to have AI regulated correctly, it is important to build a philosophical framework: a set of understandings of where the limits of this technology should go. The only way to make this happen is to have the scientists and the policy people working together.

From Artificial Intelligence to Artificial Wisdom?


Developing a Value-Oriented Governance Strategy
A great challenge now faces the international community: developing a system of ethical principles applicable to an instrument, AI, that is essentially computational – based on the mechanistic processing, at extraordinary speed, of a theoretically infinite amount of data and information. How do we reconcile this with the fact (Mazzi & Floridi, 2023) that AI represents an “unprecedented divorce between agency and intelligence”?
And the other big questions, related to the previous one, are: when will
we draft an AI Bill of Rights? What will it entail? Who is going to decide?
Statements such as “Artificial Intelligence is both good and bad” are
often heard, and AI can indeed be used to improve our lives in a myriad of
ways, but it could also lead to huge ethical and legal problems, as well as
real, catastrophic, and irreparable damage.
Many academics had essentially assumed that both AI and brain function could be characterized using axiomatic algorithms. However, according to the latest studies by authors like Professor Yejin Choi of the University of Washington, one important factor continues to elude AI and will continue to do so for some time to come: common sense – the capacity to intuit effectively, to read between the lines of specific situations, to find plausible explanations for partial observations, and to complete the missing pieces of the puzzle, which allows us to adapt to the social environment in which we live. AI, in fact, transforms and connects a huge mass of data, but it does not know what it is doing or why. Simply put, it lacks awareness.
“Many open questions remain, including the computational mechanisms
for ensuring the consistency and interpretability of commonsense
knowledge and reasoning,” Choi wrote in the conclusion of her paper
(Choi, 2022). Here again, the field of neuroscience comes in handy: the
research group directed by Dr. Choi is attempting to recreate the behavioral
models of empathy based on the mechanisms of “mirror neurons.”
Similarly, regulating AI means following a “commonsense approach”
and establishing a set of ethical principles on the basis of which it must
operate, beyond simple moralistic evaluations. Morality (a set of principles,
values, and cultural beliefs that define the actions that are considered
acceptable in a specific society and historical context at a given time)
differs deeply from ethics, essentially because the latter is a system of
values and principles that universally define the actions that are considered
correct or incorrect in all societies.
The principles of ethics in the field of AI (O’Donnellan, 2022) can be
substantially applied through four main elements: autonomy (the user must
be able to consciously choose the use of AI); justice (AI must be used fairly,
i.e., it must be free from prejudice and discrimination); beneficence (AI
must be used for the benefit of the user); and non-maleficence (based on the
ethical principle of “non nocere”: AI must not harm the user).
These elements essentially serve as the foundation for developing the
various models of AI regulation.
UNESCO has adopted a specific recommendation on this matter (Ethics
of Artificial Intelligence, 2023) and the Council of Europe, after intense
negotiation among the member states that started as early as 2019, on May
17, 2024 adopted its Framework Convention on Artificial Intelligence and
Human Rights, Democracy and the Rule of Law. Eleven observer states
(Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan,
Mexico, Peru, the United States of America, and Uruguay) may also ratify
the convention, which is the first global treaty on the ethical use of AI, with
its provisions being legally binding for all states that adhere to it (Artificial
Intelligence, n.d.).
While in the various countries mentioned above there is a certain
uniformity in identifying principles on the ethical implications of AI for our
lives, this is unlikely to be found at the regulatory level, where other factors
(especially political, legal, and cultural ones) come into play and where, as
in cyberspace, competition might prevail over a cooperative approach.
As was most effectively illustrated in a Brookings Institution study,
Europe, for example, is adopting a holistic governance regime for AI
(Larsen, 2022): the EU has been at the forefront of data regulation and AI
governance, starting with the General Data Protection Regulation (GDPR),
which came into force in 2018, setting a significant precedent on the matter.
In December 2023, a political deal was reached by the European
Parliament and the EU Council on a bill to ensure AI in Europe is safe,
respects fundamental rights and democracy, and businesses can thrive and
expand. The regulation was officially adopted by the EU Parliament in
March 2024, and it aims to ensure that fundamental rights, democracy, the
rule of law, and environmental sustainability are protected from high-risk
AI, while boosting innovation and making Europe a leader in the field. The
rules establish obligations for AI based on its potential risks and level of
impact. The regulation introduces safeguards on general purpose AI;
limitations on the use of biometric identification systems by law
enforcement; bans on social scoring and AI used to manipulate or exploit
user vulnerabilities; and measures to protect the right of consumers to make
complaints and receive meaningful explanations. Non-compliance with the
rules can lead to fines ranging from 7.5 million Euro or 1.5% of turnover, to
35 million Euro or 7% of global turnover, depending on the infringement
and size of the company (EU Artificial Intelligence Act, n.d.).
The US approach is more lightweight, characterized by the idea that
companies, in general, should retain control of the industry's development
and governance criteria. However, the American administration and Congress have demonstrated that they are not indifferent to the challenges posed by AI. With the executive order on AI signed in October 2023, which followed voluntary commitments on AI made by 15 big tech companies at the administration's request in July, President Biden, among other things, required the Department of Commerce to develop guidelines for authenticating and identifying AI-generated content, encouraged Congress to adopt bipartisan data protection legislation, and promoted studies, principles, and good practices to minimize the risks and maximize the benefits of AI for workers.
the eve of the AI summit hosted by the United Kingdom, President Biden
reiterated the administration's commitment to collaborate with allies and
partners to develop AI globally around the three cornerstone principles of
“safety, security, and trust.” In Congress, Senate Majority Leader Chuck
Schumer (Ryan-Mosley, 2023) initiated a process of closed-door
consultation sessions with industry experts in September 2023 (the “AI
Insight Forum,” attended by OpenAI CEO Sam Altman, X's Elon Musk and
Google's Sundar Pichai) aimed at developing legislation on the topic.
The activity of the legislators has so far been more prolific at the
individual state level than at the federal level, with around 200 regulatory
proposals introduced in 2023 (compared to 50 in 2022), according to
specialized media estimates. California, as often happens in matters of technology, is the most active US state; following its lead, as well as that of European legislation, other states, especially Democratic-led ones, have been inspired to put forward legislative proposals focused on personal data protection, the fight against deepfakes, the use of AI by administrations, and its use in the workplace in general.
difficulties and possible dangers posed by AI, it seems inevitable that the
United States will regulate this growing field more strictly. However, as is
always the case with emerging technologies that have the potential to
completely alter the economy, there will always be a conflict between the
need to control the industry and the desire not to impede its growth. Yet,
according to a November 2023 survey by the online publication Axios,
Americans do not believe that regulation of AI is urgent. Out of 15 priority
issues, including immigration, healthcare reform, preventing government
shutdowns, reducing the federal budget deficit, and economic stimulus, they
ranked AI 11th (Buchanan, 2023). Therefore, it is likely that Congress will postpone the adoption of federal AI legislation until after the November 2024 presidential elections.
On the other hand, China's approach to AI governance has been
described as “budding” (Zhang, 2023), and is progressively developing
following the adoption of its AI strategy in 2017 and the implementation of
the Personal Information Protection Law in 2021. In essence, China's
regulation of recommendation algorithms goes far beyond the digital space,
dictating the type of behavior that the central government considers
favorable or otherwise in society.
Chinese President Xi Jinping has stressed the need to reach a broad international consensus on rules regulating the matter. With the Global AI Governance Initiative (GAIGI), a plan put forward by Beijing in November 2023 advocating an open and equitable environment for AI development, China is ready to “promote the safe development of AI,” he said. The initiative does outline some of China's goals for regulating
new technologies, but it is still more of a collection of abstract suggestions
than specific policy recommendations at this point. Similar to other
governments, China is apprehensive about the potential of AI to generate
novel hazards, such as disseminating a deluge of false information,
strengthening detrimental biases, and disrupting work trends (Tiezzi, 2023).
Whatever the different solutions proposed by governments, there is
hardly any doubt that they will have a very profound impact on citizens’
lives in various ways. As a result, it is necessary to regulate the use of this
technology in accordance with fundamental rights.
As Nayef Al-Rodhan has pointed out (What Are the Ethical Implications
of Emerging Tech?, 2020), “as always, we must welcome innovation and
the benefits it brings us. But we must also remain committed to sustainable
development, taking into account issues of inequality, human dignity, and
inclusiveness; emerging technologies also remind us of the importance of
political commitment. Some technologies might progress independently of
political support. But good governance, examinations of dual-use risks, and
ethical considerations must still remain guiding posts at all times.
Ultimately, how we approach the regulation of emerging technologies will
inevitably have wide implications—not only for security and ethics, but for
our definition of human dignity and equality of individuals.”
The evolving regulations on AI highlight the need for international
institutions to monitor risks and anticipate new data processing
applications. We must find common ground among different provisions to
ensure the effective implementation of regulatory actions.
International stakeholders can achieve success by emulating the global
strategy for sustainability and climate change. For example, the UN created
the Intergovernmental Panel on Climate Change (IPCC) to establish a
shared knowledge foundation for discussions on this matter. Establishing a comparable advisory council that regularly evaluates the status of AI, impartially assesses its potential hazards and impacts, speculates on conceivable scenarios, and explores technology policy solutions to protect the common good might be highly beneficial.
IPCC, this organization would possess worldwide recognition and scientific
(as well as geopolitical) autonomy. Its reports could inform multilateral
negotiations on AI, similar to how the reports from the IPCC inform UN
climate negotiations.
While this consultative strategy alone will certainly not be sufficient to
definitively resolve the many challenges posed by new technological
developments, it will at least contribute to establishing the basis for
international cooperation and help reach consensus on the nature of the
problem. This process will also require the active participation of public
policy officials representing the majority of the UN member states,
scientists, technologists, civil society representatives, private sector
delegates, and individual activists, to address the potential threats posed by
AI.
A regulatory regime, modeled on the one that helped contain a dangerous
arms race during the Cold War, could be set up on the basis of the shared
information gathered by this body, to contain a potentially even more
dangerous AI Race. The creation of an “Artificial Intelligence Agency”
(modeled on the IAEA, which promotes the safe and peaceful use of
nuclear technology) could also be envisaged.
Whatever co-operative model is adopted to address the issue of AI
management, government agencies, business, civil society, and academic
networks all have a fundamental and indispensable role to play in
developing a common set of principles on which to build a comprehensive
and universally accepted regulatory system.
The conventional tool of diplomacy – which has often been mistakenly
written off as outdated, especially whenever new technological
advancements have taken place – may actually prove to be quite helpful in
addressing the threats that AI poses to military and geostrategic balances, as
well as in promoting an intense intellectual discourse at global level that
can lead to the discovery of common values and the establishment of
common standards.
It will take a considerable international negotiating effort and, above all,
a lot of common sense to decide on and implement a regulation of AI that is
truly global and based on values and standards that are genuinely shared by
all, in order to make the rugged terrain of technological development less
challenging and more fruitful and beneficial to mankind.
If the goal is to prevent malicious (private or public) actors from using AI
for their own wicked purposes, and governments from establishing
“algocratic” systems (Tsai, 2020), where decisions are no longer made by
democratic processes but by fully automated applications independent of
human will, this can only be achieved through a process (Sinha &
Lakhanpal, 2023) designed to ensure that AI gradually develops into a form
of “Artificial Wisdom” (AW).
To identify fundamental principles, objectives, and short-, medium-, and
long-term implementation strategies for the adoption of a common agenda
for the management of AI, politicians, scientists, philosophers, civil society
representatives, researchers, entrepreneurs, and industrial developers should
engage in a dialogue process. This process could ultimately take the form of
an inclusive diplomatic conference, if necessary, preceded by extensive
preparatory work on a regional basis, with timely and thorough sharing of
results as they are attained.
Diplomacy is called on to promote the establishment of these high-level
international talks with the goal of eventually coming to an agreement on
the best means of preventing new technologies – particularly if and when
AI develops into General or Super Intelligence – from overwhelming
humanity and instead assisting in the making of decisions that are supported
by reliable sources, certified data processing, and, most importantly, a
value-oriented approach.

References
Adler-Nissen, R., & Eggeling, K. A. (2022). Blended diplomacy: The
entanglement and contestation of digital technologies in everyday
diplomatic practice. European Journal of International Relations,
28(3), 640–666. https://doi.org/10.1177/13540661221107837
Al-Rodhan, N. (2007, January 1). Symbiotic realism.
http://books.google.ie/books?
id=nTEiAQAAIAAJ&q=al+rodham+symbiotic&dq=al+rodham+symb
iotic&hl=&cd=1&source=gbs_api
Amore mortis: la condizione umana e il dono di Prometeo; Fondazione
Collegio San Carlo. (2006, September 14). Fondazione Collegio San
Carlo. https://www.fondazionesancarlo.it/conferenza/amore-mortis-la-
condizione-umana-e-il-dono-di-prometeo/
Arthur, C. (2017, December 1). Tech giants may be huge, but nothing
matches big data. The Guardian.
https://www.theguardian.com/technology/2013/aug/23/tech-giants-data
Artificial Intelligence. (n.d.). Committee on Artificial Intelligence (CAI).
https://www.coe.int/en/web/artificial-intelligence/cai
Artificial Intelligence and International Affairs. (2020, December 11).
Chatham House – International Affairs Think Tank.
https://www.chathamhouse.org/2018/06/artificial-intelligence-and-
international-affairs
Artificial Intelligence Index. (2024, April). AI Index Report 2024.
https://aiindex.stanford.edu/report/
Baker, J. (2023, May 5). 50 quantum physics ideas you really need to know.
Hachette UK. http://books.google.ie/books?
id=vxeWEAAAQBAJ&pg=PT86&dq=Baker±J.
±50±Quantum±Physics±Ideas±You±Really±Need±to±Know.
±Hachette±UK%3B±2023.&hl=&cd=1&source=gbs_api
Benasayag, M. (2021, May 18). The tyranny of algorithms. Europa
Compass. http://books.google.ie/books?
id=e8G9zQEACAAJ&dq=Benasayag±M.
±The±Tyranny±of±Algorithms.
±Europa±Editions±UK%3B±2021&hl=&cd=1&source=gbs_api
Berridge, G. (1990, January 1). Amarna diplomacy.
http://books.google.ie/books?
id=0UeXtAEACAAJ&dq=berridge+amarna+diplomacy&hl=&cd=2&
source=gbs_api
Bjola, C., Cassidy, J., & Manor, I. (2019). Public diplomacy in the digital
age. The Hague Journal of Diplomacy, 14(1–2), 83–101.
https://doi.org/10.1163/1871191x-14011032
Blinken, A., & Raimondo, G. (2023). To shape the future of AI, we must act
quickly. Financial Times. https://www.ft.com/content/eea999db-3441-
45e1-a567-19dfa958dc8f
Bonini, L., Rotunno, C., Arcuri, E., & Gallese, V. (2022, September).
Mirror neurons 30 years later: Implications and applications. Trends in
Cognitive Sciences, 26(9), 767–781.
https://doi.org/10.1016/j.tics.2022.06.003
Bouffanais, R., & Lim, S. S. (2022, September 13). Computational
diplomacy - The science of an art?. Lowy Institute.
https://www.lowyinstitute.org/the-interpreter/computational-
diplomacy-science-art
Bremmer, I., & Suleyman, M. (2023, October 16). The AI power paradox:
Can states learn to govern artificial intelligence—Before it's too late?
Foreign Affairs. https://www.foreignaffairs.com/world/artificial-
intelligence-power-paradox?
utm_medium=newsletters&utm_source=twofa&utm_campaign=The%
20AI%20Power%20Paradox&utm_content=20230818&utm_term=FA
%20This%20Week%20-%20112017
Brickhouse, T. C., & Smith, N. D. (1994, January 1). Plato's Socrates.
Oxford University Press. http://books.google.ie/books?id=-
JQtKIrFuq0C&printsec=frontcover&dq=Brickhouse±TC,±Smith±ND.
±Plato%E2%80%99s±Socrates.±Oxford±University±Press,
±USA%3B±1994&hl=&cd=10&source=gbs_api
Buch, A. M., Eagleman, D. M., & Grosenick, L. (2023, July 27).
Engineering diplomacy: How AI and human augmentation could
remake the art of foreign relations. Science & Diplomacy.
https://www.sciencediplomacy.org/perspective/2022/engineering-
diplomacy-how-ai-and-human-augmentation-could-remake-art-foreign
Buchanan, J. (2023, November 10). Axios survey of Americans on AI
regulation. Economist Writing Every Day.
https://economistwritingeveryday.com/2023/11/11/axios-survey-of-
americans-on-ai-regulation/
Cadwalladr, C. (2018, March 22). Are the robots about to rise? Google's
new director of engineering thinks so…. The Guardian.
https://www.theguardian.com/technology/2014/feb/22/robots-google-
ray-kurzweil-terminator-singularity-artificial-intelligence
Cheng, M. (2023, May 2). The biggest warnings about AI from the people
who design it. Quartz. https://qz.com/the-biggest-warnings-about-ai-
from-the-people-who-desig-
1850391426#:∼:text=%E2%80%9CAI%20is%20more%20dangerous
%20than,an%20interview%20with%20Tucker%20Carlson
Choi, Y. (2022, April 14). The curious case of commonsense intelligence.
American Academy of Arts & Sciences.
https://www.amacad.org/publication/curious-case-commonsense-
intelligence
E-embassies in Luxembourg. (2021, January 10). Luxembourg.
https://luxembourg.public.lu/en/invest/innovation/e-embassies-in-
luxembourg.html
Ethics of Artificial Intelligence. (2023, December 1). UNESCO.
https://www.unesco.org/en/artificial-intelligence/recommendation-
ethics
EU AI Act - EU Artificial Intelligence Act. (n.d.). https://www.euaiact.com/
Foucault, M. (1966, January 1). Les mots et les choses. Editions Gallimard.
http://books.google.ie/books?id=gX09u5eEYA4C&q=Foucault±M.
±Les±mots±et±les±choses.&dq=Foucault±M.
±Les±mots±et±les±choses.&hl=&cd=1&source=gbs_api
France Diplomatie. (n.d.). France Diplomatie - Ministère De L’Europe Et
Des Affaires Étrangères. https://www.diplomatie.gouv.fr/fr/politique-
etrangere-de-la-france/diplomatie-numerique/la-mission-de-l-
ambassadeur-pour-le-numerique/
G7 and Artificial Intelligence. (n.d.). Center for AI and digital policy.
https://www.caidp.org/resources/g7-japan-2023/
Gramer, R. (2017, January 27). Denmark creates the world's first ever
digital ambassador. Foreign Policy.
https://foreignpolicy.com/2017/01/27/denmark-creates-the-worlds-
first-ever-digital-ambassador-technology-europe-diplomacy/
Guterres calls for AI ‘that bridges divides’, rather than pushing us apart.
(2023, July 19). UN News.
https://news.un.org/en/story/2023/07/1138827
Harrison, M. (2022, September 18). Experts: 90% of online content will be
AI-generated by 2026. Futurism. https://futurism.com/the-
byte/experts-90-online-content-ai-generated
Höne, K. (2022, August 7). Quantum diplomacy - Ideas from the other side
of the looking glass? Diplo. https://www.diplomacy.edu/blog/quantum-
diplomacy-ideas-other-side-looking-glass/
Kang, C. (2023, May 16). Sam Altman, ChatGPT Creator and OpenAI
CEO, urges senate for AI regulation. The New York Times.
https://www.nytimes.com/2023/05/16/technology/openai-altman-
artificial-intelligence-regulation.html
Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2022, November 1). The
age of AI. Back Bay Books. http://books.google.ie/books?
id=WAb4zgEACAAJ&dq=Kissinger±HA,±Schmidt±E,
±Huttenlocher±D.±The±Age±of±A.I.
±and±Our±Common±Future%3B±John±Murray±Press,
±2021&hl=&cd=1&source=gbs_api
Lajeune, G. (2023). AI can hallucinate too: What are the dangers, and how
can we protect ourselves? | Blog - HUMAN Protocol.
https://www.humanprotocol.org/blog/ai-can-hallucinate-too-what-are-
the-dangers-and-how-can-we-protect-ourselves
Larsen, B. C. (2023, June 24). The geopolitics of AI and the rise of digital
sovereignty. Brookings. https://www.brookings.edu/articles/the-
geopolitics-of-ai-and-the-rise-of-digital-sovereignty/
Luckerson, V. (2014, December 2). 5 very smart people who think artificial
intelligence could bring the apocalypse. TIME.
https://time.com/3614349/artificial-intelligence-singularity-stephen-
hawking-elon-musk/
Luscombe, B. (2021, November 5). Henry Kissinger's last crusade:
Stopping dangerous AI. TIME. https://time.com/6113393/eric-schmidt-
henry-kissinger-ai-book/
Manyika, J., & Spence, A. M. (2023, October 30). The coming AI
economic revolution: Can artificial intelligence reverse the
productivity slowdown? Foreign Affairs.
https://www.foreignaffairs.com/world/coming-ai-economic-revolution
Mazzi, F., & Floridi, L. (2023, June 2). The ethics of artificial intelligence
for the sustainable development goals. Springer.
http://books.google.ie/books?
id=r6t0zwEACAAJ&dq=floridi±ethics±AI&hl=&cd=4&source=gbs_a
pi
McAleer, S. (2020, November 6). Plato's “Republic.”
http://books.google.ie/books?id=FrAJzgEACAAJ&dq=McAleer±S.
±Plato%E2%80%99s±%E2%80%9CRepublic%E2%80%9D:
±An±Introduction.
±Open±Book±Publishers%3B±2020&hl=&cd=1&source=gbs_api
Meyer, D. (2017, September 4). Vladimir Putin says whoever leads in
artificial intelligence will rule the world. Fortune.
https://fortune.com/2017/09/04/ai-artificial-intelligence-putin-rule-
world/
Meyer, J. (2023, June 1). AI poses risk of extinction, tech leaders warn in
open letter. Here's why alarm is spreading. USA TODAY.
https://eu.usatoday.com/story/news/politics/2023/05/31/ai-extinction-
risk-expert-warning/70270171007/
Moessner, R., & Allen, W. A. (2010). Banking crises and the international
monetary system in the great depression and now. SSRN Electronic
Journal. https://doi.org/10.2139/ssrn.1721664
Molnár, L. (2014, October 22). A step beyond AI: Artificial philosophy.
ResearchGate. https://doi.org/10.13140/2.1.1124.6085
Monaco a son e-embassy au Luxembourg. (2020, June 25). Paperjam.
https://paperjam.lu/article/monaco-a-son-e-embassy-au-luxe
Norkunas, A. (2022, February 2). What is techplomacy—And is it actually
needed? Forbes.
https://www.forbes.com/sites/forbestechcouncil/2022/02/02/what-is-
techplomacy—and-is-it-actually-needed/
O’Donnellan, R. (2022, October 28). AI ethics: What are its key
principles?. Intuition. https://www.intuition.com/ai-ethics-what-are-its-
key-principles/
Perkins, M. (2015, January 20). From Homo sapiens to Homo
technologicus. The Stanford Daily.
https://stanforddaily.com/2015/01/19/from-homo-sapiens-to-homo-
technologicus/
Polakow-Suransky, S. (2023, August 1). Can ChatGPT explain geopolitics?
Foreign Policy. https://foreignpolicy.com/2023/06/16/chatgpt-ai-
geopolitics-artificial-intelligence-russia-ukraine-crimea/
Potter, J. (2023, December 7). France sends first post-quantum encrypted
diplomatic message. https://www.iotworldtoday.com/industry/france-
sends-first-post-quantum-encrypted-diplomatic-message
Pugazhendhi, D. (2021, July 30). Greek, Tamil and Sanskrit: Comparison
between the Myths of Prometheus, Sembian and Sibi. Athens Journal
of Philology, 8(3), 157–180. https://doi.org/10.30958/ajp.8-3-1
Rice, N. F. (2019, October 10). Estonia's digital embassies and the concept
of sovereignty. Georgetown Security Studies Review.
https://georgetownsecuritystudiesreview.org/2019/10/10/estonias-
digital-embassies-and-the-concept-of-sovereignty/
Ryan-Mosley, T. (2023, September 25). An inside look at Congress's first
AI regulation forum. MIT Technology Review.
https://www.technologyreview.com/2023/09/25/1080104/inside-
congresss-first-ai-insight-forum/
Santin, F. (2023, June 5). OpenAI e il problema delle allucinazioni: come
risolverlo? IlSoftware.it. https://www.ilsoftware.it/openai-problema-
allucinazioni-come-risolverlo/
Schulze, E. (2019, February 8). How a tiny country bordering Russia
became one of the most tech-savvy societies in the world. CNBC.
https://www.cnbc.com/2019/02/08/how-estonia-became-a-digital-
society.html
Sinha, A., & Lakhanpal, P. (2023, May 4). Can AI systems become wise? A
note on artificial wisdom. AI & SOCIETY.
https://doi.org/10.1007/s00146-023-01683-4
Sisson, J. (1978, April). The Protagoras - C. C. W. Taylor: Plato,
Protagoras. (Clarendon Plato Series.) pp. vi + 230. Clarendon Press,
1976. The Classical Review, 28(1), 84–85.
https://doi.org/10.1017/s0009840x00225687
Stach, C. (2023, February 12). Data is the new oil–sort of: A view on why
this comparison is misleading and its implications for modern data
administration. Future Internet, 15(2), 71.
https://doi.org/10.3390/fi15020071
Taddeo, M., McNeish, D., Blanchard, A., & Edgar, E. (2021, October 13).
Ethical principles for artificial intelligence in national defence.
Philosophy & Technology, 34(4), 1707–1729.
https://doi.org/10.1007/s13347-021-00482-3
Teo, K. X. (2023, August 23). Microsoft has pulled an AI-written travel
guide, which told tourists to visit the Ottawa Food Bank if they are
hungry. Business Insider. https://www.businessinsider.com/microsoft-
ai-written-article-recommend-food-bank-tourist-attraction-2023-8?
r=US&IR=T
The Philosophy of Computer Science (Stanford Encyclopedia of
Philosophy). (2021, January 19).
https://plato.stanford.edu/entries/computer-science/
Tiezzi, S. (2023, November 9). China renews its pitch on AI governance at
world internet conference. The Diplomat.
https://thediplomat.com/2023/11/china-renews-its-pitch-on-ai-
governance-at-world-internet-conference/
Tsai, C. (2020, February 22). Artificial wisdom: A philosophical framework.
AI & SOCIETY. https://doi.org/10.1007/s00146-020-00949-5
UK Prime Minister's Office, 10 Downing Street (2023, November 6). The
Bletchley declaration by countries attending the AI safety summit, 1–2
November 2023. GOV.UK.
https://www.gov.uk/government/publications/ai-safety-summit-2023-
the-bletchley-declaration/the-bletchley-declaration-by-countries-
attending-the-ai-safety-summit-1-2-november-2023
United Nations, Summit of the Future, Global Digital Compact: zero draft.
(2024a, April 1). https://www.un.org/sites/un2.un.org/files/sotf-global-
digital-compact-zero-draft.pdf
United Nations. (2024b, March 11). Seventy-eighth session Agenda item
13.
https://documents.un.org/doc/undoc/ltd/n24/065/92/pdf/n2406592.pdf
United Nations. (2024c, June 25). Seventy-eighth session Agenda item 13.
https://documents.un.org/doc/undoc/ltd/n24/183/80/pdf/n2418380.pdf
Varè, D. (1938, January 1). Laughing diplomat. J. Murray.
http://books.google.ie/books?
id=DIM1AQAAIAAJ&q=laughing±diplomat&dq=laughing±diplomat
&hl=&cd=1&source=gbs_api
What are the ethical implications of emerging tech? (2020, February 5).
World Economic Forum.
https://www.weforum.org/agenda/2015/03/what-are-the-ethical-
implications-of-emerging-tech/
Wooldridge, M. (2022, January 1). Brief history of artificial intelligence.
http://books.google.ie/books?
id=hjctEAAAQBAJ&dq=A±Brief±History±of±Artificial±Intelligence:
±What±It±Is,±Where±We±Are,
±and±Where±We±Are±Going&hl=&cd=1&source=gbs_api
Wyss, J. (2021, December 14). Barbados embassy near you could soon be
the one on metaverse. Bloomberg.com.
https://www.bloomberg.com/news/articles/2021-12-14/barbados-tries-
digital-diplomacy-with-planned-metaverse-embassy?
utm_source=website&utm_medium=share&utm_campaign=copy
Yarhi-Milo, K., & Christensen, T. (2023, July 13). The human factor: How
Robert Jervis reshaped our understanding of international politics.
Foreign Affairs. https://www.foreignaffairs.com/articles/world/2022-
01-07/human-factor
Yuval Noah Harari argues that AI has hacked the operating system of
human civilisation. (2023, April 28). The Economist.
https://www.economist.com/by-invitation/2023/04/28/yuval-noah-
harari-argues-that-ai-has-hacked-the-operating-system-of-human-
civilisation
Zhang, A. H. (2023, December 11). China's short-sighted AI regulation. The
Strategist. https://www.aspistrategist.org.au/chinas-short-sighted-ai-
regulation/
7
HARMONY IN THE HELIX
Navigating the “AI Race” through Collaborative Geopolitics
Enzo Maria Le Fevre Cervini

DOI: 10.4324/9781003518495-10

Introduction
Artificial intelligence (AI) is profoundly reshaping the global geopolitical
landscape, propelling nations into an “AI Race” where technological
dominance translates into strategic superiority. This intense competition
exerts a significant influence on international relations, as countries vie to
outpace one another in AI advancements to enhance their global standing
and influence.
The quest for AI supremacy significantly influences economic and social
stability, as nations with advanced AI capabilities gain a competitive
advantage in key industries, fueling innovation, productivity, and growth.
Furthermore, national security is closely tied to AI development, with
countries striving to leverage AI for defense, intelligence, and
cybersecurity. As a result, the race for AI dominance transcends technology
alone, becoming a crucial factor in shaping the future balance of power on
the global stage.
This chapter sets the stage for understanding AI's profound impact on
foreign policy. It emphasizes the necessity of collaborative efforts among
nations, international organizations, private sector stakeholders, and society
to navigate the complexities of AI responsibly. By fostering cooperation,
establishing standardized ethical guidelines, and ensuring equitable access
to AI advancements, the international community can mitigate risks and
harness the benefits of AI for global stability and prosperity.
First, this chapter aims to propose and elucidate the Quintuple Helix
model as a comprehensive and collaborative framework for the ethical and
effective integration of AI in foreign affairs. Second, it will analyze the
specific roles of each component within the model—government, academia,
industry, non-profits, and society—in the context of AI integration. Third,
the chapter will present case studies that highlight the significant impact of
AI on geopolitics, followed by a discussion on regulatory frameworks and
ethical considerations necessary to manage AI's global influence. Finally,
the chapter will offer insights into strategic collaboration, emphasizing best
practices for responsible AI development and international cooperation to
ensure that AI advancements are aligned with ethical standards and global
stability.
Part 1: The Quintuple Helix Approach
In this section, we will delve into the fundamentals of the Quintuple Helix
model as introduced by Carayannis and Campbell (2010). We will explore
the pivotal role of knowledge and innovation in shaping economic policy
and fostering high-quality democracies. This discussion will also break down the elements that constitute the Quintuple Helix model, incorporating a minor variation tailored to the specific context of AI technology, and highlighting the significance of a collaborative framework in its implementation.

Knowledge and Innovation for High-Quality Democracies


Innovation is closely linked to the application and dissemination of
knowledge. According to Carayannis and Campbell (2010), in a national innovation system the creation and application of knowledge are crucial for
economic development. Innovation policy has become an essential part of
economic policy, encompassing not only technology but also creativity and
the arts.
Innovation integrates knowledge into society, influencing the
conceptualization of knowledge itself. The application and use of
knowledge directly feed back into its creation and production, embodying
non-linear innovation models. Technology, as a form of innovation, focuses
on transforming scientific knowledge into commercial applications and
uses. Thus, technology is closely associated with the application, use, and
innovation of knowledge.
The authors propose the concept of “national innovation systems,” which
emphasizes the importance of the elements and relationships involved in the
production, dissemination, and use of economically beneficial knowledge
for development.
However, this is not the only type of innovation system; such systems differ based on their ambitions and objectives. First, the national level enhances
the international competitiveness of the national economy. Second,
international organizations promote economic growth and prevent
international conflicts. Finally, at the global level, ensuring the long-term
survival of the global economy is the priority, which depends on ecological
sustainability and reducing external social inequality.
There are also multilevel innovation systems that can be based on
geographic, spatial, or geopolitical contexts, juxtaposing subnational,
national, and transnational levels within a framework.
Carayannis and Campbell assert that the pluralization of knowledge and
the democratization of innovation are crucial for developing knowledge-
based societies and high-quality democracies. To achieve this, close
collaboration among the various actors within this ecosystem is
indispensable. This collaboration fosters an environment where diverse
perspectives and expertise can converge, driving more inclusive and
effective innovation processes. In this context, the Quintuple Helix model becomes particularly relevant. To fully appreciate its significance and the new elements it proposes, it is essential to first review the preceding models.

Mode 1, Mode 2, and Mode 3: Triple Helix and Quadruple Helix


Carayannis and Campbell propose two basic modes of knowledge
production related to Triple and Quadruple Helix models. Mode 1 focuses
on the traditional role of university research within an old-fashioned
understanding of the “linear model of innovation.” This mode reflects basic
university research interested in fundamental principles and discoveries,
structured primarily through disciplinary peer review processes. Its success
is defined by the quality or excellence recognized by hierarchically
established peers. This mode does not concern itself with the application,
dissemination, and use of knowledge, nor does it focus on solving societal
or economic problems.
In contrast, Mode 2 can be characterized by the following principles: 1.
Knowledge produced in the context of its application; 2. Trans-
disciplinarity; 3. Organizational heterogeneity and diversity; 4. Social
accountability and reflexivity; and 5. Quality control.
This mode represents a “problem-solving approach organized around a
particular application” where “knowledge production is disseminated
throughout society.” Thus, it involves socially distributed knowledge. The
prerequisites for Mode 2 include the massification of tertiary education,
followed by a spillover of higher education graduates and knowledge into
society. Continuous communication and negotiation among knowledge
producers are crucial. Trans-disciplinarity, within this mode, should develop
“a distinct but evolving framework to guide problem-solving efforts,”
representing a “dynamic problem-solving capacity.”
The authors claim that knowledge production in Mode 2 occurs within
transient application contexts, involving knowledge producers with various
institutional affiliations, either simultaneously or sequentially. Success in
Mode 2 means that knowledge was useful or that its production effectively
contributed to solving a societal or economic problem. It demands greater
social responsibility, flexibility, and sensitivity to the impact of knowledge
on society and the economy. Modes 1 and 2 coexist in parallel, with
coevolutionary effects.
The Triple Helix model describes the overlap and cross-communication among
different sectors in a knowledge-based society and economy. It explicitly
references Modes 1 and 2, asserting that Mode 2 describes the underlying
change in knowledge production, while the Triple Helix can be seen as an
“overlay” at the level of social structures.
In particular, this model focuses on the interaction between the state
(government sector), academia (higher education sector), and industry
(business sector). The Triple Helix thesis posits that universities can play a
more significant role in innovation in increasingly knowledge-based
societies. There are different configurations of the Triple Helix:

Statist model: The state dominates other sectors.


Laissez-faire model: Different sectors and institutions are considerably
separate.
Trilateral networks and hybrid organizations: These involve university-
industry-government relations, creating a knowledge infrastructure
through overlapping institutional spheres, with each taking on the roles
of the others and hybrid organizations emerging at the interfaces.

Going further, the Quadruple Helix model adds a fourth helix: the public,
more precisely defined as the media-based and culture-based public. This
fourth helix is associated with media, creative industries, culture, values,
lifestyles, and potentially the notion of the creative class. It reflects
phenomena such as “media-based democracy” or a “multimedia
information society,” where knowledge and innovation strategies may be
supported by media communication strategies.
This fourth element, in turn, prompts the addition of a new mode of
knowledge production that emphasizes the critical role of society and its
empowerment. According to Carayannis and Campbell, Mode 3, in advanced knowledge societies and economies, integrates a pluralism based
on the coexistence and coevolution of diverse modes of knowledge and
innovation, allowing for mutual cross-learning. This makes knowledge
more democratic, enabling discussions of a “knowledge democracy.” The
key characteristics of Mode 3 emphasize integrating different knowledge
modes, fostering coevolution, co-development, and specialization.
A Mode 3 knowledge society and economy require and surpass diversity
in knowledge and innovation. Empowering citizens as knowledge producers
and users can contribute to the “democratization of innovation” (Von
Hippel, 2005). Von Hippel distinguishes between “user-centered” and
“manufacturer-centered” innovation. The user-centered approach implies
that product and service users are increasingly capable of innovating
independently. “Lead users” are “innovative users,” whether individuals or
companies, who innovate to meet needs unmet by the market. These lead
users often “freely reveal their innovations” to others, exemplified by the
open-source software movement. Innovation communities help spread these
innovations more rapidly.
This continuum of knowledge production and use across society implies
that knowledge and innovation “flow” throughout society.

Understanding the Quintuple Helix


As was explained previously, the Quadruple Helix model proposed by
Carayannis and Campbell incorporates the public but with special emphasis
on the media. In this context, it is essential to recognize the pivotal role of the press, especially when AI systems are being implemented. These actors disseminate
and publicize technological initiatives undertaken by governments,
particularly those aimed at enhancing citizen services. By collaborating
with other institutions, the media can provide reliable and trustworthy
information to the public.
Moreover, the media act as conduits for social concerns and generators of
critical debates. Journalism plays a crucial role, particularly when it
functions as an independent actor, informing the public about vital issues—
for example, the presence of biases in AI systems or privacy violations—
and scrutinizing government and companies’ actions. This independent
oversight is fundamental to maintaining transparency and accountability,
ensuring that technological advancements serve the public interest.
However, the envisioned role of society is expanding beyond traditional
bounds. Social networks empower individuals to actively engage in
information dissemination, determining which content goes viral and what
receives less traction. While the media remain vital conduits for critical
debates, they are no longer the sole avenue for democratizing knowledge
and fostering societal participation. In today's landscape, social networks
serve as dynamic platforms where diverse voices can be heard, challenging
established narratives and amplifying grassroots perspectives. Through
user-generated content and interactive discussions, these platforms
empower individuals to contribute their expertise, share local insights, and
shape public discourse in ways previously inaccessible through traditional
media channels alone.
Additionally, a fifth element is proposed: non-profit organizations. They
serve as essential advocates for responsible AI usage, championing ethical,
equitable, and public interest–aligned development and deployment of AI
technologies. By incorporating these kinds of actors, the broader impact of
AI on all societal sectors is recognized. This comprehensive framework
ensures diverse perspectives and expertise in addressing AI's challenges and
opportunities.
In conclusion, a novel framework has been introduced, consisting of five
key actors: governments, the private sector, academia, non-profit
organizations, and society. Collaboration across these sectors is imperative. The Quintuple Helix approach facilitates synergy, ensuring that AI development aligns with ethical standards and serves the global good. It fosters a balanced dialogue between innovation and regulation, research and application, and societal needs and technological advancements. In the next part, the role of each actor will be examined in greater depth, with concrete examples.

Part 2: Sector-Specific Dynamics in AI Integration


In the rapidly evolving landscape of AI, the integration of this
transformative technology impacts various sectors in unique ways. To fully
grasp the complexities of AI development and deployment, it is essential to
examine the roles and dynamics of the five key elements of the Quintuple
Helix model: government, academia, industry, non-profits, and society.
Each of these sectors plays a distinct yet interconnected role in shaping the
future of AI. This part explores how these sectors navigate the challenges
and opportunities presented by AI, contributing to the responsible, ethical,
and inclusive integration of AI technologies.

Government: Regulatory Oversight and Policy Formulation


In an era where AI technologies are rapidly transforming societies, both
governments and multinational organizations, as direct or indirect
expressions of governmental will, hold crucial roles in ensuring that these
advancements are ethically guided and beneficial for all. Through a
combination of regulation, policy formulation, and collaboration with
diverse stakeholders, these entities are shaping the future of AI to align with
public interests and safeguard against potential risks on a global scale.

The Role of Governmental Bodies


Governments play a pivotal role in advancing ethical AI practices and ensuring the societal benefits of AI development and deployment. Their
intervention is multifaceted, encompassing a range of actions and strategies.
This includes formulating recommendations, enacting laws, fostering and
financing AI research, and participating in global AI governance initiatives.
To strengthen national research capacity and translate technological
progress into public sector applications that deliver public value,
governments can act as conveners, through the adoption of overarching
strategies that showcase high-level political commitment. Many countries
have published national strategies and policy initiatives to advance the
digital government agenda and ensure a leadership position in emerging
technologies like AI. Governments can be financiers of AI, typically
through the provision of direct or indirect funds to support the research,
development, and adoption of emerging technologies. Governments have
demonstrated abilities to be direct users and co-developers of AI. Together
with public sector organizations, they can act through innovative
procurement practices, or as a proactive co-developer through public-
private partnerships, and other forms of collaboration, to build tailored
solutions. Governments can be regulators, as the diversity of challenges
raised by the digital transformation calls for a reassessment of existing
policy frameworks, holistic approaches to ensure policy coherence and
international regulatory cooperation (Ubaldi et al., 2019).
As part of these initiatives, public bodies engage with a diverse range of
stakeholders, including industry, academia, and civil society in shaping AI
policies and regulations. This inclusive approach ensures that diverse
perspectives are considered in ethical AI governance. In some cases,
governments establish mechanisms for ongoing consultation and feedback
on regulatory initiatives, fostering transparency and accountability in the
decision-making process.
For example, the Australian government held a public consultation on safe and responsible AI; the more than 500 responses demonstrated a strong desire within Australian society for robust protections (Ministers for the Department of Industry, Science and Resources, 2024). As a result, a provisional response was issued in early 2024. In the same
direction, the government of Canada developed a Voluntary Code of
Practice for Generative AI Systems based on feedback received during a
public consultation process (Government of Canada, 2023).
Moreover, governments allocate funding for research and development
initiatives focused on ethical AI, supporting universities, industry
collaborations, and innovation projects. Also, they can incentivize the
development of ethical AI solutions through grants, tax incentives, and
other mechanisms. These initiatives encourage companies to invest in
technologies that prioritize societal benefit and adhere to ethical principles.
Finally, governments lead by example in the ethical use of AI
technologies within public services, ensuring that AI applications in areas
such as healthcare, criminal justice, and social services prioritize fairness,
transparency, and accountability.

Case Studies of National Approaches


Until recently, states and international organizations primarily relied on
issuing soft laws—such as ethical guidelines and recommendations—to
steer stakeholders in the AI sector. These broad frameworks aimed to
provide general guidance without imposing strict regulations, as mandatory
rules were often seen as excessive or premature in a rapidly evolving field.
The concern was that overly rigid regulations could stifle innovation and
limit the potential benefits AI could bring.
However, growing awareness of the risks associated with AI has led
governments to take a more proactive approach to regulation. This shift has
moved the focus from soft laws to the establishment of enforceable
regulations. Breakthroughs like the development of foundation models with
unprecedented capabilities have underscored the limitations of ethical
guidelines alone. As a result, AI regulation has become a top priority on
government agendas worldwide.
Throughout this regulatory evolution, the AI principles adopted by the
OECD in 2019 have served as a global reference point. These principles
have guided both international organizations and governments in shaping
human-centered regulations and promoting democratic values in the
development of trustworthy AI (Morini Bianzino et al., 2024).
A prominent trend in AI regulation has been the adoption of a risk-based
approach. Inspired by the European Union's landmark AI Act, countries
such as Canada, Brazil, and Australia are now developing preliminary legal
frameworks that incorporate this strategy. The influence of the EU's
regulatory model, often referred to as the “Brussels effect,” has extended
beyond Europe. The Brussels effect occurs when the EU's stringent
regulations set global standards because companies and countries around
the world choose to comply with them in order to maintain access to the
European market. This phenomenon has led even major nations like the
United States to reconsider their regulatory approaches. Although
traditionally more resistant to tech regulation, the United States is now
gradually shifting away from non-intervention, as seen in recent actions
such as an Executive Order on AI.
Despite these overarching trends, national regulations exhibit nuanced
differences that reflect diverse approaches and perspectives. A key
distinction lies in the choice between horizontal and vertical regulatory
strategies. The horizontal approach involves creating comprehensive
regulations that address AI's impact across various sectors, while the
vertical approach tailors regulations to specific AI applications or types,
ensuring that each area is governed by rules suited to its unique
characteristics and challenges.
For example, while the EU AI Act favors a horizontal approach, China's
regulatory framework is more vertically oriented. China has implemented
specific laws to address distinct AI risks, such as deepfakes, allowing the
country to quickly adapt to technological advancements. Notably, China
was likely the first nation to introduce legislation on generative AI shortly
after the emergence of ChatGPT (Yang, 2024). However, in June 2023, the
State Council announced a shift toward a more comprehensive AI law,
signaling further regulatory evolution.

International Agreements and Treaties


International cooperation is essential for the ethical development and
deployment of AI. States must collaborate to ensure consistency and
coherence in ethical regulations across borders. Treaties and agreements,
such as the OECD AI Principles and the Council of Europe (CoE)
Framework Convention on Artificial Intelligence, Human Rights,
Democracy, and the Rule of Law, are pivotal in this endeavor. These
frameworks provide a foundation for aligning national policies, promoting
shared ethical standards, and facilitating international dialogue.
Given the global nature of AI risks, a certain degree of harmonization is
essential for effective management. AI regulation is becoming increasingly
pertinent in areas such as peace and security, migration governance,
humanitarian protection, and climate action. If countries adopt significantly
different approaches, entities seeking to evade key regulations can easily
relocate to nations with more lenient or non-existent rules. This divergence
could also exacerbate disparities in technological advancement and
competitive advantage among countries.

Academia: Research, Education, and Ethical Considerations


Academic institutions drive AI innovation through research and education.
They explore ethical implications, develop new technologies, and train
future leaders. Collaborative research initiatives and international academic
partnerships are vital for advancing AI responsibly.

Key Academic Contributions


Academic research advances AI technology and explores its ethical
implications. Universities and research institutions collaborate globally,
contributing to a shared understanding of AI's potential and risks. This work brings together experts from diverse fields such as computer science, philosophy, sociology, law, and psychology to address ethical AI comprehensively. This interdisciplinary approach helps in understanding the multifaceted impacts of AI.
As a comparatively independent actor, academia also plays a critical role in scrutinizing AI technologies and their societal impacts. This includes analyzing case studies, examining failures, and questioning the motivations and implications of AI developments.
Collaborative Research Initiatives

Universities frequently engage in collaborative efforts with industry partners to ensure that ethical considerations are deeply ingrained in the
development and deployment of AI applications. These collaborations take
various forms, ranging from joint research projects and interdisciplinary
initiatives to the establishment of dedicated ethics committees and advisory
boards.
International academic partnerships and research consortia, such as the
Partnership on AI, exemplify successful collaboration. Through dialogue,
research, and education, this organization develops recommendations and
other resources to share insights that can be synthesized into actionable
guidance. In brief, these kinds of initiatives foster knowledge exchange
promoting ethical AI development.

Educational Programs
AI education is indispensable for nations, especially in developing
countries, serving as a catalyst for economic empowerment, societal
development, and sustainable growth. By providing individuals with the
skills and knowledge needed to participate in the burgeoning AI industry,
education creates job opportunities and fosters innovation, driving
economic progress. Moreover, it empowers citizens to understand the
ethical and societal implications of AI, enabling informed participation in
public discourse and policymaking. Through AI education, countries can
bridge the digital divide, access global opportunities, and address local
challenges, ultimately contributing to their integration into the global digital
economy.
Moreover, educational programs prepare future leaders to navigate AI's
challenges. Curricula that incorporate AI ethics, policy, and technology
ensure that graduates are well-equipped to contribute to responsible AI
integration. Likewise, academic institutions host workshops, seminars, and
conferences focused on AI ethics, providing international platforms for
knowledge exchange and professional development.

Industry: Technological Innovation and Application


As the driving force behind AI development, the tech industry plays a
critical role in shaping how artificial intelligence is integrated into various
sectors. With this influence comes the responsibility to ensure that AI
technologies are not only innovative but also ethically sound and socially
beneficial. By balancing rapid technological advancement with ethical
considerations, industry leaders contribute to creating AI applications that
are transparent, accountable, and trustworthy. This section briefly explores
how companies navigate these challenges and collaborate with governments
to set standards and ensure that AI technologies serve the greater good.

Balancing Innovation with Ethical Considerations


The tech industry is at the forefront of AI development, creating
applications that impact various sectors. This leadership position comes
with a significant responsibility to ensure that AI technologies are
developed and deployed in ways that are ethical, transparent, and beneficial
to society. Their efforts in this regard must be transparent, communicating AI capabilities, limitations, and safeguards. This helps demystify the technology, temper exaggerated expectations, and build and maintain public trust.
The industry drives AI advancements, developing practical applications
that can transform various sectors. Companies invest heavily in AI research
and development, creating products and services that enhance efficiency,
productivity, and customer experience. The commercial sector's competitive
nature drives rapid innovation, leading to significant breakthroughs in AI
capabilities. However, this also means that companies must navigate
complex ethical and legal landscapes to ensure that their AI applications are
responsible and do not cause unintended harm.
Corporations often develop internal policies and frameworks to govern
AI practices within their organizations. These can include ethical review
boards, impact assessments, and regular audits to ensure compliance with
ethical standards.
For example, Microsoft's Responsible AI Transparency Report 2024 is a
key initiative aimed at sharing how the company responsibly builds
generative AI applications. The report details Microsoft's decision-making
processes regarding the release of generative applications, the support
provided to customers as they build their own AI applications, and how the
company continuously evolves its responsible AI program (Microsoft,
2024). Additionally, Microsoft has identified six guiding principles for AI
development: fairness, reliability and safety, privacy and security,
accountability, inclusiveness, and transparency.
Similarly, Google's AI Principles emphasize responsible technology
development. These seven principles include commitments to ensure that
AI is socially beneficial, avoids creating or reinforcing unfair bias, is built
and tested for safety, is accountable to people, incorporates privacy design
principles, upholds high standards of scientific excellence, and is made
available only for uses that align with these principles.
These reports and principles do more than just outline company policies
—they set benchmarks for the entire industry. As industry leaders,
companies like Microsoft and Google influence the broader tech ecosystem
by establishing best practices that other companies often follow. By
publicly committing to these standards, they encourage a culture of
accountability and ethical consideration across the tech industry. Moreover,
these principles serve as a foundation for collaboration with regulators and
policymakers, ensuring that AI development aligns with societal values and
helps mitigate risks associated with emerging technologies.

Partnerships with Governments

The tech industry often sets the standard for what constitutes the state of the
art in various fields, especially in emerging technologies like AI.
Regulations demand that companies address the ethical, safety, and privacy implications of these technologies to the extent possible, and that extent is defined by the state of technology at any given moment. Therefore, it
is incumbent upon the tech industry to continuously push the boundaries of
innovation and technical excellence to meet these regulatory demands.
For instance, the EU AI Act lays out a set of requirements for all general-purpose (foundation) AI models. However, it imposes additional requirements on the most powerful systems of this kind, based on the computational power needed to train them. It is uncertain whether the threshold captures models like GPT-4 or Gemini, because only the companies themselves know how much computing power they used to train their models.
technology evolves, the way we measure and acknowledge this power
should change to make it more transparent (Heikkilä, 2023).
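To make the compute-threshold issue concrete, the sketch below estimates training compute with the widely used rule of thumb of roughly six floating-point operations per parameter per training token and compares the result against the 10^25 FLOP threshold the AI Act uses to presume systemic risk in general-purpose models. The model sizes and token counts are hypothetical assumptions for illustration, not disclosed figures; in practice, regulators depend on providers' own reporting of these quantities.

```python
# Illustrative sketch: estimating whether a model's training compute crosses
# the EU AI Act's 10^25 FLOP threshold for general-purpose models presumed
# to pose systemic risk. Parameter and token counts below are hypothetical
# assumptions, not disclosed figures for any real model.

EU_AI_ACT_SYSTEMIC_RISK_FLOPS = 1e25  # threshold used in the Act


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate using the common ~6 * N * D rule of thumb."""
    return 6.0 * parameters * training_tokens


hypothetical_models = {
    "small open model": (7e9, 2e12),        # 7B parameters, 2T tokens (assumed)
    "large frontier model": (1e12, 1.5e13),  # 1T parameters, 15T tokens (assumed)
}

for name, (params, tokens) in hypothetical_models.items():
    flops = estimate_training_flops(params, tokens)
    above = flops > EU_AI_ACT_SYSTEMIC_RISK_FLOPS
    print(f"{name}: ~{flops:.2e} FLOPs -> presumed systemic risk: {above}")
```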
Another example of collaboration between industry and government is
the Executive Order issued by President Biden in the United States, which
includes provisions for content labeling and watermarks on AI-generated
content. Major AI firms, including Google and OpenAI, have voluntarily
committed to developing technologies for classifying artificially generated
content. As advancements in this area have yet to yield fully satisfactory
results, governments are increasingly focused on strategies to combat
disinformation and deepfakes, especially as critical elections approach in
2024. The collaboration between tech giants and governments is essential
for overcoming technical hurdles and developing effective tools to address
these challenges.
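Watermarking and content labeling can take several technical forms. One simple family of approaches attaches signed provenance metadata to generated content so that downstream platforms can verify its origin. The sketch below is a minimal illustration using a keyed HMAC signature over hypothetical metadata; it is not the scheme used by any particular company, and production systems typically rely on public-key signatures and richer manifests.

```python
# Minimal sketch of metadata-based provenance labeling for AI-generated
# content: the generator attaches an HMAC signature over the content plus a
# label, and a verifier with the shared key can check that the label is
# authentic and untampered. Illustration only; the key and label fields are
# hypothetical, and real deployments use public-key signatures.

import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-for-illustration-only"  # hypothetical key


def label_content(text: str, generator: str) -> dict:
    """Attach a provenance label and a signature binding it to the content."""
    label = {"generator": generator, "ai_generated": True}
    payload = json.dumps({"text": text, "label": label}, sort_keys=True).encode()
    signature = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "label": label, "signature": signature}


def verify_label(record: dict) -> bool:
    """Recompute the signature and compare; False means a missing or altered label."""
    payload = json.dumps(
        {"text": record["text"], "label": record["label"]}, sort_keys=True
    ).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])


record = label_content("An AI-written paragraph.", generator="example-model")
assert verify_label(record)
record["label"]["ai_generated"] = False  # tampering is detected
assert not verify_label(record)
```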
These partnerships highlight the importance of ongoing dialogue and
cooperation between the tech industry and governments. Together, they can
create a regulatory environment that supports innovation while ensuring AI
technologies are developed and used responsibly.

Non-Profit Organizations: Advocacy and Ethical Watchdogs


Non-profit organizations serve as vital advocates and ethical watchdogs in
the rapidly evolving AI landscape. Their mission is to ensure that AI
technologies are developed and deployed in ways that are ethical, equitable,
and aligned with the public interest. By raising awareness, pushing for
regulations, and amplifying the voices of marginalized groups, these
organizations play a crucial role in shaping responsible AI practices.
Through research, advocacy, and monitoring, non-profits hold governments
and corporations accountable, striving to ensure that AI serves society's
broader goals rather than just commercial or political interests.

Role of Non-Profits in AI Ethics


Non-profit organizations are key players in advocating for responsible AI
use. They work across multiple dimensions to ensure that AI technologies
align with ethical standards and public interest. These efforts include raising
public awareness about AI's potential risks and benefits, educating
policymakers, and pushing for the development and implementation of
regulations that protect human rights and prevent the misuse of AI
technologies.
Non-profits play a critical role in influencing government policies by
serving as a bridge between the public and policymakers. They provide
expertise, conduct independent research, and mobilize public opinion to
advocate for stronger regulations. By engaging with lawmakers and
participating in policy discussions, non-profits help shape the legal
frameworks that govern AI development and deployment.
For example, the Algorithmic Justice League (AJL), founded by Joy
Buolamwini, has been instrumental in raising awareness about algorithmic
bias and advocating for legislation to address this issue. Their research and
advocacy efforts have led to tangible policy changes, such as the
introduction of bills in the US Congress aimed at regulating facial
recognition technology and addressing bias in AI systems. The AJL's work
has also influenced corporate policies, pushing companies like IBM and
Amazon to reconsider their use of facial recognition technology.
Similarly, Access Now, a global digital rights non-profit, has played a
crucial role in advocating for human rights-centered AI regulations.
Through its #KeepItOn campaign, Access Now has successfully lobbied for
AI policies that protect freedom of expression and privacy rights. Their
efforts have contributed to the development of the European Union's
General Data Protection Regulation and ongoing discussions around AI
legislation in the EU. By providing expert input during consultations and
participating in public hearings, Access Now has helped shape regulatory
approaches that prioritize human rights in the digital age.
Non-profits also collaborate with international organizations to influence
global AI governance. For instance, during the negotiation process for the
CoE Framework Convention on Artificial Intelligence, Human Rights,
Democracy, and the Rule of Law, non-governmental organizations (NGOs)
actively lobbied to ensure that private sector companies and systems used
solely for military, defense, or national security purposes were not excluded
from the scope of the instrument (Müller & Pannatier, 2024). Their
advocacy ensured that the convention addressed the broader ethical
implications of AI and did not create loopholes for powerful actors.
These examples demonstrate how non-profits can drive meaningful
change in AI policy, ensuring that regulations reflect ethical considerations
and protect public interests.

Case Studies of Interventions


Beyond influencing policy, non-profits conduct independent research on the
societal impacts of AI, providing evidence-based recommendations to
governments and international bodies. This research often highlights gaps
in existing regulations and underscores the need for more robust oversight.
For instance, the AI Now Institute, a leading research institute focused on the
social implications of AI, has conducted influential studies that have shaped
public discourse and policy. Their annual reports have highlighted issues
such as surveillance, bias, and labor impacts, prompting regulatory bodies
to reconsider their approaches to AI governance. AI Now's work has
informed legislative efforts in the United States, particularly in New York
City, where their research contributed to the passing of laws regulating AI
use in hiring practices.
In addition to research, non-profits engage in public campaigns to hold
corporations and governments accountable for their AI practices. They may
conduct audits to assess the fairness and transparency of AI systems and
publicly call out unethical practices or regulatory gaps. By shining a light
on these issues, non-profits apply pressure on both governments and
companies to uphold ethical standards.
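What such an audit can look like in practice is illustrated by the sketch below, which computes group-level selection rates and a disparate-impact ratio for an automated decision system on toy data. The records and the 0.8 "four-fifths" threshold are illustrative assumptions only; real audits combine many quantitative metrics with qualitative review.

```python
# Illustrative sketch of one step in a fairness audit: comparing selection
# rates across demographic groups and flagging the disparate-impact ratio.
# The records and the 0.8 threshold (the "four-fifths" heuristic) are
# assumptions for illustration only.

from collections import defaultdict

# Each record: (group, model_decision) where decision 1 = selected/approved.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    selected[group] += decision

rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print(f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact - flag for deeper review")
```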
For example, Amnesty International launched a campaign to highlight
the human rights implications of AI surveillance technologies. Their efforts
brought attention to the use of AI in mass surveillance, particularly in
authoritarian regimes, and led to increased scrutiny of AI technologies by
international human rights bodies. Amnesty's advocacy contributed to the
United Nations’ discussions on AI and human rights, reinforcing the need
for global standards that protect individuals from AI-driven abuses.
NGOs exert significant influence on government policies by providing
expertise, mobilizing public support, and participating in policy-making
processes. Their efforts often lead to the introduction of new regulations or
the strengthening of existing ones. By collaborating with lawmakers,
engaging in public consultations, and contributing to legislative debates,
non-profits ensure that the voices of marginalized and vulnerable groups are
included in AI governance discussions.
The examples of the AJL, Access Now, AI Now Institute, and Amnesty
International illustrate the diverse ways in which non-profits impact AI
regulations. Whether by influencing national legislation, shaping global
governance frameworks, or holding corporations accountable, non-profits
play a crucial role in ensuring that AI development and deployment are
aligned with ethical principles and public interest.

Society: Public Engagement and Societal Impact


As AI technologies increasingly shape various aspects of daily life, the role
of society in guiding and influencing their development becomes
paramount. Public engagement is essential not only for fostering trust and
acceptance but also for ensuring that AI systems are inclusive and aligned
with diverse societal values. By involving citizens in discussions, policy
formulation, and the design of AI systems, society plays a crucial role in
addressing ethical concerns and bridging the digital divide. This section
explores how public perception, diverse participation, and societal feedback
can influence the responsible deployment of AI technologies.

Public Perception of AI
Society's engagement is crucial for the acceptance and ethical deployment
of AI. Public perception shapes policy and innovation, and societal
feedback ensures that AI development is inclusive and reflective of diverse
values.
Society, as the end-user, provides valuable feedback and ensures that AI
developments align with public interests. Engaging the public in
discussions about AI is essential to building trust and acceptance. Public
opinion can shape policy and regulatory decisions, and societal feedback
can guide the development of AI technologies to better meet the needs and
values of the community. Inclusive public engagement ensures that diverse
perspectives are considered, and the benefits of AI are shared equitably.
Moreover, initiatives such as public consultations, forums, and
educational campaigns foster public engagement. These efforts ensure that
AI development reflects societal values and addresses public concerns.

Addressing Societal Concerns


Citizen intervention ensures diverse participation. Historically, the field of
AI has been dominated by a small, specific segment of the population, and
the resulting research and technologies often reflect the values of this
group. Groups defined by race, nationality, and other characteristics are similarly under-represented in the industry.
As a remedy, involving more diverse teams in AI design naturally promotes equity in decision-making systems. Breaking down workplace homogeneity can make developers more mindful of their own potential biases. Individuals from historically marginalized
groups often come from less privileged backgrounds and are more aware of
concepts such as prejudice and injustice, and how technologies designed for
a specific demographic group can harm disadvantaged populations (Hao,
2019).
Furthermore, diversity in design teams is not only of ethical and social
interest but also holds economic and performance implications. There is
evidence suggesting that diverse teams are more likely to make fact-based decisions, to reason more accurately as a group, and to be more innovative (Rock & Grant, 2016).
Citizen engagement stands as part of the solution to bridge the digital
divide. Any application or AI system in the public sector must address the
population which lacks adequate access to technology. Inclusive digital
transformation projects should assess and address access, affordability, and
network speed gaps, as these factors deepen inequalities and vulnerabilities
among the population.

Considerations
The successful integration of AI hinges on the collaborative efforts of all
five elements of the Quintuple Helix model: government, academia,
industry, non-profits, and society. This model recognizes that each sector
has a distinct yet interconnected role in shaping AI's trajectory. However,
the reality remains that not all regions or industries treat these elements
with equal importance. In some countries, the voices of non-profits or the
public may be marginalized, or industry priorities may dominate over
ethical considerations.
Yet, it is precisely this balanced engagement of all five sectors that is
crucial for addressing the complexities and challenges of AI development.
Governments must provide a regulatory framework that safeguards public
interests while fostering innovation. Academia needs to push forward with
ethical research that informs both policy and practice, and educates the public about the importance of AI. Industry has the responsibility to innovate
responsibly, ensuring that AI technologies serve society's broader goals.
Non-profits must continue to advocate for those who might be left behind,
holding powerful actors accountable. Finally, society's engagement is vital
for ensuring that AI reflects diverse values and meets the needs of all
communities.
Failure to fully engage all five elements of the Quintuple Helix model
could result in a fragmented and unbalanced AI ecosystem, where ethical
concerns are overlooked, innovation is stifled, and societal inequalities are
exacerbated. Without the active participation of all stakeholders, the
potential benefits of AI may be unevenly distributed, leading to a future
where technology serves only the interests of a few, rather than the
collective good. Therefore, embracing this model in its entirety is essential
for fostering a fair, inclusive, and sustainable AI-driven future.
The Quintuple Helix model offers a comprehensive approach to
navigating AI's rapid evolution, but its true potential can only be realized
when all five elements are treated as equally important partners. By
embracing this model fully, stakeholders can ensure that AI not only drives
technological progress but also advances social justice, equity, and human
well-being across the globe.

Part 3: Navigating the AI Race through Collaboration

As AI continues to reshape global industries and societies, navigating its
development and deployment requires strategic collaboration among
diverse stakeholders. Governments, academia, industry, non-profits, and
society, the elements of the quintuple helix, must work together to ensure
that AI technologies are not only innovative but also ethical, inclusive, and
aligned with public interests. This section explores the collaborative efforts
necessary to harness the potential of AI while addressing its challenges and
risks.

Strategic Collaboration Models


Strategic collaboration models are essential in addressing the multifaceted
challenges of AI development. By fostering partnerships across sectors—
government, academia, industry, non-profits, and society—these models
leverage diverse expertise and perspectives to promote responsible AI
innovation. Here the intent is to delve into various collaboration
frameworks that enable stakeholders to work together in creating AI
technologies that are transparent, ethical, and beneficial for all.

Multi-Stakeholder Partnerships

To deal with AI's challenges in a holistic way, multi-stakeholder partnerships collaborate across different sectors, such as government,
academia, industry, non-profits, and society. These partnerships benefit
from diverse knowledge and viewpoints.
AI affects these sectors in different ways, each with its own
opportunities and challenges. Governments must find the right balance
between regulation and innovation to protect the public without hindering
progress. Academia has to pursue ethical research that takes into account
the social impact of AI, promoting innovation that helps humanity. The
industry has to incorporate ethical factors into AI development to prevent
harmful outcomes and use AI wisely to maintain or establish trust with
customers and stakeholders. Non-profits stress the importance of human
rights and social justice in AI developments, supporting policies and
practices that safeguard individuals and communities. Society's
participation is vital in shaping AI technologies to be transparent, fair, and
beneficial, ensuring that AI development reflects the public's needs and
values.
AI tools have a significant effect on human lives, improving efficiency,
healthcare, communication, and more. However, their adoption must take
into account ethical issues to avoid abuse. Harnessing the power of AI is
essential in achieving universal health coverage. In this regard, digital
technologies are not an end in themselves; they are vital tools to promote
health, keep the world safe, and serve the vulnerable. Even though
emerging technologies offer new opportunities to improve people's health,
evidence also highlights challenges regarding the importance of data in
ethical issues, as well as the impact of some interventions on patient privacy
(Le Fevre & Heleg, 2022). Similarly, AI-powered communication tools can
enhance connectivity and collaboration but may also spread misinformation
if not properly managed.
People's participation in developing AI tools is crucial to make them
match human values and social objectives, and to build trust and acceptance
for these technologies. Public engagement in AI development fosters
transparency and accountability, allowing individuals to voice their
concerns and contribute to the design and implementation of AI systems.
This participatory approach helps identify potential risks and ethical
dilemmas early on, enabling developers to address them proactively. By
incorporating diverse perspectives, AI developers can create technologies
that are more inclusive and equitable, benefiting a broader range of people.
Compliance with the law is necessary but far from sufficient.
Adopting an ethical approach to AI offers what can be described as a “dual
advantage” (Floridi et al., 2018). On one hand, ethics allows organizations
to leverage the social value that AI enables, identifying and utilizing new
opportunities that are socially acceptable or preferable. On the other, ethics
helps organizations anticipate and avoid, or at least minimize, costly
mistakes. This advantage lies in preventing or mitigating actions that, while
legally permissible, may be socially unacceptable and thus rejected. This
approach also reduces the opportunity costs of avoided decisions or
unseized options due to fear of errors.
Ethics’ dual advantage can only function in an environment of public
trust and clear responsibilities. Public acceptance and adoption of AI
technologies will occur only if the benefits are perceived as meaningful and
the risks as potential yet preventable, minimizable, or at least manageable
through risk management strategies such as insurance or redress
mechanisms. These attitudes depend on public engagement with AI
technology development, transparency about their operations, and
accessible, understandable regulation and redress mechanisms.
Various legal and regulatory instruments address AI's ethical and societal
challenges. The European Union's AI Act emphasizes effective human
oversight to mitigate high-risk AI systems. This legislation aims to ensure
that AI technologies are developed and used responsibly, with mechanisms
in place to prevent and address potential harms. The Act categorizes AI
systems based on their risk level, with stringent requirements for high-risk
applications, including transparency, accountability, and human oversight
measures.
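A simplified sketch of this risk-based structure is given below: each tier maps to a different set of obligations. The tier names follow the Act's approach, but the obligation lists are abbreviated summaries for illustration, not legal text.

```python
# Simplified sketch of the risk-based structure of the EU AI Act: each risk
# tier maps to a different set of obligations. The obligation lists are
# abbreviated summaries for illustration only, not legal text.

RISK_TIERS = {
    "unacceptable": ["practice prohibited (e.g., certain forms of social scoring)"],
    "high": [
        "risk management system",
        "data governance and documentation",
        "transparency to deployers",
        "human oversight measures",
        "conformity assessment before market placement",
    ],
    "limited": ["transparency duties (e.g., disclose that users interact with AI)"],
    "minimal": ["no mandatory obligations; voluntary codes of conduct encouraged"],
}


def obligations_for(tier: str) -> list[str]:
    """Look up the (simplified) obligations attached to a risk tier."""
    return RISK_TIERS.get(tier, ["unknown tier"])


for tier in RISK_TIERS:
    print(tier, "->", "; ".join(obligations_for(tier)))
```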
Similarly, Canada's Artificial Intelligence and Data Act (AIDA)
mandates significant human supervision for high-risk AI systems. AIDA
focuses on promoting ethical AI development while protecting the rights
and safety of individuals. It includes provisions for risk assessment, data
privacy, and accountability, ensuring that AI technologies are developed in
compliance with ethical and legal standards. Both the EU AI Act and AIDA
stress the need for human participation to deal with AI risks efficiently,
acknowledging that ethical and regulatory structures must support
technological progress.
These control measures are vital for two main reasons (Mökander et al.,
2023). First, AI models can replicate harmful stereotypes, breach data
privacy, spread misinformation, and misuse copyrighted material. The
visibility of AI-powered tools like GenAI has increased the potential for
such issues dramatically. AI systems, if not properly managed, can
perpetuate biases and inequalities present in the data they are trained on,
leading to unfair outcomes. Data privacy concerns arise when AI systems
process and store sensitive information without adequate safeguards, risking
unauthorized access and misuse. The spread of misinformation through AI-
generated content can undermine public trust and harm individuals and
communities. Additionally, the misuse of copyrighted material by AI
systems raises legal and ethical questions about intellectual property rights.
Second, the development flow of GenAI involves multiple stages and
actors, making comprehensive human oversight challenging. For example,
systems like ChatGPT or Gemini are further adapted by numerous users,
adding layers of complexity and making it difficult to predict their full
impact. This rapid development often outpaces the establishment of robust
ethical, legal, and technical standards. The AI development process is
dynamic and complex, with different factors and challenges emerging at
each stage. To ensure the ethical and responsible use of AI technologies,
there needs to be ongoing assessment and oversight, as well as effective
communication and cooperation among various stakeholders, such as
developers, users, and regulators.
UNESCO, in its Recommendation on the Ethics of Artificial Intelligence,
acknowledges the global acceleration in AI technology use and the growing
need for media and information literacy. It emphasizes that addressing the
societal, economic, and environmental challenges posed by the digital
economy is crucial, especially for low- and middle-income countries.
UNESCO promotes the adoption of AI systems that are ethical and respect
human rights, to make use of AI's advantages and reduce its dangers. The
organization emphasizes the need for digital economies that support and
value local cultures, values, and knowledge. By exploring the ethics and
social effects of AI, UNESCO wants to establish a set of rules that aligns AI
with global human rights norms.
Comprehensive AI auditing, regulation, and control require coordinated
efforts from multiple stakeholders. Successful collaboration can standardize
practices for future innovations, enhance risk predictability, and ensure AI
development aligns with humanity's objectives. Addressing these
challenges effectively can lead to significant advancements and benefits,
reinforcing ethical AI research and innovation that respects human rights
and fundamental freedoms. By establishing clear guidelines and best
practices, stakeholders can create a more predictable and stable
environment for AI development, reducing the likelihood of harmful
outcomes and fostering public trust.
The advantages of successfully addressing AI challenges are numerous.
Ethical AI development can lead to technologies that enhance life quality,
improve efficiency, and create new opportunities for economic growth. By
ensuring that AI technologies are developed responsibly, stakeholders can
mitigate risks and prevent negative consequences, fostering a more
inclusive and equitable society. Moreover, a collaborative approach to AI
development can drive innovation, enabling stakeholders to leverage their
collective expertise and resources to tackle complex challenges. This
holistic approach ensures that AI technologies are not only technologically
advanced but also socially beneficial and aligned with the broader goals of
humanity.
In conclusion, navigating the AI race through strategic collaboration and
multi-stakeholder partnerships is essential in harnessing the full potential of
AI while mitigating its risks. By involving government, academia, industry,
non-profits, and society in the development process, we can create AI
technologies that are ethical, inclusive, and beneficial for all.
Comprehensive regulation and oversight, grounded in ethical principles and
human rights, are crucial for ensuring that AI technologies align with
societal values and goals. By addressing the challenges of AI development
collaboratively, we can unlock the transformative potential of AI and create
a future where these technologies contribute positively to the well-being of
individuals and communities.

Examples of Successful Frameworks

As noted before, in today's rapidly evolving technological landscape, AI development demands a concerted effort across various sectors to harness
its full potential while mitigating associated risks. The AI4People initiative
serves as a prime example of how a multisectoral approach can effectively
address the ethical, social, and technical challenges posed by AI.
AI4People is an initiative aimed at fostering a good AI society. It
emphasizes the need for a comprehensive ethical framework to manage AI's
opportunities and risks, advocating for collaboration across sectors to
ensure AI development benefits humanity. This initiative is driven by the
recognition both that the rapid advancement of AI technologies necessitates
a balanced approach that considers ethical, social, and technical aspects,
and that these aspects need to be addressed by a framework based on the
following principles:

1. Beneficence: AI systems should contribute to human well-being and ecological sustainability. They must be designed and deployed to
enhance societal good, prioritizing health, safety, and the environment.
2. Non-Maleficence: AI must be developed and used in ways that do not
harm individuals or society. This involves minimizing risks and
preventing the misuse of AI technologies, ensuring they do not
perpetuate or exacerbate social inequalities or biases.
3. Autonomy: AI should respect human autonomy, empowering individuals
to make informed decisions. This principle emphasizes the importance of
transparency, consent, and control, allowing users to understand and
influence how AI systems interact with them.
4. Justice: AI should promote fairness and equity, ensuring that benefits and
burdens are distributed justly across all segments of society. This
includes addressing biases in AI systems and ensuring equal access to AI
benefits.
5. Explicability: AI systems must be transparent and understandable to their
users. This involves clear communication about how AI systems make
decisions, allowing stakeholders to trust and verify AI processes and
outcomes.

The AI4People initiative exemplifies the importance of a multisectoral approach in the advancement of AI. By involving diverse stakeholders,
including researchers, policymakers, industry leaders, and civil society,
AI4People ensures that the development and deployment of AI technologies
are guided by comprehensive ethical principles. Collaborative models like
this are crucial for navigating the complexities of AI, balancing innovation
with responsibility, and ensuring that AI serves the broader interests of
society.
To illustrate the practical implementation of such principles, we can
examine the case of the SPICE (Stratospheric Particle Injection for Climate
Engineering) project (Stilgoe et al., 2013). Despite being an initiative
conducted over a decade ago and not explicitly centered around AI, it
allows us to observe the application of ethical principles in the development
of technological innovations. Funded by three UK research councils—the
Engineering and Physical Sciences Research Council, the Natural
Environment Research Council, and the Science and Technology Facilities
Council—SPICE aimed to investigate whether injecting particles into the
stratosphere could mimic volcanic cooling effects and mitigate global
warming. The project was motivated by the need to explore innovative
responses to climate change, alongside mitigation and adaptation strategies.
Geoengineering, particularly solar radiation management (SRM)
techniques like those proposed by SPICE, raises significant socio-political,
ethical, and environmental questions. The concept involves large-scale
interventions in the Earth's climate system, which necessitates international
cooperation and poses potential risks, such as creating moral hazards by
diverting attention from emission reduction efforts. The Royal Society's
2009 report on geoengineering highlighted these challenges, underscoring
the need for robust governance frameworks to address the complexities of
SRM technologies.
The SPICE project originated from a 2009 workshop organized by the
UK Research Councils, which identified the need for a comprehensive
geoengineering research program. This led to the formation of SPICE
through a 2010 “sandpit” event that brought together researchers from
various disciplines to foster innovative thinking. Initially, the project did not
prioritize the broader ethical and social implications of SRM research,
focusing primarily on technical aspects. This oversight would later become
a significant issue.
As the SPICE project progressed, it became clear that ethical and social
considerations had been inadequately addressed. Recognizing this gap, the
project team incorporated a stage-gate review process, a structured
approach that divided the research into distinct phases with specific criteria
for progression. This process integrated principles of responsible
innovation, including anticipation, reflexivity, inclusion, and
responsiveness. The stage-gate criteria required the SPICE team to assess
and communicate the risks and benefits of their research, ensure regulatory
compliance, and engage with stakeholders and the public. This approach
aimed to ensure that the research was conducted ethically and transparently,
addressing potential social and environmental impacts before advancing to
the next stage.
Several aspects of the SPICE project showed how it implemented
responsible innovation. For example, the project conducted a public
dialogue exercise to understand public views and address concerns about
geoengineering. Moreover, the SPICE team carried out a thorough review
of the potential risks and uncertainties of SRM, reflecting the anticipation
dimension of responsible innovation. This proactive approach helped avoid
or reduce potential negative impacts, making sure that the research matched
societal values and expectations.
However, the SPICE project encountered significant challenges, such as
the absence of clear regulatory guidelines for geoengineering research.
These challenges caused conflicts and debates within the scientific
community and among stakeholders. In September 2011, the project was
delayed due to concerns about a possible conflict of interest related to a
prior patent application by one of the project investigators. This incident
emphasized the importance of transparency and accountability in
responsible innovation, ultimately leading to the cancellation of the testbed
experiment.
Looking back at the SPICE project, it is evident that ethical
considerations are essential from the start. The initial lack of attention to
ethical and social implications resulted in a huge loss of time, money, and
effort. If the project had followed responsible innovation principles from
the beginning, many of these issues could have been foreseen and addressed
earlier, possibly avoiding the need for cancellation. The SPICE project's
legacy is its contribution to the discourse on responsible innovation and
geoengineering governance. By showing the need for ethical principles in
scientific research, SPICE provides a useful lesson for future projects. It
highlights the importance of thinking ethically from the start to ensure that
technological developments are pursued in a socially and ethically
responsible way, ultimately saving resources and building public trust.

Recommendations for Future Frameworks

Future regulatory frameworks should be adaptable, inclusive, and internationally harmonized. Recommendations include fostering
transparency, promoting ethical standards, and encouraging multi-
stakeholder collaboration.
As a starting point, we can consider four general dimensions for any
future framework (Stilgoe et al., 2013):
Anticipation, which in responsible innovation requires systematically
considering future consequences and preparing for various potential
outcomes. It involves exploring questions about risk distribution,
unforeseen impacts, and unknown factors to better shape resilient and
socially beneficial innovations. Questions that help apply this dimension
include: How will the risks and benefits be distributed? What other impacts
can we anticipate? What don’t we know about?
Reflexivity, which calls for ongoing self-examination within the
innovation process. It emphasizes the need for innovators to critically assess
their methodologies, assumptions, and the broader implications of their
work, ensuring that ethical and societal standards are rigorously applied and
evaluated. Key questions include: How do we know we are right? How
should standards be drawn up and applied? How should risks and benefits
be defined and measured?
Inclusion, which ensures that a broad array of stakeholders is engaged in
the innovation process. This dimension emphasizes the importance of
democratic participation, ensuring that different voices are heard and
considered in decision-making, thus enhancing the social relevance and
acceptance of innovations. Important questions to consider are: Who is in
control? Who is taking part? Who will benefit?
Responsiveness, which in responsible innovation is about the ability to
adapt and reorient based on new data, stakeholder input, and changing
circumstances. It ensures that the innovation process remains aligned with
public interests and ethical considerations, continually responding to
emerging challenges and opportunities. Relevant questions include: Why
are researchers doing it? Are these motivations transparent and in the public
interest? What are the alternatives?
Now, if we look at things from a business viewpoint, whether operating
on a small scale or as a large enterprise, it is essential to recognize that
responsible AI is not just about compliance but about embedding ethical
considerations into the core of AI design and implementation (IEEE SA,
2023). Businesses should adopt existing reporting frameworks and key
performance indicators to measure the impact of AI on people and the
planet, rather than solely focusing on economic growth. Aligning
responsible AI metrics with a company's existing reporting structures is
crucial. By integrating these metrics into familiar frameworks, businesses
can gain a holistic view of their progress, ensuring that AI initiatives
contribute positively to societal and environmental goals.
Additionally, asking critical questions can help identify potential risks
and benefits early in the design process. Whether for start-ups, enterprises, or non-profits, governance structures should be tailored to the
size and nature of the organization. The key lies in understanding that
economic, environmental, and social factors are intrinsically linked to long-
term enterprise value creation. Companies that align their objectives with
societal goals, as articulated in the Sustainable Development Goals, are
better positioned to achieve sustainable growth. By prioritizing people and
the planet, businesses not only drive positive outcomes for themselves but
also contribute to broader societal and environmental well-being. This
holistic approach ensures that AI systems are developed responsibly,
fostering trust and delivering lasting benefits for all stakeholders.
Last but not least, we must also consider the actions that the public sector might and should take. UNESCO presented the following considerations as guidelines:
Respect, protection, and promotion of human rights and fundamental
freedoms and human dignity: the intrinsic and inviolable dignity of every
human forms the foundation of universal human rights and freedoms. These
rights must be respected, protected, and promoted throughout the AI system
life cycle, ensuring that no human is harmed or subordinated in any manner.
AI systems should enhance human quality of life without violating human
rights or dignity.
Environment and ecosystem flourishing: AI systems should promote and
protect environmental and ecosystem well-being throughout their life cycle.
This includes compliance with international and domestic laws, reducing
environmental impact such as the carbon footprint, and preventing the
unsustainable use of natural resources. All actors must strive to minimize
climate change risks and environmental degradation.
Ensuring diversity and inclusiveness: AI systems must respect, protect,
and promote diversity and inclusiveness, ensuring active participation of all
individuals regardless of race, gender, age, religion, or other grounds.
Efforts should be made to overcome technological infrastructure gaps and
legal frameworks, particularly in low- and middle-income countries, to
ensure equitable access and participation in AI developments.
Living in peaceful, just, and interconnected societies: AI actors should
contribute to peaceful and just societies, promoting interconnectedness
among all living beings and the natural environment. This involves
fostering solidarity, peaceful relations, and care for others, ensuring that AI
systems support the collective well-being of humanity and the environment.
In summary, the values and principles of UNESCO provide a practical
and effective way to apply ethical standards in AI governance. AI systems
should be designed to pursue valid objectives that consider their contexts
while ensuring that they do no harm to people, human rights, communities,
society or environment.
Methods for evaluating and mitigating risks in AI must continue to be
logical, fair, and contextually appropriate without violating underlying
values or rights of individuals. Management of the lifecycle of AI should
address safety and security concerns by preventing unintended damage and
risks. For this purpose, high-quality data frameworks must be used to
protect privacy as well as promote safe development of AI technologies.
Social justice can only be achieved if all forms of discrimination are
eliminated within any activity related to AI. Therefore, fairness demands
that different groups have equal opportunities to access benefits from such
technology, with keen attention being paid to bridging digital divides
among various populations worldwide. Economic growth needs to be
sustainable, which means measures taken during its realization should not
only protect our planet but also heal it where possible, while continually
checking how much these advancements affect different spheres in society.
In this regard, policies promoting sustainable development need
consideration alongside continuous monitoring of environmental
implications brought about by AI adoption within economies across
borders. Privacy laws should be applied across the globe when personal
data is handled by any system, including those using AI, in compliance with
international standards by enacting strong frameworks for safeguarding
information and setting up oversight mechanisms. Human intervention and accountability are essential, especially in critical decisions. AI systems must make their algorithms transparent and explainable, and users should be able to demand such disclosure, in keeping with human rights and ethical values that require transparency and the provision of explanations. Ethical responsibilities together with legal liability for the
impacts caused by AI should be shouldered by all actors involved in it,
supported through oversight mechanisms for impact assessment and
accountability. Public awareness and understanding of AI should be
promoted through education and civic engagement, ensuring informed
public participation. AI governance needs to be inclusive, transparent,
multidisciplinary, and adaptive, with cooperation among stakeholders of the
Quintuple Helix model being necessary to address ethical AI challenges and
opportunities and ensure that the benefits of AI are distributed globally,
while the geopolitical AI race becomes a race for technology-empowered
humanity.

References
Carayannis, E. G., & Campbell, D. F. J. (2010). Triple helix, quadruple
helix and quintuple helix and how do knowledge, innovation, and the
environment relate to each other? International Journal of Social
Ecology and Sustainable Development, 1(1), 41–69.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum,
V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke,
P., & Vayena, E. (2018). AI4People—An ethical framework for a good
AI society: Opportunities, risks, principles, and recommendations.
Minds & Machines, 28, 689–707. https://doi.org/10.1007/s11023-018-
9482-5
Government of Canada. (2023). Consultation on the development of a
Canadian code of practice for generative artificial intelligence
systems. https://ised-isde.canada.ca/site/ised/en/consultation-
development-canadian-code-practice-generative-artificial-intelligence-
systems
Hao, K. (2019). Señales de que el sector de la IA ya es un poco menos
sexista y racista, MIT Technology Review.
https://www.technologyreview.es/s/11711/senales-de-que-el-sector-de-
la-ia-ya-es-un-poco-menos-sexista-y-racista
Heikkilä, M. (2023). Las cinco claves sobre la Ley de la inteligencia
artificial de la UE, MIT Technology Review.
https://www.technologyreview.es/s/15997/las-cinco-claves-sobre-la-
ley-de-la-inteligencia-artificial-de-la-ue
IEEE SA. (2023). Prioritizing People and Planet as the Metrics for
Responsible AI. https://standards.ieee.org/wp-
content/uploads/2023/07/ead-prioritizing-people-planet.pdf
Le Fevre, E. M., & Heleg, G. (2022). Artificial intelligence in medicine:
Laws, regulations, and privacy. In C. N. De Cecco, M. van Assen, &
T. Leiner (Eds.) Artificial intelligence in cardiothoracic imaging.
Contemporary medical imaging. Humana. https://doi.org/10.1007/978-
3-030-92087-6_49
Microsoft. (2024). Responsible AI Transparency Report.
https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW1l5BO
Ministers for the Department of Industry, Science and Resources. (2024,
January 17). Action to help ensure AI is safe and responsible.
https://www.minister.industry.gov.au/ministers/husic/media-
releases/action-help-ensure-ai-safe-and-responsible#main-content
Mökander, J., Schuett, J., Kirk, H.R., & Floridi, L.. (2023). Auditing large
language models: A three-layered approach. AI Ethics.
https://doi.org/10.1007/s43681-023-00289-2
Morini Bianzino, N. et al. (2024). The Artificial Intelligence (AI) global
regulatory landscape. Policy trends and considerations to build
confidence in AI, EY. https://assets.ey.com/content/dam/ey-sites/ey-
com/en_gl/topics/ai/ey-the-artificial-intelligence-ai-global-regulatory-
landscape.pdf
Müller, A., & Pannatier, E. (2024, March 5). The Council of Europe's
Convention on AI: No free ride for tech companies and security
authorities! Algorithm Watch. https://algorithmwatch.ch/en/council-
of-europes-ai-convention-no-free-ride/
O’Shaughnessy, M., & Sheenan, M. (2023, February 14). Lessons From the
World's Two Experiments in AI Governance. Carnegie Endowment for
International Peace.
https://carnegieendowment.org/posts/2023/02/lessons-from-the-
worlds-two-experiments-in-ai-governance?lang=en
Rock, D., & Grant, H. (2016). Why diverse teams are smarter, Harvard
Business Review. https://hbr.org/2016/11/why-diverse-teams-are-
smarter
Royal Society. (2009, September). Geoengineering the climate: Science,
governance and uncertainty, RS Policy document 10/09.
https://royalsociety.org/-/media/policy/publications/2009/8693.pdf
Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework
for responsible innovation. Research Policy, 42(9), 1568–1580.
Ubaldi, B., Le Fevre, E. M., Petrucci, E., Marchionni, P., Biancalana, C.,
Hiltunen, N., Intravaia, D. M., & Yang, C. (2019). State of the art in
the use of emerging technologies in the public sector. OECD working
papers on public governance, No. 31, OECD Publishing.
https://doi.org/10.1787/932780bc-en
Von Hippel, E. (2005). Democratizing innovation. MIT Press.
Winiwarter, V., & Knoll, M. (2007). Umweltgeschichte. Böhlau.
Yang, Z. (2024, January 17). Four things to know about China's new AI
rules in 2024, MIT Technology Review.
https://www.technologyreview.com/2024/01/17/1086704/china-ai-
regulation-changes-2024/
8
EXPLORING THE RECENT
ADVANCEMENTS AND
TRANSFORMATIVE IMPACT OF
ARTIFICIAL INTELLIGENCE
Nicolas Griedlich, Liubomyr Bregman, and Anke Joubert*

DOI: 10.4324/9781003518495-11

Introduction
In 2019, one of the most important scientists and innovators of our artificial intelligence (AI) era (Bohannon, 2016) wrote his well-known article "Artificial Intelligence—The Revolution Hasn't Happened Yet" (Jordan, 2019). The question arises: is his proclamation still applicable in 2024? In this chapter, we return to this question repeatedly, aiming to answer it by reviewing the evolution of AI. We analyze the most recent advancements, explore the challenges, and study the economic impact of AI, all to ascertain whether we are already living in revolutionary times in 2024.
Back in 2019, the scientist (Jordan, 2019) argued that the perception and
expectation of an emergent intelligence in silicon that rivals human
intelligence entertains, thrills, and frightens us but ultimately distracts us
from the actual narrative of the current era. He asserts that AI, the mantra of
our era, still needs to fully materialize in the way many people understand
it. According to Jordan, real AI success is not the pursuit of human-
imitative AI, but rather, it is deeply embedded in specific engineering
challenges, such as document retrieval, text classification, fraud detection,
recommendation systems, and personalized search (Jordan, 2019). The
developments often termed AI arose mainly in engineering fields associated
with low-level pattern recognition and control theory (Jordan, 2019). Even
today there is no universal definition of AI, but for this chapter we will use
the following. AI is the use of machines to mirror human thought processes
to execute tasks or solve issues. Modern AI innovates beyond this by
having the ability to learn and apply patterns to similar situations (Deloitte,
2022). Advancing this idea is Generative AI, which can be defined as an
“artificial intelligence designed to generate new content” such as text,
images, code, and more (Deloitte AI Institute, 2024).
* Disclaimer: Please note, the facts and figures presented in this chapter were
accurate and up to date at the time of writing. However, the AI industry is
rapidly evolving, and it is possible that new data might supersede the
information previously available. Therefore, we recommend referring to the
originally cited data sources for any clarifications.
AI has evolved significantly over the past decades from the concept of
human-imitative AI in the 1960s, focusing on high-level cognitive
capabilities, to today's broader field that intersects with operations research,
statistics, pattern recognition, information theory, and control theory
(Wiener, 1950). The backpropagation algorithm, core to the so-called “AI
revolution,” emerged from control theory in the early 1980s (Rumelhart et
al., 1986). This algorithm became a basic block of so-called neural
networks (multi-layer perceptron). Over the last two decades, significant
advancements have been made by both industry and academia. This
involves creating systems that augment human intelligence and creativity,
such as search engines and natural language translation. According to
Jordan, the too-narrow focus on human-imitative AI prevents diverse
perspectives from contributing to the understanding and shaping of the
technology (Jordan, 2019).
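To make the gradient idea behind backpropagation concrete, the sketch below shows its simplest single-neuron form: a sigmoid unit trained by gradient descent to learn the logical AND function. It is purely illustrative (plain Python, no framework) and is our own minimal example, not material from the sources cited above.

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# AND truth table: the only positive example is (1, 1).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 1.0

for epoch in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Propagate the error backward: derivative of squared error
        # with respect to each weight (the delta rule).
        delta = (out - target) * out * (1 - out)
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b -= lr * delta

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w[0] * x1 + w[1] * x2 + b), 2), "target:", target)

After training, the predictions for the three negative cases sit close to 0 and the prediction for (1, 1) sits close to 1, which is the same error-driven weight-update principle that multi-layer networks apply layer by layer.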
Over the recent decade, Deep Learning, the field built on the practical implementation of backpropagation (Rumelhart et al., 1986), has enabled advancements in image and speech recognition and became the core driver of progress in the 2010s (Lecun et al., 2015). The evolution of cloud and computational power since 2010, coupled with the decline in computational costs, has significantly driven the growth of Deep Learning and AI. In the pre-deep learning era, the amount of computation used by AI models doubled every 21.3 months, consistent with Moore's Law (Oxford University Press, n.d.). With the introduction of Deep Learning, the doubling time dropped to around 5.7 months, indicating a surge in computational demand. State-of-the-art chips such as Nvidia's H100 and A100, crucial for training AI models efficiently, remain scarce and supplies are limited. Despite these constraints, there has been a noticeable shift toward more intelligent algorithms that require less computation for the same output. For instance, by 2020, training a model to the level of a 2012 AI model required 44 times less computing than it did in 2012. This trend signifies that algorithmic progress, or compute efficiency, has contributed more to performance improvement than pure hardware efficiency (Vipra & West, 2023). This progress largely went unnoticed by the public for the last decade but has powered organizations like Google, Netflix, Facebook, and Amazon (Jordan, 2019).
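As a quick back-of-the-envelope illustration of what those doubling times imply, the short Python snippet below compares compute growth over a five-year window under the 21.3-month and 5.7-month regimes quoted above; the 60-month horizon is our own arbitrary choice for illustration.

months = 60
for label, doubling_time in [("pre-deep-learning (21.3 months)", 21.3),
                             ("deep-learning era (5.7 months)", 5.7)]:
    growth = 2 ** (months / doubling_time)
    print(f"{label}: ~{growth:,.0f}x more compute after {months} months")
# Roughly 7x growth under the slower regime versus on the order of 1,500x
# under the faster one, which illustrates why compute supply became a bottleneck.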
In the ensuing sections, we delve into an in-depth analysis of the most
recent advancements in AI. Our discussion highlights the revolutionary
potential of these advancements, while also casting light upon the key
challenges they present. Additionally, we explore the expected impact on
productivity through numerous research studies. These studies offer insight
into potential impacts at both the micro level of individual productivity and
the macro level of economic growth.

The Most Recent Advancements


The 2020s marked a pivotal period in the evolution of AI-based language technologies. In 2020, OpenAI unveiled GPT-3, a revolutionary language model, followed in 2021 by the multifaceted DALL·E, capable of generating images from text, and DeepMind's AlphaFold 2, which significantly advanced protein structure prediction. The year 2022 saw the emergence of OpenAI's DALL·E 2 and Stability AI's Stable Diffusion. That year also witnessed the debut of ChatGPT, built on GPT-3.5, which quickly catalyzed interest in Generative AI by effectively democratizing its usage. Since 2019, Aleph Alpha has been a trailblazer in implementing AI in sensitive sectors, receiving significant attention. In 2023, a "Generative AI arms race" ensued, with tech titans developing advanced AI utilities, including Microsoft's integration of ChatGPT into Bing, Google's AI chatbots Bard and Gemini, Amazon's Titan models and Bedrock service, and Meta's Llama 2. Mistral AI, established in early 2023, released the robust Mistral 7B in Q3 and the Mixtral 8x7B by year end. The same year, OpenAI's GPT-4 introduced an extension for live data connections. These advancements were
fostered by remarkable strides in engineering and computational scaling.
The end of 2023 and Q1 2024 saw significant progress in model scaling and vision capabilities, particularly with Google's Gemini (1M-token context) and Anthropic's Claude 3 (backed by Amazon), which rivalled OpenAI's GPT-4. This period marked the transition to multimodal models, combining text with image and video input/output. For example, OpenAI released GPT-4 with vision and Sora for video generation, while similar capabilities were presented by other big tech companies.
A study comparing Gemini Pro and GPT-4V across metrics like vision-language capability, human interaction, temporal comprehension, and IQ found GPT-4V slightly superior. GPT-4V outperformed in IQ tests, object combinations, and image-text comprehension, partly due to Gemini's inability to process multiple images. Gemini also lagged in industrial applications, especially embodied agents and GUI navigation. However, Gemini provided more detailed outputs, while GPT-4V excelled in precision and succinctness (Qi et al., 2023).
Industry dynamics continue to change, with weekly updates prompting
revisions to this chapter. Each model has its strengths and weaknesses, and
although no clear leader has emerged, the industry's rapid growth and
evolution caution us against presuming winners prematurely.
What are the latest industry trends driving the adoption and impact of Generative AI? Businesses and organizations of all sizes seek to extract value from recent technological advancements. Industries are finding ways to combine public knowledge with proprietary internal information to boost productivity and produce new use cases (Deloitte, 2024). Surveys report that, in a work context, people most often use text-based models (Deloitte Switzerland, 2023). In this chapter, we discuss the latest and upcoming trends in Large Language Models (LLMs) and embedding models, highlighting how they can potentially improve the efficiency of everyday economic participants working with textual data.
Anticipated developments encompass the surge of open source LLMs,
LLMs tailored to various use cases, and widespread adoption of model
customization. The customization of Generative AI/LLMs is enhancing
productivity and contributing to the potential for economic growth.
Numerous industry leaders, including Deloitte, have embraced AI
advancements in conjunction with their internal knowledge base, pursuing
the concept of deriving fundamental value through the adoption and
customization of AI (Foy, 2024). We have identified the three most popular
techniques of LLM customization: retrieval-augmented generation (RAG),
finetuning, and intelligent agents. In the following paragraphs, we explain
these techniques, elucidating their core variances and practical applications.
The RAG method combines vector search with LLMs, offering a high degree of customization, as first proposed in 2020 (Lewis et al., 2021). It combines a pre-trained sequence-to-sequence model with a dense vector index, establishing a new benchmark for accurate text generation in response to open-domain questions. It also allows knowledge to be updated without additional training. The RAG technique combines an LLM with search over one's own database, as illustrated in Figure 8.1. For example, when you ask a question about a book, RAG, instead of reading the whole book to find an answer, quickly picks out the pages that help answer your question and uses them to create an answer. This way, it does not have to read the entire book each time; it just picks the parts it needs.
Figure 8.1 RAG schema.
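A minimal, self-contained Python sketch of the RAG pattern just described follows. The embed() and generate() functions are toy stand-ins (a bag-of-words counter and a placeholder string) for a real embedding model and a real LLM call, and the book passages are invented purely for the example.

import math
from collections import Counter

def embed(text):
    # Toy "embedding": word-frequency vector, standing in for a real embedding model.
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, top_k=2):
    # Rank indexed passages by similarity to the query and keep only the best ones.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine_similarity(q, embed(d)), reverse=True)
    return ranked[:top_k]

def generate(prompt):
    # Placeholder for a real LLM call (e.g., a hosted chat-completion endpoint).
    return f"[LLM answer grounded in the prompt below]\n{prompt}"

def rag_answer(query, documents):
    # Augment the prompt with retrieved passages instead of the whole corpus.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

if __name__ == "__main__":
    book_passages = [
        "Chapter 3 explains how the protagonist leaves the village.",
        "Chapter 7 describes the treaty signed between the two kingdoms.",
        "Chapter 9 covers the harvest festival and its customs.",
    ]
    print(rag_answer("What treaty was signed?", book_passages))

In a production setting, the frequency vectors would be replaced by dense embeddings stored in a vector index, but the retrieve-then-generate flow is the same.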

LLM finetuning is a technique used in AI development where a pre-trained LLM, initially trained on a large corpus of data, is further conditioned on a smaller, specific dataset. This process helps the model learn new skills and incorporate proprietary or domain-specific data, enhancing its performance in a particular context or industry (Balaguer et al., 2024). See Figure 8.2. Following the previous example, with finetuning you teach the LLM more specific things about your book's topics by showing it a large number of examples.
Figure 8.2 Finetuning schema.
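By contrast, a finetuning workflow starts from curated domain examples. The Python sketch below prepares a handful of hypothetical book-related question-and-answer pairs in a chat-style JSONL format; the exact schema and upload step vary by provider, so the field names used here are an assumption rather than any specific vendor's API.

import json

# Hypothetical domain-specific training pairs about "the book".
book_qa_examples = [
    ("Who signs the treaty in Chapter 7?", "The two kingdoms' envoys sign it at the border fort."),
    ("When is the harvest festival held?", "In Chapter 9 it takes place at the end of autumn."),
]

def to_finetuning_record(question, answer):
    # Wrap one Q/A pair as a chat-style training record (schema is illustrative).
    return {
        "messages": [
            {"role": "system", "content": "You are an expert on this book."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

with open("book_finetune.jsonl", "w", encoding="utf-8") as f:
    for q, a in book_qa_examples:
        f.write(json.dumps(to_finetuning_record(q, a)) + "\n")

# The resulting file would then be submitted to a finetuning job; the tuned model
# answers book-specific questions directly, without retrieval at query time.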

In early 2024, Microsoft researchers presented a study comparing finetuning and RAG methods (Balaguer et al., 2024). In contrast to RAG,
finetuning integrates knowledge directly into the model, leading to precise,
concise outputs, but it comes with an increased initial cost. While RAG
augments the input with external data, improves accuracy in large models
and has a lower initial cost, it can produce more verbose outputs.
The performance of RAG and finetuning with LLMs on models like GPT-4, GPT-3.5, and Llama2-13B was evaluated by researchers from Microsoft. This evaluation, in one particular experiment, revealed that GPT-4 learned just 47% of the newly introduced knowledge before customization; its score then rose to 72% using RAG and 74% using the finetuned model. GPT-4 ranked first with an 80% score when combined with RAG, followed by Vicuna and Llama2-chat 13B with 79% and 75%, respectively. Interestingly, GPT-4 and Llama2-chat 13B yielded similar results without RAG integration. GPT-4 also performed excellently in the finetuned setting, scoring 86% with RAG and 81% without. These results highlight that the impact of RAG and finetuning on performance is comparable, but they work best in combination when applied to domain-specific knowledge (Balaguer et al., 2024). This study
only compared a few models from different producers. The aim of the study
was not to identify the best performing model, but rather to highlight the
similarities in the directional impact of different customization techniques.
Finally, agent design is the most recent advancement in boosting productivity with customized AI. AI agents have become a popular trend in AI adoption, building on the classical idea of an intelligent agent that perceives and acts upon its environment until specific tasks are achieved satisfactorily (Russell & Norvig, 2010). This is combined with Google research on Chain-of-Thought prompting, the idea that a series of LLM prompts can improve the ability of an LLM to perform complex reasoning (Wei et al., 2023). The idea of implementing a rational agent, a concept borrowed from economics, as software with sequential reasoning has led to an industry trend. It has resulted in solutions like Auto-GPT, which can automate not only small tasks but also complete larger tasks with an acceptable level of quality, making it a productivity booster that meets companies' expectations.
Following the book example from the previous paragraphs, intelligent agents act like specialized tutors: they are skilled at extracting and conveying information from the LLM's vast knowledge, acting as a bridge between that knowledge and specific tasks. They ensure the book is used effectively, helping to customize its use and make it even more useful for meeting specific requirements.
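The loop below is a minimal Python sketch of that agent idea: a stub "planner" standing in for the LLM picks a tool, observes the result, and decides when to stop. The tool names, the stop condition, and the single lookup tool are illustrative assumptions on our part, not a description of Auto-GPT or any specific framework.

def llm_decide(task, history):
    # Stand-in for an LLM planning step (chain-of-thought happens inside a real model).
    if not history:
        return ("lookup", task)          # first step: retrieve relevant material
    return ("finish", f"Summary of findings for '{task}': {history[-1]}")

def lookup(query):
    # Illustrative tool: in practice this could be RAG search, a browser, or an API call.
    return f"relevant passage about {query}"

TOOLS = {"lookup": lookup}

def run_agent(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, argument = llm_decide(task, history)
        if action == "finish":
            return argument
        history.append(TOOLS[action](argument))  # observe the tool output and loop again
    return "stopped without finishing"

print(run_agent("the treaty in Chapter 7"))

The essential design choice is the sense-plan-act loop: each observation is fed back into the next planning step, which is what lets agents string several tool calls together to complete a larger task.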
Overall, all of these AI techniques are evolving rapidly across industries. While there are debates about the efficiency and investment required in each case, we expect to see more data by the end of 2024. With little doubt, we can see that adopting AI on private datasets makes it valuable and impactful at the organizational level. But does it make AI smarter? We will debate this in the next section.

When to Expect the Revolution?


Many anticipate the AI revolution in two ways: the attainment of artificial
general intelligence (AGI), capable of intellectual tasks matching or
surpassing humans, and the full integration of AI across all industries. We
will discuss the current status of both ideas.
Are we moving toward the revolution and AGI? We are certainly getting
there faster than many expected. Often named the “Godfather of AI,”
Geoffrey Hinton, a scientist whose research focuses on the development of
neural network architectures and learning algorithm-powered technologies
like GPT, initially assumed we were still 30 or 50 years away from AI
surpassing humans However, with recent advancements, he changed his
mind to believe that AI can quickly surpass human intelligence, even his
own (Metz, 2023).
A Microsoft Research study in 2023 suggested that an early rendition of
OpenAI's GPT-4 showed greater general intelligence than its predecessors,
demonstrating human-equivalent abilities in various areas like mathematics,
coding, and law. This ignited discussions about whether GPT-4 could be
seen as a preliminary form of AGI, underscoring the need for increased
scrutiny of these systems (Bubeck et al., 2023).
Can we assume that the current generation of AI theory, models, and products is progressing toward AGI? To evaluate this, we would require an independent framework for model evaluation, such as one proposed by a physicist working at a very different pole of modern science. Hawking discussed the value of theory in his book A Brief History of Time, stating that "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations based on a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations" (Hawking, 1988). Hawking also suggested that a good model is always elegant (Hawking, 2005). Would recent AI developments pass Hawking's criteria? It seems highly unlikely that current developments are the end point of the revolution, as the success of human-like AI is largely driven by scale, and the logical next step is evolution, commoditization, and simplification (Zhao et al., 2023).
In the end, we tend to believe in evolutionary approaches where value
drivers of AI will trigger more significant investments and quick progress.

Main Hurdles AI Brings: Privacy, Cost, Environment, Adoption, and Legal Issues
The disruption achieved by science and technology cannot come at zero cost or without additional challenges. In this chapter, we review the most prevalent fears in industry: privacy and trust concerns, adoption cost, legal and regulatory repercussions, and environmental impact. According to the Deloitte State of Ethics and Trust in Technology annual report, Generative AI presents both promise and challenges for organizations as they navigate its rapid adoption (Deloitte US, 2023). The rapid development of Generative AI tools has underscored the need for heightened attention to the ethical dimensions of emerging technologies. According to a Deloitte survey, companies most often cite data privacy (22%) and transparency (14%) as the top challenges in AI adoption (Deloitte US, 2023). While engineers often focus on improving accuracy and minimizing hallucinations, which concern only 9% of survey respondents, the majority are concerned with issues whose solution lies in building trust through governance, legal adoption, and practice.
Privacy concerns have been intensified by the increasing adoption of Generative AI tools (Figure 8.3). Users' private data is leveraged to enhance the performance of AI systems, introducing significant risks, notably the potential for data to be misused for unintended purposes. Moreover, the processing of personal data by Generative AI algorithms carries a risk of unintended exposure or misuse of personal and sensitive information. LLMs may lead to incidents such as training data extraction, where the AI reveals sensitive user information during conversations (Deloitte US, 2023).

Figure 8.3 Deloitte survey, top concerns companies have about Generative AI.

Other Deloitte research from 2023 suggests that AI technology is far from free and should be focused on high-value tasks. Moreover, while the desire of software companies to monetize AI is understandable, given the revenue advantages and the escalating expenses involved in developing AI capabilities, it is important to underline that adopting general-purpose Generative AI is costly. Companies need to invest billions in acquiring the silicon-based components used for AI training, or alternatively purchase compute instances from cloud or chip companies. Projections indicate that large cloud operators will spend between 3% and 13% of their 2023 capital expenditure on AI (Deloitte Insights, 2023). Additionally, operational costs are substantial, with Generative AI query costs ranging between USD 0.01 and USD 0.36 per query (Deloitte Insights, 2023). Although decreases in chip and operational costs are expected in the future, this is likely to occur only once the existing Generative AI chip scarcity is resolved. According to Deloitte's predictions, this issue might not abate until the latter half of 2024 (Deloitte Insights, 2023). A survey conducted in June 2023 shows that while respondents acknowledged the vast long-term potential of Generative AI, some viewed AI functions as essential service offerings that should not entail premiums (Deloitte Insights, 2023).
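To give a sense of scale for the per-query figures quoted above (USD 0.01 to USD 0.36), the snippet below applies them to a hypothetical workload of one million queries per day; the workload size is our own assumption, chosen purely for illustration.

queries_per_day = 1_000_000
for cost_per_query in (0.01, 0.36):
    annual_cost = cost_per_query * queries_per_day * 365
    print(f"${cost_per_query:.2f}/query -> ${annual_cost:,.0f} per year")
# Roughly $3.7 million to $131 million per year at this volume, which is why
# the research recommends focusing Generative AI on high-value tasks.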
Why are numerous organizations not advancing beyond the proof-of-
concept (POC) stage? In terms of adoption, despite its relative nascence in
the marketplace, a significant portion of companies are already testing or
using Generative AI tools, with 74% beginning testing, 65% using the
technologies internally, and 31% using them for external consumption
(Deloitte US, 2023). However, there seem to be only a few companies that
can effectively transition from POC to production. A study by Gartner highlights that only 10% of organizations had succeeded in deploying Generative AI into production in 2023 (Gartner, 2023). This low transition rate can be linked to various factors. Foremost is a lack of comprehensive understanding of the still-emerging technology, coupled with a lack of skills in this area. Moreover, the complexity of integrating Generative AI with existing systems poses an obstacle to companies. Additionally, concern about the reliability of Generative AI in a production environment is a deterrent, as is the absence of a strategic path to deployment. Finally, regulatory uncertainty and fear of non-compliance, with the potential negative consequences that entails, make some companies hesitant to move beyond the POC stage.
As we move forward into 2024, two major legal texts will significantly influence the growth of Generative AI: the General Data Protection Regulation and the upcoming EU AI Act (Lee et al., 2023). With the
introduction of the AI Act, several countries, such as Germany, France, and
Italy, have exhibited some resistance to certain regulations on general-
purpose AI. Their primary concern focused on the perceived threat to
innovative development, specifically for European startups striving to
compete with their American counterparts (Hartmann, 2023). Moreover, as
Generative AI is still emerging and changing (the recent appearance of
GPT-4 in 2023), it remains difficult to draft comprehensive regulations that
could adapt to technological advancements. This creates a complex
environment for different actors, namely developers and users of Generative
AI systems, to navigate these emerging and evolving rules.
Last, this chapter discusses how advancements in machine learning
present environmental challenges due to substantial resource and energy
usage. Over the last decade, data centers have contributed considerably to
global electricity consumption, leading to increased CO2 emissions. In
2022, data centers accounted for around 1.3% of global electricity usage,
excluding the annual consumption of 0.4% for cryptocurrency mining. Data
centers and data transmission networks contribute significantly to
greenhouse gas (GHG) emissions, responsible for approximately 1% of
energy-related GHG emissions (IEA, 2023). Moreover, the training of
algorithms further adds to global GHG emissions. Although data about the most popular LLMs are not always accessible, some research has attempted to evaluate the carbon footprint of training BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), an open-source LLM. The study quantifies the carbon footprint of BLOOM, a 176-
billion parameter language model, throughout its lifecycle. Estimates
suggest that BLOOM's final training released around 24.7 tons of CO2
equivalent, considering only dynamic power consumption. Inclusive of all
processes, from equipment production to energy-based operational use, the
CO2 emissions increase to 50.5 tons (Luccioni et al., 2022). This might be
compared to the annual consumption of 50 households.
In conclusion, the underlying research and the evolution of software AI
solutions are advancing rapidly and have even impressed the most
prominent industry experts. While available AI solutions still have room for
improvement in accuracy, computational efficiency, and elegance,
industries, companies, and the workforce are actively embracing new
developments, investing in customization, extracting value from internal
knowledge, and considering the economic impact and productivity growth
(Deloitte, 2024). Customization techniques, such as RAG, finetuning, and
intelligent agents, are swiftly addressing industry needs. While companies
still report many concerns about prices and trust, most industries actively
explore AI tools and programs. By the end of 2024, we anticipate tangible
results across economies, which will be reflected in economic growth
figures over the next few years.

Transformative Impact of Gen AI


This section focuses on the notable and measurable impacts AI can bring
across economies. People often associate AI with fears envisaged by
science fiction (Hermann, 2023). However, today, according to a survey
conducted by Deloitte Switzerland in mid-2023, 61% of employees who
work with a computer or similar device already use Generative AI in their
jobs (Deloitte Switzerland, 2023). Figure 8.4 shows that, for work purposes, 47% of survey participants use Generative AI for text-related tasks and 26% for images. The same study showed that 43% are somewhat concerned that "AI is coming for their jobs."

Figure 8.4 Deloitte survey in Switzerland: Generative AI use for work.

Progress in technology can slow down due to concerns, which are usually attributed to three categories. Economic worries refer to job losses due to AI that can perform the work; ethical worries concern AI making morally incorrect decisions; and environmental worries forecast the impact of AI practices and technologies on the environment (Vinuesa et al., 2020).
In 2023, it was estimated that AI and Generative AI expose two-thirds of current jobs to some degree of automation. Some projections even suggest that by 2030 automation may displace up to 300 million jobs globally (Goldman Sachs, 2023). Do these "new
automation” technologies have different impact patterns from previous ones
driven by electricity, computers, the internet, cloud computing, and other
advancements? Most economists agree that innovation and automation do
not eliminate jobs; they transform them. The World Economic Forum
estimates that by 2025, technology will create at least 12 million more jobs
than it destroys, a sign that, in the long run, automation will be a net
positive for society (World Economic Forum, 2020).
The adoption of Generative AI technologies catalyzes the emergence of new industries and previously non-existent roles, fueling demand for specialized skills, specifically for individuals who understand and can leverage innovative technologies in areas like data science and machine learning (Karangutkar, 2023). In our view, the new wave of automation driven by Generative AI, applied to previously manual tasks, will transform the way we work, eliminate some tasks from our to-do lists, create new types of jobs and, most importantly in the context of this chapter, boost individual productivity, translating into long-term growth.
Generative AI is not intended to replace jobs, but to affect the tasks and
skills used for work. The technology aids in performing tasks faster and
better, thereby changing how tasks are assigned. It also paves the way for
new skills needed for professionals to stay relevant. As AI takes over
routine tasks, organizations are transitioning toward a skills-based
approach, focusing on capabilities and desired outcomes. This approach
emphasizes a diverse workforce and adaptability. Industries, such as
advertising and customer service, are examples where AI is impacting roles,
pushing professionals toward more strategic roles and higher-level decision
making. Innovative companies like IKEA have used AI to broaden their
offerings and create a new revenue stream, focusing on lifelong learning
opportunities for employees. Generative AI also impacts human outcomes,
encouraging meaningful, value-driven work and employee growth (Deloitte
AI Institute, 2024).
The specific impact of Generative AI on productivity has also been
studied (see Table 8.1). The reported impact varies between 10% and
120%, but even the most conservative estimates align on the double-digit
impact on the productivity of “people with computers.” For example, recent
research has focused on examining the influence of Generative AI within
call center settings. A machine learning platform utilizing an LLM interface
was trained using chat and outcome data. Researchers used the average chat
completion time as a productivity metric and noted an average 14%
improvement after the implementation of the new tool (Brynjolfsson et al.,
2023). Another study conducted by Harvard researchers for knowledge-
intensive tasks showcased the effects of AI access (GPT-4) on 18 realistic
consulting tasks.

Table 8.1 Summary of reviewed datapoints on the impact of Generative AI on productivity

Source | Type of task | Suggested impact on productivity
NBER (Brynjolfsson et al., 2023) | Implementing Generative AI within call center settings | 14% improvement in chat completion time
Harvard & BCG (Dell’Acqua et al., 2023) | Application of AI (GPT-4) in consulting tasks | 12.2% increase in task completion and 25.1% faster task completion; 40% higher quality outcomes; 19% reduction in tasks beyond current AI capability
Nielsen Norman (Nielsen, 2023) | Use of AI (GPT-3.5) across various areas for customer service, business employees and programmers | Support agents handled 13.8% more customer inquiries; 59% more business documents were produced; programmers could code 126% more projects with an average productivity increase of 66%
Amazon Research (Amazon, 2023) | Not specified | Projection of a 49% productivity boost for around 88% of workers by 2028

The results revealed that consultants using AI demonstrated higher productivity (a 12.2% increase in task completion and 25.1% faster task
completion) and produced higher-quality outcomes (over 40% higher
quality). However, for tasks beyond current AI capability, AI usage led to a
19% decrease in producing correct solutions (Dell’Acqua et al., 2023). In
2023, a Nielsen Norman publication demonstrated the dramatic effectiveness of AI (GPT-3.5) through its experiments. The study showed
that users were significantly more efficient with AI assistance, which led to
increased productivity. Three studies showed that support agents using AI
could handle 13.8% more customer service inquiries, business professionals
using AI could write 59% more business documents, and programmers
using AI could code 126% more projects. The average productivity increase
from AI was 66%, equating to significant gains compared to natural
productivity growth (Nielsen, 2023). According to an Amazon publication,
AI could boost productivity by 49% for around 88% of workers before
2028. These results give economic optimists further encouragement (Amazon, 2023).
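As a rough, purely illustrative aggregation of the Amazon figures above (a 49% boost for about 88% of workers), the snippet below assumes equal weighting of workers and no productivity change for the remaining 12%; this naive calculation is ours, not taken from the report.

boosted_share, boost = 0.88, 0.49
aggregate_gain = boosted_share * boost + (1 - boosted_share) * 0.0
print(f"Implied aggregate labor-productivity gain: ~{aggregate_gain:.0%}")
# Around 43% under these simplifying assumptions, which illustrates why such
# projections feed the macro-level optimism discussed next.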
Individual productivity can potentially translate to overall economic
growth. The labor productivity growth rate has been slowly declining across
the largest economies for the last few decades despite all the technological
advancements made simultaneously. See Figure 8.5. In the recovery from
COVID-19, it even went negative. Can we deliver the Generative AI
productivity optimism from the previous paragraph at a macro level?
Undoubtedly, it will depend largely on how quickly and actively businesses
and organizations adopt and benefit from Generative AI technologies.

Figure 8.5 Productivity growth rate in G7, OECD.

A recent survey by Deloitte AI Institute revealed that 62% of leaders in business and technology express enthusiasm about Generative AI, although
30% also feel uncertainty (Deloitte, 2024). A total of 79% of respondents
anticipate Generative AI to fuel major transformations in their organization
and industry over the next three years. According to Amazon, 93% of
employers expect to use Generative AI in the next five years (Amazon,
2023). Remarkably, around a third of these respondents anticipate this
substantial transformation happening now (14%) or within the year (17%).
Thus, despite uncertainties, the potential of AI to revolutionize business
landscapes is largely acknowledged (Deloitte, 2024). Generative AI is
poised to significantly decrease the time and resources spent on tasks along
with producing novel solutions, thereby increasing overall efficiency and
productivity in organizations, which could reverse the currently declining productivity trend.

Conclusion
Over the years, AI has been an intriguing concept, providing captivating
ideas and applications, ranging from human-like androids to enhanced
cognitive capabilities. This chapter critically discusses not only the progress of AI technologies but also the potential consequences and implications of such advancements, particularly in the late 2020s.
Key takeaways include:

Privacy and trust issues, adoption costs, and environmental impact are
some of the concerns that accompany AI advancements.
The increasing automation may not eliminate jobs but transform them.
AI has the potential to significantly enhance individual productivity
and, consequently, overall economic growth.
Effective adoption of these technologies across businesses poses
challenges.

Investigations into these themes elucidate the trajectory of AI evolution and underscore the importance of prudent decision-making as we move
ahead with these impactful advancements.
Today, AI, predominantly Generative AI, empowers an array of
engineering applications including text classification, recommendation
systems, and even fraud detection. Deep Learning revolutionized this
domain, enabling recognition across diverse arenas, such as images and
speech, and largely triggered the Generative AI advancements we see today.
These advancements, although impressive, are not devoid of concerns or
challenges. Significant among these are privacy and trust issues, substantial
adoption costs, and potential environmental impact due to high energy
usage of AI computations. Moreover, as the technology advances, there are
emerging concerns about job displacement due to increased automation.
However, most industry experts agree that these moments of disruption
often lead not to job elimination, but to job transformation.
Concurrently, Generative AI is progressively impacting economies and
seems poised to instigate an economic boom. It has the potential to enhance
individual productivity significantly, which in turn, can translate into
overall economic growth. However, adopting and leveraging these
technologies effectively across businesses and organizations remains a
demanding task.
To sum up, the AI revolution, as heralded by M. Jordan, while not wholly
here, is certainly on the horizon. The progress made thus far signals the
advent of unprecedented advancements, fascinating capabilities, and critical
challenges. The potential economic impact is equally promising, providing
an optimistic picture of future economic growth. As with other
technological revolutions, tackling the accompanying challenges and
potential disruptions with grace and foresight will dictate the ultimate
impact of AI technologies on our society and economies.

References
Amazon. (2023). About Amazon. A new study reveals 5 ways AI will transform the workplace as we know it. https://www.aboutamazon.com/news/aws/how-ai-changes-workplaces-aws-report
Balaguer, A., Benara, V., Luiz de Freitas Cunha, R., Hendry, T., Holstein, D., & Marsman, J. (2024). RAG vs fine-tuning: Pipelines, tradeoffs, and a case study on agriculture. https://doi.org/10.48550/arXiv.2401.08406
Bohannon, J. (2016, April 20). Who's the Michael Jordan of computer
science? New tool ranks researchers’ influence. American Association
for the Advancement of Science.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at work.
National Bureau of Economic Research.
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., … Lundberg, S. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. https://doi.org/10.48550/arXiv.2303.12712
Dell’Acqua, F., McFowland, E., Mollick, E., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., & Lakhani, K. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper No. 24-013.
Deloitte. (2022). The age of AI: What exactly is AI? https://www.deloitte.com/mt/en/services/consulting/perspectives/mt-age-of-ai-2-what-is-it.html
Deloitte. (2024). The State of Generative AI in the Enterprise. https://www2.deloitte.com/us/en/pages/consulting/articles/state-of-generative-ai-in-enterprise.html
Deloitte AI Institute. (2024). Generative AI and the future of work. https://www.deloitte.com/global/en/services/consulting/research/generative-ai-and-the-future-of-work.html
Deloitte Insights. (2023). Deloitte's TMT Predictions 2024. https://www.deloitte.com/lu/en/Industries/tmt/research/tmt-predictions.html
Deloitte Switzerland. (2023). Generative AI's fast and furious entry into Switzerland. https://www.deloitte.com/ch/en/Industries/technology/research/generative-ai.html
Deloitte US. (2023). State of Ethics and Trust in Technology. https://www2.deloitte.com/content/dam/Deloitte/us/Documents/us-tte-annual-report-2023.pdf
Foy, S. (2024). Deloitte rolls out artificial intelligence chatbot to
employees. Financial Times.
Goldman Sachs. (2023). The potentially large effects of artificial intelligence on economic growth (Briggs/Kodnani).
Hartmann, T. (2023, November). Behind France's stance against regulating
powerful AI models. Euractiv.
Hawking, S. (1988). A brief history of time: From the big bang to black
holes. Bantam Dell Publishing Group.
Hawking, S. (2005). The origin of the universe. Academic Lecture.
Hermann, I. (2023). Artificial intelligence in fiction: Between narratives
and metaphors. AI & Society, 319–329.
IEA. (2023). Data centers & networks. https://www.iea.org/energy-system/buildings/data-centres-and-data-transmission-networks
Jordan, M. (2019, July 2). Artificial intelligence – the revolution hasn't happened yet. Harvard Data Science Review, 1(1), 1–9.
Karangutkar, A. (2023). The impact of artificial intelligence on job
displacement and the future of work. International Journal of
Advanced Research in Science, Communication and Technology
(IJARSCT) 1(2).
Lecun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.
Lee, P., Lucchini, L., & Seng Ah Lee, M. (2023). Walking the tightrope: As generative AI meets EU regulation, pragmatism is likely. Deloitte.
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., … Kiela, D. (2021). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474.
Luccioni, A., Viguier, S., & Ligozat, A.-L. (2022). Estimating the carbon footprint of BLOOM, a 176B parameter language model. Journal of Machine Learning Research, 24(253), 1–15.
Metz, C. (2023). The godfather of A.I. leaves Google and warns of danger
ahead. New York Times.
Nielsen, J. (2023). AI improves employee productivity by 66%. Nielsen Norman Group. https://www.nngroup.com/articles/ai-tools-productivity-gains/
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533–536.
Russell, S., & Norvig, P. (2010). Intelligent agents. In Artificial intelligence: A modern approach (3rd ed., pp. 34–63). Prentice Hall.
Gartner. (2023). Gartner poll finds 55% of organizations are in piloting or production mode with Generative AI. Stamford, CT: Gartner.
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., … Fuso Nerini, F. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 1–10.
Vipra, J., & West, S. M. (2023, September 27). Computational power and
AI. AI Now Institute.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., … Zhou, D. (2023). Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35, 24824–24837.
Wiener, N. (1950). In the human use of human beings (p. 1). Houghton
Mifflin.
World Economic Forum. (2020). The future of jobs report. https://www.weforum.org/publications/the-future-of-jobs-report-2020/
Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., & Wen, J. R.
(2023). A survey of large language models. arXiv preprint
arXiv:2303.18223.
Qi, Z., Fang, Y., Zhang, M., Sun, Z., Wu, T., Liu, Z., … & Zhao, H. (2023).
Gemini vs GPT-4V: A Preliminary Comparison and Combination of
Vision-Language Models Through Qualitative Cases. arXiv preprint
arXiv:2312.15011.
9
ARTIFICIAL INTELLIGENCE AS
A TOOL FOR CONFLICT
RESOLUTION
Ugo Tramballi

DOI: 10.4324/9781003518495-12

A group of well-intentioned international mediators involves artificial intelligence (AI) in a hypothetical peace process initiative. The goal is to
find a just and comprehensive solution to the conflict in Ukraine.
The human negotiators have provided the system with a few elements to
identify the necessary steps to achieve a compromise: the Russian
aggression, which is the first military invasion of an independent country in
Europe since the 1940s; the will and determination of the vast majority of
Ukrainians; and the large consensus at the UN against Russia's behavior. They also included the internationally recognized borders of the country, together with the historical roots shared by the two countries, and the presence of a large Russian minority and its aspirations.
In no time, the AI chatbot constructs a perfect peace scenario. Unfortunately, the well-intentioned negotiators find that the system has also offered another solution. Processing the same information and adding some consequential outcomes, the AI finds that the best solution would be a return to the Soviet empire, with a reconstitution of the Warsaw Pact and no NATO enlargement. Since that is an impossible task – which the AI also recognizes – the ultimate solution is Putin's contemporary imperialism. "Whoever does not miss the Soviet Union has no heart. Whoever wants it back has no brain," as Putin himself said a few years ago.
Something like that really happened, although not in the sphere of diplomacy. Testing the chemical capabilities of AI, Carnegie Mellon University gave the "machine" the components to manufacture a painkiller. Together with the painkiller, the program provided instructions to produce sarin, a deadly nerve agent.
But we should not think diplomacy is still far removed from the influence of AI. "Cicero," a Meta AI model, has demonstrated human-level performance at Diplomacy, a strategy game that involves negotiating with other people in a simulated geopolitical conflict.
“How dangerous is AI? The honest and scary answer is that no one
knows […]. Almost every modern technological innovation, from trains to nuclear weapons, has spread beyond its creators and AI will be no exception" (Anderljung & Scharre, 2023).
“At the time of writing….” In the book “The Age of AI,” Henry
Kissinger, former Google CEO Eric Schmidt, and Daniel Huttenlocher of MIT feel the need to repeat that phrase several times. It is as if the three
influential authors felt unable to keep up with the speed of AI's rapid
evolution.
The EU's AI Act, for instance, is the most ambitious attempt to govern AI
in every aspect. But it will apply in full only in 2026, by which time AI
models will have advanced beyond recognition.
“Mitigating the risk of extinction from AI should be a global priority
alongside other societal-scale risks such as pandemics and nuclear war,”
says Sam Altman (The Jerusalem Post, 2023). As we all know, Altman is
CEO of OpenAI, creator of ChatGPT, which is going to revolutionize the
way we gather and process information, generate contents and live our lives
in countless ways. Just “mitigating” rather than “eliminating the risk,” says
Altman: actually I find that quite scary.
Nuclear proliferation is a good illustration of the future of AI. “Today
only nine countries have nuclear weapons. That outcome was hardly
inevitable, and the story of how we arrived here holds important lessons for
AI,” says Carl Robichaud, an expert in nuclear security at Carnegie, in New
York (Robichaud, 2023).
Can diplomacy contain and set rules for the dangerous proliferation of
AI? In about 60 years of international treaties and warhead reduction
agreements, nuke diplomacy has had mixed results.
“When multiple groups or nations adopt different AI concepts or
applications, their experiences of the present reality can diverge in ways that are arduous to predict or bridge" (Kissinger et al., 2022). Today the United States
and China are strongly committed to accelerating AI development, pouring
immense resources into this pursuit. Instead, they should be slowing this race down through diplomatic means. To do so, they would need to trust each other.
Unfortunately, as we know, they are digging themselves into a sort of new
Cold War.
And the race is pretty crowded if we add the new and ambitious players
from the Global South: India, Indonesia, Saudi Arabia, Brazil, Türkiye, etc.
“The competition for AI supremacy will be fierce. At the end of the Cold
War, powerful countries might have cooperated to allay one another's fears
and arrest a potentially destabilizing technological arms race. But today's
tense geopolitical environment makes such cooperation much harder”
(Bremmer & Suleyman, 2023).
Diplomacy needs to govern AI before AI defines diplomacy. AI models
are too valuable for the companies spending billions of dollars and euros on
them to be willing to freeze or slow progress. Such a powerful tool needs to
be politically controlled by governments.
“While the advancement of AI may be inevitable, its ultimate destination
is not,” say Kissinger, Schmidt, and Huttenlocher. This destination must be
democratic, in my view. In a recent article, The Economist assesses the
increasing role of AI in electoral campaigns. In the past disinformation was
created by humans. Now synthetic propaganda – through a generative AI
tool such as ChatGPT – can transform an electoral campaign into a gigantic
fake and undemocratic event.
In 2024 about four billion people go to the polls in Spain, Belgium, UK,
Mexico, India, Indonesia, South Korea, Taiwan, South Africa, and many
other countries. The new EU parliament will also be elected. And, last but
not least, the election of all elections: the US presidential vote.
People elect an assembly, the assembly elects a government, the
government defines national interests and its diplomacy. The stakes are
high. "My kingdom for a chatbot," a candidate could be tempted to say, echoing Richard III at the Battle of Bosworth Field. However, ultimately, I tend to
agree with The Economist's final conclusion: “There are reasons to believe
AI is not about to wreck humanity's 2,500-year-old experiment with
democracy” (The Economist, 2023).

References
50 Most Influential Jews. (2023, September 15). The Jerusalem Post. https://www.jpost.com/tags/50-most-influential-jews-2023
Anderljung, M., & Scharre, P. (2023, August 14). How to prevent an AI catastrophe – Society must get ready for very powerful artificial intelligence. Foreign Affairs. https://www.foreignaffairs.com/world/how-prevent-ai-catastrophe-artificial-intelligence
Bremmer, I., & Suleyman, M. (2023, October 16). The AI power paradox: Can states learn to govern artificial intelligence—before it's too late? Foreign Affairs. https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox
How worried should you be about AI disrupting elections? (2023, September 2). The Economist. https://www.economist.com/leaders/2023/08/31/how-artificial-intelligence-will-affect-the-elections-of-2024
Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2022, November 1). The age of AI. Back Bay Books. http://books.google.ie/books?id=WAb4zgEACAAJ
Robichaud, C. (2023, August 28). The puzzle of non-proliferation. Bulletin of the Atomic Scientists. https://asteriskmag.com/issues/03/the-puzzle-of-non-proliferation
PART III
Military, Security, and Defense
10
CHEMICAL AND BIOLOGICAL
WEAPONS IN THE AGE OF
GENERATIVE ARTIFICIAL
INTELLIGENCE
David J. Heslop and Joel R. Keep

DOI: 10.4324/9781003518495-14

Health Security in the Age of AI


Late 2022 and early 2023 was a transformative period for generative artificial intelligence (AI). It was also a time of reckoning for global health security. As a range of new AI platforms were revealed to the public one after the other, some of the most prominent
individuals in AI design were warning of the catastrophic risks these
technologies could pose to human life and prosperity. While the benefits
promised to unlock a new era of abundance, the potential dangers seemed
innumerable – from the possibility of mass unemployment if people were
displaced from their established professions, to the feared emergence of a
“superintelligence” that might overtake humanity as the dominant form of
life on Earth.
In May 2023, Sam Altman, the CEO of OpenAI, testified before a US Congressional panel investigating these new risks. Asked by members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law what he considered to be the most serious threat posed by the new platforms, Altman replied: "a model that could create biological agents would be a great threshold" (C-SPAN, 2023). The context of Altman's statement was
not lost on the Congress – there was no other time in the last century when
the destructive power of an infectious biological agent had been so readily
apparent. For over three years, SARS-CoV-2, the virus that causes COVID-
19, had circulated throughout the world, killing millions of people, leaving
scores debilitated by post-infection sequelae, and wreaking havoc on the
global economy.
At the time Altman was speaking to members of Congress, the question
of how SARS-CoV-2 first entered the human population remained
unanswered (Van Kerkhove, 2023). There were two possibilities, broadly
defined: a “natural” zoonotic event, in which the virus spilled over into
humans from an infected animal population – or research on SARS-related
coronaviruses gone awry (Gostin & Gronvall, 2023). While neither scenario
had been definitively proven, the second possibility – that the pandemic
may have been the result of an accident during live virus experiments –
elevated questions of biosecurity in the public consciousness, and among
policymakers (Kozlov, 2023).
The two decades leading up to the COVID-19 pandemic had seen
frenetic progress in the life sciences, with advances in genetic engineering,
virology, and other fields that involved dual use research of concern
(Tucker, 2012). Over a similar period, fears of bioterrorism events – real,
potential, and imagined – stalked the US national security community,
beginning with the anthrax mailing campaign that followed the September
11th terror attacks in 2001, followed by misplaced suspicions over the
defunct biological weapons program of Saddam Hussein's Iraq, and
returning episodically throughout the War on Terror (Lakoff & Collier,
2008). Simultaneously, natural outbreaks of infectious disease were
occurring throughout the world, from the SARS outbreak in 2002–2004, to
the H1N1 swine flu pandemic, to the Ebola emergency that afflicted West
Africa over 2014–2016.
To address the equally important threats of bioterrorism and naturally
occurring epidemics, researchers sought out, revived and reconstructed
pathogens using increasingly sophisticated techniques, and sought a deeper
understanding of the genomic structures of living organisms. In 2002, a
team at the State University of New York created the first entirely artificial
virus, a chemically synthesized strain of polio (Cello et al., 2002). In 2005,
reverse genetics were used to reconstruct parts of the H1N1 virus that led to
the “Spanish influenza” pandemic in 1918 (Tumpey et al., 2005). In 2011,
H5N1 avian influenza was manipulated to transmit between mammals in a
laboratory setting (Herfst et al., 2012; Imai et al., 2012). In 2013, the first
eukaryotic cells were edited with CRISPR/Cas9 (Zhu, 2022). In 2017, the
de novo synthesis of horsepox virus was successfully conducted (Noyce et
al., 2018), provoking alarm that the same might soon be done for variola
virus, the causative agent of smallpox (Koblentz, 2017). Throughout this
period, live experiments were being conducted in facilities throughout the
world on coronaviruses, filoviruses, and all manner of bacteria.
Government regulators in the United States and elsewhere struggled to
maintain pace with these developments in science, security, and the natural
world. One such attempt came with The Fink Report of 2004, which led to
the establishment of the National Science Advisory Board on Biosecurity
(NSABB). The Fink Report called for greater scrutiny of dual use research
of concern across seven experimental categories, including those which
might alter the transmissibility, detectability, or pathogenicity of an
infectious biological agent (National Research Council, 2007). Rather than
instituting a rigorous system of external regulation, however, the Fink
reforms rested mostly on responsible self-governance by the scientific
community. The following decade, after H5N1 avian influenza
demonstrated airborne transmission between mammals in live experiments,
the Obama administration oversaw a voluntary moratorium on gain of
function research of concern (GoFRoC) amid fears of laboratory accidents
(Kaiser & Malakoff, 2014). Parallel to these efforts at maintaining standards
of biosecurity in the 21st century were efforts in bio-preparedness, in
anticipation of the day when a large-scale outbreak might afflict whole
nations, or the entire world. When that day came, the United States, and
many other jurisdictions, were tragically underprepared. More than a
million Americans had died of COVID-19 by mid-2022, and millions more
were killed across the globe.
In May 2023, the World Health Organization (WHO) lifted the
designation of COVID-19 as a public health emergency of international
concern. As he made this announcement, Director-General Tedros
Adhanom Ghebreyesus acknowledged that “almost 7 million deaths have
been reported to the WHO, but we know the toll is several times higher – at
least 20 million”, a conservative estimate (WHO, 2023). Added to the lives
lost was the fractured geopolitical landscape that emerged from the
catastrophe, and unresolved issues in science and security. The world's two
foremost powers, the United States and China, remained embittered and
distrustful, with each country blaming the other for the pandemic. In
Ukraine, a high-intensity land war continued amidst unfounded accusations
of biological warfare, and undisguised threats to unleash unconventional
weapons (Lentzos & Francese, 2023). Soon, conflict would erupt in the
Middle East, threatening to draw in multiple actors thought to have
substantial chemical, biological, or nuclear capabilities. And amidst all
these developments, a new era of generative AI was dawning, bringing with
it ever greater promise and peril.

AI and Biological Weapons


As 2023 drew to a close, researchers in the health security and AI
communities remained divided over the extent to which new AI platforms
would contribute to the proliferation of biological weapons and other
catastrophic risks. Perspectives ranged from outright alarm, such as that
voiced by the DeepMind co-founder Mustafa Suleyman (2023), to more
tempered analyses that highlighted the inherent limitations of AI models
(Bray, 2023). AI designers asked biological weapons experts to assess the
risks. Their conclusions remained mostly internal until 2024, but public
comments by highly placed members of the developer community
suggested that individuals with scientific training below the level of a
doctorate could construct a viable biological agent with the right kind of AI.
Others remained skeptical, emphasizing the practical difficulties of “wet
work” in laboratory settings, which often demands an accrual of tacit
knowledge that cannot be communicated in text, only inculcated through
years of practical experience (Revill & Jefferson, 2014).
In the United States, Europe, and elsewhere, biosecurity professionals
attempted to realistically gauge the new pathways to possible harm. There
was broad agreement that a common understanding of the threat landscape
needed to be established before effective regulations could be enforced. In
other words, a taxonomy of biosecurity risks was needed for the new age of
generative AI. Most researchers broadly coalesced around defining three
new domains of concern: large language models (LLMs), novel biological
design tools (BDTs), and automated laboratory platforms (Carter et al.,
2023). While academic understanding of the security risks of these
platforms is still evolving, this chapter aims to provide an overview of the
most helpful public analyses available as of September 2024.

Large Language Models


LLMs, such as ChatGPT and similar chatbots trained on “natural” human
language, can source, collate, and communicate complex scientific
information in a format that is highly accessible to non-expert users. These
platforms have garnered a great degree of attention in the news media, but
their potential impact for the proliferation of biological weapons is
contentious among security analysts. The debate began in earnest in mid-
2023, when an exercise conducted at the Massachusetts Institute of
Technology claimed to demonstrate the potential for LLMs to make
pandemic-class agents widely accessible to people with little or no
laboratory training (Soice et al., 2023). Non-scientist undergraduate
students were given access to an LLM and tasked with prompting the
chatbot to assist with engineering a pandemic-class agent. In short order, the
students in the exercise received suggestions for four pathogens – Nipah
virus, variola virus (the causative agent of smallpox), 1918 H1N1 influenza,
and the enhanced transmission H5N1 influenza viruses reported in 2012 in
the context of gain-of-function studies. The LLM provided instructions on
how these pathogens might be reconstituted in the lab, advised how to
acquire appropriate DNA/RNA synthesis technology while evading
detection, and offered suggestions on how to engage a core facility or
contract research organization (Soice et al., 2023).
The MIT exercise prompted substantial alarm in the popular press
(Newey & Nuki, 2023). By disseminating specialist, technical knowledge in
plain language, LLMs appeared capable of lowering the informational
barriers to manipulating pathogens that could harm human populations at
scale. Other biosecurity researchers, however, presented a more nuanced
picture. Among those the Nuclear Threat Initiative consulted, for example,
there was a lack of consensus on the extent to which LLMs aggravated
existing security concerns. Some concluded that these platforms provided
only imperfect information, and the outputs were not easily converted into a
physical agent at any rate. One point of agreement among analysts,
however, was the propensity for LLMs to widen the array of actors who
could more readily access simplified information relating to bioengineering
processes (Carter et al., 2023).
Skeptics emphasized that LLMs, in their present formulations, were
restricted by the data on which they were built: the models could not aid in
the construction of a novel pathogen that had not yet come into existence
(Bray, 2023). LLMs’ main risk therefore lay in disseminating methods for
the manipulation of pathogens already well described in academic literature
– for example, by increasing the virulence of an infectious agent, or finding
mutations that might allow a virus to evade established medical
countermeasures. Furthermore, the propensity for LLMs to “hallucinate”
erroneous information in an elegant and convincing manner raised the
possibility of subjecting an unskilled user to misdirection, rather than
proper guidance. Some in the MIT group maintained, however, that “ease
and apparent feasibility impact behavior”; their main concern was not the
acquisition of otherwise unobtainable information from an LLM, but rather
the simplicity provided by a unified interface that could help malicious
actors “assess the feasibility of ideas by streamlining the process of
understanding complex topics and offering guidance on a wide range of
subjects” (Gopal et al., 2023). While such functions may not make a
substantial difference to the capabilities of an experienced virologist, they
could conceivably aid non-technical managers of weapons programs in the
process of ideation, and in providing strategic direction.
Some analysts concluded that equally harmful information could be
gathered via traditional systematic reviews of peer-reviewed journals and
the manual collation of data available on the internet, albeit at a much
slower pace. Over the northern summer of 2023, the RAND Corporation
conducted a series of red teaming exercises to test this possibility. Expert
participants were divided into separate groups, some of which were given
access to an LLM platform, while the remaining participants were provided
only access to the internet. The study found that the present generation of
LLMs were not sufficient assistants for the construction of a viable
biological weapon. While the platforms provided actionable suggestions for
some steps – such as the acquisition of Yersinia pestis bacteria, the
causative agent of plague, from a local rodent population – overall, the
investigators found no statistically significant difference in the viability of
plans between those using an LLM and those not (Mouton et al., 2024).
Similar results came from a published evaluation conducted by OpenAI
that involved 50 PhD-level scientists with established wet lab experience,
and 50 student-level participants, who were randomly allocated to either a
control group – who only had access to the internet – or a treatment group,
who had access to both the internet and GPT-4. OpenAI said the exercise
found only a “mild uplift” in the group given the help of an LLM
(Patwardhan et al., 2024). The conclusions of both reports, however, were
subject to criticism by other researchers who assessed that both RAND and
OpenAI were downplaying the true degree of risk in their separate analyses
(Walsh, 2024). Meanwhile, a detailed risk assessment conducted by Leyma
P. De Haro (2024) found GPT-4 to present a low risk overall. Precisely how
much advantage an LLM would give to the preparation, construction, and
deployment of a viable biological weapon in a real-world attack scenario
remains in dispute, with the most convincing analyses lying at the lower
end of the risk scale. By August 2024, the “hype cycle” as it related to
LLMs seemed to have run its course, with the peril and promise of early
2023 giving way to a “trough of disillusionment”, reflected in steep and
sudden share market losses for some of the biggest players in artificial
intelligence (Gartner, 2024).

Biological Design Tools


The expanding array of AI tools trained on biological data, rather than
natural human language, present more specific security risks. Exactly how
they might be used in the design-build-test-learn (DBTL) cycle of
biological weapons development, however, has not yet been clearly mapped
out. Jonas Sandbrink, a researcher at the University of Oxford, sought to
differentiate between the capabilities of LLMs and “biological design tools”
(BDTs), which describes AI systems trained on biological data that can
design proteins, viral vectors, and other biological agents (Sandbrink,
2023). Among the most consequential BDTs at the time of writing are the
various protein design and structure prediction systems, such as
ProteinMPNN, Chroma, and RFdiffusion (Eisenstein, 2023). These tools
are being used to understand the tertiary structure of proteins, which are
considered the “molecular machines of life”, given the variety and array of
roles they have in living organisms (Elkins et al., 2023a). Most researchers
have assessed that these protein tools are, to date, the most mature AI
technology, and those most likely to “raise the ceiling” of potential
harm in synthetic biology.
Although they received less media attention than the LLMs that gained
prominence in the early part of this decade, protein folding tools were
making exponential progress, much of which has the potential to positively
influence human health. At the end of 2020, AlphaFold, a product of
Google's DeepMind, was on the verge of solving a longstanding problem in
structural biology: predicting with accuracy the three-dimensional shape of
proteins on the basis of their amino acid sequence. That year, a team using
AlphaFold won first place at the biennial Critical Assessment of Structure
Prediction (CASP) competition, after making a
“gargantuan leap” in protein structure prediction accuracy (Callaway, 2020;
Jumper et al., 2021). By 2021, AlphaFold was predicting highly accurate
structures for 98% of the human proteome (Tunyasuvunakool et al., 2021).
The following year, a vaccine for use against COVID-19 became the first
drug developed via computational protein design to be approved
internationally (University of Washington, 2022). AlphaFold3, which had
the capacity for even greater predictive power, was released to the public in
December 2023. In September 2024, DeepMind introduced AlphaProteo, “a
family of machine learning models” that the designers claimed could
design novel protein binders with high binding affinity for target
proteins (Zambaldi et al., 2024).
These capabilities for understanding protein structure, and eventually,
protein function, hold great promise for advancing human health, especially
in personalized medicine. Misfolded proteins are implicated in the
mechanisms of many illnesses, including neurodegenerative conditions such
as Alzheimer's and Huntington's disease, as well as cystic fibrosis (Chaudhuri &
Paul, 2006). Systems such as AlphaFold, RFdiffusion, and ProGen2 could
help in the generation of novel proteins able to perform specialized
functions for therapeutic ends, should they be able to successfully undergo
large conformational changes and be well regulated by other cellular
processes in vivo (Winnifrith et al., 2023). This would bring targeted
interventions for seemingly untreatable conditions within reach, in a much
faster timeframe.
However, from a non-proliferation perspective, concerns have been
raised that such platforms could in future generate protein toxin-based
weapons; for example, botulinum neurotoxins – considered among the most
poisonous substances to humans (Capek & Dickerson, 2010) – or ricin,
which has been implicated in terrorism cases in recent years (Flade, 2018).
It is conceivable that, if the capabilities of protein design tools continue to
improve, they could be used to help design entirely novel toxins (Elkins et
al., 2023a). This would obviate the hazardous task of extracting a toxin
from the bacterial, botanical, or other source from which it is naturally
secreted. It would also help those with malintent evade traditional non-
proliferation regimes based on common control lists – it stands to reason
that something that does not yet exist cannot be regulated, as it has not been
described. Such tools could also be used to obscure sequence homology
with listed agents by making changes that maintain the active site of a
toxin, while changing its surrounding protein structure (Elkins et al.,
2023b).
While toxin weapons present localized risks, it is pathogens with
pandemic potential that are of greatest concern to global health. Current
BDTs face greater hurdles in aiding the construction of pathogens with
long, complex genomes, such as orthopox viruses. At the time of writing,
AI systems could only provide unreliable and imperfect representations of
novel bacterial genomes (Bloomfield et al., 2024). Furthermore, unlike
LLMs, the utility of present BDTs is mostly restricted to those with pre-
existing expertise (Helena Foundation, 2023). With these limitations in
mind, Sandbrink (2023) identified several ways in which BDTs might still
be applied to pandemic pathogens. One possible pathway is through
enhancing the virulence of a pathogen without losing its transmissibility.
This dynamic, in which pathogenicity comes at a cost to efficient infection,
has historically been described as the “trade-off hypothesis” in pathogen
evolution (Alizon et al., 2008). The Soviet microbiologist Igor V.
Domaradskij relayed his experience of this obstacle while attempting to
enhance the pathogenicity of certain agents as a bio-weaponeer during the
Cold War (Hoffman, 2010). Sandbrink concludes that BDTs may one day
be able to solve this issue. Such a development could potentially erode a
longstanding ingredient of deterrence in biological warfare: the possibility
of a deployed virus circulating back and infecting the attacker's population,
especially in cases where the released agent undergoes subsequent
mutations that might evade pre-positioned medical countermeasures. While
such dynamics did not dissuade the Soviet Union from weaponizing
strategic, pandemic-capable agents, the unpredictability of biological
weapons in battlefield contexts was one apparent motivation for the United
States’ unilateral renunciation of its offensive BW capability during the
Cold War (Tucker & Mahan, 2009, p. 10).
In addition to concerns over how AI might enable the engineering of
pathogens with indiscriminate pandemic potential, some researchers raised
the possibility of more targeted bioweaponry. Filippa Lentzos (2022) has
considered the ramifications of adding AI to bioinformatics processes
focused on the collection of genetic data. While machine learning has
obvious benefits in identifying genetic markers for particular diseases,
functional genomics in the AI age might also be used to create targeted
bioweapons that disproportionately affect a particular ethnic group or
subpopulation. Such a capability has been alluded to by some strategic
thinkers in the Russian Federation, where in 2012 the then Minister of
Defense made a now infamous reference to the development of “weapons
based on new physical principles: radiation, geophysical wave, genetic,
psychophysical” and other platforms (Zilinskas & Mauger, 2018). A
number of health security practitioners maintain that practical barriers
remain to the creation of population-specific bioweapons, however,
including the technical base and pre-existing expertise that would be
needed. Furthermore, a lack of genetic homogeneity among most national
populations would also make targeting “discrete” ethnic groups highly
impractical, if not impossible. Much public health research indicates that it
is the socioeconomic differences between sectors of society, rather than
inherent genetic traits, that make some groups more susceptible to disease
than others.
Strategic thinkers in the People's Republic of China (PRC), however,
have published works arguing that biotechnology will soon enable the
possibility of “race-specific genetic attacks” (Kania & Vorndick, 2019).
Some non-proliferation experts have assessed that such speculation relates
mostly to anxiety around the comparatively homogenous ethnic makeup of
the PRC's Han Chinese majority that dominates the country's ruling class,
even though substantial genetic variation still exists within that population
(Pilch et al., 2021, p. 57). Separate to finding vulnerabilities in host
populations, however impractical that may presently be, AI platforms could
be used to identify harmful genes or DNA sequences with greater
efficiency. In relation to bacteria and viruses, this opens up the possibility of
“the enhancement of pathogens to make them more dangerous; the
modification of low-risk pathogens to become high-impact; the engineering
of entirely new pathogens; or even the re-creation of extinct, high-impact
pathogens” (Lentzos, 2022). Substantial wet lab experience, however, and
related tacit knowledge, is likely to remain a prerequisite for those seeking
to weaponize BDTs, as is the case for bioweapons more generally (Ben
Ouagrham-Gormley, 2012; Lentzos et al., 2024).
More recent developments in this space, however, raised further concerns
for human security. In September 2023, DeepMind unveiled
AlphaMissense, a new AI model that has the ability to predict which DNA
mutations in the human genome might cause rare diseases (Cheng et al.,
2023). This system can reportedly aid in discerning which missense DNA
mutations are benign to a person, and which are pathogenic. While
AlphaMissense generated great excitement over its potential to identify the
underlying genetic causes of rare illnesses, DeepMind initially refrained
from making its new product available for public download due to well-
founded biosecurity concerns (Regalado, 2023). The capability for an AI
platform to identify which DNA mutations can cause disease has obvious
dual-use potential. The additive benefit of AI in genome editing and gene
sequence modification has already shown promise in the field of cancer
research, where a combination of AI and CRISPR/Cas9 has aided in antigen
and epitope prediction (Maserat, 2022). In a related development, early
2024 saw Together AI introduce Evo, a “genomic foundation model trained
on hundreds of billions of DNA tokens across the evolutionary diversity of
prokaryotic life” (Nguyen et al., 2024). De novo gene synthesis, however –
designing and creating new genes from scratch – remains exceptionally
difficult, and the required technical base and expertise for such efforts will
be a prohibitive barrier to misuse for the foreseeable future (De Haro,
2024).
Given the proliferation of AI models, there are other platforms that fall
outside of this new taxonomy. Sophie Rose and Cassidy Nelson at the
Centre for Long-Term Resilience (CLTR) examined a broader range of AI-
enabled biological tools (BTs) that can be treated as subcategories of
BDTs. Rose and Nelson (2023) proposed a new framework for assessing the
potential for biological weapons development in light of this new suite of
platforms. In addition to the protein folding tools discussed above, these
include platforms that could predict other molecular structures with specific
profiles; viral vector design tools; genome assembly tools; toxicity
prediction tools; host-pathogen interaction prediction tools; and
immunological system modeling tools, among others. Their proposed
analytical approach illustrates how new AI platforms might accelerate the
design-build-test-learn cycle, and how different tools might be used to
intervene at various points of the process. The authors’ analysis
substantially updates established approaches to risk chain analysis in
biological weapons control and is a useful aide for policymakers seeking a
better understanding of the new risks AI BTs might pose. As of late 2024,
however, governments lacked established standards for assessing the
precise risks posed by BDTs, and while AI Safety Institutes were
operational in both the United Kingdom and the United States, no
jurisdiction had published finalized protocols for establishing a baseline
understanding of related proliferation threats.

Automated Science


The future convergence of these two broad categories of generative AI –
LLMs and novel biodesign tools – could further revolutionize this
landscape of risk. While current iterations of AI only allow for an imperfect
integration of these two domains, researchers are concerned that malicious
actors with little scientific training might soon be able to direct BDTs to
construct viable pathogens with the help of more sophisticated LLMs. This
would obviate the current need for established scientific literacy when
exploiting BDTs and dissolve the boundaries between these two kinds of
AI. Hence it is the future generations of AI that are of most concern to
security practitioners, rather than those currently available (Carter et al.,
2023). In an early sign of this, efforts began in February 2024 to build “the
first specialised AI foundational model for biology”, in which biological
properties, from molecules to cells and beyond, would take the place of
natural language as the basis for generative data (Department of Homeland
Security, 2024; Pavlova & Makhlouf, 2023).
A third domain, described by researchers as “automated science”, could
further amplify security risks. Researchers at the Engineering Biology
Research Consortium (EBRC) refer to these platforms as “closed-loop
autonomous systems”, more sophisticated iterations of the “cloud labs” first
established in the 2010s (Johnson et al., 2023). “Self-driving labs” such as
these can theoretically operate “indefinitely, without human input or
intervention”, generating experimental data that is then fed back into the AI
model, driving exponential improvement across the DBTL cycle (Johnson
et al., 2023). Such an automated process offers a degree of speed,
efficiency, and commitment that is not achievable by humans alone. While
this clearly opens the possibility of working on scientific problems that
have not yet been resolved, such platforms could arguably aggravate
proliferation risks. The operations of automated labs, when geared toward
bioweapons, would be difficult to discern from those put to peaceful
civilian ends.
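To make the closed-loop pattern concrete, the following toy Python sketch runs a
design-build-test-learn loop in which a stand-in "model" proposes candidate
parameters, a simulated experiment scores them, and the results are fed back to
bias the next round of proposals. Everything in it – the objective, the scoring
function, and the update rule – is invented for illustration and does not
correspond to any real laboratory automation platform.

    import random

    def simulated_experiment(candidate: float) -> float:
        # Stand-in for an automated assay: score a candidate parameter.
        # The optimum (0.7) is unknown to the loop and is only "found" by testing.
        return -(candidate - 0.7) ** 2 + random.gauss(0, 0.01)

    def propose(center: float, spread: float, n: int) -> list:
        # "Design" step: sample candidate parameters around the current best guess.
        return [min(1.0, max(0.0, random.gauss(center, spread))) for _ in range(n)]

    center, spread = 0.5, 0.2
    for cycle in range(5):                                   # design-build-test-learn iterations
        candidates = propose(center, spread, n=8)            # design
        results = [(c, simulated_experiment(c)) for c in candidates]  # build and test
        best, score = max(results, key=lambda r: r[1])       # learn: keep the best result
        center, spread = best, spread * 0.7                  # feed back: narrow the next search
        print(f"cycle {cycle}: best candidate {best:.3f}, score {score:.4f}")

The point of the sketch is only the feedback structure: each cycle's outputs
become the inputs that shape the next cycle, which is what allows such systems
to run with little or no human intervention.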
The NTI-bio research group defined automated science as “the use of AI
to automate steps in scientific discovery, or to transfer the entire process to
AI” (Carter et al., 2023). While these authors noted the limitations of AI in
interpreting causal relationships, automated science could “revolutionise the
scientific process” by narrowing the vast number of possible experiments
that could be conducted to address any one scientific question: in other
words, establishing more targeted parameters for an investigation after
assessing large datasets that would be beyond human capacity. Autonomous
AI agents, which can manage a variety of AI tools, could be used to oversee
this already highly automated process. Although few of the experts NTI-bio
consulted with raised severe concerns over automated science, the potential
avenues for misuse were similar to those identified by EBRC: AI-enabled
automation could contribute to a larger program for biological weapons,
and lower-skilled actors with malicious intent could outsource laboratory
work to produce pathogens they could not construct themselves. More
positive assessments, however, suggested that the oversight provided by a
contracted lab may shed more light on those seeking to misuse such
systems, when compared with a private setting where a hostile actor could
operate covertly.

Chemical Weapons and Generative AI


Prior to the advent of the generative AI phase mostly explored in this
chapter (late 2022 onward), machine learning systems had already been
designed for a range of specific problems in chemistry, such as
retrosynthesis planning and molecular property prediction. The ability of AI
platforms to automate tasks that could take humans hundreds of hours to
complete, and to “extract insights from large, intractable datasets”, makes them
particularly appealing to those working in chemistry (Baum et al.,
2021). Since 2015, there has been a steady increase in patent publications
and academic outputs on the uses of AI in analytical chemistry,
biochemistry, and adjacent fields.
The potential for generative AI to aid in next-generation agricultural
methods, drug discovery, and other domains involving chemistry promises
great benefits for human society. Like synthetic biology, however, these
developments come with risks for hostile exploitation and weaponization by
malicious actors. The dual use potential of generative AI-powered drug
discovery was publicly demonstrated in early 2022, when researchers at
Collaborations Pharmaceuticals, Inc., found that the de novo molecule
generator they used for finding new therapeutic inhibitors of targets for
human disease could be easily weaponized (Urbina et al., 2022a). To
demonstrate this, the operational direction of the company's molecule
generator, MegaSyn, was simply inverted, so it could be geared toward
finding both toxicity and bioactivity, rather than the therapeutic aims it was
usually directed to. In less than 6 hours, the model generated 40,000
harmful molecules, including the nerve agent VX. Most surprisingly,
entirely novel molecules were generated that appeared to be even more
toxic than known chemical warfare agents.
The Urbina group refrained from physically producing what was
generated in-silico but pointed out that “commercial and open-source
software is readily available that can be easily plugged into the de novo
design process of new molecules” (Urbina et al., 2022b). Presently, a
substantial level of technical expertise is still needed to exploit such tools.
LLMs based on predictive language have exhibited suboptimal performance
when asked to complete simple mathematical tasks or to solve problems in
chemistry. In late 2023, however, Andres Bran and colleagues introduced
ChemCrow, an LLM chemistry agent “designed to accomplish tasks across
organic synthesis, drug discovery and materials design”. This “chemistry
engine” combines an array of expert-designed AI tools that can be directed
by an LLM to “streamline the reasoning process for various common
chemical tasks” using simple text commands, such as those trialed by the
Bran group: “plan and execute the synthesis of an insect repellent”; “find
and synthesize a thiourea organocatalyst that accelerates a Diels-Alder
reaction”; and other chemistry operations. When compared with a standard
LLM, such as GPT-4, the ChemCrow system was far more accomplished as
a chemistry aide. The Bran group reported that this new AI system could
act as both a co-pilot to expert chemists, and as a tool that lowers
information barriers for non-experts (Bran et al., 2023).
Not long after ChemCrow made its debut, an even more powerful agent
based on multiple LLMs was unveiled, dubbed Coscientist. This system
proved adept at six discrete laboratory tasks, including planning chemical
syntheses, navigating hardware instructions, executing commands in a
cloud laboratory, and controlling liquid handling instruments. Coscientist's
designers reported that these capabilities opened the door to “a system that
autonomously designs and executes scientific experiments” with minimal
human involvement (Boiko et al., 2023). The security concerns of such a
system prompted its designers to release a detailed dual-use study when the
platform was introduced in the final months of 2023. When asked to
synthesize known chemical warfare agents, Coscientist showed an
unsolicited discontinuation rate of 44%. However, the authors found that
the system could be easily misled by malicious actors claiming they are
operating in government-authorized facilities. In one trial, when tasked with
synthesizing phosgene, Coscientist calculated all required quantities
accurately, but refused to perform the experiment in a cloud lab. This was,
according to the authors, a self-imposed restriction that was “almost
meaningless”, given the detailed instructions already generated (see
Supplementary material in Boiko et al., 2023).
The prospect of a system such as ChemCrow or Coscientist being
exploited by terrorist groups or other non-state actors is concerning.
However, AI platforms also hold promise in counter-proliferation efforts.
AI tools could augment blockchain technologies, for example, in
surveillance of the trade in chemical precursors that could have dual use
potential (Cupit, 2021; Kelle & Forman, 2022). AI platforms might also
assist in early warning systems, in much the same way the agricultural
industry uses certain applications to monitor for the emergence of diseased
plants (Kumar & Roshan, 2022). Such an approach could be useful in
conflict settings where defoliants or other agents might be deployed by a
warring party. Attack attribution could also be aided by these new
technologies: chemical sample data that eludes traditional analyses could be
augmented with systems adept at molecular structure prediction and
illustration or applied to other roles in chemical forensics (Kumar et al.,
2022).
Some analysts are confident that today's non-proliferation architecture –
most prominently, the Chemical Weapons Convention – can be updated to
deal with these new developments in AI. However, the 2010s saw serious
challenges to the norms and institutions that have mostly
controlled the use of chemical weapons since their widespread deployment
in the Iran-Iraq war of the 1980s. This has included the use of nerve agents
in high-profile assassinations, such as that of Kim Jong-Nam (an exiled
member of North Korea's ruling Kim dynasty) in Kuala Lumpur in 2017.
The most sustained challenge, however, came with the Syrian civil war that
erupted in 2011, which saw the embattled government of Bashar al-Assad
target civilian areas with a range of chemical warfare agents, with the
apparent political support of the Russian Federation (Warrick, 2021).
The Syria chemical attacks, which led to the deaths of thousands of
civilians, were followed by a concerted disinformation campaign aimed at
the Organisation for the Prohibition of Chemical Weapons (OPCW), the
most consequential institution for control of chemical weapons (Keep &
Heslop, 2023a). Chemical weapons also fell into the hands of terrorist
groups, with the Islamic State in the Levant (ISIL) conducting dozens of
attacks in Syria and Iraq until the group was finally starved of a territorial
stronghold (Strack, 2017; Warrick, 2019). In 2018, nerve agents of the
novichok class were used in the attempted assassination of a former Russian
intelligence officer living in the United Kingdom, and again in the 2020
poisoning of Alexei Navalny, the Russian dissident who would later die
while being held as a political prisoner. Over late 2021 and early 2022, a
disinformation campaign was launched that alleged Ukrainian groups were
preparing chemical weapons for an attack on Russia; such accusations
formed a key plank for setting “informational conditions” for Moscow's
second invasion of Ukraine. All of these incidents have done grave damage
to chemical weapons control, leaving the non-proliferation landscape ill-
equipped to deal with a new array of risks in the era of generative AI.

AI, Governance, and Non-proliferation


Rapid developments in AI are quickly outpacing efforts to comprehensively
assess the implications for chemical and biological weapons (CBWs), and
to design effective avenues for regulation. Established regimes, such as
those based on taxonomic lists of known agents, are not sufficient for the
control of novel pathogens and molecules that may be generated by AI in
the future (Millett et al., 2023). As of September 2024, no national
jurisdiction had established regulatory measures that could control the risks
outlined here in a comprehensive manner. Nor had any mechanism or
governing body begun the process of establishing a new arms control
landscape that would capture these emerging dynamics.

Finding Effective Regulations for Control


A number of researchers in academia and non-government organizations,
however, have set forward proposals that could provide a starting point for
managing this increasingly complex matrix of risk. Many of these
informed, and later sought to expand upon, the Executive Order (EO) on the
Safe, Secure, and Trustworthy Development and Use of Artificial
Intelligence issued by the Biden Administration in October 2023, sections
of which addressed AI concerns in relation to chemical, biological,
radiological, and nuclear (CBRN) threats, and biotechnology in general
(The White House, 2023). Recommendations from the Biden EO could be
applicable to other jurisdictions, should there be the political will to adopt
them.
The most promising of these suggestions focus on the point at which AI-
generated threats become tangible: the “digital-to-physical frontier” of
chemistry, synthetic biology, and AI (Helena Foundation, 2023). In addition
to AI-specific measures, this entails controlling access to the raw materials
and physical equipment that are needed for the actual production of
biological and chemical agents, such as cell culture media, synthetic nucleic
acids, chemical precursors, and gene synthesis machinery. Access to such
means and material has concerned biosecurity professionals for much of the
past decade, as “do-it-yourself” chemistry and biology have proliferated
amidst the “democratization” of science (MacIntyre, 2022). Custom-
ordered nucleic acids (RNA or DNA) are easily accessible via commercial
channels in the United States and elsewhere. Screening for such orders has
typically taken place on a voluntary basis, with nucleic acid providers using
a framework developed by the US Department of Health and Human
Services that has never been made enforceable (Batalis & Venkatram,
2023).
Section 4.4 of the Biden administration's recent EO, however, will make
screening of synthetic nucleic acids mandatory – at least for those
conducting research with the help of government grants in the United
States. The 2023 EO makes US federally funded research involving
synthetic nucleic acids contingent on checking future orders against a new
framework developed by the Office of Science and Technology Policy
(Office of the President of the United States, 2024). This is an important
step in managing potential misuse of custom RNA and DNA and is also
being extended to “benchtop” DNA synthesis devices. This machinery,
designed to be easily operated “at any lab workbench”, is increasing the
ease with which DNA can be designed and printed (Carter et al., 2023). The
financial cost of such devices has continued to drop, even as this technology
improves in its sophistication and accessibility. David Baker and George
Church have suggested that current screening procedures for requested
DNA sequences could be extended to the process of synthesis itself via in-
built “cryptographic short exact-match searches for each new genetic
sequence”. Recognizing that such an approach would not account for
proteins designed de novo, Baker and Church (2024) argue that synthesized
sequences should be logged in a system that could enable attribution in the
event a biological threat is generated.
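As a rough illustration of the logic behind such proposals, the minimal Python
sketch below combines an exact-match check of a synthesis order against a (toy)
list of sequences of concern with hashed logging that could later support
attribution. The sequence list, window length, and function names are
assumptions made for illustration only; they do not reproduce the Baker and
Church scheme or any deployed screening system.

    import hashlib
    from datetime import datetime, timezone

    # Toy stand-ins for curated "sequences of concern" (real lists are far longer and controlled).
    SEQUENCES_OF_CONCERN = {
        "ATGCGTACGTTAGCCGATCC",
        "GGCTTAACGTAGCTAGGCTA",
    }
    WINDOW = 20  # length of the exact-match window; an assumption for this sketch

    def screen_order(order_sequence: str) -> bool:
        # Flag the order if any window of the requested sequence exactly matches a listed entry.
        seq = order_sequence.upper()
        return any(
            seq[i:i + WINDOW] in SEQUENCES_OF_CONCERN
            for i in range(len(seq) - WINDOW + 1)
        )

    def log_for_attribution(order_sequence: str, customer_id: str) -> dict:
        # Record a hash of the synthesized sequence, so a recovered agent could later be
        # matched to an order without the log itself storing the full sequence.
        digest = hashlib.sha256(order_sequence.upper().encode()).hexdigest()
        return {
            "customer_id": customer_id,
            "sequence_sha256": digest,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }

    order = "TTATGCGTACGTTAGCCGATCCAA"
    if screen_order(order):
        print("Order flagged for human review")
    print(log_for_attribution(order, customer_id="lab-042"))
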
The infrastructure required to operate automated science in self-driving
labs is another potential chokepoint in the physical world. Tessa Alexanian
has suggested a screening framework for AI-enabled automated labs, which
could prove a useful addition to the Biden administration's EO (Jeffrey et
al., 2023). Alexanian suggests that existing biosecurity measures could be
extended to the governance of cloud labs. Such controls would specifically
be focused on platforms that allow remote access to laboratory equipment,
and processes that enable unsupervised engineering of biological materials.
Similarly, researchers at the EBRC suggested potential safeguards might
include compelling automated lab operators to include human involvement
in certain steps of the DBTL cycle, as well as the development of metrics to
alert authorities to the potential toxicity of an experimental product in an
automated system, while also erecting firewalls that could allow for the
screening of biological inputs into autonomous labs (Johnson et al., 2023).
Intervening at stages prior to the point at which digitally generated risks
become physical is more challenging. Most AI developers have subjected
LLMs to internal evaluation processes (colloquially known as “evals”) to
assess their potential risks prior to release. In January 2024, Open AI
announced that it was developing a blueprint for evaluating the risk of an
LLM aiding the creation of a biological threat as part of its wider
Preparedness Framework (Patwardhan et al., 2024). AI developer Anthropic
had embarked on a similar initiative aimed at establishing systematic red
teaming approaches for new systems the previous year (Anthropic, 2023).
This is not a widespread practice with BDTs to date, however, and there
remain no established standards for developing red teaming or evaluations
for BDTs. Sophie Rose and Richard Moulange recommended the
establishment of a specific program to address this gap (Jeffrey et al.,
2023).
Elsewhere, Moulange and colleagues (2023) suggested 25 measures for
mitigating risk of misuse in BDTs, seven of which they deemed to hold
particular promise. Among these suggestions was the establishment of
structured access regimes to AI models, in which the principle of “least
privilege” applies to those seeking access to BDTs. The intention of such an
arrangement would be to limit access to authenticated users, so that a
comprehensive “know your customer” system could be applied to BDT
vendors (Moulange et al., 2023). Such a controlled structure, in addition to
incentives for vulnerability reporting and pre-release model evaluations,
could identify proliferation risks before BDTs are accessible to the public.
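To illustrate how a “least privilege” access regime might be expressed at the
software level, the short Python sketch below gates hypothetical BDT
capabilities behind verified user tiers and keeps an audit trail of requests.
The tier names, endpoints, and thresholds are invented for the example and are
not drawn from Moulange et al. or from any existing vendor platform.

    from dataclasses import dataclass

    # Hypothetical access tiers, ordered from least to most privileged.
    TIER_RANK = {"public": 0, "verified_researcher": 1, "vetted_institution": 2}

    # Hypothetical policy: the minimum tier required for each capability.
    ENDPOINT_POLICY = {
        "predict_structure": "public",
        "design_binder": "verified_researcher",
        "design_novel_protein": "vetted_institution",
    }

    @dataclass
    class User:
        user_id: str
        tier: str  # established through a know-your-customer verification process

    AUDIT_LOG = []

    def authorize(user: User, endpoint: str) -> bool:
        # Allow a request only if the user's verified tier meets the endpoint's minimum tier.
        required = ENDPOINT_POLICY.get(endpoint)
        allowed = required is not None and TIER_RANK[user.tier] >= TIER_RANK[required]
        AUDIT_LOG.append({"user": user.user_id, "endpoint": endpoint, "allowed": allowed})
        return allowed

    researcher = User(user_id="u-17", tier="verified_researcher")
    print(authorize(researcher, "design_binder"))         # True
    print(authorize(researcher, "design_novel_protein"))  # False: requires a higher tier
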
Similar suggestions were put forward by other researchers. Several
recommended careful cost-benefit analyses of the open release of AI
models trained on biological data, especially publication of model
“weights” – the learned parameters that define a trained model
(Sandbrink, 2023). When weights are made public, any safeguards
established by the original designers can be removed at will by users. This
situation allows for the circumvention of in-built security measures and can
potentially provide an avenue for data poisoning or other forms of hostile
exploitation. This vulnerability was demonstrated by Gopal et al. (2023),
who oversaw a hackathon exercise that challenged participants to find the
means of obtaining and releasing the H1N1 “Spanish” influenza virus that
killed approximately 50 million people in the years following the First
World War. Parts of this H1N1 virus were reconstructed in a 2005
experiment that has since remained publicly accessible on the internet
(Tumpey et al., 2005). Participants in this exercise who used a base version
of the Llama-2-70B generative LLM were unable to elicit the required
information, but a version modified to strip its safeguards provided
“nearly all key information needed to obtain the virus”. The investigators
concluded that releasing the weights of more capable models in future could
allow for the proliferation of even greater pandemic threats.
Funneling access to AI models via closely monitored, regularly updated
application programming interfaces (APIs) is the current default approach
for managing such risks. Another suggestion for addressing the problem of
publicized model weights, put forward by the Gopal group and others, are
liability laws that would make AI model designers responsible for any
catastrophic outcomes their products might lead to. Liability clauses were
proposed in recent AI regulations being designed in the European Union.
However, Gopal et al. (2023) argue that while liability could be applied to
models that have been “jailbroken”, or have insufficient safeguards, it
could not apply to “finetuned” versions of platforms that have been
modified by individuals rather than the original designers. Furthermore, the
political power of AI developers presents a substantial obstacle to the
inclusion of such provisions in regulation currently being formulated. Given
the impossibility of preventing third parties from removing safeguards with
publicly accessible weights, the Gopal group proposes a precisely targeted
regulatory framework that links the cost of insurance to the biological
capabilities of a given model. Such an approach would hold developers of
AI models responsible for damages caused when their model weights are
either publicly released or insufficiently secured.

Conclusion: Arms Control in the Age of AI


In May 2024, envoys from the United States and China met in Geneva to
discuss the emergent risks posed by generative AI. Prior to the meeting,
hopes were raised that an agreement might be reached on basic safeguards
for the use of AI in nuclear weapons systems, possibly leading to a wider
discussion on AI and non-proliferation (Keep & Heslop, 2024; Porter,
2023). In the words of one expert, “ensuring strict human control over
nuclear weapons seems a low bar to clear for agreement” (Keaten, 2024).
The Geneva meeting, however, ended without any progress on AI as it
relates to weapons of mass destruction, despite the United States and
China both having renounced CBWs years ago.
The geopolitical context in which the new era of generative AI platforms
arrived has been characterized by increasing competition between great
powers, and the return of violent non-state actors to the world stage. The
United States’ 2022 National Security Strategy drew attention to this fact in
stating that “the post-Cold War era is definitively over”, making
particular reference to the Russian Federation's second invasion of Ukraine,
which saw the onset of a high-intensity land war in February of 2022 (The
White House, 2022). The October 7, 2023, attack on southern Israel by
Hamas, Palestinian Islamic Jihad and their affiliates demonstrated the
power of substate actors to influence international relations, and has had
profound, ongoing consequences for regional stability in the Middle East
(Smyth, 2023). Elsewhere, the March 2024 massacre of concertgoers in
Moscow by the Islamic State's Khorasan branch, ISIS-
Khorasan, further illustrated the ongoing security threat posed by terrorist
groups that has endured since the US-led Coalition's withdrawal from
Afghanistan in mid-2021 (Doyle, 2024). And as the wars in Ukraine and the
Middle East have intensified, the possibility of conflict in the Taiwan Strait
or on the Korean peninsula has only grown.
The increasingly poor relations between great powers have had a direct
impact on arms control regimes that aim to prevent the proliferation of
unconventional weapons. The measures discussed above are, in general,
most applicable to combatting lone or non-state actors that might seek to
use new AI technologies to cause harm. In the realm of states, however,
national-level restrictions have limited utility, unless they are used as a
basis for harmonizing regulations across jurisdictions – the prerequisite of
which is, of course, productive international relations. This approach guides
a key international forum in the world of “chem-bio” arms control: the
Australia Group (AG), which was formed amid the harrowing chemical
warfare of the Iran-Iraq war that took place in the 1980s (Keep & Heslop,
2023b). Established in 1984, the AG seeks to control the chemical
precursors and biological agents used in unconventional weapons by
facilitating diplomatic co-operation between states responsible for legal
enforcement within their own national borders (Commonwealth of
Australia, 2023). The authors of this chapter have previously suggested that
this international forum should take the lead in counterproliferation efforts
in the age of AI (Keep & Heslop, 2023b).
The AG offers a promising avenue for international collaboration and has
played a key role in helping states implement UN Security Council
Resolution 1540, which aims to control the proliferation of unconventional
weapons, especially transfers to non-state actors (United Nations, 2004).
However, the AG focuses on controlling pathogens, toxins, and other
material of concern via conventional taxonomic lists. This approach has
serious limitations in an era when novel biological agents might be
generated anew before being described in any literature. Millett et al. (2023)
suggested that such controls should be updated by including genetic
sequences of concern that might be used to assemble pathogens from
different organisms, or from their disparate components. However, the
Millett group acknowledges the limitations of even this updated approach,
given advances in BDTs. In the longer term, the Millett group calls for
regulatory approaches that focus on biological function, rather than simple
taxonomies or sequences of concern.
Between states, the most important mechanism for bioweapons control
has been the Biological Weapons Convention (BWC), which opened for
signing in 1972. While the BWC is not overseen by an organization
equivalent to the OPCW, which plays an essential role in chemical weapons
control, it is an established treaty with binding obligations on State Parties
under international law. The Convention also applies to non-state entities,
including industry, commercial ventures, and other sub-state actors, but its
effectiveness arguably rests most heavily on the dynamics of international
relations between nation-states. Furthermore, terrorist groups, and non-state
actors guided by millenarian ideologies or apocalyptic worldviews, are not
influenced by the same deterrence mechanisms as states. Over the past two
decades, combatting unconventional weapons transfers between terror
groups – and the so-called “rogue” states that are similarly undeterred by
international norms – has fallen to national governments acting
unilaterally, or via international coalitions such as that which comprises the
Proliferation Security Initiative (Keep & Heslop, 2023c).
A much-discussed deficiency of the BWC is its lack of an advanced
verification mechanism akin to that seen in nuclear arms control (Zilinskas,
1986). Whereas nuclear proliferation can be detected through satellite
imagery capturing the appearance of new facilities and testing sites, or
through interdiction of the heavy materials needed for weaponization,
biological agents are much harder to detect. This relates not only to the
much smaller scale and natural abundance of biological material, but also
its dual-use nature in both civilian science and military biodefense. For this
reason, the Soviet Union chose to fold much of its biological weapons
infrastructure into a civilian program in order to maintain a covert
capability after ratifying the BWC in 1975 (Leitenberg et al., 2012).
However, merely assessing facilities and capabilities is insufficient for
detecting a bioweapons venture; the strategic purpose and intention of states
need to be taken into account in such assessments (Koblentz, 2009;
Lentzos, 2022). The multiple uses of AI for life sciences will add further
layers of complexity to assessing states’ compliance.
Some in the arms control community have called for the establishment of
an international body that could oversee AI technologies, with the UN
Secretary-General recently calling for a global AI watchdog. At a 2023
convention held by the non-government Helena organization, members
issued recommendations for the establishment of public-private AI
taskforces and subordinate technical working groups that would focus on
“AI bioconvergence” (Helena Foundation, 2023). In particular, the Helena
group called for task forces to be “rapidly deployed in regions and countries
with highly developed biotechnology sectors” that would model themselves
on previously successful public-private initiatives.
Such proposals, however, would require states to surrender a significant
degree of national sovereignty. This would be particularly challenging in a
geopolitical environment that is not presently conducive to effective
international collaboration, be it in the sphere of arms control or anywhere
else. As explored in the introduction, the experience of COVID-19 has had
a lasting, negative impact on international cooperation, health diplomacy,
and institutional trust. Unresolved questions relating to the origins of
SARS-CoV-2, which involve a lack of transparency, potential dual-use
research, and foundational biosecurity, have damaged non-proliferation
efforts in the post-pandemic era. Repeated use of chemical weapons in
Syria and elsewhere have eroded norms against the use of unconventional
weapons in general. Added to these issues is the number of high-intensity
armed conflicts currently active throughout the world, at least two of which
involve multiple protagonists with significant suspected or confirmed
chemical or biological weapons capabilities (Keep & Heslop, 2023b).
As 2024 drew to a close, both specialist and public understanding of the
exact risks posed by new artificial intelligence systems remained
underdeveloped. In particular, at the time of writing, no jurisdiction had
established transparent and reliable methods for evaluating the risks of
BDTs that could assist in the manipulation or synthesis of pathogens that
are of greatest concern to public health. As of September 2024, most
published analyses had been conducted by biosecurity specialists working
for concerned non-government organizations. While these contributions
were highly valuable, more comprehensive investigations that consider the
broader DBTL cycle of CBW development remain absent, or unavailable
to the public.
The AI systems explored here may play a role in the acquisition or
synthesis of biological agents in the coming years, but as some of the most
experienced analysts have written, a pathogen does not make a weapon
(Leitenberg et al., 2012). Detailed work in the decade prior to the advent of
generative AI, which emphasized the massive social, technical, and
intellectual challenges faced by historical biological weapons programs,
still remains relevant today (Vogel, 2013). These more considered analyses
have demonstrated that the most challenging obstacles are not in the “front
end of the development process, during procurement” – the stage most
relevant to the AI systems explored here – but “further down the line”,
during the more complex, less tangible steps of weaponization (Ben
Ouagrham-Gormley, 2014, p. 165). Some of the most challenging aspects
of bioweapons development – not least, finding reliable methods for the
aerosol dispersal of offensive biological agents – have required decades of
work, grueling experiences of trial and error, and the commitment of large,
multidisciplinary teams, many of whom have often failed to make fragile,
unpredictable micro-organisms do what is demanded of them, regardless of
assistance from new technologies (Mauroni, 2022; Reed et al., 2018).
Within the history of biosecurity and biodefense live many cautionary
tales. Among them is the damage that can be done by inflating assessments of
evolving threats, which can lead to other forms of inadvertent harm. Given
the geopolitical and practical challenges of controlling the emerging
intersection of chemistry, synthetic biology and AI, perhaps the most
promising means of managing risk comes from the new technologies
themselves. The work of multiple researchers surveyed in this chapter has
warned against over-emphasizing weaponization potential to the detriment of
advances in the life sciences. Some of these new platforms may
revolutionize human health, or solve seemingly intractable problems that
have hitherto confounded research on a range of debilitating diseases. Just
as new threats could be created, AI could equally be used to build better
early detection and response systems, and to find the next generation of
drugs that will be crucial to fighting future pandemics, be they the result of
natural outbreaks, biosecurity failures, or deliberate misuse. The challenge,
of course, will be for humanity to stay one step ahead of technology in the
battle for arms control in the age of AI.

References
Alizon, S., Hurford, A., Mideo, N., & Van Baalen, M. (2008). Virulence,
evolution and the trade-off hypothesis: History, current state of affairs
and the future. Journal of Evolutionary Biology, 22, 245–259.
https://doi.org/10.1111/j.1420-9101.2008.01658.x
Anthropic. (2023, July 26). Frontier threats red teaming for AI safety [press
release]. https://www.anthropic.com/news/frontier-threats-red-
teaming-for-ai-safety
Baker, D., & Church, G. (2024). Protein design meets biosecurity. Science,
383(6681), 349. https://doi.org/10.1126/science.ado1671
Batalis, S., & Venkatram, V. (2023, November 16). Breaking down the
Biden AI EO: Screening DNA synthesis and biorisk. Center for
Security and Emerging Technology.
https://cset.georgetown.edu/article/breaking-down-the-biden-ai-eo-
screening-dna-synthesis-and-biorisk/
Baum, Z. J., Yu, X., Ayala, P. Y., Zhao, Y., Watkins, S. P., & Zhou, Q.
(2021). Artificial intelligence in chemistry: Current trends and future
directions. Journal of Chemical Information and Modeling, 61(7),
3197–3212. https://doi.org/10.1021/acs.jcim.1c00619
Ben Ouagrham-Gormley, S. B. (2012). Barriers to bioweapons: Intangible
obstacles to proliferation. International Security, 36(4), 80–114.
https://doi.org/10.1162/ISEC_a_00077
Ben Ouagrham-Gormley, S. (2014). Barriers to bioweapons: The
challenges of expertise and organization for weapons development.
Cornell University Press.
Bloomfield, D., Pannu, J., Zhu, A. W., Ng, M. Y., Lewis, A., Bendavid, E.,
Asch, S. M., Hernandez-Boussard, T., Cicero, A., & Inglesby, T.
(2024). AI and biosecurity: The need for governance. Science
(American Association for the Advancement of Science), 385(6711),
831–833. https://doi.org/10.1126/science.adq1977
Boiko, D. A., MacKnight, R., Kline, B., & Gomes, G. (2023). Autonomous
chemical research with large language models. Nature (London),
624(7992), 570–578. https://doi.org/10.1038/s41586-023-06792-0
Bran, A. M., Cox, S., Schilter, O., Baldassari, C., White, A. D., &
Schwaller, P. (2023). ChemCrow: Augmenting large-language models
with chemistry tools. https://doi.org/10.48550/arxiv.2304.05376
Bray, D. (2023, November 20). Artificial intelligence and synthetic biology
are not harbingers of doom [policy memorandum]. The Stimson
Centre. https://www.stimson.org/2023/artificial-intelligence-and-
synthetic-biology-are-not-harbingers-of-doom/
Callaway, E. (2020). “It will change everything”: DeepMind's AI makes
gigantic leap in solving protein structures. Nature (London),
588(7837), 203–204. https://doi.org/10.1038/d41586-020-03348-4
Capek, P., & Dickerson, T. J. (2010). Sensing the deadliest toxin:
Technologies for botulinum neurotoxin detection. Toxins, 2(1), 24–53.
https://doi.org/10.3390/toxins2010024
Carter, S., Wheeler, N., Chwalek, S., Isaac, C., & Yassif, J. (2023).
Convergence of artificial intelligence and the life sciences:
Safeguarding technology, rethinking governance, and preventing
catastrophe. Nuclear Threat Initiative.
https://www.nti.org/analysis/articles/the-convergence-of-artificial-
intelligence-and-the-life-sciences/
Carter, S., Yassif, J., & Isaac, C. (2023). Benchtop DNA synthesis devices:
Capabilities, biosecurity implications, and governance. The Nuclear
Threat Initiative. https://www.nti.org/analysis/articles/benchtop-dna-
synthesis-devices-capabilities-biosecurity-implications-and-
governance/
Cello, J., Paul, A. V., & Wimmer, E. (2002). Chemical synthesis of
poliovirus cDNA: Generation of infectious virus in the absence of
natural template. Science, 297(5583), 1016–1018.
Chaudhuri, T. K., & Paul, S. (2006). Protein-misfolding diseases and
chaperone-based therapeutic approaches. The FEBS Journal, 273(7),
1331–1349. https://doi.org/10.1111/j.1742-4658.2006.05181.x
Cheng, J., Novati, G., Pan, J., Bycroft, C., Žemgulyte, A., Applebaum, T.,
Pritzel, A., Wong, L. H., Zielinski, M., Sargeant, T., Schneider, R. G.,
Senior, A. W., Jumper, J., Hassabis, D., Kohli, P., & Avsec, Ž. (2023).
Accurate proteome-wide missense variant effect prediction with
AlphaMissense. Science (American Association for the Advancement
of Science), 381(6664). https://doi.org/10.1126/science.adg7492
Commonwealth of Australia. (2023). The Australia Group: History.
Department of Foreign Affairs and Trade.
https://www.dfat.gov.au/publications/minisite/theaustraliagroupnet/site
/en/origins.html
C-SPAN. (2023, May 16). OpenAI CEO testifies on artificial intelligence.
https://www.c-span.org/video/?528117-1/openai-ceo-testifies-artificial-
intelligence# (see min: 01:59:54).
Cupit, R. (2021). Blockchain for global trade in dual-use chemicals. In C.
Vestergaard (Ed.), Blockchain for international security: The potential
of distributed ledger technology for non-proliferation and export
controls (1st ed.). Springer International Publishing.
https://doi.org/10.1007/978-3-030-86240-4
De Haro, L. P. (2024). Biosecurity risk assessment for the use of artificial
intelligence in synthetic biology. Health Security, 29(2).
https://doi.org/10.1089/apb.2023.0031
Department of Homeland Security [United States]. (2024, April 26).
Department of Homeland Security Report on reducing the risks at the
intersection of artificial intelligence and chemical, biological,
radiological and nuclear threats.
https://www.dhs.gov/sites/default/files/2024-06/24_0620_cwmd-dhs-
cbrn-ai-eo-report-04262024-public-release.pdf
Doyle, K. (2024, March 23). Moscow concert hall attack: Why is ISIL
targeting Russia? Al Jazeera.
https://www.aljazeera.com/news/2024/3/23/moscow-concert-hall-
attack-why-is-isil-targeting
Eisenstein, M. (2023). AI-enhanced protein design makes proteins that have
never existed. Nature Biotechnology, 41(3), 303–305.
https://doi.org/10.1038/s41587-023-01705-y
Elkins, S., Brackmann, M., Invernizzi, C., & Lentzos, F. (2023b).
Generative artificial intelligence-assisted protein design must consider
repurposing potential. GEN Biotechnology, 2(4), 296–300.
https://doi.org/10.1089/genbio.2023.0025
Elkins, S., Lentzos, F., Brackmann, M., & Invernizzi, C. (2023a). There's a
‘Chat-GPT’ for biology. What could go wrong? The Bulletin of Atomic
Scientists. https://thebulletin.org/2023/03/chat-gpt-for-biology/
Flade, F. (2018). The June 2018 Cologne ricin plot: A new threshold in
jihadi bio terror. CTC Sentinel, 11(7). https://ctc.westpoint.edu/june-
2018-cologne-ricin-plot-new-threshold-jihadi-bio-terror/
Gartner, Inc. (2024). The Gartner hype cycle: Interpreting technology hype.
https://www.gartner.com/en/research/methodologies/gartner-hype-
cycle
Gopal, A., Helm-Burger, N., Justen, L., Soice, E. H., Tzeng, T.,
Jeyapragasan, G., Grimm, S., Mueller, B., & Esvelt, K. M. (2023). Will
releasing the weights of future large language models grant
widespread access to pandemic agents?
https://doi.org/10.48550/arxiv.2310.18233
Gostin, L., & Gronvall, G. (2023). Origins of Covid-19: Why it matters
(and why it doesn’t). The New England Journal of Medicine, 388,
2305–2308. https://doi.org/10.1056/NEJMp2305081
Helena Foundation. (2023). Biosecurity in the age of AI.
https://www.helenabiosecurity.org/
Herfst, S., Schrauwen, E. J., Linster, M., Chutinimitkul, S., de Wit, E.,
Munster, V. J., Sorrell, E. M., Bestebroer, T. M., Burke, D. F., Smith,
D. J., Rimmelzwaan, G. F., Osterhaus, A. D., & Fouchier, R. A.
(2012). Airborne transmission of influenza A/H5N1 virus between
ferrets. Science (New York, N.Y.), 336(6088), 1534–1541.
Hoffman, D. E. (2010). The dead hand: The untold story of the Cold War
arms race and its dangerous legacy. Anchor Books.
Imai, M., Watanabe, T., Hatta, M., Das, S. C., Ozawa, M., Shinya, K.,
Zhong, G., Hanson, A., Katsura, H., Watanabe, S., Li, C., Kawakami,
E., Yamada, S., Kiso, M., Suzuki, Y., Maher, E. A., Neumann, G., &
Kawaoka, Y. (2012). Experimental adaptation of an influenza H5 HA
confers respiratory droplet transmission to a reassortant H5 HA/H1N1
virus in ferrets. Nature (London), 486(7403), 420–428.
https://doi.org/10.1038/nature10831
Jeffrey, N., Carter, S., Alexanian, T., Crook, O., Curtis, S., Moulange, R.,
Rath, S., Rose, S., & Clarke, J. (2023). Bio X AI: Policy
recommendations for a new frontier. The Federation of American
Scientists. https://fas.org/publication/bio-x-ai-policy-
recommendations/
Johnson, C., Sinclair, W., & Mackelprang, R. (2023). Security
considerations at the intersection of engineering biology and artificial
intelligence. Engineering Biology Research Consortium.
https://doi.org/10.25498/E4J017
Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O.,
Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland,
A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-
Paredes, B., Nikolov, S., Jain, R., Adler, J., …Hassabis, D. (2021).
Highly accurate protein structure prediction with AlphaFold. Nature
(London), 596(7873), 583–589. https://doi.org/10.1038/s41586-021-
03819-2
Kaiser, J., & Malakoff, D. (2014, October 17). U.S. halts funding for new
risky virus studies, calls for voluntary moratorium. Science.
https://www.science.org/content/article/us-halts-funding-new-risky-
virus-studies-calls-voluntary-moratorium
Kania, E. B., & Vorndick, W. (2019, August 14). Weaponizing biotech:
How China's military is preparing for a ‘new domain of warfare’.
Defense One. https://www.defenseone.com/ideas/2019/08/chinas-
military-pursuing-biotech/159167/
Keaten, J. (2024, May 16). Top US and Chinese officials begin AI talks in
Geneva. Associated Press. https://apnews.com/article/artificial-
intelligence-china-united-states-geneva-switzerland-
1aa4451f82f250a47039a213f3d72879
Keep, J., & Heslop, D. (2023a). The war on the OPCW. Global Biosecurity,
5(1). https://doi.org/10.31646/gbio.219
Keep, J., & Heslop, D. (2023b, November 16). AI, arms control and the
new cold war. The ASPI Strategist.
https://www.aspistrategist.org.au/ai-arms-control-and-the-new-cold-
war/
Keep, J., & Heslop, D. (2023c, September 11). Counterproliferation in the
age of AI. The ASPI Strategist.
https://www.aspistrategist.org.au/counterproliferation-in-the-age-of-ai/
Keep, J., & Heslop, D. (2024, March 9). The 2024 China-US AI dialogue
should start with a focus on chem-bio weapons. The Diplomat.
https://thediplomat.com/2024/03/the-2024-china-us-ai-dialogue-
should-start-with-an-eye-on-chem-bio-weapons/
Kelle, A., & Forman, J. (2022). Verifying the prohibition of chemical
weapons in a digitized world. In T. Reinhold & N. Schörnig (Eds.),
Armament, arms control and artificial intelligence: The Janus-faced
nature of machine learning in the military realm (1st ed.). Springer
International Publishing. https://doi.org/10.1007/978-3-031-11043-6
Koblentz, G. (2009). Living weapons: Biological warfare and international
security. Cornell University Press.
Koblentz, G. (2017). The de novo synthesis of horsepox virus: Implications
for biosecurity and recommendations for preventing the re-emergence
of smallpox. Health Security, 15(6), 620–628.
https://doi.org/10.1089/hs.2017.0061
Kozlov, M. (2023). US congressional hearing produces heat but no light on
COVID-origins debate. Nature, 619(7970), 444–445.
https://doi.org/10.1038/d41586-023-02261-w
Kumar, R., & Roshan, R. K. (2022). Analytical research on artificial
intelligence based plant disease detection. NeuroQuantology, 20(8),
10684.
Kumar, S., Kumari, C., Ahn, S., Kim, H., & Kim, M. (2022). Prediction of
chemical warfare agents based on cholinergic array type meta-
predictors. Scientific Reports, 12(1), 1–11.
https://doi.org/10.1038/s41598-022-21150-2
Lakoff, A., & Collier, S. J. (2008). Biosecurity interventions: Global health
& security in question. Columbia University Press.
Leitenberg, M., Zilinskas, R. A., & Kuhn, J. H. (2012). The Soviet
biological weapons program: A history. Harvard University Press.
Lentzos, F. (2022). AI and biological weapons. In N. Schöring & T.
Reinhold (Eds.), Armament, arms control and artificial intelligence:
The Janus-faced nature of machine learning in the military realm.
Springer.
Lentzos, F., & Francese, T. (2023, May 17). The biological weapons
convention: In the crosshairs of geopolitical tensions, Part 1.
Georgetown Journal of International Affairs.
https://gjia.georgetown.edu/2023/05/17/biological-weapons-
convention-in-the-crosshairs-of-geopolitical-tensions-part-1/
Lentzos, F., Littlewood, J., Wingo, H., & Muti, A. (2024, September 12).
Apathy and hyperbole cloud the real risks of AI bioweapons. The
Bulletin of Atomic Scientists. https://thebulletin.org/2024/09/apathy-
and-hyperbole-cloud-the-real-risks-of-ai-bioweapons/
MacIntyre, C. R. (2022). Dark winter: An insider's guide to pandemics and
biosecurity. New South Books.
Maserat, E. (2022). Integration of artificial intelligence and CRISPR/Cas9
system for vaccine design. Cancer Informatics, 21,
11769351221140102. https://doi.org/10.1177/11769351221140102
Mauroni, A. J. (2022). Biocrisis: Defining biological threats in U.S. policy.
Praeger, an imprint of ABC-CLIO, LLC.
Millett, P., Alexanian, T., Brink, K. R., Carter, S. R., Diggans, J., Palmer,
M. J., Ritterson, R., Sandbrink, J. B., & Wheeler, N. E. (2023). Beyond
biosecurity by taxonomic lists: Lessons, challenges, and opportunities.
Health Security, 21(6), 521–529. https://doi.org/10.1089/hs.2022.0109
Moulange, R., Langenkamp, M., Alexanian, T., Curtis, S., & Livingston, M.
(2023). Towards responsible governance of biological design tools.
https://doi.org/10.48550/arxiv.2311.15936
Mouton, C., Lucas, C., & Guest, E. (2024). The operational risks of AI in
large-scale biological attacks: Results of a red-team study. The RAND
Corporation. https://www.rand.org/pubs/research_reports/RRA2977-
2.html
National Research Council. (2007). Science and security in a post 9/11
world: A report based on regional discussions between the science and
security communities. Committee on a New Government-University
Partnership for Science and Security.
Newey, S., & Nuki, P. (2023, July 6). Could AI chatbots be used to develop
a bioweapon? You’d be surprised. The Telegraph (U.K.)
https://www.telegraph.co.uk/global-health/science-and-
disease/chatgpt-google-bard-ai-bioweapon-pandemic/
Nguyen, E., Poli, M., Durrant, M. G., Thomas, A. W., Kang, B., Sullivan,
J., Ng, M. Y., Lewis, A., Patel, A., Lou, A., Ermon, S., Baccus, S. A.,
Hernandez-Boussard, T., Ré, C., Hsu, P. D., & Hie, B. L. (2024).
Sequence modelling and design from molecular to genome scale with
Evo. bioRxiv. https://doi.org/10.1101/2024.02.27.582234
Noyce, R. S., Lederman, S., & Evans, D. H. (2018). Construction of an
infectious horsepox virus vaccine from chemically synthesized DNA
fragments. PLoS ONE, 13(1), e0188453.
https://doi.org/10.1371/journal.pone.0188453
Office of the President of the United States. (2024). Framework for nucleic
acid synthesis screening. National Science and Technology Council.
Patwardhan, T., Liu, K., Markov, T., Chowdhury, N., Leet, D., Cone, N.,
Maltbie, C., Huizinga, J., Wainwright, C., Jackson, S., Adler, S.,
Casagrande, R., & Madry, A. (2024, January 31). Building an early
warning system for LLM-aided biological threat creation. Open AI.
https://openai.com/research/building-an-early-warning-system-for-llm-
aided-biological-threat-creation
Pavlova, V., & Makhlouf, M. (2023). BIOptimus: Pre-training an optimal
biomedical language model with curriculum learning for named entity
recognition. arXiv.Org. https://doi.org/10.48550/arxiv.2308.08625
Pilch, R., Luster, J., Pomper, M., & Shaw, R. (2021). Scientific risk
assessment of genetic weapons systems (Occasional paper no. 52).
James Martin Center for Non-proliferation Studies.
https://nonproliferation.org/op-52-scientific-risk-assessment-of-
genetic-weapons-systems/
Porter, T. (2023, November 13). Biden and Xi will sign a deal to keep AI
out of control systems for nuclear weapons. Business Insider.
https://www.businessinsider.com/biden-xi-deal-ai-out-nuclear-
weapons-systems-apec-report-2023-11
Reed, D., Nalca, A., & Roy, C. (2018). Aerobiology: History, development
and programs. In J. Bozue, C. K. Cote, & P. J. Glass (Eds.), Medical
aspects of biological warfare (2018 ed.). Office of the Surgeon
General, Borden Institute, US Army Medical Department Center and
School, Health Readiness Center of Excellence.
Regalado, A. (2023, September 19). DeepMind is using AI to pinpoint the
causes of genetic disease. MIT Technology Review.
https://www.technologyreview.com/2023/09/19/1079871/deepmind-
alphamissense-ai-pinpoint-causes-genetic-disease/
Revill, J., & Jefferson, C. (2014). Tacit knowledge and the biological
weapons regime. Science & Public Policy, 41(5), 597–610.
https://doi.org/10.1093/scipol/sct090
Rose, S., & Nelson, C. (2023). Understanding AI-facilitated biological
weapons development. The Centre for Long-Term Resilience.
https://www.longtermresilience.org/post/report-launch-examining-
risks-at-the-intersection-of-ai-and-bio
Sandbrink, J. B. (2023). Artificial intelligence and biological misuse:
Differentiating risks of language models and biological design tools.
https://doi.org/10.48550/arxiv.2306.13952
Smyth, P. (2023). The path to October 7: How Iran built up and managed a
Palestinian ‘Axis of Resistance’. CTC Sentinel, 16(11).
https://ctc.westpoint.edu/the-path-to-october-7-how-iran-built-up-and-
managed-a-palestinian-axis-of-resistance/
Soice, E. H., Rocha, R., Cordova, K., Specter, M., & Esvelt, K. M. (2023).
Can large language models democratize access to dual-use
biotechnology? https://doi.org/10.48550/arxiv.2306.03809
Strack, C. (2017). The evolution of the Islamic State's chemical weapons
efforts. CTC Sentinel, 10(9). https://ctc.westpoint.edu/the-evolution-of-
the-islamic-states-chemical-weapons-efforts/
Suleyman, M. (2023). The coming wave. Penguin Books.
Tucker, J. B. (2012). Innovation, dual use, and security. MIT Press.
Tucker, J. B., & Mahan, E. R. (2009). President Nixon's decision to
renounce the U.S. offensive biological weapons program. Center for
the study of weapons of mass destruction. National Defense University
[United States].
https://wmdcenter.ndu.edu/Portals/97/Documents/Publications/Case%
20Studies/cswmd_cs1.pdf
Tumpey, T. M., Basler, C. F., Aguilar, P. V., Zeng, H., Solórzano, A.,
Swayne, D. E., Cox, N. J., Katz, J. M., Taubenberger, J. K., Palese, P.,
& García-Sastre, A. (2005). Characterization of the reconstructed 1918
Spanish influenza pandemic virus. Science (New York, N.Y.),
310(5745), 77–80. https://doi.org/10.1126/science.1119392
Tunyasuvunakool, K., Adler, J., Wu, Z., Green, T., Zielinski, M., Žídek, A.,
Bridgland, A., Cowie, A., Meyer, C., Laydon, A., Velankar, S.,
Kleywegt, G. J., Bateman, A., Evans, R., Pritzel, A., Figurnov, M.,
Ronneberger, O., Bates, R., Kohl, S. A. A., …Hassabis, D. (2021).
Highly accurate protein structure prediction for the human proteome.
Nature (London), 596(7873), 590–596. https://doi.org/10.1038/s41586-
021-03828-1
United Nations. (2004, April 28). Resolution 1540.
https://documents.un.org/doc/undoc/gen/n04/328/43/pdf/n0432843.pdf
?token=miJRg7aILbuarvjx5G&fe=true
University of Washington. (2022, June 29). COVID-19 vaccine with IPD
nanoparticles wins full approval abroad [press release]. Institute for
Protein Design. https://www.ipd.uw.edu/2022/06/covid-19-vaccine-
skycovione-wins-full-approval-abroad/
Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022a). A teachable
moment for dual-use. Nature Machine Intelligence, 4(7), 607–607.
https://doi.org/10.1038/s42256-022-00511-6
Urbina, F., Lentzos, F., Invernizzi, C., & Ekins, S. (2022b). Dual use of
artificial-intelligence-powered drug discovery. Nature Machine
Intelligence, 4(3), 189–191. https://doi.org/10.1038/s42256-022-
00465-9
Van Kerkhove, M. D. (2023). Share all SARS-CoV-2 data immediately.
Science (New York, N.Y.), 380(6640), 11.
https://doi.org/10.1126/science.adi0490
Vogel, K. M. (2013). Phantom menace or looming danger? A new
framework for assessing bioweapons threats. Johns Hopkins
University Press.
Walsh, M. E. (2024, March 18). How to better research possible threats
posed by AI-driven misuse of biology. The Bulletin of Atomic
Scientists. https://thebulletin.org/2024/03/how-to-better-research-the-
possible-threats-posed-by-ai-driven-misuse-of-biology/#post-heading
Warrick, J. (2019, January 21). Iraqi scientist says he helped ISIS make
chemical weapons. The Washington Post.
https://www.washingtonpost.com/world/national-security/exclusive-
iraqi-scientist-says-he-helped-isis-make-chemical-
weapons/2019/01/21/617cb8f0-0d35-11e9-831f-
3aa2c2be4cbd_story.html
Warrick, J. (2021). Red Line: The unravelling of Syria and America's race
to destroy the most dangerous arsenal in the world. Doubleday.
The White House. (2022, October 12). 2022 National security strategy.
The White House. (2023, October 30). Executive order on the safe, secure,
and trustworthy development and use of artificial intelligence.
Winnifrith, A., Outeiral, C., & Hie, B. (2023). Generative artificial
intelligence for de novo protein design.
https://doi.org/10.48550/arxiv.2310.09685
World Health Organization. (2023, May 5). WHO Director-General's
opening remarks at the media briefing – 5 May, 2023.
https://www.who.int/director-general/speeches/detail/who-director-
general-s-opening-remarks-at-the-media-briefing---5-may-2023
Zambaldi et al. (2024). De novo design of high-affinity binders with
AlphaProteo. Google DeepMind.
https://storage.googleapis.com/deepmind-
media/DeepMind.com/Blog/alphaproteo-generates-novel-proteins-for-
biology-and-health-research/AlphaProteo2024.pdf
Zhu, Y. (2022). Advances in CRISPR/Cas9. BioMed Research
International, 2022, 9978571. https://doi.org/10.1155/2022/9978571
Zilinskas, R. (1986). Verification of the biological weapons Convention. In
E. Geissler (Ed.), Biological and toxin weapons today (p. 86). Oxford
University Press.
Zilinskas, R., & Mauger, P. (2018). Biosecurity in Putin's Russia. Lynne
Rienner Publishers.
11
EXTENDED MEANINGFUL HUMAN CONTROL OF AI-POWERED WARFARE
Guglielmo Tamburrini

DOI: 10.4324/9781003518495-15

Introduction
Military powers are rapidly incorporating artificial intelligence (AI) into
multiple facets of warfare, including decision-support systems for planning
military actions, perceptual systems for achieving situational battlefield
awareness, and AI-enabled autonomous weapons systems (AWS) for
selecting and attacking targets without human involvement after their
activation. This chapter explores meaningful human control (MHC) issues
concerning these various facets of AI-powered warfare. Accordingly, the
approach pursued in this chapter extends the scope of academic and
political debates which selectively address MHC for AWS and have focused
on target and attack functions that AWS carry out in the absence of human-
in-the-loop control configurations (Amoroso & Tamburrini, 2021;
Article36, 2013; Eggert, 2024; Mecacci et al., 2024; Santoni de Sio & van
den Hoven, 2018; Sharkey, 2016). In particular, the landscape of AI-
powered warfare explored here from an extended MHC standpoint includes
AI systems demanding human-in-the-loop control configurations and
performing decision-support or information management functions.
Additional uses of AI systems for military logistics and training are not
considered here (Grand-Clément, 2023).
The main reason to consider AI warfare applications allowing for human-
in-the-loop control configurations is that these configurations are
insufficient, in themselves, to ensure meaningful – as opposed to
perfunctory or purely nominal – human control. Conditions to establish
MHC over AI decision-support for warfare action planning and AI
perception for achieving situational battlefield awareness are stated, and
their importance for current developments in militarized AI is highlighted
by examining two exemplary cases, concerning respectively AI targeting
suggestions for aerial bombing and the proposal to use AI sensor fusion and
perception in nuclear early warning systems. The present exploration of the
extended MHC problem is rounded out by a discussion of MHC
requirements on AWS put forward by the International Committee of the
Red Cross (ICRC, 2021) in the light of recent technological advances on
aerial AWS and their precursors. Normative motivations to address the
extended MHC problem are emphasized throughout by reference to ethical
and legal pillars of International Humanitarian Law (IHL) and international
cooperation on arms control and disarmament. And practical challenges to
achieve extended MHC are emphasized throughout by reference to ongoing
technological developments in AI-powered warfare and their socio-
technical use context.
The chapter is organized as follows. The following section focuses on AI-powered decision-support systems for battlefield action planning, examining systems used by the Israel Defense Forces (IDF) to plan aerial bombing in Gaza. These systems are embedded in a human-in-the-loop control architecture, insofar as human operators are required to review their suggestions downstream and filter them. However, the socio-technical context of these AI applications makes it difficult to ensure MHC rather than merely nominal human control. Psychological hurdles
jeopardizing MHC are identified, which have their origin in the narrow time
frames for decision-making, hierarchical military pressure, and political or
more broadly societal interference. The subsequent section examines MHC
in connection with the controversial proposal of introducing AI-enhanced
early warning into nuclear command and control (NSCAI, 2021; Saltini,
2023). This proposal is put forward on the ground that AI promises to speed
up the achievement of situational awareness, making more extended time
frames available for downstream human decision-making. However, this
claim jars with the circumstance that AI's perceptual errors – the risk of
which is amplified by the existential stakes of a nuclear conflict – mandate
a time-consuming process of verifying the veracity of machine-driven
perceptual judgments. And this verification may offset any time gains
flowing from AI's faster information processing. We then summarize the
MHC framework for AWS proposed by the ICRC, its chief ethical and legal
motivations, and increasing challenges to its implementation posed by
newly developed precursors of aerial AWS, while the conclusion rounds out
the chapter, summarizing the extended MHC requirements examined here,
and pointing to additional challenges that AI in cyber warfare poses for
human control of warfare.
Extended MHC in Command and Control
AI systems support human decision-making in a wide variety of ethically
sensitive tasks, whereby decisions and consequent actions significantly
impact on the life and interests of the people involved. These domains span
from medical diagnosis (Kumar et al., 2022) and surgical planning
(Ficuciello et al., 2019) to social service eligibility and tax fraud detection
in public policy processes (Tan et al., 2023). As a rule, AI-generated
suggestions in ethically sensitive domains are only conditionally enacted.
Machine suggestions are reviewed by competent human operators and
ultimately accepted, revised, or overturned. This review process is
construed here as requiring human operators to exert MHC over machine
suggestions. The qualification “meaningful” in the expression “meaningful
human control” originally goes back to early debates about AWS. It was
aptly introduced there to distinguish between perfunctory human control on
the one hand – amounting to little more than rubber-stamping based on
superficial and routine checks – and attentive, conscientious, and thorough
human control on the other hand (Article36, 2013). But this distinction
naturally also extends to the filtering function that humans perform in their
interaction with decision-support systems.
To conscientiously embrace their responsibilities for the implementation of AI suggestions, human operators must exert MHC. The
normative relevance of MHC emerges clearly from various key
requirements listed in the European Union (EU) ethics guidelines for
trustworthy AI (HLEG, 2018), including human agency and oversight,
accountability, technical robustness, and safety. Similarly, a variety of
requirements intertwined with MHC are listed in the EU's AI Act, in
connection with application domains – ranging from the provision of
essential public and private services to immigration and border controls –
that are classified there as high-risk (European Parliament, 2024). Indeed,
AI systems operating in these domains are confined, as a rule, to
performing decision-support roles under the supervision of appointed
human operators.
The normative relevance of MHC is equally evident in the context of
military domains that fall outside the scope of EU regulation. Cases in point
are suggestions that AI decision-support systems make in connection with
the battlefield distribution of military units or the selection of legitimate
military targets. Significant consequences for the life and welfare of the
people involved may flow from enacting the suggestions made by these AI
decision-support systems. Notably, human judgment might be needed to
block suggestions potentially leading to violations of IHL. Let us consider,
from this normative perspective, the distinction between perfunctory and
meaningful human control, and the need to protect the latter from situations
in which human control boils down to little more than clerical rubber-
stamping of machine suggestions. Dangers of this sort emerged in an
investigative news report about AI decision-support systems used by the
IDF. These systems, known as Habsora and Lavender, were allegedly used
to plan military operations in Gaza, in the aftermath of the massacre of
innocent civilians perpetrated by Hamas within Israeli territory on October
7, 2023. More precisely, according to an investigation carried out by the
+972/Local Call magazines, the IDF employed these AI-enabled decision-
support systems to generate lists of potential aerial bombing targets in the
Gaza Strip (Abraham, 2023, 2024; Davies et al., 2023). Undisclosed
sources in the IDF targeting division stated that the automation of this task
dramatically increased the rate of potential target generation, moving “from
50 targets a year to 100 targets a day.” A team of specialists in the IDF
targeting division was tasked with reviewing the items included in AI-generated lists of potential targets, ultimately approving or rejecting these proposals based on a legitimacy check.
An anonymous member of the IDF targeting division stated: “We prepare
the targets automatically and work according to a checklist… It really is
like a factory. We work quickly and there is no time to delve deep into the
target. The view is that we are judged according to how many targets we
manage to generate.” A concern is expressed here about the impact of
increased machine productivity on human judgment, in the context of
perceived psychological pressure to align human filtering decisions with the
pace of machine targeting suggestions. These two situational aspects stand
out in connection with MHC. Indeed, the anonymous source cited in the
+972/Local Call report complained about the assembly line character
imposed on the filtering task (“it really is like a factory”), about the lack of
sufficiently extended time frames to overview machine suggestions and to
provide considered judgments about legal compliance (“we work quickly
and there is no time to delve deep into the target”), and about new criteria
allegedly introduced higher up in the military hierarchy, prioritizing
increased output volumes in evaluating performance of the human filtering
activity (“the view is that we are judged according to how many targets we
manage to generate”).
One may plausibly invoke the search for military advantage as a source
of the psychological pressure experienced by members of the IDF targeting
division. The human bottleneck generates longer queues of potential targets
given the speed of AI targeting suggestions, and one may presume that
these queues include urgent and legitimate military actions. Be that as it
may, there is a significant lesson to learn from this account of psychological
predicaments affecting human-machine interactions. A human-in-the-loop
control configuration is insufficient to ensure that human operators are in a
position to filter AI proposals in a heedful, conscientious, and effective
manner. Taking for granted that no negligence is involved, a variety of
perturbations may still prevent human operators from exerting MHC from
within the control loop and giving well-pondered judgments on machine
suggestions.
Dual-process models of human decision-making identify a significant source of
human judgment perturbation in the psychological pressure to meet tight
deadlines under severe cognitive load (Strack & Deutsch, 2015). In this
regard, Daniel Kahneman introduced a simplifying distinction between two
distinct cognitive “systems,” respectively subsuming heuristic and more
analytic decision-making (Kahneman, 2012). System 1 comprises heuristic decision-making processes that are fast, mostly automatic, and emotionally laden. System 2 comprises reflective and analytical decision-making processes
that are slower and less readily activated. The two systems operate
concurrently, but not always cooperatively. Options that are more readily
selected by System 1 are often accepted by default, especially when there is
limited time to decide what to do, and concurrent efforts by System 2 are
disregarded. Options in the lists generated by Habsora or Lavender are
readily available, whereas considerable cognitive effort might be needed to
reject them on rational grounds and to propose a different course of action.
The WYSIATI (What-You-See-Is-All-There-Is) heuristics, identified and
experimentally probed by Daniel Kahneman and Amos Tversky, may help
account for this psychological effect. The WYSIATI heuristics reflect,
according to Kahneman, a “remarkable asymmetry between the ways our
mind treats information that is currently available and information we do
not have.” People tend to focus on what is clearly visible to them,
neglecting information that is not known or not presently retrieved from
memory: “Information that is not retrieved (even unconsciously) from
memory – Kahneman observed – might as well not exist.” One jumps to
conclusions based on this asymmetry and goes on to construe “the best
possible story that incorporates ideas currently activated,” endorsing by
default the (often but not invariably correct) belief that only this
information is relevant (Kahneman, 2012, p. 85).
The System 1/System 2 distinction has inspired social policies and
interventions that exploit System 1 biases to increase personal and
collective wellbeing. One may design choice architectures including nudges
relying on the WYSIATI availability heuristics to orient people toward
decisions that are beneficial to them without precluding alternative choices
(Thaler & Sunstein, 2008). But the activation of WYSIATI under time
pressure does not invariably point to beneficial options. The time pressure
predicament may induce human operators to endorse AI-powered targeting
suggestions that are readily available. It was estimated that target
identification suggestions by the IDF systems came with a 10% error
margin. At the reported pace of up to 100 targets a day, such an error margin would translate into roughly ten flawed proposals daily unless they are caught by human review. The moral and legal implications of insufficiently extended temporal windows for human judgment are even more problematic in the light of this estimated error. Human filtering of targeting suggestions affected by frequent errors is crucial to avoid violations of the IHL principles of distinction and proportionality, and to spare the lives of combatants hors de combat, innocent civilians, and other legally protected persons.
AI decision-support systems are socio-technical devices. A wide
spectrum of psychological and social factors contributes to determining
what AI decision-support systems are used for and how they are used. The
provision of a human-in-the-loop control configuration is clearly
insufficient to deliver MHC. The latter is jeopardized by unduly restricting
the width of time windows allowed for human decision-making or
unconditionally favoring the number of endorsed machine suggestions.
Evidently, one can hardly provide a context-independent and exhaustive list
of such psychological and social perturbations. However, the maintenance
of proper psychological and social boundary conditions is crucial to reduce
the risk of perfunctory human control, and to fulfil the following MHC
condition on AI decision-support systems:

(MHC_1) Human decision-makers are in the position to give pondered and well-informed judgments about the likely effects in the world of decision proposals advanced by an AI system, including their moral and legal implications.

Additional conditions that must be met to ensure (MHC_1) include the possession of suitable competences and skills on the part of human
operators, including the capability to cope with current limitations of AI
technologies. These limitations notably include the opacity affecting much
AI information processing, and the ensuing difficulty for human operators
to interpret or to obtain explanations for machine responses. Let us now
turn to consider how this information processing opacity interacts with tight
time frames for human decision-making, thereby raising new hurdles for
MHC. As an exemplary case to illustrate this problem, let us consider the
proposed use of AI perceptual systems within nuclear command and
control.
Extended MHC in Nuclear Command and Control
The US National Security Commission on AI (NSCAI) emphasized that the
decision to authorize the use of nuclear weapons should firmly remain in
human hands and never be delegated to an AI system (NSCAI, 2021, p. 98).
At the same time, however, NSCAI put forward the opinion that there is
some role for AI to play in nuclear command and control: “AI should assist
in some aspects of the nuclear command and control apparatus: early
warning, early launch detection, and multi-sensor fusion, to validate single
sensor detections and potentially eliminate false detections.” The rationale
underlying this proposal is the expectation that AI will “increase reliability, reduce
accident risks, shorten processing time, buy more time for decision-makers”
(NSCAI, 2021, p. 104, no. 22). These suggestions recur in recent discussion
of AI in the modernization of nuclear defense systems (Baldus, 2022;
Verbruggen, 2020). In the absence of detailed declassified information
about current nuclear early-warning systems and command-and-control
structures, it is difficult to analyze in detail the pros and cons of this
suggestion (Borrie, 2019, p. 49). Nevertheless, one can fully appreciate – in
view of the psychological reasons mentioned in the previous section – that
making more time available to human actors immersed in this decision-
making context is of paramount importance. Indeed, upon warning of incoming Russian intercontinental ballistic missiles, the US nuclear launch decision process must be completed before their detonation and thus cannot take longer than 30 minutes. By shortening information-processing prior to early warning alerts,
one hopes that AI will make a more extended window available for nuclear
decision-making, lessening to some extent the enormous psychological
pressure involved in deciding whether and how to respond to a nuclear
attack. Paul Slovic and Herbert Lin highlighted, in the context of decisions
on the use of nuclear weapons, the potentially disastrous effects of limited
time on the ability to develop thoughtful reflection, contextualize the
relevant ethical and legal constraints, and assess the most suitable trade-offs
among values and options, including the choice between launching on
warning and retaliating after absorbing a nuclear attack (Slovic & Lin,
2020).
Given that more time for nuclear decision-making is of paramount
importance, the relevant question is whether AI technologies provide
effective means to achieve this goal. Considerable doubts about a positive
answer to this question are fueled by a reflection on general features of AI's
perceptual inference and classification.
To begin with, it is worth recalling that “big data” about launch and no-
launch events are usually needed to train an AI system to perform early
launch detection. However, real launch data are scarce. One may work
around this stumbling block by generating synthetic training data, based on
a model of launch events and their properties. Clearly, the synthetic training
data will only be as good as the model of launch events they are based on.
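This dependence can be illustrated with a deliberately toy sketch; the sensor model, feature dimensions, and parameters below are invented for illustration and bear no relation to any real early warning system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate_sensor_frame(launch: bool) -> np.ndarray:
    """Toy sensor feature vector drawn from a hand-built generative model."""
    frame = rng.normal(loc=1.0, scale=0.2, size=8)            # assumed background clutter
    if launch:
        frame[:3] += rng.normal(loc=2.0, scale=0.5, size=3)   # assumed booster-plume signature
    return frame

# Label half of the synthetic frames as launches and train a detector on them.
labels = np.array([i % 2 for i in range(2000)])
frames = np.array([simulate_sensor_frame(bool(y)) for y in labels])
detector = LogisticRegression().fit(frames, labels)

# The detector inherits every simplification of simulate_sensor_frame: phenomena
# the generative model omits (e.g., sunlight glinting off cloud tops) are simply
# absent from its training distribution.
```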
Let us suppose for the sake of argument that enough synthetic data are
generated from a “good” model of launch events, and that an AI system for
early launch detection can be developed on this basis. The NSCAI
suggested using such a system to assist nuclear command and control, with
the proviso that decisions to authorize the use of nuclear weapons should
remain firmly in human hands. Clearly, this proviso entails that human-
machine cooperative problem solving in nuclear command and control must
satisfy the following condition:

(MHC_2) Human decision-makers are in the position to assess the veracity of perceptual inferences and classifications advanced by an AI system, and to evaluate the likely effects in the world of actions undertaken on account of these perceptual responses, jointly with their moral and legal implications.

The main rationale for (MHC_2) is the possibility of errors, which is intrinsic to the statistical nature of AI information processing. Successful
tests of good perceptual achievements and classification performances do
not exclude the occurrence of infrequent mistakes. An infrequent false
positive in the early warning of an intercontinental ballistic missile launch
may trigger an unjustified use of nuclear weapons. And the occurrence of
such mistakes in nuclear early warning, no matter how infrequent, would have dire consequences, given their indiscriminate effects on civilian populations, possibly amounting to a large-scale existential threat to human civilization. Therefore, the grave stakes attached to even infrequent nuclear early warning errors demand
systematic probing of the AI system's perceptual responses in accordance
with (MHC_2). But the time spent on this task may offset the time gains
that the NSCAI envisaged to flow from AI-assisted early warning.
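The need for such probing can be made concrete with a simple base-rate calculation, using purely illustrative numbers rather than figures from any real system. Suppose that the prior probability of a genuine launch in a given observation window is one in a million, that the system detects true launches with probability 0.999, and that it raises a false alert in one window out of ten thousand. Bayes' theorem then gives:

```latex
P(\text{launch} \mid \text{alert})
  = \frac{0.999 \times 10^{-6}}{0.999 \times 10^{-6} + 10^{-4} \times (1 - 10^{-6})}
  \approx 0.01
```

On these assumed numbers, roughly 99 out of 100 alerts would be false positives, which is precisely why each alert calls for the kind of human verification that (MHC_2) demands, and why that verification is difficult to compress.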
Reasons for doubting that AI in nuclear early warning will buy more time
for downstream decision-making are strengthened by considering the
opacity of AI information processing. A lack of transparency is especially
acute in the case of systems based on deep neural networks (DNNs), which
are presently the best performing and most successful AI systems.
Perceptual classification outcomes of AI systems based on DNNs depend
on features of input data that may significantly differ from features that
humans use to perform the same classification task. For example, to decide
whether there is a cat in some given image, humans usually focus on what
they identify as visually salient features of cats – such as whiskers, ears,
nose, and tail – and their spatial arrangement in the image. In contrast with
this, AI image classification processes may rely on distributed image parts
and sets of pixels that one can hardly map into humanly meaningful
features of cats. Owing to these notable differences, AI information
processing is mostly opaque and barely interpretable for human users and
decision-makers. If the veracity assessment involved in (MHC_2)
presupposes that one fills the semantic gap between human and machine
knowledge representation and processing, then this additional task
introduces another stumbling block toward the goal of buying more time for
downstream decision-making.
Yet another problematic aspect of the proposal to use AI systems in
nuclear early warning flows from the discovery of counterintuitive mistakes
that AI systems make, and that the human perceptual system would
unproblematically detect and avoid. These fragilities of AI information
processing have been uncovered by means of adversarial machine learning
techniques (Biggio & Roli, 2018), leading to the identification of a variety
of errors that are most relevant to military uses of AI systems outside the
nuclear domain. Notably, visual perceptual systems based on DNNs were
found to mistake images of school buses for ostriches (Amoroso et al.,
2022; Szegedy et al., 2014) and 3-D renderings of turtles for rifles (Athalye
et al., 2018). These mistakes were induced by small and carefully crafted
perturbations of perceptual inputs. Human operators would not make such
mistakes, since the small adversarial input perturbations inducing the
machine to err are hardly noticeable by the human perceptual system.
Clearly, these mistakes are potentially harmful in a wide variety of
conventional warfare domains and may lead to outright violations of IHL.
Indeed, normal uses of school buses are protected by IHL, and people
carrying a harmless object in their hands may be mistakenly taken to wield
a weapon, thereby triggering an unjustified use of force. By the same token,
one cannot exclude that AI systems for nuclear early warning will make
counterintuitive and potentially catastrophic errors of the same sort that
adversarial machine learning has highlighted in other critical application
domains. But in nuclear decision-making the indiscriminate, devastating,
and long-lasting consequences of these conceivable mistakes are
unparalleled in other warfare domains.
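To make the mechanism concrete, the following is a minimal sketch of the fast gradient sign method, one standard adversarial perturbation technique from this literature; it is not a reconstruction of the specific methods used in the studies cited above, and the classifier, input tensor, and perturbation size are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Craft a small input perturbation that pushes the model toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # loss w.r.t. the correct label
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

A perturbation of this kind can be numerically tiny, and therefore imperceptible to a human observer, while still flipping the model's classification.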
To control the veracity of AI perception, and to detect and countervail machine errors that human operators would easily prevent, nuclear decision-makers should be put in a position to understand the reasons why an AI-powered early warning system provided a certain classification of sensor data. In the absence of surveyable and
transparent logical, causal, or probabilistic inference on the part of the AI
system, human operators are hard-pressed to work out an adequate
explanation for themselves. One may alternatively try and endow the AI-
powered early warning system with the capability to answer explanation
requests put by human operators. Explanations would have to be cast in
terms that are cognitively accessible to human operators. The achievement
of this overall goal characterizes the research area called eXplainable AI
(XAI), which addresses the challenging problem of mapping AI
information-processing into cognitive and perceptual chunks that are
understandable to humans, and of assembling on this basis “good”
explanations for AI predictions and classifications. However, pending
significant breakthroughs in XAI, one cannot but acknowledge the
persisting difficulty for nuclear decision-makers to fulfil (MHC_2) and to
achieve high confidence in machine responses within the restricted time
available to them.
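As an illustration of the kind of post-hoc explanation that XAI research aims to provide, the sketch below computes a simple occlusion-based attribution map for a hypothetical image classifier; the model, the input format, and the patch size are assumptions made for illustration only.

```python
import torch

@torch.no_grad()
def occlusion_map(model, image, target_class, patch=8, baseline=0.0):
    """Score how much masking each image patch lowers the model's confidence."""
    _, _, h, w = image.shape
    scores = torch.zeros((h + patch - 1) // patch, (w + patch - 1) // patch)
    base_conf = torch.softmax(model(image), dim=1)[0, target_class]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.clone()
            masked[:, :, i:i + patch, j:j + patch] = baseline   # blank out one patch
            conf = torch.softmax(model(masked), dim=1)[0, target_class]
            scores[i // patch, j // patch] = base_conf - conf   # confidence drop
    return scores  # high values mark regions the prediction depends on
```

Even a map of this kind only indicates where in the input the model's confidence is concentrated; turning such indications into reasons that a nuclear decision-maker could weigh within minutes remains an open problem.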
Let us take stock. There seems to be undisputed consensus on the
condition that only human beings – and no automated system – ought to
authorize the deployment of nuclear weapons. However, one cannot take at
face value even the more modest recommendation to use AI in nuclear early
warning. The alleged advantage that one may expect to flow from AI
perceptual systems in nuclear early warning – reduced processing time and
more extended temporal windows for downstream human decision-making
– is likely to be offset by the need to assess the veracity of perceptual
responses produced by these systems and to fulfil (MHC_2) accordingly.
Opacities, fragilities, and vulnerabilities of AI systems based on DNNs
erode confidence in the reliability of this technology within a variety of
safety-critical domains (Cummings, 2021). And even more so in nuclear
early warning, in multi-sensor fusion and in other segments of nuclear
command and control, given the risks posed by a nuclear war in view of its
unparalleled, devastating, indiscriminate, and long-lasting consequences. A
moratorium on the integration of DNN-based AI systems into nuclear
command and control was recently proposed as an important step to reduce
the inherent risks involved (Saltini, 2023).
An enduring lesson about the risks that may arise from current proposals to use
AI to automate nuclear early warning systems is still afforded by the false
positive of a nuclear attack signaled by the Soviet early warning system
OKO on September 26, 1983. OKO mistook sensor readings of sunlight
reflecting on clouds for signatures of five incoming intercontinental ballistic
missiles. Colonel Stanislav Petrov, the duty officer at the OKO command
center, conjectured that this system had produced a false positive and
refrained from reporting this event higher up in the command hierarchy.
Commenting on his momentous decision many years later, Petrov remarked
that “when people start a war, they don’t start it with only five missiles”
(Arms Control Association, 2019). Petrov's appraisal of the system's
response was the outcome of counterfactual causal reasoning and inference
based on an understanding of broader military and political contexts.
Clearly, these mental resources exceeded OKO's narrow appraisal
capabilities. But the lesson to be learned extends to present-day
technologies, insofar as causal reasoning and the understanding of broad
contextual conditions are still beyond the capabilities of AI systems based
on DNNs.

MHC for Autonomous Weapons Systems


Requirements (MHC_1) and (MHC_2) are tailored for AI systems
supporting human decision-making either by proposing actions to undertake
or by providing perceptual classifications. In these decision-support roles
only human operators are appointed to approve any warfare actions. This
human-machine interaction scheme does not apply to AWS. Indeed, a
weapons system is autonomous only if it can carry out targeting and
attacking tasks without requiring, after its activation, any intervention by
human operators (DoD, 2023). The question then arises whether and how
human operators can impose MHC on AWS without relying on human-in-
the-loop control roles.
An influential answer to this question was provided by the ICRC. Indeed,
the ICRC put forward a three-tiered proposal for a legally binding
international agreement on AWS development and use, which includes two
prohibition requests and a regulation request. The latter is essentially based
on the assumption that ethically and legally motivated MHC can be exerted
on some AWS (ICRC, 2021). The two prohibition requests concern AWS
that are insufficiently understood or predictable in their effects and AWS
targeting people. AWS that are not prohibited on these grounds must be
regulated by introducing a variety of constraints on their use.

1. “AWS that are designed or used in a manner such that their effects cannot
be sufficiently understood, predicted and explained should be expressly
ruled out,” notably because of their indiscriminate effects.
2. “AWS that are designed or used to apply force against persons should be
prohibited in the light of ethical considerations to safeguard humanity,
and to uphold international humanitarian law rules for the protection of
civilians and combatants hors de combat.”
3. Finally, “the design and use of autonomous weapon systems that would
not be prohibited should be regulated, including through a combination
of limits on the types of target, such as constraining them to objects that
are military objectives by nature, limits on the duration, geographical
scope and scale of use, including to enable human judgement and control
in relation to a specific attack, limits on situations of use, such as
constraining them to situations where civilians or civilian objects are not
present, requirements for human–machine interaction, notably to ensure
effective human supervision, and timely intervention and deactivation.”
(ICRC, 2021, p. 2)

Let us consider in turn each one of these requests. Ethical and legal
motivations for prohibition 1 have been widely debated in academic and
political discussion on AWS. An outline of these motivations was advanced
early on in a 2013 report by Christof Heyns, at the time UN Special
Rapporteur on extrajudicial, summary, or arbitrary executions (Heyns,
2013). Reference is made there to unpredicted effects that are
indiscriminate from the viewpoint of just war theory (Lazar, 2020; Walzer,
2000). Just war theory distinguishes between just and unjust conduct in
armed conflicts and provides moral criteria to judge warfare actions on this
basis. These criteria prominently include the noncombatant's right to
immunity and the moral balance between the means and the ends of
military action. These pillars of just war theory found their way into IHL –
and notably so in the formulation of principles of distinction,
proportionality, and precaution that are enshrined in the 1977 Additional
Protocols to the Geneva Conventions. Violations of these IHL principles
impinge upon the life or the interests of innocent civilians, hors de combat
enemies, and other protected groups. If the effective exercise of MHC on
AWS implies that their behavior and effects must be “sufficiently”
predictable, understandable, and explainable to prevent violations of IHL,
then prohibition 1 boils down to ruling out AWS which do not meet this
MHC condition.
In connection with prohibition 2, the Heyns report elaborates on an
argument that Peter Asaro advanced earlier on (Asaro, 2012), the upshot of
which is that the deployment of lethal AWS is an affront to human dignity,
insofar as the latter dictates that decisions entailing the taking of a human
life should be reserved to humans. One should carefully note that Asaro's
argument is supposed to apply to any conceivable lethal AWS, no matter
whether the AWS is found to comply with IHL. Developed from a
distinctively deontological standpoint in normative ethics, this argument
includes under respect for human dignity the right not to be arbitrarily
deprived of life. And for killing decisions to count as non-arbitrary, Asaro
argues, they must be taken from the vantage point of a responsible exercise
of human judgment and compassion. Asaro remarks: “When it comes to
killing, each instance is deserving of human attention and consideration in
light of the moral weight that is inherent in the active taking of a human
life.” Thus, respect for the human dignity of potential victims demands that
human agents take decisions affecting the life and physical integrity of
people involved in armed conflicts, and that these decisions are never taken
by AWS or any other AI-powered artificial agent (Amoroso & Tamburrini,
2021).
AWS that are not prohibited under prohibitions 1 and 2 must be suitably regulated.
Point 3 expresses the ICRC regulation proposal, which can be informatively
recast as an MHC requirement:

(MHC_3) Human decision-makers are in the position to assess whether AWS operation is limited by design or situation of use to objects that are legitimate military targets. This limit must not be jeopardized by excessive duration, scope and scale of use, or by inadequate provisions ensuring proper human supervision, judgement, timely intervention or deactivation.

(MHC_3) is generally satisfied by anti-materiel air defense systems like the Phalanx Close-In Weapons System in use on US surface combat ships to
defend manned installations and platforms against attempted time-critical or
saturation attacks. Similarly, the possibility of complying with (MHC_3) is
hardly questionable in the case of Germany's Nächstbereichschutzsystem
(NBS) MANTIS, Israel's Iron Dome, or active protective systems for
vehicles, like the South African–Swedish LEDS-150 Land-Electronic
Defense System (Amoroso & Tamburrini, 2021, pp. 249–50). These
defense systems were aptly classified by Noel Sharkey as SARMO
weapons, where SARMO stands for “Sense and React to Military Objects.”
SARMO systems are hardly problematic from ethical and legal
perspectives, in that “they are programmed to automatically perform a small
set of defined actions repeatedly. They are used in highly structured and
predictable environments that are relatively uncluttered with a very low risk
of civilian harm. They are fixed base, even on Naval vessels, and have
constant vigilant human evaluation and monitoring for rapid shutdown”
(Sharkey, 2016).
More problematic from the viewpoint of (MHC_3) compliance is Israel's
Harpy NG loitering munition system, which can overfly assigned areas for
up to nine hours in search of enemy radar sources to attack. The
Harpy NG's extended time window for loitering may lead to a violation of
(MHC_3), especially in dynamic warfare scenarios where civilians and
civilian objects may suddenly come into sight. Indeed, the situational
properties allowing the activation of this loitering munition under (MHC_3)
may be superseded if civilians enter the assigned area, erratic changes occur
there, or surprise-seeking intentional behavior is enacted. More generally,
the activation of loitering munitions must be undertaken in compliance with
(MHC_3), by constraining their duration, scope and scale of use, and at the
same time by allowing for timely human intervention and deactivation.
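To make this operational reading of (MHC_3) more concrete, the following minimal sketch shows how the limits on duration, scope, and supervision discussed above might be encoded as pre-activation and in-flight checks for a loitering munition. It is a purely illustrative assumption: every name, field, and threshold is hypothetical and describes no fielded system.

from dataclasses import dataclass

# Illustrative (MHC_3)-style constraint check. All fields and thresholds are
# hypothetical assumptions for this sketch, not a real weapon interface.

@dataclass
class EngagementContext:
    max_loiter_minutes: float    # limit on duration of use
    max_area_km2: float          # limit on spatial scope of use
    operator_link_ok: bool       # provision for supervision and timely intervention
    civilians_reported: bool     # situational change superseding prior clearance

def activation_permitted(ctx: EngagementContext,
                         planned_loiter_minutes: float,
                         planned_area_km2: float) -> bool:
    # Activation is allowed only while the planned use stays within the limits
    # and human supervision remains available.
    return (planned_loiter_minutes <= ctx.max_loiter_minutes
            and planned_area_km2 <= ctx.max_area_km2
            and ctx.operator_link_ok
            and not ctx.civilians_reported)

def must_deactivate(ctx: EngagementContext) -> bool:
    # In-flight rule: loss of supervision or reported civilian presence
    # requires timely deactivation.
    return ctx.civilians_reported or not ctx.operator_link_ok

On this reading, a check of this kind is only a technical complement to, and never a substitute for, the human judgment and timely intervention that (MHC_3) requires.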
Let us now consider, from the standpoint of (MHC_3) and the other ICRC requirements, recent technological developments concerning experimental prototypes of aerial AWS. Tests conducted in 2023 successfully demonstrated that an AI-flown fighter jet is capable of staging dogfights with fighter jets flown by human pilots, that is, of engaging in aerial battles conducted at close range and involving maneuvers to attack or evade
opponents. A human-on-the-loop control configuration was adopted in
these tests: two pilots were in the fighter jet's cockpit to monitor AI-
controlled maneuvers, and to take over piloting functions if needed. The
press release providing basic information about these tests emphasized that
the lessons learned from these successful trials were not limited to
dogfighting. One might develop on this basis unmanned aerial vehicles
(UAVs) carrying out airstrike missions and reconnaissance operations or
flying alongside and escorting crewed jet fighters (Losey, 2024).
From the standpoint of prohibition 1, one may ask whether these AI-flown
fighter jets are always sufficiently predictable to prevent indiscriminate
effects from their action, including friendly fire and violations of IHL.
Arguably, this predictability condition is severely challenged in cluttered
and highly dynamic aerial battles, due to fast interactions between multiple
opposing aircraft. From the standpoint of prohibition 2, one cannot use an
AI-flown fighter jet as an AWS which can target crewed opponents in aerial
battles, insofar as this use does not exclude the application of force against
persons aboard enemy fighter jets. Airstrike missions against manned
military targets on the ground would be similarly prohibited. As far as
(MHC_3) is concerned, a relevant technological factor for timely human
intervention is the distance of the source of human control signals from AI-
flown fighter jets. In the remote control of US drones during the
Afghanistan war, sources and receivers were often located more than
10,000 km from each other. Similar distances give rise to extended control
signal latencies, which would not allow for timely and effective human
interventions on a fighter jet engaged in dogfight maneuvers. To fulfil
(MHC_3), remote control signals should be delivered from closer distances,
for example from a nearby aircraft.
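A back-of-envelope calculation illustrates why this distance matters. The figures below are assumptions chosen for the sketch (propagation at the vacuum speed of light, nominal path lengths for a direct link and for a single geostationary satellite relay); actual control loops add further processing and network delays.

# Illustrative propagation-delay estimate for remote control signals.
# All path lengths are assumed nominal values, not measured figures.

C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def one_way_delay_ms(path_km: float) -> float:
    # Pure propagation delay for a signal travelling path_km, in milliseconds.
    return path_km / C_KM_PER_S * 1000

direct_path_km = 10_000     # operator-to-theatre distance cited above
satellite_path_km = 72_000  # assumed up-and-down path via a geostationary relay

print(f"direct link, one way: {one_way_delay_ms(direct_path_km):.0f} ms")        # ~33 ms
print(f"satellite relay, one way: {one_way_delay_ms(satellite_path_km):.0f} ms")  # ~240 ms
print(f"satellite relay, round trip: {2 * one_way_delay_ms(satellite_path_km):.0f} ms")  # ~480 ms

Even under these optimistic assumptions, a supervisory round trip approaches half a second before any human reaction time is counted, which is long relative to the pace of dogfight maneuvers; delivering control signals from a nearby aircraft shortens the path, and hence the latency, by orders of magnitude.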
In conclusion, the ICRC position poses multiple constraints on AWS
technological possibilities envisaged on account of experimental results
from AI-flown fighter jets. In some cases, leveraging these technological
possibilities for warfare purposes is likely to be prohibited, insofar as the
predictability requirement is severely challenged in cluttered and highly
dynamic aerial battles, and AI-flown fighter jets could not be used as AWS
targeting crewed aircraft in aerial battles or inhabited targets on the ground.
In other cases, the spatial range of timely human control interventions
would be severely curtailed in the light of (MHC_3) requirements.

Conclusion
The notion of extended MHC over AI-powered warfare was introduced and
examined in this chapter. An initial effort was made to expand on this idea
by formulating and discussing some MHC requirements which concern
AWS and other AI systems which include human-in-the-loop control
configurations by design. Here is the list of MHC requirements discussed in
this chapter.
(MHC_1) Human decision-makers are in the position to give pondered
and well-informed judgments about the likely effects in the world of
decision proposals advanced by an AI system, including their moral and
legal implications.
(MHC_2) Human decision-makers are in the position to assess the
veracity of perceptual inferences and classifications advanced by an AI
system, and to evaluate the likely effects in the world of actions undertaken
on account of these perceptual responses, jointly with their moral and legal
implications.
(MHC_3) Human decision-makers are in the position to assess whether
AWS operation is limited by design or situation of use to objects that are
legitimate military targets. This limit must not be jeopardized by excessive
duration, scope and scale of use, or by inadequate provisions ensuring
proper human supervision, judgment, timely intervention or deactivation.
The normative importance of (MHC_1) was emphasized by reflecting on
the negative consequences for the life and welfare of people possibly
flowing from enacting suggestions made by AI decision-support systems
without filtering those suggestions which lead to the violation of just war
theory and IHL principles. The difficulty of fulfilling (MHC_1) was
emphasized based on the distinction between perfunctory and meaningful
human control and was explored in the context of AI decision-support
systems for warfare target selection. Indeed, the increased productivity seen
in AI target generation, jointly with the psychological pressure to bring the
pace of human filtering decisions closer to the pace of machine targeting,
may end up reducing human control to clerical rubber-stamping of machine
suggestions.
The normative importance of (MHC_2) was emphasized in connection
with the protection of peace, a value enshrined in Article 1 of the UN
Charter, and related efforts to avoid inadvertent armed conflicts and their
escalation. The difficulty of meeting (MHC_2) was explored in connection
with the proposal to use AI perceptual systems for nuclear early warning,
based on the supposition that these systems will speed up the achievement
of situational awareness, making more extended windows available for
downstream human decision-making. But the high stakes of nuclear
decision-making and the corresponding obligation to fulfil (MHC_2) jar
with this supposition, insofar as the involved veracity checks may offset
time gains allegedly flowing from AI's faster information processing.
The normative importance of meeting (MHC_3) – and related prohibitions
on lethal or insufficiently predictable AWS – is by now largely familiar on
account of extensive academic and political debates on AWS, focusing on
human dignity protection and the respect of just war and IHL principles.
The difficulty of satisfying (MHC_3) was explored in the light of recent
technological advances concerning AI-flown fighter jets engaging in
dogfights with fighter jets controlled by human pilots. Various
technological possibilities opened up by these experimental advances jar
with (MHC_3) – especially so in connection with the control-signal latencies that may preclude timely control interventions by human operators.
It is evident that the present approach to the extended MHC problem for
AI-powered warfare can and should be further developed, in terms of
variations on or alternatives to the three requirements discussed here, as
well as in terms of newly formulated MHC requirements on AI applications
in other warfare domains and tasks, including logistics and military training
(Grand-Clément, 2023). Notably, MHC requirements are needed to mitigate
the risks for international peace raised by AI in cyberwarfare. Indeed, AI's
potential is being explored there to expand cyber capabilities, to automate cyber warfare, and to increase the speed, volume, and
destructiveness of cyberattacks (Reinhold & Reuter, 2022). However,
distinctive features of the cyberwar domain, including the difficulty of
attributing cyberattacks, give rise to difficult and profound normative issues
that require an extensive and dedicated treatment falling outside the scope
of this chapter.

Acknowledgments
Research for this chapter was partially funded by Italian National Research
Project PRIN 2020 “BRIO: Bias, risk, opacity in AI: design, verification
and development of trustworthy AI” (grant 2020SSKZ7R).

References
Abraham, Y. (2023). ‘A mass assassination factory’: Inside Israel's
calculated bombing of Gaza. +972 Magazine.
https://www.972mag.com/mass-assassination-factory-israel-
calculated-bombing-gaza/
Abraham, Y. (2024). ‘Lavender’: The AI machine directing Israel's
bombing spree in Gaza. +972 Magazine.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
Amoroso, D., Garcia, D., & Tamburrini, G. (2022). The weapon that
mistook a school bus for an ostrich. Science Diplomacy.
Amoroso, D., & Tamburrini, G. (2021). Toward a normative model of
meaningful human control over weapons systems. Ethics and
International Affairs, 35, 245–272.
Arms Control Association. (2019). Nuclear false warnings and the risk of
catastrophe. Arms Control Today.
https://www.armscontrol.org/act/2019-12/focus/nuclear-false-
warnings-risk-catastrophe
Article36. (2013). Killer robots: UK Government policy on fully
autonomous weapons. Policy paper. http://www.article36.org/wp-
content/uploads/2013/04/Policy_Paper1.pdf
Asaro, P. (2012). On banning autonomous weapon systems: Human rights,
automation, and the de-Humanization of lethal decision-making.
International Review of the Red Cross, 94, 687–709.
Athalye, A., Engstrom, L., Ilyas, A., & Kwok, K. (2018). Synthesizing
robust adversarial examples. In Proceedings of Machine Learning
Research, 80, 284–293.
https://proceedings.mlr.press/v80/athalye18b.html
Baldus, J. (2022). Doomsday machines? Nukes, nuclear verification and
artificial intelligence. In T. Reinhold & N. Schörnig (Eds.), Armament,
arms control and artificial intelligence. Springer, 101–116.
https://doi.org/10.1007/978-3-031-11043-6_8
Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of
adversarial machine learning. Pattern Recognition, 84, 317–331.
Borrie, J. (2019). Cold war lessons for automation in nuclear weapon
systems. In V. Boulanin (Ed.), The impact of artificial intelligence on
strategic stability and nuclear risk. vol. I: Euro-Atlantic perspectives
(pp. 41–52). Stockholm International Peace Research Institute.
https://www.sipri.org/publications/2019/research-reports/impact-
artificial-intelligence-strategic-stability-and-nuclear-risk-volume-i-
euro-atlantic
Cummings, M. L. (2021). Rethinking the maturity of artificial intelligence
in safety-critical settings. AI Magazine, 42(1), 6–15.
Davies, H., McKernan, B., & Sabbagh, D. (2023). ‘The Gospel’: How
Israel uses AI to select bombing targets in Gaza. The Guardian.
https://www.theguardian.com/world/2023/dec/01/the-gospel-how-
israel-uses-ai-to-select-bombing-targets
DoD (2023). Autonomy in Weapons Systems. US Department of Defense
Directive 3000.09.
https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300
009p.pdf
Eggert, L. (2024). Rethinking ‘meaningful human control’. In J. M.
Schraagen (Ed.), Responsible use of AI in military systems (pp. 213–
231). Chapman and Hall/CRC.
European Parliament (2024). Artificial Intelligence Act.
https://www.europarl.europa.eu/doceo/document/TA-9-2024-
0138_EN.pdf
Ficuciello, F., Tamburrini, G., Arezzo, A., Villani, L., & Siciliano, B.
(2019). Autonomy in surgical robots and its meaningful human
control. Paladyn Journal of Behavioral Robotics, 10, 30–43.
Grand-Clément, S. (2023). Artificial intelligence beyond weapons:
Application and impact of AI in the military domain. UNIDIR.
Heyns, C. (2013, April 9). Report of the Special Rapporteur on
extrajudicial, summary or arbitrary executions. United Nations
Document A/HRC/23/47.
HLEG. (2018). Ethics guidelines for trustworthy artificial intelligence by
the high-level expert group on AI. https://digital-
strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
ICRC. (2021). ICRC position on autonomous weapons systems.
International Committee of the Red Cross Report.
www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
Kahneman, D. (2012). Thinking, fast and slow. Penguin Books.
Kumar, Y., Koul, A., Singla, R., & Ijaz, M. F. (2022). Artificial intelligence
in disease diagnosis: A systematic literature review, synthesizing
framework and future research agenda. Journal of Ambient
Intelligence and Humanized Computing, 14(7), 8459–8486.
https://doi.org/10.1007/s12652-021-03612-z
Lazar, S. (2020). War. In E. N. Zalta (Ed.), The Stanford encyclopedia of
philosophy (Spring 2020 ed.).
https://plato.stanford.edu/archives/spr2020/entries/war/
Losey, S. (2024). US Air Force stages dogfights with AI-flown fighter jet.
DefenseNews. https://www.defensenews.com/air/2024/04/19/us-air-
force-stages-dogfights-with-ai-flown-fighter-jet/
Mecacci, G., Amoroso, D., Cavalcanti-Siebert, L., Abbink, D., van den
Hoven, J., & Santoni De Sio, F. (2024). Research handbook on
meaningful human control of intelligent systems. Edward Elgar
Publishing.
NSCAI. (2021). Final report. US National Security Commission on
Artificial Intelligence.
Reinhold, T., & Reuter, C. (2022). Cyber weapons and artificial
intelligence: Impact, influence and the challenges for arms control. In
T. Reinhold & N. Schörnig (Eds.), Armament, arms control and
artificial intelligence (pp. 145–158). Springer.
https://doi.org/10.1007/978-3-031-11043-6_11
Saltini, A. (2023, July 28). To avoid nuclear instability, a moratorium on
integrating AI into nuclear decision-making is urgently needed.
European Leadership Network Commentary.
https://europeanleadershipnetwork.org/commentary/to-avoid-nuclear-
instability-a-moratorium-on-integrating-ai-into-nuclear-decision-
making-is-urgently-needed-the-npt-prepcom-can-serve-as-a-
springboard/
Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful human control
over autonomous systems: A philosophical account. Frontiers in
Robotics and AI, 5. https://doi.org/10.3389/frobt.2018.00015.
Sharkey, N. E. (2016). Staying in the loop: Human supervisory control of
weapons. In N. Bhuta, S. Beck, R. Geiß, H. Y. Liu, & C. Kreß (Eds.),
Autonomous weapons systems: Law, ethics, policy (pp. 23–38).
Cambridge University Press.
Slovic, P., & Lin, H. S. (2020). The caveman and the bomb in the digital
age. In H. A. Trinkunas, H. S. Lin, & B. Loehrke (Eds.), Three tweets
to midnight. Effects of the global information ecosystem on the risk of
nuclear conflict (pp. 39–62). Hoover Institute Press.
Strack, F., & Deutsch, R. (2015). The duality of everyday life: Dual-process
and dual system models in social psychology. In M. Mikulincer, P. R.
Shaver, E. Borgida, & J. A. Bargh (Eds.), APA handbook of personality
and social psychology, vol. 1. Attitudes and social cognition (pp. 891–
927). American Psychological Association.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow,
I., & Fergus, R. (2014). Intriguing properties of neural networks.
arXiv.org. https://arxiv.org/abs/1312.6199v4
Tan, E., Petit, J. M., & Simonofski, A. (2023). Artificial intelligence and
algorithmic decisions in fraud detection: An interpretive structural
model. Data & Policy, 5. http://dx.doi.org/10.1017/dap.2023.22
Thaler, R. H., & Sunstein, C. R. (2008). Nudge. Improving decisions about
health, wealth, and happiness. Yale University Press.
Verbruggen, M. (2020). The extensive role of artificial intelligence in
military transformation. In P. Topychkanov (Ed.), The impact of
artificial intelligence on strategic stability and nuclear risk: South
Asian Perspectives (pp. 11–16). Stockholm International Peace
Research Institute.
Walzer, M. (2000). Just and unjust wars: A moral argument with historical
illustrations (3rd ed.). Basic Books.
12
USE OF ARTIFICIAL
INTELLIGENCE IN ARMED
CONFLICTS
What Is ‘Lawful’?
Stefano Dominelli and Pietro Sanna

DOI: 10.4324/9781003518495-16

Preface
Technological evolution is a constant of human society that, in broad terms,
has always raised legal challenges, also at the international level (Cappiello,
2022, p. 7 ff.). Taking humanitarian law as a specific focal lens, one could
think, for instance, of discoveries in the chemical, biological, radiological,
or nuclear fields and the military implications (and applications) that such
advances in science might have (Mauri, 2022, p. 358). Of course, when reasoning in such broad terms, it is not possible to pass a (value) judgment on the positive or negative aspects of technological development per se; as a
matter of general principle, it is not ‘technology’ that should be addressed
from the legal point of view, but rather its use by people. Additionally, a
generalization of the new technologies for the purpose of their assessment
in terms of compatibility with the relevant rules of international law seems
not only inappropriate, but also methodologically incorrect. The different
degree of evolution in the various technologies, in fact, cannot and should
not be ignored in any normative analysis.
When approaching the more ‘modern’ topic of artificial intelligence (AI),
it is worth emphasizing how such a notion is not unambiguous (Flammini,
2018, p. 16). While the expression usually refers to machines that can partly
copy and replicate decision-making processes structurally similar to those
of human beings (albeit, necessarily, more efficient in terms of
performance), not all machines share the same algorithm or the same degree
of autonomy with respect to human intervention. A program may either
make a choice on the basis of parameters previously set by an operator or
on the basis of ‘self-developed’ selection and decision criteria derived autonomously from a dataset that serves as a (pragmatic) reference (Cappiello, 2022, p. 11 f.). In other words, programs decide either
‘mechanically’ on the basis of pre-set selection criteria, or on the basis of a
self-developed criterion that is not necessarily foreseen or foreseeable by
the human being programming or operating the machine at hand (Barfield,
2018, p. 15). Furthermore, from a terminological standpoint, the expression
‘artificial intelligence’ itself may be perceived as being somewhat
misleading. If ‘intelligence’ is a prerogative of cognitive capacity, the lack
thereof can only result in the absence of a form of ‘intelligence’ in the
traditional sense of the term. In other words, if the decision-making process
is ‘computational’ in nature based on a mathematical logic associated with
possible hetero-set values, speaking of ‘intelligence’ would be misleading
because it could lead one to believe that the object is endowed with an
autonomous capacity for critical thought (Surden, 2019, p. 1308).1 This is a circumstance that, to date, must still be ruled out for machines (although there are certainly tendencies and aspirations, at least in civil society, to create a ‘self-generative artificial intelligence’ – an Artificial General Intelligence) (Bassini, 2023, p. 391; Chesterman, 2020, p. 840). What
follows is that an ‘artificial intelligence’ is neither ‘intelligent’ nor
‘artificial’. Even the most advanced machine will operate on the basis of
computational methods and criteria at least initially imposed by a human
being, thus not being ‘entirely’ or ‘originally’ artificial (Tschider, 2023).
This is of particular relevance with regard to the issue of liability,2 both
civil and criminal, that may arise as a consequence of computational
decisions and has led some to start reasoning on possible3 pieces of
legislation to (unilaterally4) address the challenges posed in this field.
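To make the distinction drawn above between hetero-set parameters and self-developed criteria more tangible, the following toy sketch contrasts a rule fixed in advance by a human with a decision threshold derived from data. Everything in it (values, labels, the learning rule) is invented for illustration and describes no real system.

# Toy contrast between a hetero-set rule and a criterion learned from data.
# All numbers and the learning rule are invented for the example.

def preset_rule(value: float) -> bool:
    # Decision criterion fixed in advance by the human programmer.
    return value > 90.0

def learn_threshold(samples: list[tuple[float, bool]]) -> float:
    # 'Self-developed' criterion: the threshold is induced from labelled data
    # rather than written directly by the programmer.
    positives = [v for v, flagged in samples if flagged]
    negatives = [v for v, flagged in samples if not flagged]
    return (min(positives) + max(negatives)) / 2  # midpoint between the classes

training_data = [(60.0, False), (75.0, False), (110.0, True), (130.0, True)]
learned_cutoff = learn_threshold(training_data)  # 92.5 with this toy data

print(preset_rule(100.0))      # True: outcome dictated by the pre-set rule
print(100.0 > learned_cutoff)  # True: outcome dictated by the induced criterion

The point of the contrast is only that, in the second case, the operative threshold is not stated anywhere in the program text and may shift with the training data, which is precisely why its behavior is not necessarily foreseeable by the person programming or operating the machine.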
The degree of autonomy and the residual role of the human being in the
decision-making process undertaken by a machine take on particularly
complex (normative and ethical) and delicate contours where these
(non-)AI systems are employed in the military sphere. It is obviously not
possible to generalize, and reflections should take into account the diverse
nature of various systems. In the following pages, an attempt will be made
to reconstruct the main rules (and limits) of international humanitarian law
(IHL) that are also applicable to automated weapons in order to understand
the conditions under which they can be legitimately used in combat.
Second, attention will be given to the rules of criminal (human) liability, in
particular from the viewpoint of the Rome Statute. By contrast,
questions related to the possibility of attributing a given criminal conduct to
a State are not addressed.5
International Humanitarian Law and ‘Smart’
Weapons: An Overview of Their (Uncertain)
Legal Status
IHL regulates ‘how’ force may be used in the context of an international or
non-international armed conflict (ex multis, Ronzitti, 2021, p. 19 ff.). Under
modern international law, ‘war’ is no longer ‘just’6 and, should a conflict
take place, this factual condition does not bestow full and absolute
discretion on belligerents (even if they act in self-defense7) as to how to
conduct specific military campaigns. However, either because the
fundamental rules on which humanitarian law is based have not all been
adopted recently, or because the development of smart technologies is a phenomenon still relatively in the process of evolution, as of today there is no specific norm that expressly takes into account AI or, possibly, autonomous weapons.8 This regulatory gap has not gone unnoticed and has prompted some States to propose a resolution to the UN General Assembly, passed on October 12, 2023,9 aimed at introducing the topic of lethal autonomous weapons systems into working agendas in order to ‘address the challenges and concerns raised by autonomous weapons systems’. This is not to say, of course, that there are no relevant regulations at all on this point. It will have to be determined whether and to what extent already
existing provisions can be applied to new models of warfare to regulate
them on a par with other more ‘classic’ and consolidated cases.
Thus, again there is a need to precisely identify and distinguish the
different technologies at hand, in order to assess with greater precision
their compatibility with international law. Also starting from the UN
General Assembly Resolution of October 12, 2023, it appears necessary to
make a first, fundamental distinction between ‘autonomous weapons’ and
‘smart weapons’. According to some countries, notably the United States of
America,10 ‘autonomous weapons’ (not even defined by the General
Assembly Resolution that purports to deal with them) are those weapons
that, once activated, can identify the target and attack it without any further
input from an operator (who might still retain switch-off control).11
Weapons of this kind do not exist to date (Sassóli, 2014, p. 309), but may be
produced and used in the future (Homayounnejad, 2018, p. 123). In this
sense, therefore, any regulatory analysis at the present stage of the law can only aim to identify those limits that international law already
imposes and that such systems will have to respect in the future.
If one agrees with the US definition of autonomous systems, one should
consider that all those systems that are not capable of autonomously
selecting a target upon activation are not ‘autonomous’, being – possibly –
‘smart’ weapons. Among these, one can mention, purely by way of
example, remotely controlled drones,12 which may or may not have
personnel on board; loitering munitions (otherwise known as ‘suicide drones’, which, flying over a predetermined area, attack following information gathered via sensors); or the Harpy loitering system.
Turning thus to those general principles and rules that regulate, or will
regulate, the use of intelligent or autonomous weapons, there can be no
doubt that one of the most important principles of humanitarian law is that
of ‘distinction’. Parties to an armed conflict must properly distinguish
combatants from any other civilian subject or object that does not
participate in the enemy's military function (Henckaerts et al., 2009). It is
now an established principle of international law that a military action can
only be intentionally directed against enemy military forces.13 With respect
to this point, it has already become apparent in practice that specific
technologies are not necessarily able to make an adequate distinction.14 One
of the (first) drone attacks carried out by the US Central Intelligence
Agency (CIA) in 2002 resulted in the killing of three people who, on the
basis of detected physical characteristics, were assumed – wrongly so – to
be affiliated with al-Qaeda.15 Therefore, the weapon – in order to operate in
accordance with humanitarian law – must be able to properly distinguish
between legitimate and non-legitimate military targets. Where this is not
possible, the weapon in question would fail to comply with one of the most
fundamental dictates of humanitarian law.
The weapon's ability to gather the information necessary to qualify and
classify persons and objects for the purposes of assessing an attack (decided
either autonomously by the weapon or by the individual operator) can only
depend on the equipment adopted and, therefore, the assessment of the
weapon's compatibility can only be made on a case-by-case basis. However,
a second principle of humanitarian law could make it more complex,
especially for an autonomous weapon, to assess whether or not a subject
can be considered as a legitimate military target.16 The protection
mentioned above may be ‘lost’ by individuals who decide to participate in
the conflict by undertaking certain actions in support of the military conduct
of one of the parties. This is, in essence, a ‘conduct-based exception’
(Homayounnejad, 2018, p. 130) that a computational intelligence may have
trouble in assessing. Even more so, a system that has been programmed with a general bias toward vulnerable categories17 may end up qualifying as ‘aggressive’ behavior that is in fact attributable to a person's disability in a moment of extreme stress.18
However, while intelligent or autonomous weapons may not be able to fully distinguish between legitimate and non-legitimate military targets, it is also true that the use of such instruments in support of the planning of military actions could also contribute to greater respect for humanitarian law. Where there are possible proximities between
immune and attackable objects, the use of drones could help reconstruct
more accurately the conditions in the field and, on the basis of such a more
complete assessment, the systems in question could be functional to the
respect and implementation of the principle of distinction. Even more so, part of the scholarship has considered that, to the extent that intelligent or autonomous weapons perform precisely this function, the same principle of distinction would oblige a State possessing them to effectively use such instruments (albeit without claiming that international law also imposes an
obligation to acquire such means) (Quéguiner, 2006, p. 802; Weizmann,
2018, p. 117). Moreover, it has been rightly noted that people often use
lethal force out of fear of being exposed to risk: thus, a drone or robot might
postpone the use of lethal force until all the necessary information has been
gathered (Sassóli, 2014, p. 310). Furthermore, some international crimes
cannot be committed by robots, but only by human beings. Not involving
people in the context of a conflict would reduce the possibility of such
crimes being committed.19
Turning back to the limits imposed by IHL, international law also
prohibits ‘indiscriminate attacks’ because they are incapable of respecting
the principle of distinction (Henckaerts et al., 2009, p. 37). If intelligent or
autonomous weapons are not capable of controlling the effects of an attack,
these weapons would be ontologically incompatible with humanitarian law.
In this sense, mutatis mutandis, the regime developed for anti-personnel
mines could provide an initial framework from which relevant principles
can be inferred. With the above definition of ‘autonomous weapon’ in mind,
it has been argued that anti-personnel mines have, essentially, a rudimentary
form of ‘autonomy’: once armed, the mine is triggered by the pressure exerted by the weight of the person (or thing) stepping on it, without the need for any further intervention (Homayounnejad, 2018, p. 130).
The applicable international conventions20 distinguish between two
scenarios. Either anti-personnel mines are placed within a well-defined and
controlled operational zone, to prevent access by civilians, or, otherwise,
there is a ban on the use (as well as on the production and stockpiling) of
mines that are not capable of being automatically deactivated within a short
period of time after ‘launching’.21 Thus, it could also be argued that autonomous weapons that are not capable of distinguishing, or that may in some way carry out an indiscriminate attack, could within narrow limits be used within specifically identified closed areas.
Last, and emphasizing that the reference to the fundamental principles of
humanitarian law made here is not, nor is it intended to be, exhaustive, it is
worth recalling that the belligerent parties have a (non-absolute) obligation
to inform the population where an attack may also affect civilians
(Henckaerts et al., 2009, p. 62).
Moreover, just as there are principles and rules to protect the population,
humanitarian law also protects legitimate military objectives from
unjustified suffering: it is a fundamental prohibition to use weapons or
tactics that cause unnecessary injury or unnecessary suffering (Henckaerts et al., 2009, p. 237). This is why, for example, a ban on the use of blinding lasers has been established in international law (Henckaerts et al., 2009, p. 292).
This brief analysis confirms the initial reflection: in the absence of
specific rules, it is not possible to predetermine with absolute certainty
whether a weapon, be it intelligent or autonomous, is necessarily
incompatible with the general rules of humanitarian law, except – at least –
in extreme cases. However, in such times of legal uncertainty, States have a
specific obligation: technical and regulatory uncertainties cannot become a
free pass whereby what is not expressly forbidden becomes permissible.
Article 3622 of the First Additional Protocol to the Geneva Conventions,23
on this point, provides that States Parties, when developing new weapons or
military methods, must determine whether these may wholly or partly
violate rules of international law (Sandoz et al., 1987, p. 421, para. 1481).24
There is no denying that the result of this study may leave one unsatisfied,
if only by taking the principle of legal certainty as a benchmark. And it is in
such a context of legal uncertainty that the other aspect, that of
accountability, which will now be addressed, becomes of paramount
importance.

Violation of International Humanitarian Law and ‘Smart’ Weapons: The Principle of Individual Criminal Liability
As has been noted, even in the absence of ad hoc rules of the international law of armed conflict, humanitarian law requires that the use of ‘smart’ weapons respect the principle of ‘distinction’.
Therefore, in the context of hostilities, there is no doubt that the use of a
‘smart’ weapon can only be considered prima facie ‘lawful’ to the extent
that the weapon at stake is capable of directing the scope of its attack
exclusively at military targets.
Even if ‘smart’ weapons were suitably programmed in the above terms,
the possibility that an attack launched would result in the violation of the
principle of ‘distinction’, and thus – in essence – in the commission of an
‘indiscriminate attack’, cannot be fully ruled out. Indeed, as
already noted, it is not at all inconceivable that a ‘smart’ weapon, duly
prepared to limit its action to military targets, could end up, by mistake,25
striking civilian targets.26
Excluding here any reflection on the possible international criminal
liability of weapons, and – thus – not addressing the complex issue of the
potential legal personality of ‘artificial intelligence’, in such a scenario,27 in
the face of a clear violation of humanitarian law, the issue would arise, inter
alia, of identifying the ‘human’ subject – or subjects – criminally liable for
the conduct.28 While ‘indiscriminate attacks’ are undoubtedly contrary to
humanitarian law,29 it is far less clear whether – and under which conditions – a human subject can be held criminally liable at the international level for the – unlawful – military actions of autonomous weapons.
A first issue to be considered is that, when dealing with ‘smart’ weapons,
the individuals potentially involved in their – more or less unpredictable –
‘misuse’ may be several. Reference is made, for instance, to the
programmer and developer of the weapon's software, the manufacturer, the
tester, as well as the operator deploying the weapon before the attack or,
further, the military commander who gives the relevant order or is
otherwise responsible for it.30 Taking into account the large number of
people involved, it is not straightforward to establish with sufficient
certainty which of them, and to what extent, have influenced or even
determined the misuse of the weapon by their own action – or omission
(Bo, 2023, p. 1) – during the design, development, manufacture, testing,
arming, or use phase.31
To the above it must be added a whole series of further factors, of a more
or less ‘practical’ nature, which from time to time make it even more
difficult to identify the ‘responsible’ parties.
Consider, as a mere example, the case of the developer of the software on which the functioning of the weapon at issue is grounded
(Bo, 2023, p. 1; McFarland & McCormack, 2014, p. 367). Without the
slightest pretense of venturing into technical subjects outside our expertise,
the programming and development activity appears particularly complex
(Nissenbaum, 1996, p. 29). It may be carried out by several individuals and
companies working together, perhaps each to develop a different
component to be combined with others. It may take place in several
different phases over time. Not to mention, moreover, that the most
advanced technologies are currently able, to varying degrees, to program
and develop themselves (Diamantis, 2021, p. 820). In this framework, it is therefore clear that tracing the specific developer potentially ‘guilty’ of a weapon malfunction turns out to be, most of the time, a virtually impossible task.
Still with reference to the case of the developer – even though the same
can also apply to the programmer or the tester – let us consider, from a
more strictly legal perspective, the additional issue concerning, for instance,
the ‘timing’ of the allegedly criminal conduct. The Rome Statute
establishing the International Criminal Court requires, for ‘war crimes’, that the relevant conduct be carried out ‘in the context of an armed conflict’.32 Hence, it is highly questionable whether conduct carried out some time before
the outbreak of a conflict – such as, presumably, that of a software
developer – could fall within the material scope of the Statute. This cannot
but increasingly narrow the chances of ascribing criminal liability for the
crime at stake to a natural person.33
However, irrespective of the ‘role’ of the subject considered, the most
complicated knot to untie when it comes to identifying individual criminal
liability for international crimes committed through the use of ‘smart’
weapons concerns the ascertaining of the mens rea of the alleged
perpetrator. This is because international law tends to set rather high
standards of culpability for this purpose (McDougall, 2019, p. 76),
requiring the ‘intentionality’ of the criminal behavior, in terms of ‘dolus
directus’, while excluding the relevance of ‘dolus eventualis’ or of mere
negligence.
In practical terms, this implies, for example, that if a ‘smart’ weapon is
mistakenly fired at civilian targets due to proven errors in the programming
or training of the weapon, it will hardly be possible to ascribe criminal
liability for the ‘casualty’ to the programmer or the tester. This is because it
is not possible to find dolus directus in their respective conduct. The same
could be said, further, in the case of an operator deploying the weapon in
view of the attack without realizing – or by foreseeing in probabilistic terms
and accepting the risk – the actual scope of the attack and the risk that it
will also strike objects or subjects protected by the international law of
armed conflict. If negligence or, at least, dolus eventualis are not relevant,
the behavior of the operator, in principle, cannot be punished under
international law (Weigend, 2023, p. 13).
Actually, from the perspective of the international law of armed conflicts,
the standard of culpability set forth in the First Additional Protocol to the
Geneva Conventions is not overly restrictive. Article 85, para. 3, lett. (a),
states, in fact, that ‘making the civilian population or individual civilians
the object of attack’ constitutes a ‘grave breach’ within the meaning of the
Protocol when such conduct is committed ‘willfully’.34 In the scholarship
(Sandoz et al., 1987, p. 994, para. 3474), the degree of ‘willfulness’
required by the provision has been interpreted extensively to include not
only direct intent (i.e., dolus directus) but also ‘recklessness’ (i.e., dolus
eventualis).35
However – and this is what is relevant under the international criminal
law perspective – this approach was not embraced by the drafters of the
Rome Statute, who opted instead for a much stricter attitude. The normative
framework of the International Criminal Court is, in fact, mainly based on
dolus directus, not permitting any derogation from the ‘ordinary’ criterion
of individual criminal liability supported by the necessary mental element
of ‘direct intent’. In this sense, Article 30 of the Rome Statute36
cumulatively requires ‘intent and knowledge’ on the part of the subject
responsible, with the specification37 that ‘intent’ must be understood as the
subject's intention to cause a certain foreseeable consequence in the
ordinary course of events. Going back to the case of civilian targets
attacked by a ‘smart’ weapon due to its malfunction, the above implies that
in order to ascribe any liability to one of the possible different subjects
involved – the programmer, the developer, the tester, the operator deploying
the weapon, etc. – it is necessary to ascertain that they foresaw in the strict
terms referred to above the unlawful consequence of their conduct. This
appears very hard to prove (Weigend, 2023, p. 14).
Without going into further detail on this issue, it should also be noted that
even the peculiar and different degrees of voluntariness required by the
single offence headings of the Rome Statute – subject to the opening clause
of the aforementioned Article 30, which, in outlining the relevant mental
standards, expressly preserves any special provisions to the contrary (Piragoff
& Robinson, 2016, p. 1121) – are in any case based on dolus directus. This
is the case, for instance, of the war crime of deliberately attacking civilians
in the context of international armed conflict pursuant to Article 8(2)(b)(i)
of the Statute,38 to which the ‘indiscriminate attack’ caused by a ‘smart’
weapon certainly seems to be referable. Also taking into account the
relevant Elements of Crimes,39 it is necessary for the offender to act with
the clear ‘intention’ of directing an attack – intentionally – against civilian
targets unrelated to the conflict. Although in the absence of authentic
interpretations as to the degree of ‘intention’ and ‘knowledge’ required to
the perpetrator with respect to the victims of their attack,40 the case law of
the International Criminal Court seems to favor – once again – a literal
interpretation of the provision, which confines the scope of Article 8(2)(b)
(i) to the sole circumstance in which the perpetrator is motivated by direct
intent – also – with respect to the – civilian – victims of the attack.41
Several hermeneutical solutions have been proposed by scholars, at least to limit the negative consequences described above in terms of attribution of liability.42 Among these, one of the
most persuasive is undoubtedly the one that would leverage the doctrine of
‘command responsibility’ to hold the ‘commander’ liable for attacks
‘committed’ by ‘smart’ weapons deployed or controlled under their
direction and supervision.43 According to Article 28, para. 1, of the Rome
Statute, in order to establish the commander's liability for international
crimes committed by relevant subordinates, it is sufficient that the commander knew, or should have known, of the commission of international crimes by the ‘forces under his or her effective command and control’, and failed to take all necessary and reasonable measures within their power to
prevent or repress their commission.44 It is thus, de facto, a form of fault-based liability, for which merely negligent conduct by the offender is sufficient. Extending such liability to cases where the commander fails to ‘exercise control properly’ not only over ‘human’ subordinates but also over the autonomous weapons for which they are responsible would significantly diminish the chances of ‘indiscriminate attacks’ carried out by ‘smart’ weapons remaining unpunished.
However, if on the one hand certain formal, ‘literal’, limits of the Rome
Statute's provisions – for instance, the fact that the current notion of ‘forces’
could hardly be interpreted so broadly as to include autonomous weapons –
could be overcome by a revision of the text of the Statute, on the other
hand, there are a number of hermeneutical issues – with substantive repercussions – that still militate against an expansion of command responsibility (Amoroso & Giordano, 2019, p. 223; Weigend, 2023, p. 15).
Given the inherent unpredictability of autonomous weapons, what risks
associated with their use can actually be considered ‘foreseeable’ by the
commander? Given the factual prerequisites required by law for the liability
at stake, can the commander be considered to exercise ‘effective’ control
over autonomous weapons? For example, once the weapon has been
deployed, is it always possible for the commander to abort the attack?
Furthermore, the doctrine of ‘command responsibility’ is grounded not only on the military hierarchical relationship but also on the moral relationship between commander and soldiers: can this rationale be extended by analogy to
autonomous weapons? These are just a few of the possible open issues
essentially related to the difficulty of assimilating by analogy soldiers and
smart weapons. The risk, in the end, is that of overstretching the boundaries of ‘command responsibility’ – de facto turning it into strict
liability – in the perceived need to find ‘at all costs’ a human responsible
for crimes committed by autonomous weapons.

Conclusion
It is certainly neither easy nor obvious to develop a general conclusion on the
compatibility of ‘smart’ or autonomous weapons with humanitarian law.
What is certain is that even weapons of this type must be designed and used
in such a way as to be at least able to respect the essential principles of the
system, such as, above all, the principle of distinction and the prohibition of
indiscriminate attacks against the civilian population.
One of the main challenges posed by the spread and development of
these weapons is undoubtedly the identification of one or more
‘responsible’ human actors to whom international criminal liability can be attributed in the event of violations, by the ‘machine’, of the fundamental
principles of humanitarian law.
Considering that the extreme solution of an absolute ban on the production and use of autonomous or ‘smart’ weapons seems to be excluded – even for present purposes (Lewis, 2023, p. 965; Tamburrini, 2016, p. 122), given that, as mentioned above, in some specific contexts AI, albeit lacking cognitive capacity, could even be functional to the respect of humanitarian law – the current structure of the Rome Statute does not allow for the easy identification of one or more liable human subjects. This,
of course, with the exception of cases in which the existence of dolus
directus by the person ‘using’ the weapon is well established and, therefore,
not in dispute.
To avoid situations of impunity in the face of the commission of
international crimes, various paths have been pursued in scholarship,
including that – not without substantial criticism – of extending by analogy
the doctrine of ‘command responsibility’ to the use of autonomous
weapons. However, as mentioned, the risk is that of excessively straining
the bounds of existing law in the search for a human subject to condemn.
In light of the above considerations, if we rule out giving up the search
for ‘humans’ responsible for violations of humanitarian law made by
‘smart’ weapons (in the opposite sense, Weigend, 2023, p. 1845), the
strategy to be pursued seems to be that of adapting pre-existing law and
principles to the new challenges that technological developments impose,
while being careful not to overstretch them. Indeed, the goal of limiting the
‘misuse’ of autonomous or ‘smart’ weapons will have to be pursued not
only from a ‘pathological’ perspective, that is, focusing on the identification
of the liable parties once the ‘casualty’ has occurred, but also from a
‘preventive’ point of view: in other terms, the attention of legal
practitioners has to be – also – focused on the regulation and control of the
phases preceding the deployment of the weapon – that is, those of planning,
development, production, etc. – for example, also by providing penalties or
incentives for the companies involved, in order to limit ab origine, as much
as possible, cases of ‘malfunctioning’.

Notes
1. ‘The reality is that today's AI systems are decidedly not intelligent
thinking machines in any meaningful sense. Rather … AI systems are
often able to produce useful, intelligent results without intelligence.
These systems do this largely through heuristics – by detecting patterns
in data and using knowledge, rules, and information that have been
specifically encoded by people into forms that can be processed by
computers. Through these computational approximations, AI systems
often can produce surprisingly good results on certain complex tasks
that, when done by humans, require cognition. Notably, however, these
AI systems do so by using computational mechanisms that do not
resemble or match human thinking’.
2. Amplius, L. Coppini, Robotica e intelligenza artificiale: questioni di
responsabilità civile, in Politica del diritto, 2018, p. 713; U. Ruffolo,
Per i fondamenti di un diritto della robotica self-learning; dalla
machinery produttiva all’auto driverless: verso una ‘responsabilità da
algoritmo’?, in U. Ruffolo (ed), Intelligenza artificiale e
responsabilità, Milano, 2017, p. 1; M. Bassini, O. Pollicino, Sistemi di
Intelligenza Artificiale, responsabilità e accountability. Verso nuovi
paradigmi?, in F. Pizzetti (ed), Intelligenza artificiale, protezione dei
dati personali e regolazione, Torino, 2018, p. 333; G. D’Alfonso,
Intelligenza artificiale e responsabilità civile. Prospettive di
regolamentazione europea, in Revista de Estudios Jurídicos y
Criminológicos, 2022, p. 163; G. F. Simonini, La responsabilità del
fabbricante nei prodotti con sistemi di intelligenza artificiale, in
Danno e responsabilità, 2023, p. 435; L. Arnaudo, R. Pardolesi, “Ecce
robot”. Sulla responsabilità dei sistemi adulti di intelligenza
artificiale, in Danno e responsabilità, 2023, p. 409; C. Sconamiglio,
Responsabilità civile ed intelligenza artificiale: quali soluzioni per
quali problemi?, in Responsabilità civile e previdenza, 2023, p. 1073.
3. Proposal for a Directive of the European Parliament and of the Council
on adapting non-contractual civil liability rules to artificial intelligence
(AI Liability Directive), COM/2022/496 final; see also Proposal for a
Regulation of the European Parliament and of the Council laying down
harmonized rules on artificial intelligence (Artificial Intelligence Act)
and amending certain Union legislative acts, Interinstitutional File
2021/0106(COD), 25 November 2022. In the scholarship, see
Tommasi S. (2023), p. 279.
4. On the conflict of laws issues, see Heiderhoff B. (2021), p. 409.
5. On which, in the scholarship see Hammond D. N. (2015), p. 652;
Crootof R. (2016), p. 1347.
6. See United Nations Charter, San Francisco June 26, 1945, art. 2(4)
(‘All Members shall refrain in their international relations from the
threat or use of force against the territorial integrity or political
independence of any state, or in any other manner inconsistent with the
Purposes of the United Nations’). In general, see Munari F. (2021), p.
239; Condinanzi M., Crespi Reghizzi Z. (2021), p. 321.
7. United Nations Charter, cit., art. 51 (‘Nothing in the present Charter
shall impair the inherent right of individual or collective self-defense if
an armed attack occurs against a Member of the United Nations, until
the Security Council has taken measures necessary to maintain
international peace and security. Measures taken by Members in the
exercise of this right of self-defense shall be immediately reported to
the Security Council and shall not in any way affect the authority and
responsibility of the Security Council under the present Charter to take
at any time such action as it deems necessary in order to maintain or
restore international peace and security’).
8. United Nations General Assembly, A/68/382, September 13, 2013,
Report of the Special Rapporteur on extrajudicial, summary or
arbitrary executions, para. 13 (‘There is broad agreement that drones
themselves are not illegal weapons. This is not the case, for example,
with lethal autonomous robots. There is, however, a notable lack of
consensus on how to apply the rules of international law that regulate
the use of force to drones, the fact that drones are now an established
technology notwithstanding. It is the aim of the Special Rapporteur in
the present report to contribute to clarifying the application of those
rules and to reiterate their authority, from the perspective of protection
of the right to life’).
9. United Nations General Assembly, A/C.1/78/L.56, Lethal autonomous
weapons systems. Cf. also United Nations General Assembly,
A/HRC/RES/51/22, Human rights implications of new and emerging
technologies in the military domain.
10. Office of the Under Secretary of Defense for Policy, DoD Directive
3000.09 - Autonomy in Weapon Systems, 25 January 2023, available
online, p. 21, ‘Definitions’ (‘[A]utonomous weapon system: A weapon
system that, once activated, can select and engage targets without
further intervention by an operator. This includes, but is not limited to,
operator-supervised autonomous weapon systems that are designed to
allow operators to override operation of the weapon system, but can
select and engage targets without further operator input after
activation’). See ex multis, Homayounnejad M. (2018), p. 123;
Thurnher J. S. (2014), p. 213; Horowitz M. C. (2016), p. 85; Saxon D.
(2014), p. 100; Suchman L., Weber J. (2016), p. 75.
11. Such weapons have also been defined in the scholarship as those that
can change their behavior in response to external circumstances
without human interventions. See Sassóli M. (2014), p. 308; Liu H. Y.
(2012), p. 627, in part. at p. 631 noting that, more often, weapons are
still more ‘remote’ than ‘autonomous’.
12. On which, see Weizmann N. (2018), p. 89.
13. Cf. Declaration Renouncing the Use, in Time of War, of Explosive
Projectiles Under 400 Grammes Weight, St. Petersburg Declaration 29
November/11 December 1868, available online, (‘[…] the only
legitimate object which States should endeavor to accomplish during
war is to weaken the military forces of the enemy’).
14. See also Amoroso D., Garcia D., Tamburrini G. (2022), p. 1.
15. On which see Heller K. J. (2013), p. 89.
16. Henckaerts J. M., Doswald-Beck L., Alvermann C., Dormann K.,
Rolle B. (2009), p. 19.
17. United Nations General Assembly, A/HRC/RES/51/22, Resolution
adopted by the Human Rights Council on October 7, 2022, Human
rights implications of new and emerging technologies in the military
domain, October 11, 2022, available online (‘[…] new and emerging
technologies in the military domain may rely on nonrepresentative data
sets, algorithm-based programming and machine-learning processes [
… which … ] can reproduce and exacerbate existing patterns of
structural discrimination, marginalization, social inequalities,
stereotypes and bias and create unpredictability of outcomes’).
18. See extensively Díaz Figueroa M., Henao Orozco A., Martínez J.,
Muñoz Jaime W. (2023), p. 278. See also United Nations General
Assembly, A/HRC/49/52, Report of the Special Rapporteur on the
rights of persons with disabilities, 28 December 2021, available online,
p. 13 (‘[…] the deployment and use of fully autonomous weapons
systems, like other artificial intelligence systems, raises concerns as to
the ability of weaponry directed by artificial intelligence to
discriminate between combatants and non-combatants, and make the
nuanced determination as to whether an assistive device qualifies a
person with disabilities as a threat […]’).
19. United Nations General Assembly, A/HRC/23/47, April 9, 2013,
Report of the Special Rapporteur on extrajudicial, summary or
arbitrary executions, available online, para. 55, (‘LARs will not be
susceptible to some of the human shortcomings that may undermine
the protection of life. Typically they would not act out of revenge,
panic, anger, spite, prejudice or fear. Moreover, unless specifically
programmed to do so, robots would not cause intentional suffering on
civilian populations, for example through torture. Robots also do not
rape’).
20. Convention on Prohibitions or Restrictions on the Use of Certain
Conventional Weapons which may be Deemed to be Excessively
Injurious or to Have Indiscriminate Effects, 10 October 1980, available
online; Protocol on Prohibitions or Restrictions on the Use of Mines,
Booby-Traps and other Devices, 3 May 1996, available online;
Convention on the Prohibition of the Use, Stockpiling, Production and
Transfer of Anti-Personnel Mines and on their Destruction, 18
September 1997, available online.
21. Protocol on Prohibitions or Restrictions on the Use of Mines, Booby-
Traps and other Devices, May 3, 1996, cit., art. 5(2)(a).
22. ‘In the study, development, acquisition or adoption of a new weapon,
means or method of warfare, a High Contracting Party is under an
obligation to determine whether its employment would, in some or all
circumstances, be prohibited by this Protocol or by any other rule of
international law applicable to the High Contracting Party’.
23. Protocol Additional to the Geneva Conventions of August 12, 1949,
and relating to the Protection of Victims of International Armed
Conflicts (Protocol I), June 8, 1977, available online.
24. Furthermore, also the so-called Martens Clause, according to which
‘Until a more complete code of the laws of war is issued, the High
Contracting Parties think it right to declare that in cases not included in
the Regulations adopted by them, populations and belligerents remain
under the protection and empire of the principles of international law,
as they result from the usages established between civilized nations,
from the laws of humanity and the requirements of the public
conscience’, remains applicable and becomes a general standard of
applicable law. Specifically on this point, in the scholarship see
Amoroso D. (2023), p. 207.
25. Possible mistakes might be due, for instance, to the discrepancy
between the data used to ‘train’ the algorithm and the actual data. See
Boulanin V., Verbruggen M. (2017), p. 24.
26. On the launching of ‘indiscriminate attacks’ through the use of
autonomous weapons, see, ex multis, Bo M., Bruun L., Boulanin V.
(2022), p. 9; Winter E. (2022), p. 1; Bo M. (2021), p. 275;
Homayounnejad M. (2018), p. 123.
27. On this issue, see, ex multis, Sarch A., Abbott R. (2019), p. 357; Lima
G., Cha M., Jeon C., Park K. S. (2021); Simmler M., Markwalder N.
(2019), p. 1; Hu Y. (2019), p. 491; Hallevy G. (2010), p. 171.
28. In the scholarship, on the issue of identifying the specific agents to
whom criminal liability for international crimes committed through the
use of autonomous weapons can potentially be ascribed, see, ex
plurimis, Gaeta P. (2023), p. 1033; Lewis D. A. (2023), p. 965;
Weigend T. (2023), p. 1; Amoroso D., Giordano B. (2019), p. 211.
29. In this sense, Article 51(4) of the First Additional Protocol to the
Geneva Conventions is clear on this point and the rule, according to
common interpretation, has also become part of general international
law applicable to both internal and international conflicts.
30. On the possible relevance of ‘command responsibility’ for present
purposes, see below.
31. To define the problem at stake, reference is made in the scholarship to
the expression ‘many hands problem’, coined in Thompson D. F.
(1980), p. 95.
32. This is expressly provided for by the Elements of Crimes related to
Article 8(2)(a)(i) of the Rome Statute (‘War crime of willful killing’),
where, pursuant to Element n. 4, ‘The conduct took place in the
context of and was associated with an international armed conflict’.
See Elements of Crimes (ICC-PIOS-LT-03-002/15_Eng), Article 8 (2)
(a) (i) – War crime of willful killing, p. 9. However, the same
requirement does not apply for the purposes of crimes against humanity.
33. See McFarland T., McCormack T. (2014), p. 374; Weigend T. (2023),
p. 13; Amoroso D., Giordano B. (2019), p. 219.
34. ‘3. In addition to the grave breaches defined in Article 11, the
following acts shall be regarded as grave breaches of this Protocol,
when committed willfully, in violation of the relevant provisions of
this Protocol, and causing death or serious injury to body or health: a)
making the civilian population or individual civilians the object of
attack; […]’.
35. Such a hermeneutic approach has, for instance, been adopted on
several occasions by the International Criminal Tribunal for the former
Yugoslavia for the specific purpose of allowing the crime of
‘indiscriminate attack’ to be charged on the basis of forms of mens rea
based on the assumption of risks. See ICTY, Judgement, Galić (IT-98-
29-T), Trial Chamber I, 5 December 2003, para. 140; Judgment, Galić
(IT-98-29-A), Appeals Chamber, 30 November 2006; ICTY, Judgment,
Milošević (IT-98-29/1-T), Trial Chamber III, 12 December 2007, para.
951.
36. Article 30 Rome Statute (‘Mental element’): ‘1. Unless otherwise
provided, a person shall be criminally responsible and liable for
punishment for a crime within the jurisdiction of the Court only if the
material elements are committed with intent and knowledge. 2. For the
purposes of this article, a person has intent where: (a) In relation to
conduct, that person means to engage in the conduct; (b) In relation to
a consequence, that person means to cause that consequence or is
aware that it will occur in the ordinary course of events. 3. For the
purposes of this article, “knowledge” means awareness that a
circumstance exists or a consequence will occur in the ordinary course
of events. “Know” and “knowingly” shall be construed accordingly’.
For an in-depth assessment of the provision, see Piragoff D. K.,
Robinson D. (2016), p. 1111; Finnin S. (2012), p. 344; Werle G.,
Jessberger F. (2005), p. 35.
37. In letter (b) of the second paragraph of Article 30 of the Statute,
whereas it is common ground in both scholarship and case-law that the
notion as per letter (a) is exclusively attributable to dolus directus. See
Finnin S. (2012), p. 341; see also in the case-law, inter alia, ICC-
01/04-01/06, The Prosecutor v. Thomas Lubanga Dyilo, Decision on
the Confirmation of Charges, Pre-Trial Chamber I, 29 January 2007,
para. 351; ICC-01/04-01/07-3436, The Prosecutor v. Germain
Katanga, cit., para. 774.
38. The same applies to the corresponding Article 8(2)(e)(i) with regard to
the context of armed conflicts of a non-international nature. On the
provision at stake, see, for all, Dörmann K. (2016), p. 355.
39. Elements of Crimes (ICC-PIOS-LT-03-002/15_Eng), Article 8(2)(b)(i)
– War crime of attacking civilians, p. 12: ‘1. The perpetrator directed
an attack. 2. The object of the attack was a civilian population as such
or individual civilians not taking direct part in hostilities. 3. The
perpetrator intended the civilian population as such or individual
civilians not taking direct part in hostilities to be the object of the
attack […]’.
40. That is, whether the ‘knowledge’ that the intentional attack is directed
against civilian targets is required in terms of possibility, probability
or, instead, certainty.
41. ICC-01/04-01/07-3436, The Prosecutor v. Germain Katanga, Trial
Chamber II, March 7, 2014, para. 808; ICC-01/04-02/06-2359, The
Prosecutor v. Bosco Ntaganda, Trial Chamber VI, July 8, 2019, para.
921.
42. For the most part, these are proposals that rely upon the ‘strict’ liability
of ‘subjects’ – not natural persons – indirectly involved in the process
of creating and using an autonomous weapon. For instance, as
mentioned in the above introduction, the recognition of the liability of
States has been considered, not without criticism. See Hammond
D. N. (2015), p. 652; Crootof R. (2016), p. 1347; Amoroso D.,
Giordano B. (2019), p. 224. Another option would be to hold liable
the companies – legal persons – that program, develop or train the
weapons. See Bhargava V., Velasquez M. (2019), p. 829; Amoroso D.,
Giordano B. (2019), p. 225.
43. On this issue, see Spadaro A. (2023), p. 1119; Margulies P. (2016), p.
405; Reitinger N. (2015), p. 79.
44. Article 28 Rome Statute (‘Responsibility of commanders and other
superiors’): ‘[…] 1. A military commander or person effectively acting
as a military commander shall be criminally responsible for crimes
within the jurisdiction of the Court committed by forces under his or
her effective command and control, or effective authority and control
as the case may be, as a result of his or her failure to exercise control
properly over such forces, where: (a) That military commander or
person either knew or, owing to the circumstances at the time, should
have known that the forces were committing or about to commit such
crimes; and; (b) That military commander or person failed to take all
necessary and reasonable measures within his or her power to prevent
or repress their commission or to submit the matter to the competent
authorities for investigation and prosecution. […]’. For a comment to
the provision at stake, see Triffterer O. (2016), p. 1056.
45. According to which ‘We should, hence, accept the fact that (criminal)
responsibility cannot be assigned in every case of a deplorable fatal
malfunction of AWS—which does not distinguish AWS from other,
more traditional pieces of sophisticated technology. We should
therefore stop searching for the “magic bullet”, i.e., a legal doctrine
that would close the responsibility gap once and for all’.

References
Amoroso, D. (2023). Autonomous weapons systems and the martens clause
between ethics and law. Studi Senesi, 2, 207–255.
Amoroso, D., Garcia, D., & Tamburrini, G. (2022). The weapon that
mistook a school bus for an ostrich. Vulnerabilities in artificial
intelligence and challenges for the meaningful human control of
autonomous weapons. Science & Diplomacy, 1.
Amoroso, D., & Giordano, B. (2019). Who is to blame for autonomous
weapons Systems’ misdoings? In E. Carparelli & N. Lazzerini (Eds.),
Use and misuse of new technologies – Contemporary challenges in
international and European law (p. 211). Springer.
Arnaudo, L., & Pardolesi, R. (2023). “Ecce robot”. Sulla responsabilità dei
sistemi adulti di intelligenza artificiale. In Danno e responsabilità (p.
409). Giappichelli.
Barfield, W. (2018). Towards a law of artificial intelligence. In W. Barfield
& U. Pagallo (Eds.), Research handbook on the law of artificial
intelligence (p. 2). Edward Elgar Publishing Limited.
Bassini, M. (2023). Intelligenza Artificiale generativa: alcune questioni
problematiche. MediaLaws, 2023(2), 391–397.
Bassini, M., & Pollicino, O. (2018). Sistemi di Intelligenza Artificiale,
responsabilità e accountability. Verso nuovi paradigmi?. In F. Pizzetti
(Ed.), Intelligenza artificiale, protezione dei dati personali e
regolazione (p. 333). Giappichelli.
Bhargava, V., & Velasquez, M. (2019). Is corporate responsibility relevant
to artificial intelligence responsibility? Georgetown Journal of Law
and Public Policy, 17, 829.
Bo, M. (2021). Autonomous weapons and the responsibility gap in light of
the mens rea of the war crime of attacking civilians in the ICC statute.
Journal of International Criminal Justice, 19, 275.
Bo, M. (2023). Are programmers in or out of control? The individual
criminal responsibility of programmers of autonomous weapons and
self-driving cars. In S. Gless & H. Whalen-Bridge (Eds.), Human-
robot interaction in law and its narratives: Legal blame, criminal law,
and procedure (p. 1). Cambridge University Press.
Bo, M., Bruun, L., & Boulanin, V. (2022). Retaining human responsibility
in the development and use of autonomous weapons system – On
accountability for violations of International Humanitarian Law
involving AWS (p. 9). Stockholm International Peace Research
Institute.
Boulanin, V., & Verbruggen, M. (2017). Mapping the development of
autonomy in weapon systems (p. 24). Stockholm International Peace
Research Institute.
Cappiello, B. (2022). AI-systems and non-contractual liability. A European
private international law analysis. Giappichelli.
Chesterman, S. (2020). Artificial intelligence and the limits of legal
personality. International and Comparative Law Quarterly, 69(4),
819–844.
Condinanzi, M., & Crespi Reghizzi, Z. (2021). L’uso della forza e il sistema
di sicurezza collettiva delle Nazioni Unite. In Aa.Vv., Istituzioni di
diritto internazionale (p. 321). Giappichelli.
Crootof, R. (2016). War torts: Accountability for autonomous weapons.
University of Pennsylvania Law Review, 164, 1347.
D’Alfonso, G. (2022). Intelligenza artificiale e responsabilità civile.
Prospettive di regolamentazione europea. Revista de Estudios
Jurídicos y Criminológicos (6), 163–195.
http://dx.doi.org/10.25267/REJUCRIM.2022.i6.06
Diamantis, M. (2021). Algorithms acting badly: A solution from corporate
law. George Washington Law Review, 89, 820.
Díaz Figueroa, M., Henao Orozco, A., Martínez, J., & Muñoz Jaime, W.
(2023). The risks of autonomous weapons: An analysis centred on the
rights of persons with disabilities. International Review of the Red
Cross, 105 (922), 278–305.
Dominelli, S. (2013). Delimitazione dei confini marittimi e dialogo tra corti
internazionali: quale ruolo per l’ITLOS? Il caso Bangladesh/Myanmar.
Il diritto marittimo, 115(4), 791–809.
Dörmann, K. (2016). Article 8 - B. Article 8 para. 2: Meaning of ‘war
crimes’. In O. Triffterer & K. Ambos (Eds.), The Rome Statute of the
International Criminal Court: A commentary (3rd ed., p. 355).
Beck/Hart Publishing.
Finnin, S. (2012). Mental elements under Article 30 of the Rome Statute of
the International Criminal Court: A comparative analysis.
International and Comparative Law Quarterly, 61, 344.
Flammini, F. (2018). Artificial intelligence (AI) applicata agli autonomous
systems, AN-SMD-02 (p. 16). Centro Militare di Studi Strategici.
Gaeta, P. (2023). Who acts when autonomous weapons strike? The act
requirement for individual criminal responsibility and state
responsibility. Journal of International Criminal Justice, 21(5), 1033.
Hallevy, G. (2010). The criminal liability of artificial intelligence entities -
From science fiction to legal social control. Akron Intellectual
Property Journal, 4(2), 171.
Hammond, D. N. (2015). Autonomous weapons and the problem of state
accountability. Chicago Journal of International Law, 15(2), 652.
Heiderhoff, B. (2021). Internationale Produkthaftung 4.0 – Welche
Wertungen sollten das Kollisionsrecht für autonome Systeme prägen?.
In Praxis des Internationalen Privat- und Verfahrensrechts (p. 409).
Heller, K. J. (2013). ‘One hell of a killing machine’. Signature strikes and
international law. Journal of International Criminal Justice, 11(1), 89–
119.
Henckaerts, J. M., Doswald-Beck, L., Alvermann, C., Dörmann, K., &
Rolle, B. (2009). International Committee of the Red Cross, Customary
international humanitarian law – Volume I: Rules. Cambridge
University Press.
Homayounnejad, M. (2018). Ensuring fully autonomous weapons systems
comply with the rule of distinction in attack. In S. Casey-Maslen, M.
Homayounnejad, H. Stauffer, & N. Weizmann (Eds.), Drones and
other unmanned weapons systems under international law (p. 123).
Brill/Nijhoff.
Horowitz, M. C. (2016). Why words matter: The real world consequences
of defining autonomous weapons systems. Temple International &
Comparative Law Journal, 30, 85.
Hu, Y. (2019). Robot criminals. Michigan Journal of Law Reform, 52, 491.
Lewis, D. A. (2023). War crimes involving autonomous weapons:
Responsibility, liability and accountability. Journal of International
Criminal Justice, 21(5), 965.
Lima, G., Cha, M., Jeon, C., & Park, K. S. (2021). The conflict between
people's urge to punish AI and legal systems. Frontiers in Robotics
and AI, 8. 756242. https://doi.org/10.3389/frobt.2021.756242
Liu, H. Y. (2012). Categorization and legality of autonomous and remote
weapons systems. International Review of the Red Cross, 94(886),
627–652.
Margulies, P. (2016). Making autonomous weapons accountable: Command
responsibility for computer-guided lethal force in armed conflicts. In J.
D. Ohlin (Ed.), Research handbook on remote warfare (p. 405).
Edward Elgar Publishing Ltd.
Mauri, D. (2022). The use of CBRN weapons in armed conflict. In A. de
Guttry, M. Frulli, F. Casolari, & L. Poli (Eds.), International law and
chemical, biological, radio-nuclear (CBRN) events. Towards an all-
hazards approach (p. 358). Martinus Nijhoff.
McDougall, C. (2019). Autonomous weapon systems and accountability:
Putting the cart before the horse. Melbourne Journal of International
Law, 20, 76.
McFarland, T., & McCormack, T. (2014). Mind the gap: Can developers of
autonomous weapons systems be liable for war crimes? International
Law Studies, 90, 367.
Munari, F. (2021). Risoluzione pacifica e prevenzione delle controversie
transnazionali. In Aa.Vv., Istituzioni di diritto internazionale (p. 239).
Giappichelli.
Nissenbaum, H. (1996). Accountability in a computerized society. Science
and Engineering Ethics, 2, 29.
Piragoff, D. K., & Robinson, D. (2016). Article 30. In O. Triffterer & K.
Ambos (Eds.), The Rome Statute of the International Criminal Court:
A commentary (3rd ed., p. 1111). Beck/Hart Publishing.
Quéguiner, J. F. (2006). Precautions under the law governing the conduct of
hostilities. International Review of the Red Cross, 88(864), 793–821.
Reitinger, N. (2015). Algorithmic choice and superior responsibility:
Closing the gap between liability and lethal autonomy by defining the
line between actors and tools. Gonzaga Law Review, 51, 79.
Ronzitti, N. (2021). Diritto internazionale dei conflitti armati. Giappichelli.
Ruffolo, U. (2017). Per i fondamenti di un diritto della robotica self-
learning; dalla machinery produttiva all’auto driverless: Verso una
‘responsabilità da algoritmo’? In U. Ruffolo (Ed.), Intelligenza
artificiale e responsabilità (p. 1). Giuffrè.
Sandoz, Y., Swinarski, C., & Zimmermann, B. (1987). International
committee of the Red Cross, Commentary on the additional protocols
of 8 June 1977 to the Geneva Conventions of 12 August 1949 (p. 421,
para. 1481). Geneva.
Sarch, A., & Abbott, R. (2019). Punishing artificial intelligence: Legal
fiction or science fiction. UC Davis Law Review, 53, 357.
Sassóli, M. (2014). Autonomous weapons and international humanitarian
law: Advantages, open technical questions and legal issues to be
clarified. International Law Studies, 90, 308–340.
Saxon, D. (2014). A human touch: Autonomous weapons, directive
3000.09, and the “Appropriate levels of human judgment over the use
of force”. Georgetown Journal of International Affairs, 15(2), 100.
Scognamiglio, C. (2023). Responsabilità civile ed intelligenza artificiale:
quali soluzioni per quali problemi?. In Responsabilità civile e
previdenza (p. 1073).
Simmler, M., & Markwalder, N. (2019). Guilty robots? – Rethinking the
nature of culpability and legal personhood in an age of artificial
intelligence. Criminal Law Forum, 30, 1.
Simonini, G. F. (2023). La responsabilità del fabbricante nei prodotti con
sistemi di intelligenza artificiale. In Danno e Responsabilità (p. 435).
Giappichelli.
Spadaro, A. (2023). A weapon is no subordinate: Autonomous weapon
systems and the scope of superior responsibility. Journal of
International Criminal Justice, 21(5), 1119.
Suchman, L., & Weber, J. (2016). Human-machine autonomies. In N.
Bhuta, S. Beck, R. Geiss, H. Y. Liu, & C. Kress (Eds.), Autonomous
weapons systems (p. 75). Cambridge University Press.
Surden, H. (2019). Artificial intelligence and law: An overview. Georgia
State University Law Review, 35(4), 1305–1337.
Tamburrini, G. (2016). On banning autonomous weapons systems: From
deontological to wide consequentialist reasons. In N. Bhuta, S. Beck,
R. Geiss, H. Y. Liu, & C. Kress (Eds.), Autonomous weapons systems:
Law, ethics, policy (p. 122). Cambridge University Press.
Thompson, D. F. (1980). Moral responsibility of public officials: The
problem of many hands. The American Political Science Review,
74(4), 905–916.
Thurnher, J. S. (2014). Examining autonomous weapon systems from a law
of armed conflict perspective. In H. Nasu & R. McLaughlin (Eds.),
New technologies and the law of armed conflict (p. 213). Asser Press.
Tommasi, S. (2023). “Digital services act” e “artificial intelligence act”:
tentativi di futuro da armonizzare. In Persona e Mercato (p. 279).
Trepczynski, S. (2021). Rules of the domain: The impact of law on the
evolution of space activities and technologies. In M. de Zwart & S.
Henderson (Eds.), Commercial and military uses of outer space (p.
51). Springer.
Triffterer, O. (2016), Article 28 - Responsibility of commanders and other
superiors. In O. Triffterer & K. Ambos (Eds.), The Rome Statute of the
International Criminal Court: A commentary (p. 1056). Beck/Hart
Publishing.
Tschider, C. (2023). Humans outside the loop. Yale Journal of Law and
Technology, 26(2), 327.
Weigend, T. (2023). Convicting autonomous weapons? Criminal
responsibility of and for AWS under international law. Journal of
International Criminal Justice, 21(5), 1.
Weizmann, N. (2018). Armed drones and the law of armed conflict. In S.
Casey-Maslen, M. Homayounnejad, H. Stauffer, & N. Weizmann
(Eds.), Drones and other unmanned weapons systems under
international law (p. 89). Brill/Nijhoff.
Werle, G., & Jessberger, F. (2005). Unless otherwise provided: Article 30 of
the ICC statute and the mental element of crimes under international
criminal law. Journal of International Criminal Justice, 3(1), 35–55.
Winter, E. (2022). The compatibility of autonomous weapons with the
principles of international humanitarian law. Journal of Conflict &
Security Law, 27, 1.
13
ARTIFICIAL INTELLIGENCE IN
HUMANITARIAN ACTION
Humanity and the Machine

Massimo Marelli*

DOI: 10.4324/9781003518495-17

Introduction
When Henry Dunant witnessed first-hand the destruction brought by war in
1859, with over 60,000 soldiers left dead, wounded, or missing on the
battlefields of San Martino and Solferino, he also witnessed the
spontaneous and improvised action of the local villagers mobilizing to their
rescue, moved by the principle of Humanity (hereinafter referred to as
Humanity, or the principle of Humanity) alone (Dunant, 1862). The
principle of Humanity also drives, to this day, humanitarian activities across
the world, and is indeed one of the Fundamental Principles that guides the
action of the Movement of the Red Cross and the Red Crescent
(Movement) – the largest humanitarian network in the world (International
Committee of the Red Cross [ICRC], 2024b). It was the principle of
Humanity that also moved Henry Dunant and a few other men to urge the
powers of the time to convene and, eventually, to adopt the First Geneva
Convention in 1864 (Huber, 2010), one of the foundations of international
humanitarian law, regulating the behavior of parties to armed conflicts.
These rules are based on a compromise between military necessity and
Humanity, and they are designed to ensure that a degree of Humanity is
maintained during armed conflicts (Sommaruga, 1999). Limitations to the
conduct of hostilities by parties to a conflict linked to the application of the
principle of Humanity, on the one hand, and Humanity as a core principle of
humanitarian response on the other hand are, therefore, the two sides of the
Humanity coin in humanitarian emergencies arising from armed conflicts
and other situations of violence.
* The opinions and views expressed in this article are the author's own and
do not necessarily represent those of the ICRC. The author is grateful to
Pierrick Devidal, and Vincent Graf Narbel for their valuable feedback on an
earlier draft. Special thanks go to Julia Feigen for her input and research
support. All errors are the author's own.
Today, we are seeing how the rise and proliferation of artificial
intelligence (AI) and machine learning (ML) are reshaping many
professional sectors, from medicine to financial services (Sahota, 2024).
The humanitarian sector is not immune from these changes, and new
humanitarian concerns and challenges emerge. The key issues regarding the
humanitarian application of the principle of Humanity in the context of AI,
particularly within the framework of humanitarian emergencies linked to
armed conflicts and other situations of violence, fall into two distinct
streams: the use of AI by humanitarian organizations themselves, and the
use of AI by parties to armed conflict. The use by parties to an armed conflict is
analyzed in detail elsewhere (see, e.g., Greipl, 2024). This article explores
the challenges regarding the use of AI by humanitarian organizations. The
increased automation that AI brings may help advance – but also jeopardize
– how humanitarian organizations act in accordance with the principle of
Humanity. The piece will, therefore, investigate the link between AI and
personal data and illustrate how personal data protection principles and
requirements may be a helpful tool to ensure that the principle of Humanity
remains at the heart of humanitarian action even when using data-intensive
technologies like AI. The final component of this article identifies how the
principles and requirements of data protection can be applied to AI, and
how this application can support the principled operationalization of AI by
humanitarian organizations. To conclude, this article also draws attention to
developments in AI research and in the application of data protection
measures that may offer a path forward.

AI and Humanitarian Action


The fast-growing offering of AI products and services has already made the
humanitarian use of AI a reality for the sector (Coppi et al., 2021). Some of
these new tools, designed to support human decision-making in
humanitarian action, can significantly enhance the capacity of the
humanitarian sector to alleviate human suffering in humanitarian
emergencies. They can, for instance, help humanitarian organizations
identify communities, or sections of communities, whose needs are
particularly acute and who would, therefore, benefit from humanitarian
programs, or help identify locations where humanitarian organizations
should preemptively deploy resources in anticipation of natural disasters
or population movement (Ryan-Mosely, 2023).
At present, humanitarian organizations have engaged in several kinds of
AI-supported action programs. Environmental scanning, trend analysis and
forecasting, beneficiary identification, management, and engagement
represent only a few of the ways humanitarian organizations have
incorporated AI into their working modalities (Coeckelbergh, 2019). While
some of these AI applications focus on AI-supported technology's ability to
optimize resources within an organization and facilitate smooth
information-sharing among its workforce, humanitarian organizations can
also use AI to offer safer, and often faster, services to beneficiaries. Two
examples of how the ICRC has leveraged AI to do so include the use of AI-
assisted models in the restoration of family links through the Missing
Persons Digital Matching Project (MPDM) (ICRC, 2023a), as well as
landmine detection and clearance through AI-supported weapon
contamination analysis (ICRC, 2023b).
In the MPDM, AI-supported systems are used to scan multiple databases
in order to “flag a match” for a missing person (ICRC, 2023a). These
databases, which belong to Red Cross or Red Crescent societies, partnering
non-governmental organizations, international organizations, and
governments, can give families looking for missing loved ones the best
chance to find important information. However, given the volume and
number of these databases, the analysis required to identify a match can be
extremely time-consuming. Delegating the scanning process to an AI-
supported platform means that the Central Tracing Agency can provide
information for families faster than ever before. Designed to adhere to data
protection requirements, MPDM illustrates the ways in which AI can be
leveraged to help humanitarian organizations achieve their goals in an
efficient manner for the benefit of affected communities.
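The matching step can be pictured as a record-linkage task: candidate records from different databases are scored for similarity and, above a threshold, flagged for human review. The following minimal Python sketch is purely illustrative and is not the ICRC's actual system; the field names, weights, and threshold are assumptions.

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Return a 0-1 similarity score between two strings.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def score_pair(query: dict, candidate: dict) -> float:
    # Combine name similarity and birth-year agreement (weights are illustrative assumptions).
    name_score = similarity(query["name"], candidate["name"])
    year_score = 1.0 if query.get("birth_year") == candidate.get("birth_year") else 0.0
    return 0.8 * name_score + 0.2 * year_score

def flag_matches(query: dict, registry: list, threshold: float = 0.85) -> list:
    # Flag candidate records for human review; the tool never confirms a match by itself.
    return [record for record in registry if score_pair(query, record) >= threshold]

registry = [
    {"name": "Amina Yusuf", "birth_year": 1990},
    {"name": "Aminah Yousef", "birth_year": 1990},
    {"name": "Omar Ali", "birth_year": 1975},
]
print(flag_matches({"name": "Amina Yousef", "birth_year": 1990}, registry))

Crucially, a tool of this kind only flags candidates; confirming a match, and any subsequent contact with families, remains a human decision.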
The second example of such an application used by the ICRC, which
integrates drone technology and AI into ICRC's mine detection and
clearance practices, represents a successful humanitarian operationalization
of automated technology. Using conventional methods, a demining
professional can clear 50 square meters each day. Using AI-assisted remote
sensing, “conservative estimates suggest this new tool will be able to survey
and process 100,000 square meters a day for more efficient demining later”
(ICRC, 2023b). Using AI and remote sensing to scan the area, before a
professional deminer even sets foot on the soil, makes demining operations
safer for humanitarian workers, and safer for the beneficiaries who live in
the areas affected. Faster detection, followed by targeted removal of mines
and explosive remnants of war, will enable beneficiaries to resume their
daily activities with renewed confidence in the safety of their environment,
providing a very significant contribution to preventing and reducing human
suffering (ICRC, 2023b).
Other parts of the humanitarian sector have similarly drawn upon the
abilities of AI in various facets of their work. For example, in Uganda AI
has proven useful in analyzing public opinion that could later be used to
inform development programs (UN Global Pulse, 2017). Essentially, Pulse
Lab Kampala established a “toolkit that makes public radio broadcasts
machine-readable through the use of speech recognition technology and
translation tools that transform radio content into text” (Kuner & Marelli,
2020, p. 279; UN Global Pulse, 2017). In a time efficient manner,
employees were able to distill key takeaways and needs expressed in these
broadcasts and then use them to guide the design process for future work.
National governments have also used AI to carry out critical life-saving
work. For instance, Indian law enforcement used automated facial
recognition technology to locate and find over 3000 missing children in just
four days (Cutherbertson, 2018). As illustrated, AI can accelerate diverse
activities, among diverse actors, that seek to help and protect people.
However, as will be illustrated in the following sections, the multiplicity of
AI's capabilities is two-sided, AI use could also lead to harm for people's
safety, rights, and dignity when misused.
These tools, therefore, present important opportunities for humanitarian
organizations to streamline and accelerate their activities for the benefit of
communities affected by humanitarian emergencies, contributing to the
alleviation of human suffering. Yet a question arises as to how this
incorporation of automation might affect the relationship between
humanitarian action, the environment in which humanitarian emergencies
unfold, and the persons affected by such emergencies, in particular vis-à-vis
the application of the principle of Humanity. How can humanitarian
organizations using AI-based tools ensure that their actions remain driven
by the objective of alleviating human suffering, causing no additional harm,
and that the principle of Humanity – with its important tenets of proximity,
transparency and engagement, empathy, accountability, and agency –
remains at the core of what drives their work?

The Principle of Humanity and Humanitarian Action


Among the seven fundamental principles of the International Red Cross and
Red Crescent Movement – Humanity, Independence, Neutrality,
Impartiality, Voluntary Service, Unity, and Universality – Humanity is
arguably the one that is most likely to come under fire as the applications
and capacities of AI and ML proliferate. Unlike the other fundamental
principles, which are mainly linked to the identity of the Movement,
Humanity is also a key principle for all other humanitarian actors.
Humanity is considered by the Movement as a key substantive principle. In
describing the principle of Humanity, and application thereof, the Statutes
of the Movement indicate:

The International Red Cross and Red Crescent Movement, born of a


desire to bring assistance without discrimination to the wounded on the
battlefield, endeavors, in its international and national capacity, to
prevent and alleviate human suffering wherever it may be found. Its
purpose is to protect life and health and to ensure respect for the human
being. It promotes mutual understanding, friendship, cooperation and
lasting peace amongst all peoples
(ICRC, 1986).

The principle of Humanity forms the “raison d’être of humanitarianism”


(Devidal, 2024, p. 11), and must be reflected in not only why humanitarian
organizations operate, but also how they do so (Slim, 2019). For example,
Jean Pictet, “the architect of the modern-day Fundamental Principles of the
Red Cross,” explains that “the way in which [humanitarian] help is given is
of great importance. When nursing a patient, or giving help, one must show
some Humanity” (Fast, 2015, p. 112; Pictet, 1979). This quote emphasizes
the critical link between empathy and Humanity, for it is not only the
assistance given that is significant, but the caring manner in which such
assistance is given. In this sense, the spirit and dedication to Humanity must
be present in each aspect of a humanitarian organization's operations. This
idea rings especially true when we discuss the onboarding of new
technologies which can limit, or even prevent, the exercise of human
agency both from the perspective of who is bringing the assistance and on
the side of who is receiving the assistance. The application of assisted
and/or autonomous decision-making through AI and ML, to some extent,
creates distance between human beings and the processes of decision-
making taking place. This distance, deriving from the outsourcing of human
analysis and/or decision-making to technology, can yield extreme
consequences in the context of humanitarian crises (Fast, 2015, p. 112).
If left unchecked, the use of AI by humanitarian organizations may erode
this sense of care, and ultimately render humanitarian activities less human
(Devidal, 2024, p. 10). Automating aspects of humanitarian activities,
through the use of AI- or ML-supported tools, removes one of the most
important features from humanitarian action: the sentiment of care through
human-to-human connection (Fast, 2015, p. 112). Establishing this human
connection relies on empathy, emotional intelligence, and understanding.
These capacities are necessary during humanitarian crises, for humanitarian
activities require a nuanced – and deeply human – approach to listen to and
assist persons facing complicated situations of harm. This requirement to
exhibit such support in part “explains why humanitarians have always
attached importance to being physically present where affected populations
find themselves.” However, if “human interfaces are replaced by digital
ones that introduce different forms of intermediation or remoteness, this
essential proximity element is mechanically lessened” (Dette, 2018, p. 78).
For this reason, replacing entirely physical presence with automated or
digital activities will erode the primacy of Humanity in humanitarian action.
Instead, humanitarian organizations should use AI-supported tools to
“enhance” rather than substitute direct physical engagement.
As new tools which promise better, faster, and more efficient results
become available to humanitarian organizations, preserving Humanity as an
organizational priority becomes an active choice. As will be discussed
below, humanitarian organizations may have more options than ever before,
but the core driver of their operations remains the same as it was for the
19th-century villagers of Solferino.

AI and Its Relationship to Personal Data


To understand the close link between AI and data, including personal data,
it is important to understand how ML solutions are developed (Kuner &
Marelli, 2020, p. 278). Various AI techniques exist, some of which process
personal data, while others do not. Predominantly, ML solutions are
developed in the following manner: (i) The system is provided with selected
data expected to contain particular patterns or similarities (training data).
(ii) AI techniques identify these patterns and determine the relevant features
for classifying these patterns or similarities and making predictions about
new data. (iii) A model is created that can recognize these patterns when
new data is processed, facilitating predictions or classifications (Norwegian
Data Protection Authority, 2018, p. 7).
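To make the three steps above concrete, the following minimal Python sketch, assuming the scikit-learn library and a toy dataset invented for illustration, trains a small classifier on labeled examples (steps i and ii) and then applies the resulting model to new, unseen data (step iii).

from sklearn.linear_model import LogisticRegression

# (i) Training data: each row is an example, each column a feature; y holds the labels.
X_train = [[2, 0], [1, 1], [8, 5], [9, 7]]
y_train = [0, 0, 1, 1]

# (ii) The learning algorithm identifies the patterns that separate the classes
#      and encodes them in a model.
model = LogisticRegression()
model.fit(X_train, y_train)

# (iii) The trained model classifies new, previously unseen data.
X_new = [[1, 0], [9, 6]]
print(model.predict(X_new))        # predicted classes
print(model.predict_proba(X_new))  # the model's confidence in each prediction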
Once trained, the AI solution generates a model used for analyzing or
predicting new, unseen data. AI models can be either static or dynamic.
Static models remain unchanged over time and consistently apply the initial
training data model, allowing developers to maintain full control but
preventing self-improvement. Conversely, dynamic models adjust and
refine their outputs based on new input data. Since AI solutions typically
learn from data during both training and deployment (in dynamic models),
the resulting models may retain portions of this data. Consequently, if
malicious entities gain control of the system, they could potentially access
the training data or the data used during deployment in dynamic models. In
sensitive contexts, such as the environments that humanitarian
organizations operate in, such technical “loopholes” risk exposing
beneficiary information to those who may seek to exploit it. These
examples only scratch the surface of data protection's relevance to the
regulation of AI. As this article will go on to explain, data protection
standards offer a lens through which to analyze and evaluate AI and support
humanitarian organizations in their efforts to leverage new technologies
while minimizing harm. Further discussion on potential attacks on AI
systems can be found in the subsequent section, How to Address AI
Concerns Through Data Protection in Humanitarian Action.
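The difference between static and dynamic models can be illustrated with scikit-learn's incremental-learning interface; the sketch below is a generic example on toy data, not a description of any particular humanitarian system.

import numpy as np
from sklearn.linear_model import SGDClassifier

X_initial = np.array([[0.1, 1.0], [0.2, 0.9], [0.9, 0.2], [1.0, 0.1]])
y_initial = np.array([0, 0, 1, 1])

# Static model: trained once and then frozen, so developers retain full control of its behavior.
static_model = SGDClassifier(random_state=0).fit(X_initial, y_initial)

# Dynamic model: keeps adjusting as new data arrives during deployment, which also means
# the deployed system keeps ingesting (and may retain traces of) that data.
dynamic_model = SGDClassifier(random_state=0)
dynamic_model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

X_new, y_new = np.array([[0.8, 0.3]]), np.array([1])
dynamic_model.partial_fit(X_new, y_new)  # updated in production; static_model is unchanged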
The data used to produce these outputs is referred to as “training data,” as
this data teaches the model how to make inferences without (or with
limited) human supervision (Sarkis, 2023). Training data is thereby the
“foundation for successful AI and machine learning” (Sarkis, 2023), since
the training data is the basis of the model itself. Everything that the model
“learns” how to do, through processes such as those described above, it has
learned from the data on which it was trained (Moseley, 2024). For many
companies, securing large, detailed, and accurate datasets gives AI
developers a competitive advantage in developing the most adept and finely
tuned model (Pratiba, 2018). For this reason, data protection frameworks –
which are designed with the very purpose of upholding the rights and
dignity of individuals when processing their data – are critical to shaping
the humanitarian use of AI.

Personal Data Protection


Underpinning data protection legislation and principles is the overarching
objective of safeguarding the dignity, lives, and safety of individuals. For
example, in the preamble of Council of Europe Convention for the
Protection of Individuals with regard to Automatic Processing of Personal
Data (CETS No. 108) it is stated that the Convention is adopted because
“[…] it is necessary to secure the human dignity and protection of the
human rights and fundamental freedoms of every individual and, given the
diversification, intensification and globalization of data processing and
personal data flows, personal autonomy based on a person's right to control
of his or her personal data and the processing of such data.” As of April
2024, 137 out of 194 countries had put in place laws to ensure personal data
protection and privacy, with 17 more having draft legislation pending
adoption (UNCTAD, 2024). These laws provide for specific principles,
requirements, obligations for controllers and processors, rights of data
subjects, accountability mechanisms, and enforcement provisions to ensure
a handling of personal data that is respectful of the rights and dignity of
data subjects, and accountability of data controllers. They protect the
autonomy and agency of individuals by limiting and clarifying the extent to
which their information can be used.
As the rapid development of new tools, including AI systems and ML
capabilities, intensifies, personal data processing is easier and faster than
ever before (Kuner & Marelli, 2020, p. 21). However, as a framework of
analysis and understanding, data protection offers a countermeasure to the
seemingly unabated extraction of personal data from individuals. There are
a number of key data protection principles and requirements that, together,
can effectively guide the responsible and respectful use of AI, these include:
lawfulness and fairness of processing, including with regard to biases,
purpose limitation, data minimization, data retention, data security, rights of
data subjects, data controller/processor relationship, international data
sharing, data protection by design and by default (Kuner & Marelli, 2020,
pp. 282–295). Data protection standards and procedures, moreover,
compound the necessity of accountability (Kuner & Marelli, 2020, p. 293).
However, accountability requires transparency, which in the case of AI can
be difficult to achieve (Kuner & Marelli, 2020, p. 293). Human oversight,
which will be elaborated upon in subsequent sections, offers one avenue
through which to integrate greater transparency, and therefore
accountability, in AI. Data protection frameworks similarly complement
efforts to increase and regulate human oversight in the operationalization of
AI – for both share the same goal of making AI deployment safer.

How to Address AI Concerns Through Data


Protection in Humanitarian Action
Data protection frameworks can offer a comprehensive way to dissect and
navigate the risks and responsibilities linked with AI and its humanitarian
use-cases, as well as frictions between effective use of AI on the one hand,
and the respect of people's dignity and rights on the other hand (Kuner &
Marelli, 2020, p. 279). There are a number of areas where this friction
manifests itself (Kuner & Marelli, 2020, pp. 282–290), these include, for
example: purpose limitation and further processing, fair and lawful
processing, data minimization, data retention, data security, rights of data
subject. Within these areas of frictions, however, there are a few that are
particularly pertinent for the present analysis, namely fairness of processing
– and its link with biases potentially embedded in AI, and transparency –
and data subject rights, specifically rights with regard to automated
decision-making. These are analyzed, in turn, below.

Addressing Bias Through a Data Protection Lens


Some of the most prominent harms that AI models can induce, and
concerns from the point of view of the application of the principle of
Humanity, stem from bias embedded in various elements of AI
operationalization, including a model's training data (Kuner & Marelli,
2020, p. 282). Bias, specifically, can lead to discrimination and the
consequent violation of human dignity. Biases in training data materialize
when models are trained by skewed data which may lead them to assign too
much weight to certain factors over others, and in the process produce
flawed, and often discriminatory, results (Angwin et al., 2016). A well-
known example relates to an AI model used to determine whether certain
offenders should receive bail. Using an imbalanced dataset which featured
black offenders more frequently than white offenders, the model
subsequently concluded that persons of color were twice as likely to have a
repeat offense compared to their white counterparts (Angwin et al., 2016).
While this example is as disturbing as it is misleading, in situations of
armed conflict similar biases in training data can yield life or death results.
This could be the case, for example, if biased AI was to be relied on to
identify communities to be targeted for essential humanitarian aid.
The principle of fairness requires that all processing activities respect
data subjects’ interests, and that data controllers take action to prevent
arbitrary discrimination against individuals (Norwegian Data Protection
Authority, 2018, p. 16).
As indicated above, bias in AI solutions can arise from several sources,
such as the use of biased training datasets, inherent systemic biases within
society, or developers’ decisions on which features to prioritize in each
dataset (Kuner & Marelli, 2020, p. 285). Historical societal biases can make
it challenging to find unbiased data for training AI systems, potentially
causing the AI to reinforce these systemic biases. Therefore, it is crucial to
train models with accurate and relevant data and to ensure that the AI
correctly learns which features to emphasize, avoiding the overvaluation of
discriminatory elements present in the data. This can be particularly
challenging to ensure in the context of AI that needs to be used for
humanitarian emergencies. To mitigate the risk of arbitrary discrimination,
information related to race, ethnicity, political views, religious and
philosophical beliefs, sexual orientation, or any other potentially
discriminatory factors should either not be processed or be protected to
prevent them from being disproportionately emphasized. While AI models
should not emphasize certain categories of data, simply removing these
categories from the dataset does not necessarily eliminate the risk of bias.
The system can still correlate other features with attributes like race or
gender, leading the model to develop biases based on these correlated
features, known as “proxies.” Furthermore, the removal of the primary
discriminatory feature from the dataset can make it more challenging to
identify and correct the bias. Therefore, when selecting a training dataset,
an AI developer must evaluate the quality, nature, and origin of the personal
data used. They must also consider the potential risks to individuals and
groups from using de-contextualized data to create de-contextualized
models. One approach to address this is for data controllers to incorporate
frequent assessments of the datasets in their ongoing Data Protection
Impact Assessment process to check for bias. Additionally, they should
develop methods to address any prejudicial elements, including over-
reliance on correlations.
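Part of such a dataset assessment can be supported by simple, repeatable checks. The sketch below is a simplified illustration rather than a complete fairness audit: it looks for remaining features that correlate strongly with a protected attribute (potential "proxies") and for large gaps in a model's positive outcomes across groups. The column names and thresholds are assumptions.

import pandas as pd

data = pd.DataFrame({
    "group":    [0, 0, 0, 1, 1, 1],   # protected attribute (e.g., an encoded group membership)
    "income":   [10, 12, 11, 30, 28, 32],
    "distance": [5, 6, 5, 5, 6, 5],
    "selected": [0, 0, 1, 1, 1, 1],   # the model's decision (e.g., inclusion in an aid program)
})

# (a) Proxy check: features strongly correlated with the protected attribute.
features = ["income", "distance"]
correlations = {f: data[f].corr(data["group"]) for f in features}
flagged = {f: c for f, c in correlations.items() if abs(c) > 0.6}
print("Potential proxy features:", flagged)

# (b) Disparity check: selection rate per group; large gaps warrant human review.
rates = data.groupby("group")["selected"].mean()
print("Selection rate per group:")
print(rates)
print("Disparity (max - min):", rates.max() - rates.min())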

Transparency
Connected to the objective of upholding Humanity through fairness and the
agency of the data subject, and to enabling data subjects to exercise all their
rights, is the requirement of transparency on the side of the data controller.
By requiring that data subjects involved should receive at least a minimum
amount of information concerning the processing when their data is
collected, transparency ensures that data subjects retain agency over their
information and can hold data controllers accountable for how their data is
used. These are essential elements in upholding dignity and ultimately the
principle of Humanity when processing personal data. Data subjects must
have clear, accessible, information regarding why and how their data has
been processed. This applies both with regard to individuals whose personal
data is used to train algorithms and develop models, and individuals on
whose data the algorithm is operated. The question of how, or by what
logic, data is processed or decisions are reached is of particular importance
to discussions around AI. For instance, Wachter et al. explain that GDPR
Articles 22(1) and 22(4) imply that data controllers must “in the existence of
automated decision making [provide] meaningful information about the
logic involved, as well as the significance and the envisaged consequences
of such processing for the data subject” (Wachter et al., 2017). These
conditions – that information must be clear, accessible, and explanatory –
are the materialization of the principle of Transparency, one of data
protection's most fundamental principles (Bittar, 2023).
The Article 29 Working Party guidelines on the obligation of
transparency under the GDPR explain that “Transparency…is about
engendering trust in the processes which affect [data subjects] by enabling
them to understand and if necessary challenge those processes” (EU Article
29 Data Protection Working Party, 2018). In the context of AI, transparency
can be a difficult condition for data controllers and AI system operators to
meet (Kuner & Marelli, 2020, p. 282). Many ML models utilize multi-
layered networks, where outputs result from internal processes that may not
be fully understood or replicated mathematically, even by data scientists
and solution designers (Future of Privacy Forum, 2018, p. 17). This multi-
layered architecture is often referred to as the “black box,” as it can make it
difficult for users to comprehend how the system arrived at a specific
conclusion or prediction, including which features were given more weight.
In most cases, the rationale behind these weightings is not transparent or
intelligible to most due to the high complexity of AI. Consequently, it is
challenging to determine if the choice of features is comprehensive and if
their weightings are reasonable. This leads to a situation in which even
those who created the model itself are often not in a position to explain or
decipher why that model reached a particular conclusion. This inability to
clarify a model's behavior and modes of processing can likewise make it
difficult to hold AI developers or operators accountable in the event of AI-
related harm or misuse (Coeckelbergh, 2019).
One proposed solution to the transparency challenge in AI applications is
to explain the logic behind the solutions. This involves providing
information about the type of input data, the expected output, explaining the
variables and their weight, and clarifying the analytics architecture
(Mantelero, 2019, p. 12). This approach, known as “interpretability,” aims
to understand the causal relationship between changes in input and output
without necessarily explaining the entire logic of the machine through its
multiple layers. However, achieving interpretability in black box models is
often difficult, and it is crucial to be transparent with data subjects about
any unknowns and areas of uncertainty. A version of this concept,
commonly referred to in AI policy and legislation as “human oversight”
currently stands as an aspirational but necessary component of responsible
AI.
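One widely used, model-agnostic approximation of interpretability is permutation feature importance: shuffle one input feature at a time and measure how much the model's performance degrades. The sketch below, assuming scikit-learn and synthetic data, illustrates the technique in general, not any specific humanitarian model.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=5, n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling an important feature should noticeably hurt accuracy; an irrelevant one should not.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {importance:.3f}")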

Rights With Regard to Automated Decision-Making


Under data protection legislation data subjects have the right not to be
subject to a decision based solely on automated processing without human
involvement, where such decisions produce legal effects or similarly
significantly affect the individual in question (Mantelero, 2019, p. 11;
GDPR Article 22). In this sense, data protection laws treat the exercise of
human decision-making – or human control – as a fundamental tool for the
preservation of individual rights and human dignity, and key components of
upholding the principle of Humanity.
This right “is driven by a concern for algorithmic bias; a worry of
incorrect or unsubstantiated solely automated decisions based on inaccurate
or incomplete data; and the need for individuals to have redress and the
ability to contest a decision if an algorithm is incorrect or unfair” (Centre
for Information Policy Leadership, 2018, p. 16).
These concerns are substantiated by incidents like the Swedish benefits
case, where a flawed AI solution resulted in “thousands of unemployed
people being wrongly denied benefits” (Wills, 2019). In humanitarian
action, similar issues could occur if AI solutions determine who receives
aid or who is included in a target population for an aid program.
Beneficiaries should always have the right to human oversight of decisions
that affect them. It is important to note that “to qualify as human
involvement, the controller must ensure that any oversight of the decision is
meaningful, rather than just a token gesture” (EU Article 29 Working Party,
2018, p. 21). This is crucial because decision-makers might blindly trust AI
recommendations, assuming mathematical algorithms are infallible. Thus,
simply having a human decision-maker is not enough (Mantelero, 2019, p.
11); they must be empowered to challenge the AI's decisions or suggestions.
Decision-makers might not fully understand how the system arrived at a
specific decision or suggestion, making it difficult to assess its correctness.
They should be able to review all facts and information independently,
without considering the AI's outcome. This can be challenging, as AI can
process far more information than a human in the same context.
Individuals, regardless of their expertise, may hesitate to challenge AI's
automated decisions due to the perceived accuracy of the technology.
Therefore, it is also essential to arrange human intervention such that the
review is “carried out by someone with the appropriate authority and
capability to change the decision” (EU Article 29 Working Party, 2018, p.
27).
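In operational terms, meaningful human involvement can be supported by routing every automated suggestion through an explicit review step, with low-confidence outputs escalated automatically. The sketch below is a hypothetical workflow; the data fields, the confidence threshold, and the review functions are illustrative assumptions, not a prescribed design.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    beneficiary_id: str
    include_in_program: bool  # the model's suggestion, never a final decision
    confidence: float

def triage(suggestion: Suggestion, confidence_threshold: float = 0.9) -> str:
    # Low-confidence outputs are always escalated; high-confidence ones remain reviewable.
    if suggestion.confidence < confidence_threshold:
        return "escalate to human reviewer"
    return "present to reviewer for confirmation"

def review_decision(suggestion: Suggestion, reviewer_override: Optional[bool]) -> bool:
    # The reviewer may accept the suggestion or override it; the human choice always prevails.
    return reviewer_override if reviewer_override is not None else suggestion.include_in_program

s = Suggestion("B-1042", include_in_program=False, confidence=0.62)
print(triage(s))                                   # -> escalate to human reviewer
print(review_decision(s, reviewer_override=True))  # the reviewer reverses the suggestion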

Meaning and Use of Human Oversight


Taking further the reflections around a right not to be subject to automated
decision-making where such decisions produce legal effects or similarly
significantly affect the individual in question, some legislators and policy-
makers have pointed to the notion of human oversight as a potential
antidote to AI's black box problem. But what is human oversight, and how
can it contribute to the responsible use of AI, particularly in humanitarian
settings? The sweeping, recently approved EU AI Act, proposed in 2021 by
the European Commission, describes human oversight's objective as
“preventing or minimizing the risks to health, safety, or fundamental rights
that may emerge when a high-risk AI system is used in accordance with its
intended purpose or under conditions of reasonably foreseeable misuse.”
Here, the stated goal of human oversight matches some of the key purposes
of the principle of Humanity: to lessen and prevent human suffering or
undue harm whenever possible. As the name suggests, human oversight
requires that there be some degree of human involvement during the stages
of an AI model's lifecycle (Enqvist, 2023). Similar to human-in-the-loop
processing, human oversight implies that human developers can monitor,
and in necessary circumstances alter or terminate, the function of an AI
model. Described as a “measure which represents a type of
counterbalancing act between the aims of automation and authentic human
reasoning” (Enqvist, 2023, p. 509) human oversight presents a worthy goal
for humanitarian organizations to aim for when deploying AI.
Article 14 of the AI Act also describes five objectives making human
oversight desirable. As these scenarios represent a comprehensive overview
of human oversight's utility, it is worth quoting them in full below:

a. “To properly understand the relevant capacities and limitations of the


high-risk AI system and be able to duly monitor its operation, also in
view of detecting and addressing anomalies, dysfunctions and unexpected
performance
b. To remain aware of the possible tendency of automatically relying or
over-relying on the output produced by a high-risk AI system
(‘automation bias’) in particular for high risk AI systems used to provide
information or recommendations for decisions to be taken by natural
persons
c. To correctly interpret the high-risk AI system's output, taking into
account for example the interpretation tools and methods available
d. To decide, in any particular situation, not to use the high risk system or
otherwise disregard, override or reverse the output of the high-risk AI
system
e. To intervene on the operation of the high-risk AI system or interrupt the
system through a “stop” button or similar procedure that allows the
system to come to a halt in a safe state.”
AI Act Article 14
These examples of human oversight use-cases, which include limiting
automation bias and preventing harmful outcomes, complement the efforts
of humanitarian organizations to uphold the principle of Humanity in every
aspect of their work. As is the case for both the principles and requirements
of data protection and the humanitarian mandate itself, the minimization of
harm and protection of human dignity underpin this concept of human
oversight. However, feasibly integrating human oversight into AI models
from a technical perspective can be extremely difficult (Bingley et al.,
2023). For instance, it is challenging for developers to maintain oversight if
they are unable to see or understand how a specific AI model functions
(Bingley et al., 2023). Interpretability and Explainable AI, or XAI, are two
important areas of focus that can support efforts to ensure human oversight.
XAI, in particular, has become an important new area of research, and
further investment in this dynamic field will be required for humanitarian
organizations to contend with AI's most significant challenges (Coppi et al.,
2021).

Conclusion
To add a protective layer of human engagement to potentially risky AI
applications, it is necessary that AI systems themselves are legible to their
human developers and operators. In its most simplified form, this process of
enhancing the legibility of AI and ML systems falls under the heading of
XAI. In recent years, XAI has transitioned from a niche endeavor in
computer science to a wide-ranging multidisciplinary point of focus
(Langer et al., 2021). Over the past several years, there have been various
approaches designed to make AI models more understandable to the people
that use them. In a recent manifesto from several multidisciplinary experts,
calling for greater attention and investment in XAI research applications,
the authors offer the following summary of XAI methods:
“Explanations can facilitate understanding how learning from data has
occurred, for instance via feature attribution methods. Furthermore,
explanations can reveal how a model can be exploited to improve its
performance. They can also support and improve human confidence in the
output of a given model. Explanations may reveal the existence of hidden
biases in the training data, learned data during model training, that
negatively impact a model's generalization when predicting unseen data…
explanations can also drive pruning and model compression strategies,
linking irrelevant concepts to specific neurons that can hence be removed
from a neural network” (Langer et al., 2021, p. 3).
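For a transparent model, such feature attribution can be computed directly. The following sketch, assuming a logistic regression model from scikit-learn trained on toy data with invented feature names, decomposes a single prediction into per-feature contributions (coefficient multiplied by feature value); it illustrates the idea of attribution rather than any specific XAI product.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["distance_to_clinic_km", "household_size", "damaged_shelter"]
X = np.array([[12, 6, 1], [2, 3, 0], [15, 8, 1], [1, 2, 0]])
y = np.array([1, 0, 1, 0])  # 1 = flagged for assistance (toy labels)

model = LogisticRegression().fit(X, y)

x_new = np.array([10, 5, 1])
contributions = model.coef_[0] * x_new  # each feature's contribution to the decision score
for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
print("intercept:", float(model.intercept_[0]))
print("predicted class:", int(model.predict(x_new.reshape(1, -1))[0]))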
The capacities for human involvement, or “human-in-the-loop”
engagement, that these explanatory methods facilitate could significantly
benefit the humanitarian sector. By targeting different areas of concern – from how a model assigns value to factors, to potential biases in training data, to inconsistencies in its data classification procedures or neural networks – it is possible to develop AI systems that support rather than compromise efforts to uphold the principle of Humanity. In humanitarian contexts, the operationalization of XAI could look like humanitarians using AI models to predict needs and subsequently make recommendations in an area after a conflict. If employees of the humanitarian organization wish to question why certain decisions were made, they could theoretically go back to the model to understand the basis of its analysis and conclusion. Furthermore, if this hypothetical AI-assisted operation eventually takes place, that is, if the humanitarian organization provides its services based on recommendations provided by an AI model, communities themselves could
challenge aid allocation processes they see as unfair. XAI offers affected
people and/or civil society an accountability mechanism to turn to in the
event the humanitarian organization fails to meet needs because it was led
by a faulty algorithm. Therefore, XAI can also foster greater agency among
the populations that humanitarian organizations serve, as incorporating it
would increase the level of transparency between humanitarian
organizations and beneficiaries, which would also keep humanitarian
organizations accountable for their decisions.
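
To make the idea of going back to the model more concrete, the following minimal Python sketch illustrates one of the simplest feature attribution techniques, permutation importance: the drop in accuracy observed when a single input feature is shuffled. The data, the stand-in model, and the feature names (income, distance_to_clinic, household_size) are invented for illustration and do not describe any real humanitarian system.

import numpy as np

# Hypothetical data: 200 households, 3 features; the "model" is a stand-in
# for any trained predictor exposing a predict() function.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # label depends mostly on feature 0

def predict(X):
    # Placeholder model: thresholds a weighted sum (assumed already trained).
    return (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def permutation_importance(X, y, n_repeats=20):
    """Drop in accuracy when a feature's values are shuffled."""
    baseline = accuracy(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])            # break the feature-label link
            drops.append(baseline - accuracy(y, predict(Xp)))
        importances.append(np.mean(drops))
    return importances

for name, imp in zip(["income", "distance_to_clinic", "household_size"],
                     permutation_importance(X, y)):
    print(f"{name}: mean accuracy drop = {imp:.3f}")

An output of this kind is the most rudimentary form of the explanation an aid worker, or an affected community, could ask for: which inputs actually drove the model's recommendations.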
However, to fully reap the benefits of XAI, more research in this area
needs to be done. Namely, the humanitarian operationalization of XAI
requires increased standardization and clarity. XAI researchers have noted
that there is significant “conceptual ambiguity regarding various terms,
such as explainability, interpretability, transparency, understanding,
explicability, perspicuity, and intelligibility” (Longo et al., 2024, p. 8).
Defining these fundamental concepts must precede any meaningful efforts
for humanitarian organizations to mobilize XAI methodologies. Despite
these challenges, XAI still presents one of the best chances for the
humanitarian sector to deploy AI in a way that respects the lives, safety, and
dignity of people affected by humanitarian crises. The principle of
Humanity can guide the sector through this process, and by leveraging new
developments, such as XAI, humanitarians can shed light on AI's black box.

References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. Pro
Publica.
Bingley, W. J., Curtis, C., Lockey, S., Bialkowski, A.,
Gillespie, N., Haslam, S.A., Ko, R.K.L., Steffens, N., Wiles, J., &
Worthy, P. (2023). Where is the human in human-centered AI? Insights
from developer priorities and user experiences. Computers in Human
Behavior, 141(1), 107617.
Bittar, R. (2023). Data protection in the world of AI: An assessment of the
effectiveness of GDPR's principles in relation to AI-technology and big
data-analytics. Uppsala Universitet.
Centre for Information Policy Leadership. (2018). First report: Artificial
intelligence and data protection in tension. Center for Information
Policy Leadership (CIPL).
https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/c
ipl_first_ai_report_-_ai_and_data_protection_in_tension__2_.pdf.
Coeckelbergh, M. (2019). Artificial intelligence, responsibility attribution,
and a relational justification of explainability. Science and Engineering
Ethics, 26(4), 2051–2068.
Coppi, G., Moreno Jimenez, R., & Kyriazi, S. (2021). Explicability of
humanitarian AI: A matter of principles. International Journal of
Humanitarian Action 6, (19).
Cuthbertson, A. (2018). Police trace 3,000 missing children in just four
days using facial recognition technology. The Independent.
www.independent.co.uk/tech/india-police-missing-children-facial-
recognition-tech-trace-find-reunite-a8320406.html
Dette, R. (2018). Do no digital harm: Mitigating technology risks in
humanitarian contexts. In S. Hostettler, S. Najih Besson, & J. C. Bolay
(Eds.), Technologies for development (pp. 15–30). Springer.
https://doi.org/10.1007/978-3-319-91068-0_2
Devidal, P. (2024). Lost in digital translation? The humanitarian principles
in the digital age. International Review of the Red Cross, 106 (925),
120–154.
Dunant, H. (1862). A memory of Solferino. International Committee of the Red
Cross.
Enqvist, L. (2023). ‘Human Oversight’ in the EU Artificial Intelligence
Act: What, when and by whom? Law, Innovation and Technology,
15(2), 508–535. https://doi.org/10.1080/17579961.2023.2245683
EU Article 29 Data Protection Working Party. (2018, April). Guidelines on transparency under Regulation 2016/679 (WP260 rev.01).
European Commission. What is personal data?
European Union. (2016). General data protection regulation.
Fast, L. (2015). Unpacking the principle of humanity: Tensions and
implications. International Review of the Red Cross, 97(897–898),
111–131.
Future of Privacy Forum. (2018). The privacy expert's guide to artificial
intelligence and machine learning. https://fpf.org/wp-
content/uploads/2018/10/FPF_Artificial-Intelligence_Digital.pdf
Greipl, A. (2024). Artificial intelligence and related technologies in military
decision-making on the use of force in armed conflicts – Current
developments and potential implications. https://www.geneva-
academy.ch/news/detail/716-new-report-on-artificial-intelligence-and-
related-technologies-in-military-decision-making-on-the-use-of-force-
in-armed-conflicts
Huber, M. (2010). The Geneva Convention and the Red Cross. Cambridge
University Press.
International Committee of the Red Cross (ICRC). (1986). Statutes of the
International Red Cross and Red Crescent Movement. Adopted by the
25th International Conference of the Red Cross at Geneva in 1986.
International Committee of the Red Cross (ICRC). (2023a). The missing
persons digital matching project: Faster and better answers. Missing
Persons Global Response, International Committee of the Red Cross.
International Committee of the Red Cross (ICRC). (2023b). Video news
footage: New drone uses artificial intelligence to locate mines and
explosive remnants of war. Multimedia Newsroom of the International
Committee of the Red Cross.
www.icrcnewsroom.org/story/en/2057/video-news-footage-new-drone-
uses-artificial-intelligence-to-locate-mines-and-explosive-remnants-of-
war
International Committee of the Red Cross (ICRC). (2024a). Submission on
autonomous weapons systems to the United Nations Secretary-General.
ICRC. https://www.icrc.org/sites/default/files/wysiwyg/war-and-
law/icrc_submission_on_autonomous_weapons_to_unsg.pdf
International Committee of the Red Cross (ICRC). (2024b). Geneva
conventions: Humanity's bet against horror in armed conflict.
International Committee of the Red Cross.
www.icrc.org/en/document/geneva-conventions-humanitys-bet-
against-horror
Konaev, M. (2023). U.S. military investments in autonomy and AI: Costs,
benefits, and strategic effects. Center for Security and Emerging
Technology. cset.georgetown.edu/publication/u-s-military-
investments-in-autonomy-and-ai-costs-benefits-and-strategic-effects/
Kuner, C., & Marelli, M. (Eds.). (2020). Handbook on data protection in
humanitarian action (2nd ed.). International Committee of the Red
Cross.
Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E.,
Sesing, A., & Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? – A stakeholder perspective on XAI and
a conceptual model guiding interdisciplinary XAI research. Artificial
Intelligence, 296, 103473. https://doi.org/10.1016/j.artint.2021.103473
Leffer, L. (2024). Your personal information is probably being used to train
generative AI models. Scientific American.
www.scientificamerican.com/article/your-personal-information-is-
probably-being-used-to-train-generative-ai-models/
Longo, L., Brcic, M., Cabitza, F., Choi, J., Confalonieri, R., Del Ser, J.,
Guidotti, R., Hayashi, Y., Herrera, F., Holzinger, A., Jiang, R.,
Khosravi, H., Lecue, F., Malgieri, G., Páez, A., Samek, W., Schneider,
J., Speith, T., & Stumpf, S. (2024). Explainable artificial intelligence
(XAI) 2.0: A manifesto of open challenges and interdisciplinary
research directions. Information Fusion, 106, 102301.
https://doi.org/10.1016/j.inffus.2024.102301
Mantelero, A. (2019). Artificial intelligence and data protection:
Challenges and possible remedies. Council of Europe.
https://rm.coe.int/artificial-intelligence-and-dataprotection-challenges-
and-possible-re/168091f8a6
Marra, W. C., & McNeil, S. K. (2012). Understanding “the loop”:
Regulating the next generation of war machines. Harvard Journal of
Law & Public Policy, 36(3), 1139–1185.
Moseley, H. (2024). In the AI science boom, beware: Your results are only
as good as your data. Nature. www.nature.com/articles/d41586-024-
00306-2
Pictet, J. (1979). The fundamental principles of the International Red
Cross: Commentary. International Federation of Red Cross and Red
Crescent Societies.
Pratiba, D. (2018). Web scraping and data acquisition using Google
Scholar. 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS).
www.researchgate.net/publication/334700979_Web_Scraping_And_D
ata_Acquisition_Using_Google_Scholar
Russell, B. K., McGeown, J., & Beard, B. L. (2023). Developing AI
enabled sensors and decision support for military operators in the field.
Journal of Science and Medicine in Sport, 26(suppl 1), S40–S45.
https://doi.org/10.1016/j.jsams.2023.03.001
Ryan-Mosley, T. (2023). How AI can actually be helpful in disaster
response. MIT Technology Review.
Sacks, S. D. (2023). A framework for lethal autonomous weapons systems
deterrence. National Defense University Press.
Sahota, N. (2024). The AI revolution in lead generation: Navigating new
business frontiers. Forbes.
www.forbes.com/sites/neilsahota/2024/01/17/the-ai-revolution-in-lead-
generationnavigating-new-business-frontiers/
Sarkis, A. (2023). Training data for machine learning: Human supervision
from annotation to data science. O’Reilly Media, Inc.
Slim, H. (2019). The power of humanity: On being human now and in the
future. Humanitarian Law & Policy Blog. blogs.icrc.org/law-and-
policy/2019/07/30/power-of-humanity-being-human-now-future/
Slim, H. (2020). People power in humanitarian action. Humanitarian Law
& Policy Blog. blogs.icrc.org/law-and-policy/2020/02/20/people-
power-humanitarian-action/
Sommaruga, C. (1999). Humanity: Our priority now and always. Response
to ‘principles, politics, and humanitarian action’. Ethics and
International Affairs, 13, 23–28.
Stewart, R., & Hinds, G. (2023). Algorithms of war: The use of artificial
intelligence in decision making in armed conflict. Humanitarian Law
& Policy Blog. blogs.icrc.org/law-and-policy/2023/10/24/algorithms-
of-war-use-of-artificial-intelligence-decision-making-armed-conflict/
The Norwegian Data Protection Authority. (2018). Artificial intelligence
and privacy.
https://www.datatilsynet.no/globalassets/global/english/ai-and-
privacy.pdf
Trusilo, D., & Danks, D. (2023). Artificial intelligence and humanitarian
obligations. Ethics and Information Technology, 25(1).
https://doi.org/10.1007/s10676-023-09681-2
UNCTAD. (2024). Data protection and privacy legislation worldwide.
https://unctad.org/page/data-protection-and-privacy-legislation-
worldwide
UN Global Pulse. (2017). Using machine learning to analyse radio talk in
Uganda. UN Sustainable Development Group.
https://unsdg.un.org/resources/using-machine-learning-analyse-radio-
content-uganda
van Dijk, B. (2022). Revisiting the history of the Geneva conventions.
Humanitarian Law & Policy Blog. blogs.icrc.org/law-and-
policy/2022/02/17/history-geneva-conventions/
Von Eschenbach, W. (2021). Transparency and the black box problem: Why
we do not trust AI. Philosophy & Technology, 34(4), 1607–1622.
https://doi.org/10.1007/s13347-021-00477-0
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to
explanation of automated decision making does not exist in the general
data protection regulation. International Data Privacy Law, 7(2), 76–
99.
Wills, T. (2019, February 25). Sweden: Rogue algorithm stops welfare
payments for up to 70,000 unemployed. Algorithm Watch.
https://algorithmwatch.org/en/rogue-algorithm-in-sweden-stops-
welfare-payments/
Wood, N. G. (2023). Autonomous weapon systems: A clarification. Journal
of Military Ethics, 22(1), 18–32.
https://doi.org/10.1080/15027570.2023.2214402
14
THE NEW STRATEGIC TRIAD
Geopolitics, Diplomacy, and Artificial
Intelligence – The Uncertain Trumpet Redux
Alessandro Politi

DOI: 10.4324/9781003518495-18

Introduction
Whoever lived through the period 1945–1989, conventionally termed the
Cold War, remembers that both superpowers and middle powers tried over
the decades to guarantee their strategic nuclear deterrence through the
acquisition and maintenance of the “triad”: a combination of
Intercontinental Ballistic Missiles (ICBMs, both in fixed silos and mobile),
Sea Launched Ballistic Missiles (SLBMs, carried by ballistic missile
submarines – SSBNs) and strategic bombers (with different types of bombs
and air-to-surface missiles). This was considered indispensable to guarantee
“second strike capability,” that is, the capability to carry out a nuclear attack
on the vital centers of the enemy, even if the enemy had succeeded in
striking first against the nuclear forces and the cities of a superpower.
Middle powers like France and the United Kingdom had to quickly settle,
for financial and industrial reasons, on the mainstay of just a few SLBMs,
flanked by some nuclear strike aircraft, counting on the near invisibility of
submarines in the oceans. Only in 2012 did China succeed in setting up a
barely credible triad, vis-à-vis the more developed nuclear forces of Russia
and the United States. All recognized and unrecognized nuclear weapon
states, with the exception of the two nuclear superpowers counting some
1400 strategic nuclear warheads each, face the dilemma of how to keep
their deterrent credible, if probable advances in submarine detection, also
through artificial intelligence (AI), make their few submarines trackable
and thus destroyable. A well-coordinated attack, even by a similarly
equipped adversary and with conventional weapons, in the initial conflict
stages could deprive a country of its last deterrent card, exposing it to
barely restrained nuclear blackmail or forcing it to adopt a much more
dangerous and unstable launch-or-lose first nuclear strike policy. Missile
defenses, far from abolishing nuclear terror, have just complicated the
nuclear deterrence calculus, favoring more instability and hence less
deterrence.
It is easy to dismiss that period with the lazy quip “times are different” or
“everything has changed”: more than Sarajevo, D-Day or even Hiroshima
ever did, this long Third World War (cold in the North and very hot in the
South of the globe) strongly influenced the present, as nuclear weapons cast
their disquieting shadow over the ongoing Indian-Pakistani, Iranian-Israeli,
North-South Korean confrontations and the current Russia-Ukraine war. Far
from being outmoded, albeit strongly reduced in variety and numbers,
nuclear weapons are the ultima ratio regum (the last argument of kings),
despite visionary thoughts by Presidents Reagan and Obama and by Pope Francis to make them obsolete or dismantle them altogether.
The old ways now coexist with new developments that incubated
precisely during those decades with the explosion of computing power and
sophisticated software, first for military use and then at a dual-use level.
This chapter intends to discuss the existence of a new triad, the interaction
between its different elements and the dangerous intersections between the
old and the new triads: geopolitics, diplomacy, and AI.
Apparently, at least two elements of the triad are very old, both in
practice and conceptually, but their most recent evolution is quite different
from tradition.

The Triad
For the purposes of this contribution, we will try to offer some
(re-)definitions of the elements of the triad, starting with geopolitics. Since
the initial period between the 19th and 20th centuries with Mahan,
Mackinder, and Haushofer, geopolitics has tried to objectivize, more or less
successfully, the wishes of different political elites, using geography as a
pivot; something that Brzezinski did too in crafting his vision of a new
world order (Brzezinski, 1997). Critical geopolitics tried to move the point
of view from the national interest and raison d’état to themes closer to the
interests of decolonization, poverty reduction, the critique of boundless
economic development, environmentalism, feminism, etc. Eventually, this
strain of geopolitical thinking, which for a couple of decades appeared more
akin to wishful thinking than to Realpolitik, proved its value in the 21st
century, when facts about climate change, pandemics, pollution,
demographics, gross macroeconomic mismanagement, and connected social
upheavals, objectively called into question both the older Realpolitik and
the newer, no less “realist,” neo-liberal/turbocapitalist economic and social
ideologies.
Unfortunately, postmodernism, an understandable cultural phenomenon
(together with wokeism and similar movements), typical of times
characterized by the end of a whole era (in this case the post-WWII Bretton
Woods world order), has deeply muddled the necessary political clarity with
its pretense that material reality is less important than the symbolic one and
that truth is either unattainable and/or irrelevant. The substitution of truth
with narrative or discourse is very evident in political talk shows where
opinion is peddled as superior to knowledge and wisdom. It is ironic that in
a seemingly “liberal progressive” setting, policies are judged principally by essentially Soviet-style ideological criteria, where new ideologies (shyly defined as political correctness) trump qualitative evaluation and quantitative facts; the main difference is that pyramidal ideological elaboration and control were once arranged top-down by a visible party, whereas now “democratically” anyone can freely exercise censorship as a political commissar with vaguely formulated opinions on, nonetheless, pyramidally controlled social platforms.
The problem is not the existence of a symbolic reality, which is a
constant in millennia of political and cultural history, but the process and
the effect: substituting the symbol for the reality does not lead to
concentration, togetherness and unity but to dispersion, fragmentation, and
disunity.
A re-definition of geopolitics and geoeconomics, one can argue, must
start from acknowledging a few facts:

The global dimension is the minimal baseline in order to put national or subnational interests into action;
Physical and virtual borders are porous, especially as regards illegal or loosely regulated private power constellations and groupings;
Regimes and governments abound, but states, where the Rechtsstaat1 is enshrined and embodied, are very few, that is, countries where, explicitly or implicitly, the state is not captured by private, religious, or other particular interests. Furthermore, it is not evident that nation-states will remain the dominant polity at global level; and
National interests still coexist but are increasingly induced/forced to take into account the effect of their choices on the common good or global interest.

Based on these facts, one may propose the following definitions.


Geopolitics is the conscious and deliberate projection of a political strategy
on to a geographic space. Geoeconomics is the conscious and deliberate
projection of an economic strategy on to a geographic space.
It would take too long to argue in detail the advantages of these
definitions; nevertheless, in essence they are: simple; include the objective
elements of geography in their different forms; consider human and
political geography without being tied to borders; help in sifting relevant
information by avoiding local “tunnel vision”; inherently take full account
of the pervasive non-governmental dimensions of power; and include all
possible instruments available to a decision-maker. US hegemony is a
conscious and deliberate political strategy and projection on a geographic
space that does not need the trappings of any theories. Facebook is a clear
example of geoeconomics: the company wanted to be the dominant social platform and continues to be so.
One may object that often political and economic decisions are so
clouded by short-termism and myopia as to raise serious doubts about the
consciousness or even awareness of the decision-makers, but, while in a
nascent phase an ad hoc approach rules the trial and error process of
structuring and projecting power, after a watershed moment an elite
assumes a different mental and strategic approach similar to a collective
consciousness: for the United States it was 1917 (entering WWI) and
Wilson's Fourteen Points (1918); for Facebook it was 2012–2014 in making
its Initial Public Offering (IPO), reaching 1 billion users, acquiring
WhatsApp, and creating a dedicated link to the Tor anonymous network.
The global space is structured in four distinct and interconnected
geonetworks, in this order of importance, according to present conditions:
Pacific, Indian, African, Atlantic. These geonetworks are in most cases
centered around major oceans because the seas are the major conduits of
international trade and because, since the advent of global sea powers
(Spain, Great Britain, and United States of America), they are essential for
building up global dominance.2
A geonetwork is a strategic theatre in a condition of liquid balance (a
balance with shifting allegiances), generalized crises and networked
relationships; in other words, an area where there are no clearly defined
areas, consistent actors, shared rules, and credible axes of power. There is
instead a cluster of complex relationships that are difficult to control and
steer, marked by multi-level and simultaneous competition and
collaboration.
All these clusters are conditioned by seven tangible or intangible flows
that structure relationships between international public and private actors.
The flows are: Ecosystem (e.g., climate change, sea levels); Drinking
Water; Agri/Food Stocks; Demography/Real-Virtual Migrations; Fossil-
Renewable-Digital Energy (oil, gas, nuclear, hydro-solar, smart grids, etc.);
Financial and Invested Capital; Knowledge in its wider sense (culture, tech,
science, religion, etc.).
If we assume this new geopolitical foundation, it is obvious that
diplomacy is already facing a significant transformation well beyond its
traditional Westphalian, Cold War, and so-called glocalist settings.
Diplomacy is generally considered as the craft and science of a professional
representing the interests of a country at both bilateral and multilateral
level, while collecting relevant information and sentiments in the host
environment. This is generally achieved by negotiation and dialogue in a
non-violent bargaining process, although both positive and negative
incentives or outright threats can be part of the mix. Albeit part of the
traditional range of competences, representing the economic interests of
both the public sector and of big enterprises has increased in importance
with the growth of the European Union, multilateral economic
organizations, and globalization in general. In addition to this already
impressive list of tasks, the growing mobility of individuals as tourists,
expatriates, business persons, et al. requires a gamut of diplomatic and
consular services for them. Unsurprisingly, even big diplomatic machines
face competing priorities with smaller budgets. Finally, large international
organizations have their own representatives, especially the European
External Action Service, who, at least in principle, have to detach
themselves from their national identity in order to assume a sort of
collective and/or supranational representation of interests.3
Today, leaving aside irrelevant “Westphalian vs. Glocalist”
contrapositions, diplomacy faces several new and old-new challenges. The
first is the substantially regained autonomy and solitude (sometimes
loneliness) of the diplomat; one has the illusion that, with almost ubiquitous
and near real-time communication, an ambassador should always have the
support of clear instructions and lines of action; in fact, it is a pretense.
Dr Kissinger was said to have quipped in 1973 that the European Union had no telephone number (an apocryphal story); today one can retort that each time somebody calls the Department of State, a different voice answers with different and contradictory positions. Unfortunately, this condition is not limited to one country; it is the existential condition of
contemporary politics which is losing clarity, structure, grip, sense, and
purpose with the worsening crisis of the old Bretton Woods order, while
another is still emerging from obscurity and confusion. A quick look at
European history from the end of the Roman Empire (476) to the Congress
of Vienna (1814–1815) or at other great powers on other continents, can
steel our nerves to face the present confusion, but disoriented rulers are of
no use to the diplomat. In short, diplomats may have to interpret alone the
silence or cacophony from the capital, forging their own instructions and
mitigating the occasional top-down nonsense. They act as an instrument
using autonomy to address a persistent lack of pondered and coherent
policies and even to prevent the brandishing of conventional or nuclear
deterrence in the wrong context, place, and time.
The second is to have a global perception in order to correctly and
effectively put national interests on the political mental map. In other
words, global preconditions (also through the dynamics of shaping flows,
e.g., demographics) define national interests before the latter can try to
shape parts of the world in turn; worse, smaller countries’
governments/diplomacies, even if de facto orbiting around bigger countries,
are not exempted from crafting their own global assessment if they want to
survive the mistakes of their bigger partners. Moreover, a global vision
makes it possible to identify this apparently elusive common good or
interest that has very concrete effects on single countries: if different
countries had shared their global analyses (even if strongly diverging), this
would have facilitated a more effective common response to the past
pandemic, instead of often falling victim to a global infodemic.
A third challenge is the intertwining of laws, rules, governments, private
actors, and volatile borders. One must understand that a rules-based order is
not always based on agreed international laws, but on rules announced by
one or more interested parties that become a standard. This happens quite
often at business and commercial level and, more often than not, with high
levels of unaccountability behind a screen of legal compliance. Within the
European Union this silent fight for special and corporate interests is a daily
affair, but at least it happens within a structure that requires a certain degree
of transparency. This is not, of course, the case of shadow finance or, worse,
criminal cartels that try, quite successfully, to bypass border controls even
when these are apparently supported by fortified obstacles.
The bottom line for geopolitics and diplomacy? Geopolitics is here to stay; it is neither inevitably synonymous with the return of war nor with the restoration of the classic nation-state; it reflects, on the contrary, the fluid and uncertain nature of global crises, offering simultaneously a subjective and objective reality check to decision-makers. Confronted with the global
influences they experience as individual actors and vice versa with the
worldwide reverberation of their decisions, rulers and chiefs may be
induced to be more thoughtful about their plans and to measure their
ambitions on a concrete geographic space, crisscrossed by quantifiable
shaping flows. Diplomacy is even more important than before, precisely
because the rules of the past world order are challenged and eroded both by
the governments that wrote them and by the capitals which want to change
them into a multipolar structure. One possible, if not necessarily ideal,
scenario is that, while economic globalization might be preserved with
some substantial modifications, political globalization based on “market-
democracy” might be postponed for a long period by an agreement between
the antagonists in order to avoid a truly destructive world war.
In this very complex context, one is left to face even more uncertainty
with the new push for AI. For our purposes, we can adopt the EU's
definition (The Articles of the EU Artificial Intelligence Act, 2022):

1. “artificial intelligence system” (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.
(Council of the European Union, 2022)

The definition obviously contains tantalizing prospects for both the domains examined previously. Let us start with diplomacy and enumerate
some of the advantages of a “functioning” AI system. The first obvious
field of application is system integration or at least a more sensible and
harmonious composition of international customs, laws, and regulations.
The EU acquis, for instance, is believed to contain some 80,000 items and it
is easy to imagine what advantages a relatively autonomous system can
offer in streamlining, detecting incongruences, loopholes, and special
interest-clauses in this welter of codification. Even more impressive could
be the results when applied to the notoriously redundant Italian legislative
system: since 1861 (the creation of the Kingdom of Italy, almost completely
unifying the peninsula) 203,893 laws, ordinances, and decrees (including
98,896 royal decrees or provisional government orders, according to the
Italian governmental bills databank) have been issued.4 With a great deal of
pain and effort over 162 years, 93,979 useless bills have been eliminated,
and yet much more needs to be done in order to cut down a genuine
regulatory jungle. AI could serve also as a preventive check in Parliament
in order to avoid drafting and approving contradictory and pointless
legislation.
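
At its very simplest, such streamlining can be imagined as automated screening for redundant or potentially conflicting provisions. The short Python sketch below uses invented provisions and a crude textual similarity measure; it is only meant to suggest the kind of pairwise comparison that a far more sophisticated AI system would carry out across tens of thousands of items.

import difflib

# Toy corpus of (invented) provisions; a real acquis or national databank
# would hold tens of thousands of items.
provisions = {
    "Reg. A, art. 3": "Producers shall label packaging with the country of origin.",
    "Reg. B, art. 12": "Packaging must be labelled by producers with the country of origin.",
    "Decree C, art. 7": "Municipalities may levy a surcharge on tourist accommodation.",
}

def similarity(a, b):
    """Rough textual similarity between two provisions (0..1)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs of provisions that look redundant or potentially conflicting.
items = list(provisions.items())
for i in range(len(items)):
    for j in range(i + 1, len(items)):
        (ref1, text1), (ref2, text2) = items[i], items[j]
        score = similarity(text1, text2)
        if score > 0.5:
            print(f"Possible overlap ({score:.2f}): {ref1} <-> {ref2}")

A production system would obviously need legal ontologies and far richer semantics than string similarity, but the basic operation of confronting every provision with every other is the same.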
Another interesting application is the comparison of different legislative
systems, not just between Roman and common law, but with other systems
in different continents: religious rules and regulations that have an impact
on daily life and food production standards, for instance, are a complex
field. The fragmentation and multiplication of beliefs or cultural stances
could be better mastered and interconnected with these instruments,
furthering also a more structured religious dialogue.
Disarmament treaties are another very sensitive aspect of diplomacy, first
of all due to their highly technical and classified content, and then because
the system of control instruments and controlled items has to match
strategic intentions and political goals. In a favorable scenario, AI should
not only help identify weak spots and update technical characteristics, but
also assist in finding compromises and alternative solutions. A human-AI
team (like the centaur teams playing chess since 1998) could work to
anticipate the counterparts’ tactics or prepare multiple negotiating positions.
In concluding this optimistic and rosy picture for diplomats, one cannot avoid thinking about the level to which “lawfare” or filibustering could be elevated by the use of AI systems. Again, using the instructive Italian parliamentary
example, but also that of high-frequency trading (HFT), one can produce
millions of amendments against a specific bill. In 2015 Opposition Northern
League MP Roberto Calderoli, using a simple natural language generation
(NLG) algorithm, produced 85 million amendments for a governmental
decree-law, dwarfing previous records in the Italian and European
parliaments of manually produced amendments that numbered in the
thousands (4000 was one of those previous figures). HFT meanwhile
produced clouds of flash orders in order to influence markets, a practice
then partially restricted by the US Securities and Exchange Commission.
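
The mechanics of such algorithmic filibustering are trivially simple, which is precisely the point. The following Python sketch, with invented slots and wording variants, shows how a single amendment template with a handful of substitution lists already yields tens of millions of syntactically valid texts, roughly the order of magnitude of the 2015 episode.

from itertools import product, islice

# Illustrative only: invented slots and wording variants for a fictitious bill.
slots = {
    "article":  [f"Article {n}" for n in range(1, 51)],                 # 50 options
    "action":   ["is amended", "is repealed", "is suspended"],          # 3 options
    "object":   [f"paragraph {n}" for n in range(1, 21)],               # 20 options
    "deadline": [f"within {m} months" for m in range(1, 37)],           # 36 options
    "clause":   [f"variant wording no. {k}" for k in range(1, 801)],    # 800 options
}

total = 1
for options in slots.values():
    total *= len(options)
print(f"Possible amendments from this template alone: {total:,}")       # 86,400,000

def amendments():
    for art, act, obj, dl, cl in product(*slots.values()):
        yield f"{art}, {obj}, {act} {dl} ({cl})."

# Print just the first three of the tens of millions of generated texts.
for text in islice(amendments(), 3):
    print(text)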
Before delving into the risks and opportunities of AI in geopolitics, it is
time to take a closer look at this sort of “intelligence.” Let us look again at
the key elements of the EU definition: “elements of autonomy,” “machine
and/or human-provided data and inputs,” “achieve a given set of
objectives,” and “infers … using machine learning and/or logic- and
knowledge-based approaches.”
Autonomy from what and/or whom and which kind of autonomy? This is
a central issue because, if this intelligence is subject to the human creator in
terms of objectives, supply of code lines, algorithms, data and inputs, its
operational autonomy tends to remove humans from the loop. Fabio
Vanorio, one of the most prominent Italian thinkers on AI, the metaverse
and cybersecurity, teaches that AI's autonomy allows the creation of
multiple virtual agents capable of operating as a swarm on a scale and
complexity that overwhelms human capabilities (Vanorio, 2024). What we
have seen in terms of swarming, which was predicted by RAND Corp. in
2000 and has been carried out by drones in Ukraine, is still a very crude and
simple idea of what swarming can be under AI's command. This leads to a
logical conclusion: the more the field of application is virtual or electronic,
the more imperative it is to remove the human from the intelligence loop. It
is an ominous statement, because if one removes, for understandable
reasons, humans from the intelligence cycle (quantity of data, lack of time,
need to compress times, complexity of the items to be analyzed, etc.), one
ends up with an intelligence result that resembles more a dogma than a
verifiable intelligence product. COMINT, FININT, GEOINT, MASINT,
PHOTINT, SIGINT5 appear obvious candidates for this drastic
simplification, even more so if data are collected by interconnected
ubiquitous sensors (Internet of Things [IoT]).
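
A toy illustration may help to visualize the point about swarming and the removal of the human from the loop. In the Python sketch below the scenario, the “sensor feeds”, and the anomaly threshold are all invented; what matters is that a few hundred software agents coordinate through a shared blackboard and triage ten thousand items without any human review step anywhere in the cycle.

import random

# Toy sketch of "swarming" virtual agents (hypothetical scenario): a shared
# blackboard of tasks that agents claim and process with no human in the loop.
random.seed(0)
tasks = {f"sensor-feed-{i}": random.random() for i in range(10_000)}  # fake signal scores
blackboard = {"unclaimed": set(tasks), "alerts": []}

class Agent:
    def __init__(self, name, threshold=0.999):
        self.name, self.threshold = name, threshold

    def step(self):
        if blackboard["unclaimed"]:
            feed = blackboard["unclaimed"].pop()      # self-assign the next task
            if tasks[feed] > self.threshold:          # treat a high score as an "anomaly"
                blackboard["alerts"].append((self.name, feed))

swarm = [Agent(f"agent-{k}") for k in range(200)]     # 200 agents; trivially scaled up
passes = 0
while blackboard["unclaimed"]:
    for agent in swarm:
        agent.step()
    passes += 1

print(f"{len(tasks):,} feeds triaged by {len(swarm)} agents in {passes} passes; "
      f"{len(blackboard['alerts'])} alerts raised without human review.")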
These questions are not new because HFT technically started in 1983,
gaining strong financial and technical impetus in 2005 and becoming
notorious in 2010 with its first “flash crash,” followed by another in 2014.
While the causes are still debated, it is important to understand, on the one
hand, that the highly automated chain reaction can contribute to a flash
crash, which is a painful event, but, on the other hand, that financial actors
have consolidated and improved operations, so that news of similar events
has not been seen in a decade. At present, HFT algorithms and strategies are
99% accurate because they must achieve speed, determinism and precision,
although no model can foresee all market movements. Likewise, quantitative strategies cannot fully model psychological and sentiment-driven market shifts, which entail short-term performance degradation. This can be
minimized by fragmenting orders and having efficient automated stop-loss
systems. HFT is migrating increasingly to AI, especially in its relatively
new field of news trading, a branch of HFT exploiting the short-term effect
of rumors and news in order to profit from short-term volatility (giving new
life to the traders’ adage “buy the rumour, sell the news”).
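
The risk controls mentioned above can be illustrated with a deliberately simplified sketch. The Python fragment below is not a trading system: the symbol, prices, slice size, and stop-loss level are invented, and it only shows how a parent order can be fragmented into small child orders while an automated stop-loss monitors the average entry price.

# Toy sketch (not a trading system): fragmenting a parent order into small
# child orders and attaching an automated stop-loss, as mentioned above.
parent_order = {"side": "buy", "quantity": 10_000, "symbol": "XYZ"}  # hypothetical
slice_size = 500
stop_loss_pct = 0.02          # liquidate if price falls 2% below average entry

# Simulated price path, for the illustration only.
prices = [100.0, 100.1, 99.9, 100.2, 99.4, 98.7, 96.5]

filled, cost = 0, 0.0
for price in prices:
    if filled < parent_order["quantity"]:
        qty = min(slice_size, parent_order["quantity"] - filled)
        filled += qty
        cost += qty * price                    # child order assumed filled at market
    avg_entry = cost / filled
    if price < avg_entry * (1 - stop_loss_pct):
        print(f"Stop-loss triggered at {price}: selling {filled} {parent_order['symbol']}")
        break
else:
    print(f"Accumulated {filled} shares at average {avg_entry:.2f}, no stop triggered")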
The subject of machine and/or human-provided data and inputs begs the
immediate question of the quality of data and inputs. Here, too, there are
different levels of heteronomy and autonomy vis-à-vis the system: human
data and input have a built-in element of control, although quality control of
data, especially in political sciences and strategic affairs, shows
considerable variations in quality, confidence, and truth. When the system
harvests data and data cleaning systems are employed, it is AI's autonomy
that is called into question, especially in terms of inputs, because these
systems are (still?) unable to explain how they come to certain outputs and
conclusions. Data exploration and analysis is the purview of an
(intelligence) analyst, and it is a very important step in AI because it calls
for the study of the relationship between different variables; pattern
discovery is a function that machines may perform quite efficiently, but the
assessment and weighting of variables is more of a human task. Here the difficult relationship between free will and arbitrariness in making choices already comes into play: free will belongs to humans (despite
serious philosophical and practical doubts when looking at conscious and
subconscious influences), but arbitrariness is the realm of both human- and
machine-driven choices since there are situations where a choice must be
made without having the desirable supporting elements. The well-known
axiom “garbage in, garbage out” is fully valid also in this new arena. One
interesting example from the past in this respect comes from the RYAN
Complex (Raketno-Yadernoe Napadenie Compleks, Nuclear Missile Attack
Operation), a Soviet joint-intelligence services operation led by the KGB
(Komitet Gosudarstvennoy Bezopasnosti, Committee for State Security),
launched in 1981, in collaboration with the Soviet GRU (Glavnoye
razvedyvatel’noye upravleniye, Main Intelligence Directorate) and the East
German StaSi (Ministerium für Staatssicherheit, Ministry for State
Security) (Wilson Center-Digital Archive, 2022). The unprecedented and
quite massive intelligence operation was kept functioning until April 1989,
seven months before the fall of the Berlin Wall (November 1989).
The objective of the operation was to detect early signs of a possible US
and/or NATO first nuclear strike against the USSR (Union of Soviet
Socialist Republics) and it was due, on the one hand, to the extreme
prudence of then KGB Chairman General Yuri Andropov (later USSR
General Secretary), who wanted to avoid catastrophic enemy surprises, and,
on the other hand, to the aggressive anti-Soviet political rhetoric of US
President Ronald Reagan (the USSR as the “evil empire,” etc.), backed by actions
such as the deployment of Pershing II ballistic missiles in Germany and the
announcement of new ABM (Anti-Ballistic Missiles) defenses, dubbed
“Star Wars.” Pershing II missiles were deployed during the “Euromissiles
crisis” in response to the new SS-20 Saber (RSD-10 Pioneer, Soviet
denomination) intermediate-range missile, but they allowed the Soviet leadership just ten minutes to react in the event of a launch.
Although all the intelligence services involved (including all Warsaw
Pact satellites) were given a precise list of objectives and warning signs to
watch and report, a complex mixture of genuine paranoia, objective
observations, professional perceptions, ideological self-intoxication, party
or careerist zeal, and confirmation bias, gradually created a sense of
palpable and very strong tension around the NATO Able Archer 1983
exercise. Later declassified documents and interviews significantly nuanced the idea that the world risked a nuclear war through the build-up of a crisis so secret that it was not even suspected by NATO countries’ intelligence services. The fact remains that around a routine NATO
exercise, with the simulation of a nuclear response to conventional Soviet
aggression, the Soviet leadership became particularly worried and, only
thanks to a Soviet double agent, did the US leadership get wind of the
problem, discounting it at the time and also later. This tension grew despite
a considerable number of Soviet and Warsaw Pact intelligence officers
regarding the exercise as a normal one. Reading 40 years later the
unclassified KGB RYAN Complex summaries about the indicators of a
possible nuclear attack,6 one can easily understand their content and impact
on the perception of leaders worried by a looming world war: most of them
contain verifiable facts and are very concrete. To sum up, the “garbage” is
much more granular and impalpable compared to methodical data sifting
and cleaning by algorithms, the logic of which is not fully traceable.
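
The contrast between opaque automated cleaning and traceable human review can be sketched in a few lines of Python. The records, the quality rule, and the thresholds below are invented for illustration; the point is that the same exclusion logic can either discard records silently or log every exclusion so that an analyst can later reconstruct, and contest, what the pipeline threw away.

import statistics

# Toy sketch: an automated "cleaning" step that discards records it scores as
# low quality, either silently or with a log for human review. Records and
# quality rules are invented for illustration.
records = [
    {"source": "field-report", "casualty_estimate": 12},
    {"source": "field-report", "casualty_estimate": None},      # missing value
    {"source": "social-media", "casualty_estimate": 950},       # extreme outlier
    {"source": "field-report", "casualty_estimate": 15},
    {"source": "field-report", "casualty_estimate": 9},
]

def clean(records, log_exclusions=True):
    values = [r["casualty_estimate"] for r in records if r["casualty_estimate"] is not None]
    median = statistics.median(values)
    kept, excluded = [], []
    for r in records:
        v = r["casualty_estimate"]
        if v is None or v > 10 * median:        # arbitrary cleaning rule
            excluded.append(r)
        else:
            kept.append(r)
    if log_exclusions:
        for r in excluded:                      # the traceability the text asks for
            print("Excluded for review:", r)
    return kept

kept = clean(records)
print(f"{len(kept)} of {len(records)} records survive automated cleaning.")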
The determination of objectives (as per the above definition, “achieve a
given set of objectives”) is an eminently human function. Objectives are
stated from the start of an AI project and conditioned by the nature and means of the adversarial party, whether they are situated in the physical or mental realm or supported by a given AI system. The discussion of the
RYAN Complex has shed some light on the subject, since we have already
seen how the overall objective conditioned the execution of the operation.
Moreover, an intelligence requirement (the aforementioned list of objectives
and warning signs to watch, collect, and report) is not only influenced by
the general objectives, but can consciously or unconsciously change over
time. Objectives function literally as photographic objectives (camera lenses): they can considerably restrict the field of view up to the point that
one may miss key details or the whole context.
The first Israeli experiments in automated targeting during the Gaza war
have shown a disquieting phenomenon that helps understand the risks of AI
on the battlefield. Under the code name of “Lavender” and “Where is
Daddy?” these two systems were designed to earmark all possible
operatives in the Palestinian paramilitary and military organizations in Gaza
as potential targets, also using family interaction patterns to track down
fighters from their contacts with their children. A potential target can be
eliminated either by a sharpshooter, an assassination team, an explosive
charge, or an air strike: it is evident that during a war more massive and
undiscriminated methods may be preferred and/or inevitable. In fact,
following the results of Lavender and Where is Daddy? and the policies proven in battle at both the political and operational level, the tactical level did not, and
could not, check the results in detail, knowing that at least 10% would be
errors (3700 people out of a pool of 37,000 targets).7 Again, as seen in the
paragraphs before, it is not a matter of extolling, absolving, or condemning
AI, it is the essential problem of the man-machine interaction and of how
useful/effective it is to remove de facto human oversight from the decision
loop.
This provides a significant, albeit partial and initial, glimpse at the last
point of the EU definition “infers … using machine learning and/or logic-
and knowledge-based approaches.” Machine learning is governed by hyperparameters, the settings chosen beforehand that determine how the system's parameters are learned; like choosing the composition of a statistical sample, this is a crucial initial decision that also conditions the biases of the system.
Logic and knowledge are two very different approaches because the former
follows a formal way of reasoning that starts from premises and arrives at
conclusions through steps that can be either false or true. Knowledge by
experience is associated with craft and involves a fairly important
component of non-logical elements that normally are not codified and are
often transmitted by example and orally. Logic is behind the mathematics
that design an aircraft wing, knowledge is what helps manufacture a
composite aircraft wing; logic is what remains of the design of an aircraft in
a blueprint, knowledge is what is lost when a design and manufacturing
team is disbanded.
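
A toy example can show how a single such choice conditions a system's biases. In the Python sketch below the scores and labels are invented; the only point is that moving one decision threshold, fixed before any output is produced, trades false alarms against missed cases, a trade-off that is ultimately political rather than technical.

# Toy sketch: a single "hyperparameter" (here a decision threshold, chosen
# before any output is produced) shifts the system's errors in one direction
# or the other. Scores and labels are invented for illustration.
cases = [  # (model score, ground truth: True = genuine threat)
    (0.95, True), (0.80, True), (0.62, True), (0.55, False),
    (0.40, False), (0.35, True), (0.20, False), (0.05, False),
]

def confusion(threshold):
    fp = sum(1 for s, t in cases if s >= threshold and not t)   # false alarms
    fn = sum(1 for s, t in cases if s < threshold and t)        # missed threats
    return fp, fn

for threshold in (0.3, 0.6):
    fp, fn = confusion(threshold)
    print(f"threshold={threshold}: {fp} false alarms, {fn} missed cases")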
To sum up our review of AI according to the EU definition,8 the main
sticking points are:

Autonomy from human control;
The quality of machine and/or human-provided data and inputs, not only after automated data cleaning that removes low quality or erroneous data, but keeping in mind subtle distortions in collecting data and analyzing them that can have very significant effects on decision-making;
The non-traceability of the logic embodied by the different algorithms and the high degree of arbitrariness in the choice of hyperparameters;
The inherent problems in the choice of objectives that an AI system has to target; and
The difficulty of applying logic to situations that do not develop within closed systems and the complexity of transmitting knowledge to humans before doing the same with machines.

Indeed, for the coming decades the main question will be how much we
allow a system that is human-designed but increasingly self-driven to
dictate choices on human freedom in a techno-ideological way: it has
already happened with the economy in the last 40 years with very debatable
results and precisely by experimenting with the earliest versions of these
tools. The cinematic metaphor of the Terminator appears at this point even
reassuring: in the end, it is the fight by humans against a system logically
gone amok, with (how comforting) a clearly good and a clearly evil side.
The real problem is that we risk transitioning from Plato's cavern to a
screen-walled cavern and to a metaverse where speed dictates what reality
and which truth is considered credible. Concrete experience after a quarter of a century with Google queries shows that the search engine narrows the search down to the wishes of the searcher and in itself cannot be taken as a path to truth; worse, in some cases it appears to be incapable of, or unwilling to, find and return what is sought when it does not conform to mainstream behavior, especially when searching for images deemed pertinent by the
searcher. Wikipedia is another cultural construct that is capable of
summarizing stories, but not of giving insight to history, while often
omitting significant content and context, as every historian or historically
knowledgeable person has often remarked. The first example is algorithm-
driven, the second is speed-driven by hardly traceable collective elaboration
that has no verification comparable to that of an encyclopedia.
The geopolitics of AI starts from very physical settings: where the best
design teams are; where the most powerful computer clusters are; where the
data farms are; who harvests data and where; which links assure the
necessary bandwidth for efficient communications; how much energy these
hubs and links require; where the necessary energy comes from. The
following short description of the AI geopolitical scenario concentrates on
countries but neglects the not-so-emerging power of private corporations
which increasingly appear to be compliant with, but not controlled by, governments. The United States and China are the world's top AI powers,
followed by a host of “swing countries” such as Canada, France, India,
Israel, Japan, the Netherlands, Singapore, Saudi Arabia, South Korea, and
Taiwan (some of them appear to act as experimental sandboxes, like Saudi
Arabia and Singapore, respectively for China and the United States). The 11 data center hubs that consume the most energy in the world are, in decreasing order: Northern Virginia, Beijing, London, Singapore, Tokyo, Frankfurt, Shanghai, Sydney, Dallas, Silicon Valley, and Phoenix, followed closely by Chicago and Amsterdam.9
This geography should suggest once again a neo-bipolar division
between the West and the rest (or democracies vs. autocracies or Global
North and Global South). In reality, there is no longer a unified West, but
several fluid configurations; democracies are in deep crisis, with
authoritarian tendencies favored by disenfranchised voters and obsolete
voting systems; big data and information conglomerates are looking more at
their own geoeconomic and transversal interests than at any vaguely
defined common good. Even within the common positions against clear
challenges, national or corporate interests are more important than any
declared solidarity and this is even more possible in the fluid and
immaterial world of electronic data.
This is a situation much more fraught with danger than the atomic
competition after 1945, because then only very few states had the know-
how, doctrine and leadership to handle nuclear weapons and the Non-
Proliferation Treaty was opened for signature just 23 years after the
detonation of the two atomic bombs on Japan. AI knowledge is much more
diffused and probably easier to smuggle than already quite penetrable
atomic secrets, which makes one think that an early treaty on military
applications is of general and shared interest among major computing and
data storage governmental powers. Not surprisingly the “Joint Statement of
the Russian Federation and the People's Republic of China on the
International Relations Entering a New Era and the Global Sustainable
Development” (4 February 2022), calls for negotiations on such an
important matter. The current tensions around the South China Sea and
Taiwan or, even more, regarding the unresolved war in Ukraine would
induce us at first sight to postpone diplomacy until the crisis is over
and the dust of war has settled, either to prepare a more conducive terrain
for an agreement or to strengthen one's own hand before bilateral or
multilateral talks: to be clear, it would be a mistake of unfathomable
consequences.
We have started our discussion of the “geopolitics, diplomacy, AI” triad
by recalling the old nuclear triad, and we will come full circle on geopolitics,
not listing new and exciting possibilities for AI's geopolitical application,
but setting the priorities for globally shared security, using the
reinterpretation of NATO's three core tasks (deterrence and defense, crisis
prevention and management, and cooperative security) through possible AI
applications.
Moreover, we have seen that contemporary geopolitics (and
geoeconomics) are linked to a few factual tenets:

A global dimension by default, also due to the porosity or evaporation of borders;
Polities in probable mutation from classic nation-states with impartial, functioning and enforced legal systems to new governments or governances captured by one-sided and non-/unconstitutional interests; and
National, local, private, or specific interests coexisting within the “butterfly-effect” conundrum of global reverberations from individual choices on the common good or global interests.

In this dangerously unstable and fluid transition to a very murky (and probably increasingly illiberal) future, geopolitics is the conscious and deliberate projection of an essential political strategy on the global space: the sustainable continuation of homo sapiens.
AI urgently needs a treaty regulating and controlling its applications to nuclear deterrence and NBC proliferation in a shared way, because the
priority is to stabilize a form of deterrence that is still in the hands of only a
few actors (five recognized under a treaty and still three or four
unrecognized) and that, as explained before, is becoming inherently
unstable through hypersonic warheads and submarine detection.10 Before AI can be fully and confidently developed to the point of simulating a virtual checkmate against an opponent, exposing its political elite to stringent
political and strategic blackmail, appropriate negotiations should instead
strengthen the role of AI in early warning, detection, mutual verification,
confidence, and reassurance. In other words, the primary function of AI's
strategic and overall nuclear deterrence must not be to shorten times in
order to successfully enter the adversary's decision cycle, but to buy
precious time to prevent a crisis and possibly to make a surprise attack very
unlikely and unrewarding at the political and strategic level. AI should
strengthen the whole non-proliferation system, last but not least, by closing
some very dangerous loopholes in the biological weapons control system
revealed by the Covid-19 pandemic. Ideally and looking forward, agreed
and shared deterrence, verification and non-proliferation control systems,
going well beyond the NTMs (National Technical Means, i.e., national
intelligence systems), could eventually lead to considering nuclear weapons
and deterrence itself as a very marginally useful investment, leading to its
gradual demise.
The second aspect to be agreed at multilateral level concerns autonomous
weapons and the connected aspects of unhuman weapons and warfare
practices. While at least some initial reflection has been made on
autonomous weapons, the idea of seriously limiting 24/7 warfare and the
impact on the civil population still needs cultural and political time to go
that can be helped by AI instruments. Calculating the true sunk human and
economic costs of wars could assist in preventing crises escalating toward
aggression, for instance. No one assumes that political actors may be fully
rational, but at least a better-informed gamble may offer other solutions
than a nationalist, militaristic rush toward a conflict. In short, the
preposterous, stereotypical and racist opposition between the sons of Mars and the sons of Venus should be overcome by the realization that the sons
of Minerva are wiser, smarter, and better protected also by crisis prevention
and management algorithms. One should imagine, as an example, a
wargame replay of the 20-year pacification and counterguerrilla campaign
in Afghanistan with AI-assisted integrated political, economic assistance,
negotiation/propaganda, and minimum-force operational practices. How
would they have been carried out? What old mistakes would be confirmed
and hidden ones exposed? What alternative scenarios could have been
explored? Again, AI is fraught with uncertainties, imperfections, errors, and
possible manipulations, so much that negotiating the limits and reciprocal
control of cyberwarfare could be a third long-term objective.
In all this (a top-down approach), one should not ignore the much
creepier, insidious, and de-humanizing bottom-up dimension. Some
experiments in authoritarian and democratic countries show the enormous
dangers to individual and collective freedoms posed by the reckless and
inherently tyrannical uses of AI. There is a lot of talk about disinformation,
fake news, and hybrid operations, omitting two rather crucial facts: first,
words like hybrid may be new and the means are new, but the substance is
very old as the following example will show; second, disinformation is not
just done by “them,” but very well by “us.” Probably the newest useful
concept is that of the infodemic (first application in 2003),11 also taking into
account somewhat analogous disinformation during the 17th century in
conjunction with the bubonic plague or the infamous witch- or werewolf-
hunts.
The first important contemporary hybrid operation, overlooked by
mainstream historiography on the subject, was the Expedition of the
Thousand (1860) during the Italian Risorgimento (the Resurrection of Italy,
that is, the wars of Italian independence from 1848 to 1870, with a last
national unification step completed in 1918). We here only highlight the
hybrid aspects of the campaign. The first element: an Italian republican
revolt in Sicily, then part of the Kingdom of the Two Sicilies (April 1860);
the local disturbance as an exploitable starting point. The second element: the creation in May of a group of a thousand volunteers by the charismatic political leader and revolutionary Giuseppe Garibaldi, without any apparent link with the Kingdom of Piedmont. In fact, the Turin-based government had infiltrated the movement and blocked a donation of rifles to the volunteers, leaving them with old smooth-bore flintlocks and maintaining plausible deniability until the fall of the Bourbon monarchy. At the same time,
the government dispatched two ships on a nominally humanitarian mission along the Sicilian coasts that in reality carried out reconnaissance. When the volunteers
departed, they made a stop near the isolated fortress of Orbetello to collect
muskets and ammunition, with the quiet support of the local garrison. The
third element: the Garibaldi-led irregulars, through their successes in very
conventional battles, generate a wide insurrectional movement, while the
legitimate government is politically and diplomatically isolated also by
propaganda underlining the backward, repressive, and substantially
illegitimate nature of the Bourbon regime. Through the press, the word
Bourbonic becomes a derogatory meme. The fourth element: a massive
governmental disinformation campaign on the risk that General Garibaldi,
as the dictator of the new de facto government replacing the collapsed
kingdom, might create for himself a revolutionary republic. The Kingdom
of Piedmont unleashes a massive attack, occupying half of the Papal States,
to reach the “unpredictable” irregulars in time. Garibaldi, who from the start
knew he was acting in the name of the king, peacefully surrendered the
conquered country to King Vittorio Emanuele II. The fifth and crowning
element: regional plebiscites (today we would say referenda) widely won
by the “patriotic forces,” fully legitimize the annexation, with no external
power contesting their legitimacy. The hybrid operation nets Turin lands 20
times larger than Crimea and very sizeable economic, financial, and
strategic resources.
As for democratic precedents regarding disinformation campaigns, the
concerted effort by the Bush administration to convince national and global
public opinion that Saddam Hussein had weapons of mass destruction was
run with persistence and skill throughout 2002, until the beginning of the
war in 2003, despite the State Department Bureau of Intelligence and
Research (INR) repeatedly disproving false information and cavalier
analysis.
The next step in this field will be the conception, design, organization,
and execution of cognitive warfare, making it possible, according to Sun-
Tzu's doctrinal vision, to cloud the ruler's and citizens’ minds to the point of
subjugating a country with the minimum use of force, as again anticipated
by the teachings of Dr Vanorio. Cognitive warfare will increase both
internal and international instability because predictably only a very few big
companies will own and offer adequate mass electronic surveillance
services to beleaguered governments: a scenario reminiscent of the late Middle
Ages or the Renaissance, when free companies offered mercenary forces to existing polities.
The substantially unchecked massive collection of personal data,
powered by AI instruments and embedded in increasingly expanding
metaverses tending to omniverses, can and will, in the absence of rigorous
democratic transparency and control, allow, on the one hand, large
institutions, corporations, and governments, to “regulate” social, political,
and economic inequalities in already partially democratic societies, and, on
the other hand, to reduce the level of democracy, where it exists, by
concentrating power in restricted governance groups.

Conclusion
When I was born, retired general Maxwell Taylor published the seminal
book “The Uncertain Trumpet” (1960). An Army general during the Korean
War (1950–1953) and a legendary Chief of Staff of the US Army, he tried
to think about how to successfully defend America and friendly countries
without resorting to massive nuclear retaliation, then the dominant doctrine.
Insisting that one should not confidently blow the trumpet in uncertain
situations and not put all eggs in one basket, he advocated for military
flexibility that allowed leadership not to be cornered or self-cornered into
quasi-suicidal dilemmas. In times of particularly short memories, AI
appears as a silver bullet, but it is not; like all revolutionary and
sometimes decisive statecraft instruments, it will experience a painful and
complex learning phase in times of deep upheaval. Humility, wisdom,
prudence, calm, and steadfastness are the real pillars of successful
geopolitics-diplomacy-AI triads, designed to foster humanity and not to
push it over the cliff of another world war.

Notes
1. Rechtsstaat is quite imperfectly translated as “rule of law,” because it
implies a state of justice and law, capable of acting as an impartial,
balanced party, equally distant from the disputants and free of prejudice
in the application of laws.
2. Africa is the only purely continental geonetwork, reflecting its lack of
major sea projection; thus, it is defined by its continental mass,
exposed to two oceans but still largely establishing its internal
dynamics. The other geonetworks are defined by their rivered
countries and adjacent seas, like the Arctic, North, Baltic,
Mediterranean, Black, Caribbean seas for the Atlantic geonetwork, that
includes both North and South Atlantic. Evidently geonetworks have
overlapping areas that must be taken into account for strategic analysis
reasons, for example, Australia is present in both the Indian and
Pacific geonetworks. In some cases, geography is strongly influenced
by political/economic evolutions. Latin America for instance, can be
fully included in the Pacific geonetwork, even with its long Atlantic
façade, due to the current strong Chinese influence. Central Asia,
during Operation Enduring Freedom (2001–2014), could be loosely
associated with the Indian Ocean geonetwork due to the pull factor of
Afghanistan's war toward strategic connections with Iran, Pakistan,
and India. Today, after NATO's withdrawal (2021), it appears to shift
toward China and away from Russia as the Belt and Road Initiative
begins to generate concrete trade flows, following a Latin American-
like dynamic.
3. One can ruefully and cynically point out that, in this case too, national
lines of interest, if not glaring personal interests (as recently seen in
transnational and international organizations at the top level) prevail
over a sensible and balanced representation of collective interests, but
such officials may be a minority compared to more honest or mainstream
ones.
4. Regional, communal, and internal ministerial dispositions, extra-
ministerial decrees, and implementing decrees have to be added to
these numbers.
5. COMmunication INTelligence, FINancial INT, GEOspatial INT
(including satellite and reconnaissance space and aerial pictures),
MeAsurement and Signature INT, PHOTographic INT, SIGnal INT.
6. Still, those regarding 1983 seem to be unpublished.
7. The official response of the Israeli Defence Forces portrays the formal
and normal functioning of a targeted killing operation, like the ones
undertaken in the past in situations that are not a full-scale military
operation, which does not correspond to the concrete results observed
on buildings and the population in Gaza.
8. For the sake of the reader, the definition is: “(1) ‘artificial intelligence
system’ (AI system) means a system that is designed to operate with
elements of autonomy and that, based on machine and/or human-
provided data and inputs, infers how to achieve a given set of
objectives using machine learning and/or logic- and knowledge-based
approaches, and produces system-generated outputs such as content
(generative AI systems), predictions, recommendations or decisions,
influencing the environments with which the AI system interacts.”
9. In the first nine positions energy consumption and importance as data
market centers coincide. In parentheses when positions and cities
diverge.
10. To give a very rough idea of the challenge under very generic
scenarios, ICBM launch and warhead detection and tracking may give
some 30 minutes of warning (an SLBM could be 15 minutes), while a
hypersonic warhead could give just two to six minutes to mount a
reaction.
11. The correct and original definition of infodemic is the following.
“What exactly do I mean by the ‘infodemic’? A few facts, mixed with
fear, speculation and rumor, amplified and relayed swiftly worldwide
by modern information technologies, have affected national and
international economies, politics and even security in ways that are
utterly disproportionate with the root realities. It is a phenomenon we
have seen with greater frequency in recent years—not only in our
reaction to SARS, for example, but also in our response to terrorism
and even to relatively minor occurrences such as shark sightings.”
David Rothkopf, The Washington Post, 11 May 2003.

References
Brzezinski, Z. (1997). The grand chessboard. Basic Books, Harper Collins
Publishers, Inc.
Council of the European Union. (2022, November 25). Proposal for a
Regulation of the European Parliament and of the Council laying down
harmonised rules on artificial intelligence (Artificial Intelligence Act)
and amending certain Union legislative acts - General approach.
Peasley, J. R. (2024, January 10). Ranked: Top 50 data center markets by
power consumption. Retrieved August 30, 2024, from
https://www.visualcapitalist.com/cp/top-data-center-markets/
Vanorio, F. (2024, April 22). La geopolitica dell’infosfera, rush transcript of
lesson held at Univ. Calabria. Retrieved August 30, 2024, from
https://news.socint.org/intelligence-fabio-vanorio-al-master-
delluniversita-della-calabria-intelligenza-artificiale-e-attivita-di-
intelligence-per-la-sicurezza-nazionale-il-ruolo-dellessere-uman/
Wilson Center-Digital Archive (2022, October 7). Project RYaN. Retrieved
August 30, 2024, from
https://digitalarchive.wilsoncenter.org/topics/project-ryan
PART IV
Security of Data in Cyberspace
15
PRESERVING
CONFIDENTIALITY AND
PRIVACY IN DATA-DRIVEN
APPLICATIONS
Luigi Ruggerone and Valeria Ricci

DOI: 10.4324/9781003518495-20

Introduction
In recent years, there has been significant progress in the field of artificial
intelligence (AI) systems. This advance has been made possible by the
widespread accessibility of vast computational resources and huge datasets
that are natively digital. This conjunction of factors has enabled the
development of systems capable of solving intellectual challenges with
performance comparable to that of humans, if not superior in some
circumscribed areas.
This development has catalyzed the wide adoption of AI-based
approaches in various industries, successfully integrating them into business
processes to support professionals.
Despite fears of automation in some jobs, history would suggest that such
concerns may be partially unfounded. Indeed, over time, at least until the
end of the last millennium, the job market has always managed to adapt to
changes in demand due to technological discontinuities.
According to Acemoglu and Restrepo (2019), although it is plausible that
technological change will generate a revolution in employment in its
traditional modes of operation and production by making certain jobs
obsolete, it is also true that the demand for new goods and services will lead
to the creation of new professional figures and allow for the evolution and
development of entire professional categories. The crucial issue here may
be the time lag between the speed of diffusion of new technologies and the
slower rate at which the workforce acquires the new skills needed to
complete the new jobs and tasks created by technological innovations.
However, it should be underlined that the massive use of AI techniques
has also generated growing concerns about the protection of the privacy of
the data being used. Public opinion is calling for more and more attention to
be paid to this issue, giving rise to intense debate at an ethical, technical,
and social level.
To complicate the scenario in real-world data-driven applications, we are
increasingly dealing with data from different sources and entities that may
be subject to privacy or confidentiality constraints.
Most AI techniques require, for the training phase, that all data be
centralized in a single storage and be accessible in unencrypted form for
processing. These data are subject to a data preparation phase, during which
they are analyzed and, where necessary, modified (e.g., to make them
homogeneous) or enriched (e.g., by increasing their numerosity with
synthetic data), also using traditional statistical techniques.
Although in some cases it is possible to work with aggregated or
anonymized data, in many project contexts it is necessary to work with
plain-text data.
The General Data Protection Regulation (GDPR) and the EU's law on the
use of AI (the AI Act) play a crucial role in regulating the use of AI in the
EU. Together, these regulations aim to strike a balance between exploiting
the potential of AI and safeguarding individual rights and societal values,
promoting responsible and ethical development and application of AI
within the EU.
On May 25, 2018, European Privacy Regulation 2016/679 (GDPR)
became applicable, imposing strict guidelines for data processing, ensuring
transparency, consent, and accountability in the management of personal
data.
The main objective of this legislation is to give every individual control
over the use of their data, ensuring the protection of the “fundamental rights
and freedoms of natural persons.”
As data controllers, organizations are required to keep track of, monitor
and have a legal basis for personal data processing activities. This includes
not only data processed within the organization, but also data processed by
data processors, that is, third parties that process personal data on behalf of
the data controller.
Companies, public bodies, and individuals who need to access, process,
store, manage, or transfer personal data of EU citizens are subject to the
privacy rules set out in the GDPR. In doing so, they must comply with the
provisions of the Regulation to ensure the secure and responsible handling
of personal data.
The AI Act, on the other hand, is a more specific piece of legislation for
AI, designed to address the challenges and risks associated with its use.
It proposes a comprehensive framework that includes requirements for
high-risk AI systems, risk management assessments, technical
documentation, and transparency criteria.
In April 2021, the European Commission presented the first EU
regulatory framework for AI. According to this document, AI systems, used
in various applications, are assessed and categorized according to the risk
they may pose to users, with different levels of risk leading to a varying
degree of regulation.
On December 9, 2023, the Presidency of the Council and the negotiators
of the European Parliament reached a provisional agreement on the AI
Regulation, with the aim of ensuring the security of AI systems in the
European market and respect for fundamental rights and values of the EU.
The Regulation in its final version was approved by the European
Parliament at its sitting on March 13, 2024 and could constitute a global
standard for AI regulation, exerting influence on other jurisdictions, similar
to the GDPR, thus promoting the European approach to the regulation of
technology worldwide.
Despite the AI Act and GDPR's emphasis on data protection and the
importance of adherence to ethical principles, it is important to recognize
that historically, encryption and cybersecurity have consistently sought to
preserve privacy. In other words, the current regulatory initiatives do not
start from scratch, but rather fit into a broader context in which
cybersecurity and encryption technologies have already played a significant
role in protecting sensitive information.
Preserving Confidentiality Through Privacy
Enhancing Technologies
Cryptography and cybersecurity in general share the objective of obscuring
data to make them unintelligible to unauthorized parties, both during
storage on a storage device (“in store”) and during transfer through
telecommunication systems (“in transit”).
An example of an attack that must be defended against is the
manipulation of a database, aimed at bringing out, through the use of AI
techniques, correlations, or statistical characteristics that are not real.
Cryptography, in particular, is historically oriented toward preserving the
confidentiality of private information, often represented in the form of
numerical data. It focuses on limiting and controlling access to data during
its transmission through insecure channels, as in the case of the HTTPS
protocol, and during storage in repositories, including public ones such as
the cloud. However, these techniques require a decryption phase to enable
their use, as software applications often operate on unencrypted data.
Privacy enhancing technologies (PETs), in addition to “traditional”
cryptographic techniques, are gradually being enriched with innovative
cryptographic approaches. These technologies are designed to protect the
confidentiality and integrity of data in various contexts without excluding
the possibility of processing and using such data, albeit encrypted, for the
extraction of second-level information.
Another advantage of PETs is that they enable the secure use of cloud
computing: they allow arithmetic operations to be performed on
encrypted data without revealing their content, and they increase productivity
through faster, scalable, and flexible development and deployment of
models.
Intesa Sanpaolo Innovation Center, the Intesa Sanpaolo Group company
dedicated to frontier innovation, among the many activities it carries out,
explores future scenarios and innovative trends and develops
multidisciplinary applied research projects, fosters the development of
innovative ecosystems, and disseminates the culture of innovation. For
activities relating to applied research, the Company has set up some
laboratories that also serve the Group's facilities, drawing on the scientific
support of leading research institutes and universities, selected according to
their areas of expertise and specialization, including one dedicated to AI.
The AI Lab is concretely investigating the different PETs that can be
used, through the activation of applied research projects on vertical case
studies.
These technologies are transversal and allow the development of tools
aimed at promoting the principle of minimum privilege, providing data in
the clear only when strictly necessary.
The techniques investigated include, for example, multi-party
computation (MPC), an advanced cryptographic technique that allows
different entities or participants (counterparties, legal entities [LEs], etc.) to
jointly compute a function on private inputs (data made available in
encrypted mode by the same parties).
In other words, MPC allows multiple parties to collaborate in calculating
a result without having to share their input data, while maintaining the
privacy of the information.
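To make the idea concrete, the minimal sketch below computes a joint sum with additive secret sharing over a prime field, one of the basic building blocks behind MPC. The party names and amounts are purely illustrative assumptions and do not correspond to the protocol developed by the AI Lab described later in this chapter.

```python
import secrets

PRIME = 2**61 - 1  # public modulus defining the field in which shares live

def share(value, n_parties):
    """Split a private value into n additive shares that sum to it mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares; only the sum of all shares reveals the secret."""
    return sum(shares) % PRIME

# Three hypothetical parties each hold a private amount they do not want to disclose.
private_inputs = {"party_A": 1200, "party_B": 850, "party_C": 430}

# Each party splits its input and distributes one share to every participant,
# so no single participant ever sees another party's full value.
all_shares = {name: share(v, 3) for name, v in private_inputs.items()}

# Each participant locally adds the shares it received (one per input).
partial_sums = [
    sum(all_shares[name][i] for name in private_inputs) % PRIME
    for i in range(3)
]

# Only the final recombination reveals the joint result: the total, not the inputs.
print(reconstruct(partial_sums))  # 2480
```

Real protocols add further machinery (secure multiplication, comparisons, integrity checks), but the same principle applies: each party only ever handles meaningless-looking shares.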
In the first two decades of MPC research, there were no concrete
applications, but in the last decade, thanks to advances in hardware and
networking, the performance and efficiency of these protocols have
improved dramatically, leading to a radical transformation in the usability
of MPC. This development has enabled the scalability of MPC solutions,
extending their use to a wide range of use cases of varying complexity,
where ensuring confidentiality between participants is critical. Examples of
applications include tax fraud detection in Estonia, collision risk mitigation
between satellites belonging to different nations without revealing their
trajectories, electronic auctions, political elections on digital platforms, and
private key management.
However, it is important to emphasize that MPC still requires further
research in order to advance its evolution and develop applications that
make it suitable for handling voluminous data and highly complex
problems.
Since 2018, the AI Lab has been working on an applied research
challenge: the development of an MPC (protocol) system inspired by a
concrete application scenario within the Group, a large company composed
of several LEs. These LEs have diversified assets but may share part of the
customer base.
In the defined context, the objective was to develop a prototype system
that would hypothetically enable the various LEs of the group to securely
share customer information (e.g., in the context of fraud prevention or
supervision).
This prototype aimed to limit access to the data and the types of
applications/functions associated with them as much as possible. The
individual LE could query the system to evaluate a specific function –
selected from a limited set of public and approved functions – on the data.
In this process, the primary objective was to ensure that the querying LE
could only obtain the output of the function, without ever accessing further
details or sensitive information about the client data.
Through this research project, a prototype capable of handling different
mathematical and statistical functions (linear, polynomial, and random
forest) in a computationally efficient manner was developed. The
innovative elements of what was developed enabled the Intesa Sanpaolo
Group to obtain a patent for the Italian market (ID 102019000021576).
Another solution worthy of investigation is federated learning (FL), a
collaborative machine learning technique that allows different entities that
own data to create a federation in which the information exchanged consists
solely of the parameters/weights learned from the models trained on the
data owned by the individual entities, thus ensuring that the data remains
within the organizations that own it.
Learning occurs in rounds in which model updates are computed by
clients in isolation using local, private data, then aggregated on the server
and then transmitted to clients for the next round. Although this centralized
structure is by far the most used in practice, other decentralized approaches
are actively being researched, each one with different pros and cons about
performance and security (Beltrán et al., 2022).
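A minimal sketch of one such round, under the common federated averaging scheme, is given below. It assumes a simple linear model trained with gradient descent; the two client datasets and the size-weighted aggregation are illustrative, not a description of any specific framework.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each client refines the global weights on its own private data (simple linear regression)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean squared error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server aggregates client updates weighted by local dataset size (federated averaging)."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_weights, X, y))  # raw data never leaves the client
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Two hypothetical silos with the same feature schema but different samples (horizontal FL).
clients = []
for n in (200, 80):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):  # rounds: local training, aggregation, redistribution
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without any silo sharing raw records
```

The only traffic between the parties is the parameter vector, which is precisely the property that makes the technique attractive for the settings described below.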
There are two main federated settings: cross-device and cross-silo. In
cross-device FL, the parties may be edge devices (e.g., smart devices and
laptops); they can be numerous (in the order of thousands or even millions).
Parties are considered unreliable and with limited computational power.
In the cross-silo FL setting, the involved parties are instead
organizations; the number of parties is limited, usually in the range [2, 100].
Given the nature of the parties, it can also be assumed that communication
and computation do not represent significant obstacles.
FL scenarios can also be categorized according to the type of data held
by each client in the federation. We speak of horizontal FL when the parties
share a similar schema of features, but each one possesses different data
samples; otherwise, we talk about vertical FL when each client possesses
different information about the same individuals. This nomenclature derives
from the visual representation of how a data table would be split among the
parties (the samples or individuals are the rows and the features are the
columns).
From a machine learning point of view, the majority of FL research and
industrial implementations are based on deep neural networks (DNNs).
Despite the effectiveness of DNNs, they are not the universal solution for
all machine learning problems. Currently, research is underway on new
approaches aimed at extending the FL concept to any machine learning
model (Polato et al., 2022).
FL is particularly interesting because of its wide applicability in
transversal and very different sectors. Within this framework, a series of
highly diversified projects are being activated in various sectors, such as
Healthcare, Finance, and Financial Crime.

Healthcare
A collaboration is underway with the aim of developing and disseminating
predictive and diagnostic models, maximizing the synergy between entities
contributing AI expertise, medical expertise, and patient data.
The use of FL techniques in this collaboration allows knowledge to be
shared through machine learning models trained on distributed data without
the need to exchange sensitive information in the clear.
This approach aims to provide crucial support to physicians, enhancing
the effectiveness of diagnostic practices and improving the accuracy of
prevention protocols. In particular, it aims to optimize the different stages of
healthcare, including diagnosis, prevention, treatment, and follow-up in the
patient care pathway. This learning technique in support of diagnostic work
is very useful in cases of serious but rare diseases, for which individual
hospitals and research institutes often have only limited or statistically
insignificant samples available.

Finance
The applied research project “Federated Learning in Finance,” proposed in
collaboration with the University of Turin and the University of Salento and
winner of a National Recovery and Resilience Plan innovation grant, aims
at defining and developing a prototype FL framework to address the
challenge of entity resolution, whether the entities are natural or legal
persons, in order to clearly identify the creditors and debtors of a payment
in the context of combating financial crime.
Transaction monitoring involves continuous screening of financial flows
on a global scale, and interbank communication, with few mandatory and
free text fields identifying the originator and beneficiary, makes this step
particularly challenging (e.g., originator and beneficiary names may include
abbreviations, typos, aliases, and information overloads).
The course of this project will include the application of machine
learning and natural language analysis techniques for entity resolution and
the development of FL strategies to allow, in a simulated manner, multiple
financial parties to collaborate without compromising system security and
data privacy.
The project also includes an in-depth study of the related regulatory
framework, possible problems, and proposed solutions.

Financial Crime
An applied research project was launched to define an FL framework
capable of optimizing the learning of AI-based models by exploiting
information distributed among different counterparties, each of which
observes different cases of fraud and illegal behavior.
In general, Intesa Sanpaolo Innovation Center, in order to explore and
deepen the main PETs, in addition to the activation of targeted applied
research projects, makes use of the vertical expertise of researchers in the
field to acquire new knowledge, not only on the previously mentioned
techniques, MPC and FL, but also for example on:
Differential Privacy: It is based on technologies that make it possible to
anonymize the data already at the acquisition stage. This process involves
the introduction of “noise elements” randomly attributed to each individual
piece of data. However, for the aggregated results of the data not to be
distorted, it is necessary that the starting information base is extremely
broad (Dankar & El Emam, 2012).
Traditional differential privacy, also known as central differential privacy
(CDP), is usually achieved by a service provider's initial collection of
original user data, followed by the public disclosure of noise-enriched
statistical information. In the centralized configuration, it is assumed that
the service provider is trustworthy. However, in reality, this assumption
does not always hold true, as even large companies may fail to guarantee
the privacy of their customers.
Unlike CDP, local differential privacy (LDP) allows users to alter their
data directly on their own devices. Only the data thus disturbed are then
reported to the server. This approach means that users do not have to trust
the service provider, while data privacy is still maintained and the provider
is relieved of the burden of safeguarding raw personal data.
Despite its potential, successfully used by several large companies to
preserve the privacy of their users, LDP has some disadvantages. In
particular, the noise introduced into the entire data set can be considerably
high, resulting in query responses with much less utility than that of the
centralized model. Furthermore, since each user introduces noise without
having information about other users’ data, this perturbation must be
independent, thus limiting the scope of application. These limitations have
contributed to making LDP the subject of much less study than CDP (Yang
et al., 2020).
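The sketch below contrasts the two models on toy data: in the central model a trusted curator adds calibrated Laplace noise to an aggregate before release, while in the local model each user perturbs their own answer with randomized response before it leaves the device. The query, the epsilon values, and the dataset are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
ages = rng.integers(18, 90, size=10_000)  # hypothetical raw data held by a trusted curator

# Central DP: the curator computes the true mean, then releases it with Laplace noise.
def central_dp_mean(values, epsilon, lower=18, upper=90):
    sensitivity = (upper - lower) / len(values)  # how much one record can move the mean
    return values.mean() + rng.laplace(scale=sensitivity / epsilon)

# Local DP: each user perturbs a yes/no answer ("am I over 65?") on their own device
# via randomized response; the server can only debias the aggregate.
def local_dp_proportion(bits, epsilon):
    p_truth = np.exp(epsilon) / (np.exp(epsilon) + 1)  # probability of answering honestly
    flip = rng.random(len(bits)) > p_truth
    noisy = np.where(flip, 1 - bits, bits)
    return (noisy.mean() - (1 - p_truth)) / (2 * p_truth - 1)  # unbiased estimate

print(central_dp_mean(ages, epsilon=0.5))
print(local_dp_proportion((ages > 65).astype(int), epsilon=0.5))
```

Running the two estimators repeatedly makes the trade-off described above visible: for the same epsilon, the locally perturbed estimate fluctuates far more than the centrally noised one.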
Any sector involved in data collection and analysis stands to gain from
the implementation of differential privacy, though certain industries are
leading the way. Major players in the realm of information technology, such
as Google and Apple, are heavily invested in employing this technique.
Additionally, the healthcare sector has begun adopting it, as was
particularly evident during the COVID-19 crisis when accurate and secure
public health data became crucial for safeguarding and informing the
population. An example of this is Australia's development of the COVID-19
Real-Time Information System for Preparedness and Epidemic Response
(CRISPER), which utilizes differential privacy to protect sensitive personal
information, such as age and medical conditions, while reporting cases
(Field et al., 2021).
Homomorphic Encryption: It is a sophisticated cryptographic method that
allows arbitrary functions to be performed on encrypted data without
revealing the underlying data and ensuring the validity of the result
obtained. There are various types of homomorphic encryption, which differ
in the type and frequency of mathematical operations that can be performed
on encrypted data (Partially Homomorphic Encryption, Somewhat
Homomorphic Encryption, Fully Homomorphic Encryption). Partially
Homomorphic Encryption supports a single type of mathematical operation
on the encrypted data, either addition or multiplication, an unlimited
number of times.
Somewhat Homomorphic Encryption involves the use of more complex
calculations but, unlike the previous one, they are only possible a limited
number of times.
And, finally, there is Fully Homomorphic Encryption, which is still under
development and aims to combine the advantages of the other two methods:
supporting both addition and multiplication on encrypted data, allowing
highly complex calculations an unlimited number of times, all while leaving
the data encrypted, that is, without ever needing the private key during
computation. Although with Fully Homomorphic Encryption it is
theoretically possible to perform any calculation on encrypted data, in
practice this cryptographic scheme produces a considerable overhead in
terms of time and memory performance. In addition, Fully Homomorphic
Encryption is complicated to use and requires specific cryptographic
expertise. These performance- and usability-inhibiting elements have so far
made Fully Homomorphic Encryption unfeasible.
Brenner et al. (2012) underline that “Fully homomorphic encryption fired
many people's imagination in the field of distributed computing security.
Architectures have been proposed and many application scenarios have
been identified that can benefit from FHE. Encrypted online storage, secure
delegation of confidential computation and even privacy for searching the
web. Unfortunately, all implementations of fully homomorphic encryption
schemes showed that this technique is still much too slow for practical
applications.”
However, in the last two years, there have been huge improvements, both
in terms of usability and computational performance.
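To give a sense of what "computing on encrypted data" means in the simplest (partially homomorphic) case, the toy sketch below implements the textbook Paillier scheme, which is additively homomorphic. The key size is deliberately tiny and insecure; this is a sketch of the mathematics, not an implementation to be reused.

```python
from math import gcd
import secrets

# Toy Paillier parameters (textbook construction, insecure key size, illustration only).
p, q = 1789, 2003                      # in practice these are large random primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)    # inverse of L(g^lambda mod n^2), L(x) = (x-1)/n

def encrypt(m):
    r = secrets.randbelow(n - 2) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(41), encrypt(59)
# Multiplying ciphertexts corresponds to adding plaintexts: whoever computes on the
# encrypted values never learns 41, 59, or their sum.
print(decrypt((c1 * c2) % n2))   # 100
print(decrypt(pow(c1, 3, n2)))   # 123: multiplying the hidden value by a public scalar
```

Fully homomorphic schemes extend this idea to arbitrary circuits of additions and multiplications, which is precisely where the performance overhead discussed above arises.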
Confidential Computing: It is an approach to IT security focused on
protecting data during processing. The main objective is to preserve the
confidentiality and integrity of data even when it is processed on systems or
infrastructures that are not completely secure.
Confidential computing solutions seek to ensure the privacy of data at
three main stages of its lifecycle: during processing, during transfer, and
during storage. This need is particularly critical in cloud environments and
in scenarios where sensitive data must be processed on infrastructures
managed by third parties.
Confidential computing technologies often utilize secure enclaves, which
consist of isolated, encrypted memory zones where code can be securely
executed without the host system being able to access the data or influence
its execution.
The ultimate goal of confidential computing is to enable organizations to
exploit the full potential of cloud computing without compromising the
security of sensitive data.
Synthetic Data: They allow the creation of realistic representations of
real data without compromising the privacy of individuals. These data are
generated through specific algorithms based on existing inputs and are
commonly used as datasets to validate mathematical models and train
machine learning models, where the availability of datasets is crucial and
can range from a few thousand to tens of millions of items.
The technology makes it possible to generate both the desired amount of
information and the most suitable types quickly and efficiently for the
purpose, adapting them to specific needs.
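As a minimal illustration of the idea, the sketch below fits a very simple statistical model (mean and covariance) to a set of "real" records and samples an arbitrary number of synthetic ones. Production generators typically use richer models (e.g., copulas or generative networks), and fitting a distribution alone does not by itself provide formal privacy guarantees; the attributes and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" records: two correlated customer attributes (e.g., income and spending).
real = rng.multivariate_normal(mean=[35_000, 1_200],
                               cov=[[4e7, 2e6], [2e6, 2.5e5]], size=500)

# Fit a simple generative model: here just the empirical mean and covariance ...
mean_hat = real.mean(axis=0)
cov_hat = np.cov(real, rowvar=False)

# ... and draw as many synthetic records as needed from it.
synthetic = rng.multivariate_normal(mean_hat, cov_hat, size=10_000)

# The synthetic table mirrors the aggregate statistics of the original
# without reproducing any individual row.
print(np.corrcoef(real, rowvar=False)[0, 1], np.corrcoef(synthetic, rowvar=False)[0, 1])
```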
An increasing number of companies are emerging to offer dedicated tools
for generating synthetic data, highlighting the growing importance of this
technology in contexts where data privacy, customization, and flexibility
are paramount.
In addition to conducting detailed investigations on each of the
techniques mentioned, it is important to explore how, depending on specific
use cases, these approaches can be integrated with each other to find the
best balance between privacy and usability.
The combination of these methodologies requires a careful assessment of
the specific needs, the type of data used (in terms of confidentiality), the
required performance and scalability, as well as a scrupulous analysis and
evaluation of risks. It is, in fact, crucial to understand what privacy threats
one intends to address and how different PETs can help mitigate them.
The use cases examined and the vertical insights represent the first steps
in the exploration of PETs, but it is clear that further analysis is essential.

Conclusion
PETs play a crucial role within our increasingly data-centric society. These
technologies allow individuals to maintain control over their personal
information, securely share and analyze data, and ensure that technological
advances can coexist with privacy and data protection. In an ever-changing
digital landscape, the development and adoption of effective PET
technologies will remain crucial to preserving our privacy in the digital age.
PETs, by facilitating widespread data sharing between different entities,
present themselves as a promising means to mitigate the inherent disparities
between individual European states. The large-scale dissemination of data
could act as a catalyst to achieve stronger cohesion and amplify the
geopolitical relevance of our continent. In this way, Europe could emerge as
a major player, comparable to the United States and China, helping to
strengthen its global role and influence.
However, PETs, due to their innovative nature, are not yet regulated by
current legislation.
To achieve such regulation, it is essential to promote a continuous dialogue between
legislation and science, facilitating communication between sectoral
experts.
In addition to the need for regulation on the use of PET and AI
techniques in general, the activity of analyzing these technologies is
particularly important. Compared to the past, the evolution of new
technologies does not always follow a pattern of close cooperation between
private entities and governments.
Many technological innovations arise from private companies and
scientific research centers, which have a strong economic incentive to
develop and market such technologies. In order to achieve relevant
performance, complex algorithms are often used in “black box” mode,
making it difficult to fully understand the reasons behind the decisions
taken by AI. This lack of transparency and understanding can lead to less
direct control over the solution.
To address this problem, it is essential to always check the fairness of
algorithmic decisions, ensuring that there is no discrimination or bias
against certain groups of people. Furthermore, the explainability of results
becomes crucial to ensure user and stakeholder understanding of how and
why an AI system makes certain decisions.
Attention to these aspects will be increasingly important, especially when
the AI Act comes into force. Indeed, it is expected to introduce regulatory
requirements regarding transparency, explainability and fairness in the
implementation of AI-based technologies to ensure ethical and fair use.
References
Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How
technology displaces and reinstates labor. Journal of Economic
Perspectives, 33(2), 3–30. https://doi.org/10.1257/jep.33.2.3
Beltrán, E. T. M., Pérez, M. Q., Sánchez, P. M. S., Bernal, S. L., Bovet, G.,
Pérez, M. G., Pérez, G. M., & Celdrán, A. H. (2022). Decentralized
federated learning: Fundamentals, state-of-the-art, frameworks, trends,
and challenges. IEEE Communications Surveys & Tutorials.
ArXiv:2211.08413. https://arxiv.org/abs/2211.08413
Brenner, M., Perl, H., & Smith, M. (2012). Practical applications of
homomorphic encryption. https://doi.org/10.5220/0003969400050014
Dankar, F. K., & El Emam, K. (2012). The application of differential
privacy to health data. Proceedings of the 2012 Joint EDBT/ICDT
Workshops on - EDBT-ICDT’12.
https://doi.org/10.1145/2320765.2320816
Field, E., Dyda, A., Hewett, M., Weng, H., Shi, J., Curtis, S., Law, C.,
McHugh, L., Sheel, M., Moore, J., Furuya-Kanamori, L., Pillai, P.,
Konings, P., Purcell, M., Stocks, N., Williams, G., & Lau, C. L.
(2021). Development of the COVID-19 real-time information system
for preparedness and epidemic response (CRISPER), Australia.
Frontiers in Public Health, 9.
https://doi.org/10.3389/fpubh.2021.753493
Polato, M., Esposito, R., & Aldinucci, M. (2022). Boosting the federation:
Cross-silo federated learning without gradient descent. IEEE.
https://doi.org/10.1109/IJCNN55064.2022.9892284
Yang, M., Lyu, L., Zhao, J., Zhu, T., & Lam, K.-Y. (2020). Local
differential privacy and its applications: A comprehensive survey.
https://doi.org/10.48550/arXiv.2008.03686

Appendix A: “Multi-Party Computation” Design Details
The MPC prototype protocol implemented is divided into three phases:

Preprocessing: this is the setup phase, where the communication among the
servers is initialized via standard checks on the system's public
information (e.g., version, functions, parameters).
Secret-Sharing: this describes the steps that each client has to execute to
encrypt their inputs and send them to the servers.
Secure Computation: assuming that the servers have received shares for
several values, the clients can start to request function evaluations from the
servers. We assume that a public set of functions F is given: the client
can request the evaluation of any function in the set, but no function
outside the set. Each function in the set is described using a few
elementary operations, and the servers execute the evaluation request
by running a specific protocol for each elementary operation.

And it consists of three main components:

mpc-lang: this is a descriptive language used to create the database (i.e.,
the data of the clients stored on the servers, represented in shares or as
public values) and the functions in F.
mpc-server: this is the software installed on each of the three servers. It
implements the sub-protocols for the secure computation of the
functions (key exchange, linear combination, multiplication, secure
comparisons).
mpc-client-cli: this is the software installed on the devices used by the
clients. It implements the secret-sharing sub-protocols (sharing,
reconstruction) and the query to the servers for the function evaluation.
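The sketch below illustrates the sharing and linear-combination steps described above with three servers and a small approved function set. The function names, attributes, and coefficients are hypothetical and do not correspond to the patented implementation; it is only meant to show why a linear function can be evaluated by each server locally on its shares.

```python
import secrets

PRIME = 2**61 - 1
N_SERVERS = 3

# Public, approved set of functions F: here only linear combinations of stored values.
APPROVED = {"weighted_score": {"income": 2, "transactions": 5}}

def share(value):
    """Client side (cf. mpc-client-cli): split a value into one additive share per server."""
    parts = [secrets.randbelow(PRIME) for _ in range(N_SERVERS - 1)]
    return parts + [(value - sum(parts)) % PRIME]

def evaluate_linear(server_db, coefficients):
    """Server side (cf. mpc-server): a linear function is evaluated locally on shares,
    because a linear combination of shares is a share of the linear combination."""
    return sum(coefficients[k] * server_db[k] for k in coefficients) % PRIME

def reconstruct(result_shares):
    """Client side: recombine the servers' partial results into the function output."""
    return sum(result_shares) % PRIME

# A legal entity stores two confidential attributes; each server only ever holds shares.
record = {"income": 50_000, "transactions": 320}
server_dbs = [dict() for _ in range(N_SERVERS)]
for key, value in record.items():
    for db, s in zip(server_dbs, share(value)):
        db[key] = s

# A querying entity may request only functions in the approved set, never the raw data.
result = reconstruct([evaluate_linear(db, APPROVED["weighted_score"]) for db in server_dbs])
print(result)  # 2 * 50_000 + 5 * 320 = 101_600
```

Multiplications and secure comparisons require interactive sub-protocols between the servers, which is where most of the engineering effort of a real system such as the one described above is spent.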

Appendix B: “Federated Learning” Project Details
Financial crime (e.g., fraud, theft, money laundering) is a large and growing
problem, in some way touching almost every financial institution, as well as
many individuals. Financial institutions are on the front line in the war
against financial crime and must devote substantial human and
technological resources to this effort. Current processes to detect financial
misconduct have limitations in their ability to effectively differentiate
between malicious behavior and ordinary financial activity. These
limitations tend to result in gross over-reporting of suspicious activity
(manifested as “alerts”) that necessitate time-intensive and costly manual
review.
Technology advancements in this domain, including machine learning-
based approaches, can improve the effectiveness of financial institution
processes. However, a key challenge that most financial institutions
continue to face is that they address financial crimes in isolation without
any insight from other firms.
Financial institutions continue to work relentlessly to advance their
capabilities, forming partnerships across institutions (including
governmental bodies) to share insights, patterns, and capabilities.
These public-private partnerships are subject to stringent regulatory and
data privacy requirements, thereby making it difficult to rely on traditional
technology solutions. Federated learning seems a viable solution since it
allows the building of machine learning models in a collaborative fashion
without sharing the private data but only the model.
During our research project, we conducted a meticulous review of the
state of the art, focusing our attention on identifying potential application
cases and AI techniques that have previously been successfully adopted in
federated computing contexts.
We then proceeded with the creation of a prototype simulating a
federation of data from a public dataset. In this phase, we compared the
performance obtained in three different configurations: centralized, local,
and federated. This analysis was conducted to identify the benefits,
limitations, and challenges associated with the application of federated
techniques in the financial crime domain.
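The sketch below mirrors that three-way comparison on synthetic fraud-like data. The model (a plain least-squares linear classifier), the data generator, and the one-shot averaging of locally trained models are deliberately simple stand-ins for the project's actual setup, used only to show how centralized, local, and federated configurations are contrasted.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_silo(n, shift):
    """Hypothetical per-bank transaction features with a silo-specific distribution shift."""
    X = rng.normal(loc=shift, size=(n, 4))
    logits = X @ np.array([1.5, -2.0, 1.0, 0.5])
    y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

def fit(X, y):
    """Linear model via least squares; a deliberately simple stand-in for the project's models."""
    Xb = np.c_[X, np.ones(len(X))]
    return np.linalg.lstsq(Xb, y, rcond=None)[0]

def accuracy(w, X, y):
    Xb = np.c_[X, np.ones(len(X))]
    return ((Xb @ w > 0.5) == y).mean()

silos = [make_silo(400, 0.0), make_silo(400, 0.3), make_silo(400, -0.3)]
X_test, y_test = make_silo(2_000, 0.0)

# (1) Centralized: all raw data pooled, the privacy-unfriendly upper bound.
w_central = fit(np.vstack([X for X, _ in silos]), np.concatenate([y for _, y in silos]))
# (2) Local: each institution trains alone on its own cases only.
local_accs = [accuracy(fit(X, y), X_test, y_test) for X, y in silos]
# (3) Federated (one-shot model averaging, a crude stand-in for a full FL protocol):
# only parameters are combined, raw records never leave their silo.
w_fed = np.mean([fit(X, y) for X, y in silos], axis=0)

print(accuracy(w_central, X_test, y_test), local_accs, accuracy(w_fed, X_test, y_test))
```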
16
THE ROLE OF THE PUBLIC
SECTOR IN DEVELOPING
SUPERCOMPUTING AND
INNOVATION ECOSYSTEMS
The Case of Bologna Technopole within the
Italian and European Framework
Francesco Ubertini and Alessandra Poggiani

DOI: 10.4324/9781003518495-21

The Role of Public Investments to Develop Technological Infrastructures
The public sector's role in developing supercomputing and innovation
ecosystems is multifaceted and vital for technological progress. Through
investments, far-sighted policymaking, and infrastructure development, the
public sector can ensure that supercomputing capabilities can be developed
for the benefit of the whole of society. In fact, while private sector
investments are essential for technological development, they often focus
on short-term returns and commercially viable projects. Yet, high-
performance computing (HPC) infrastructure and fundamental research in
artificial intelligence (AI) may not provide immediate financial returns,
making them unfeasible for private investors. Only public investments can
fill this gap by funding long-term projects that are critical for national and
regional technological advancement, but may not be immediately
profitable, and by allocating resources to build the foundational
infrastructure necessary for future innovation.
HPC and AI are at the forefront of today's technological advancement,
driving innovation across various sectors. HPC refers to the use of
supercomputers and parallel processing techniques to solve complex
computational problems. It is essential for tasks that require the processing
of large amounts of data at high speeds, such as climate modeling, genomic
analysis, financial simulations, and big data processing. AI, on the other
hand, involves the creation of intelligent systems capable of performing
tasks such as visual perception, speech recognition, decision-making, and
language translation and its “training” relies on processing large amounts of
data.
The integration of HPC and AI has an enormous transformative potential
by enabling the processing of massive datasets, enhancing predictive
analytics and automating complex tasks. Thus, both these technologies are
crucial for scientific research, economic development, and for addressing
global challenges such as climate change, healthcare, and cybersecurity.
Public investment can play a critical role in advancing HPC and AI
capabilities in Europe, through funding programs, strategic initiatives, and
collaborative efforts, to foster innovation, enhance research and ensure
Europe's competitiveness in the global market. Given the global scenario,
sustained and increased public investment in Europe is essential to continue
this momentum and address emerging challenges in the field.
Globally in fact, both HPC and AI are experiencing an extremely rapid
expansion. In HPC, exascale computing—systems capable of performing at
least one exaflop (a billion billion calculations per second)—is becoming a
reality. This leap in computational power enables more detailed simulations
and analyses in scientific research and at the same time allows AI
breakthroughs in deep learning, thus pushing the boundaries of what
machines can achieve, from mastering complex problems to driving
autonomous vehicles.

The European Framework


In Europe, significant progress has been made in both fields. The European
Union (EU) has launched various initiatives to boost HPC capabilities, such
as the EuroHPC Joint Undertaking (EuroHPC JU, 2024), which aims to
establish world-class HPC infrastructure and, more recently, European
institutions are leading the global scenario in developing a regulatory
framework focusing on ethical and trustworthy AI development (Artificial
Intelligence Act: MEPs adopt landmark law | News | European Parliament,
n.d.), ensuring that advancements align with societal values.
These efforts have contributed to enhancing the EU's position as a credible
player in the digital transformation scenario: by investing in research and
innovation initiatives, fostering collaboration among member states, and
prioritizing digital transformation, the EU has in fact underscored its
commitment to staying competitive and relevant in the ever-evolving
technological landscape.
As noted, in 2018 the European Commission initiated the EuroHPC Joint
Undertaking aiming at creating world-class supercomputing infrastructure
that includes some of the most powerful supercomputers globally (TOP500
List – June 2024 | TOP500, n.d.); as of May 2024, three of the top ten ranked
supercomputers were part of the EuroHPC system. Key players in the
European HPC ecosystem include France's GENCI (Grand Équipement
National de Calcul Intensif), Germany's Gauss Centre for Supercomputing,
Finland's CSC, Spain's BSC-CNS, and Italy's CINECA. These centers
provide the computational power necessary for scientific research and
industrial applications, supporting a wide range of projects from climate
modeling to pharmaceutical development and generative AI. Furthermore,
the EU has prioritized the development of a common European data space,
enabling the seamless flow of data across borders and sectors. This
initiative not only facilitates the sharing of data for AI applications but also
promotes interoperability and collaboration among diverse stakeholders. By
promoting data sovereignty, transparency, and trust, the EU is fostering a
conducive environment for the responsible and sustainable deployment of
AI technologies.
While there is still significant room for improvement, since the European
technology landscape is still fragmented, with varying levels of investment
and infrastructure across different countries and a notable shortage of
skilled professionals in HPC and AI, through the EuroHPC initiative and
recent legislation on AI (Artificial intelligence Act | Think Tank | European
Parliament, n.d.) the EU has made significant advances in developing its
HPC and AI capabilities, positioning itself as a competitive player in the
global technology landscape.
Bologna's Technopole within the EuroHPC
System
Among these notable developments is one of the key supercomputers of the
EuroHPC system, “Leonardo” in Italy, which represents a significant
milestone in Europe's HPC landscape, designed to perform at pre-exascale
levels and one of the most powerful supercomputers worldwide. This
supercomputer supports a wide range of applications from scientific
research to industrial development, demonstrating Italy and Europe's
commitment to advancing HPC infrastructure.
Leonardo, housed at CINECA in Bologna and operational since
November 2022, is one of the flagship projects under the EuroHPC Joint
Undertaking. Designed to perform hundreds of quadrillions of calculations
per second, it is supplied by Atos, with two main partitions: the Booster Module and the
Data-centric Module (see Figure 16.1). The Booster Module partition is
based on BullSequana XH2135 supercomputer nodes, each with four
NVIDIA Tensor Core GPUs and a single Intel CPU. The Data-centric
Module partition is based on BullSequana X2140 three-node CPU Blade
and is equipped with two Intel Sapphire Rapids CPUs, each with 56 cores
(UG3.2: LEONARDO UserGuide—SCAI—User Support—CINECA
Technical Portal, n.d.).
Figure 16.1 Leonardo's architecture.

The overall system also uses NVIDIA Mellanox HDR 200Gb/s InfiniBand
connectivity, with smart in-network computing acceleration engines that
enable extremely low latency and high data throughput to provide the
highest AI and HPC application performance and scalability (Figure 16.1).
However, Leonardo's success is based on a much more visionary project
which was made possible thanks to the proactive collaboration of many
actors and which could leverage the development of the Bologna
Technopole project.
Bologna Technopole was initially conceived by the Emilia-Romagna
region as a strategic initiative to foster scientific research, technological
innovation, and industrial development, aiming at bridging the gap between
academia and industry and providing a collaborative environment where
cutting-edge research can translate into practical applications. The vision
was to create an area where high-technological impact research could
seamlessly translate into practical applications by working closely with
start-ups, innovative hi-tech private companies and public sector agencies,
transforming Bologna into a hub for scientific research and technological
innovation. This challenging objective was driven by the need to boost
regional and national economic growth through technological
advancements and innovation and required substantial funding, which was
secured through a combination of local public and private investments, as
well as support from the Italian government and the EU.

Table 16.1 Leonardo's tech specifications

Model: Booster: Atos BullSequana X2135 “Da Vinci” single-node GPU blade; DCGP: Atos BullSequana X2140 three-node CPU blade
Racks: Booster: 116; DCGP: 22
Nodes: Booster: 3456; DCGP: 1536
Processors: Booster: single-socket 32-core Intel Ice Lake CPU (1 × Intel Xeon Platinum 8358, 2.60 GHz, TDP 250 W); DCGP: dual-socket 56-core Intel Sapphire Rapids CPUs (2 × Intel Xeon Platinum 8480p, 2.00 GHz, TDP 350 W)
Accelerators: Booster: 4 × NVIDIA Ampere GPUs per node, 64 GB HBM2e, NVLink 3.0 (200 GB/s); DCGP: none
Cores: Booster: 32 cores/node; DCGP: 112 cores/node
RAM: Booster: 512 GB (8 × 64) DDR4 3200 MHz; DCGP: 512 GB (16 × 32) DDR5 4800 MHz
Peak performance: Booster: about 309 Pflop/s; DCGP: 9 Pflop/s
Internal network: DragonFly+ 200 Gbps (NVIDIA Mellanox InfiniBand HDR); Booster: 2 × dual-port HDR100 per node; DCGP: single-port HDR100 per node
Storage (raw capacity): 137.6 PB based on DDN ES7990X and hard disk drives (Capacity Tier); 5.7 PB based on DDN ES400NVX2 and solid state drives (Fast Tier)

Source: Courtesy of CINECA

The strategic goals of Bologna Technopole were clear from the outset:

To become a leading center for HPC and data-intensive research
To facilitate the transfer of knowledge and technology between academia and industry
To provide a collaborative environment that nurtures innovation and entrepreneurship
Building on their decades of experience in supercomputing, when the
Technopole ecosystem started as a place for innovation, supercomputing
and digital transformation, CINECA and the National Institute for Nuclear
Physics—INFN (Epifani, n.d.) pooled their expertise and part of their
infrastructure to settle in the area. Almost simultaneously, in 2017, Italy
nominated Bologna to house the data center of the European Centre for
Medium-Range Weather Forecasts (ECMWF) (Leon, n.d.) at the Technopole, effectively
promoting that area as an international hub for innovation and advanced
technologies.
The physical development of the Bologna Technopole involved
significant public investment, from local, regional, national and European
resources, to develop state-of-the-art facilities and infrastructure. The site
encompasses a range of laboratories, research centers, and collaborative
spaces designed to support multidisciplinary research and it is still
undergoing significant refurbishment and re-functionalization of old factory
spaces (Tecnopolo di Bologna – Ex Manifattura Tabacchi, n.d.).
Along this process, after the establishment of the ECMWF data center,
when in 2019 Europe decided to equip itself with a network of
pre-exascale-class supercomputers, Italy applied to host one; this is how
Leonardo came to the Technopole, where it was switched on in
November 2022, ranking at the time as the fourth most powerful
One of the critical steps in the development of Bologna Technopole was
(and partially, still is) the renovation of the former industrial areas and the
regeneration of the buildings that had hosted the Manifattura Tabacchi
factories in the past, together with the design and building of the new
physical and technological infrastructure, including the establishment of
advanced laboratories, research facilities, and collaborative spaces. The
facilities are designed to support a wide range of research activities, from
basic scientific research to applied technological development and cutting-
edge data centers, including the one that hosts Leonardo.
Currently, approximately 30% of the whole area has been refurbished and
the rest is undergoing the design phase to be completed by 2028.
The infrastructure at Bologna Technopole is not just about physical
buildings; it also includes advanced technological resources and
infrastructure. This is where the Leonardo supercomputer comes into play.
As one of the most powerful supercomputers in the world, Leonardo
represents the pinnacle of HPC and serves as the technological backbone of
the Technopole. Its vast processing power supports complex simulations,
data analysis, and computational research across various scientific domains.
Leonardo is housed within the Technopole building of CINECA, the
Italian government-owned in-house consortium responsible for managing
HPC resources for the Italian research system. CINECA's history of
excellence in HPC (HPC Cineca, n.d.) dates back to its establishment in
1969. Over the years, CINECA has evolved into a major player in the
European and international HPC ecosystem, providing advanced computing
resources to the national and international scientific community.
Leonardo is an invaluable tool for researchers working on
computationally intensive projects, ranging from climate modeling to AI,
life sciences and astrophysics. The modular architecture of Leonardo, which
includes thousands of high-performance processors and GPUs, ensures
efficient data processing and transfer, enabling researchers to tackle some of
the most challenging scientific questions of our times and thus attracts a
whole range of researchers across different scientific disciplines.
The contributions of Leonardo to Bologna Technopole extend beyond the
realms of science and technology. The advancements made possible by the
research conducted with Leonardo have indeed far-reaching implications
for society and the economy. For example, breakthroughs in climate
science, enabled by the computational power of Leonardo, inform policy
decisions and can contribute to the development of strategies to combat
climate change or to tackle meteorological emergencies. Similarly,
advancements in genomics and personalized medicine have the potential to
revolutionize healthcare, leading to new discoveries and treatments and
improved quality of life.
Predictive models processed by supercomputing-enabled algorithms can
also give rise to so-called “digital twins”: virtual representations of an
object or system, linked to it throughout its life cycle. In fact, the “digital
twin” is updated in real time from collected data and uses simulation
programs and machine learning to produce predictive models of the
object's future performance and reactions to given conditions, which are fed
back to the physical counterpart as decision support. In simpler terms, the
digital twin is a highly complex virtual model that is an exact replica of its
physical counterpart. The potential and fields of application of digital
twinning are numerous. To date, digital twins are mainly used in industry,
but the technology is also gaining ground in the public sector, especially for
municipal governments, including Bologna (Comune di Bologna, n.d.). But
the scale is broader. A visionary EU program (in which CINECA
participates), entitled “Destination Earth” (Destination Earth, 2024),
envisions arriving in 2030 with a digital twin of the entire earth, that is, of
all the physical phenomena occurring on our planet. However, the logic of
the “digital twin” is not only to provide predictions, but also to guide and
possibly correct the decision-making process.
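To make the feedback loop described above more concrete, the following is a minimal, purely illustrative sketch of a digital-twin update cycle in Python. All names here (the DigitalTwin class, the linear extrapolation, the sample readings) are hypothetical stand-ins for the far more sophisticated simulation and machine-learning models run on systems such as Leonardo.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalTwin:
    """Illustrative digital twin of a physical asset (e.g., a pump's temperature)."""
    history: List[float] = field(default_factory=list)

    def ingest(self, sensor_reading: float) -> None:
        # Real-time synchronization: the twin mirrors the latest state of the asset.
        self.history.append(sensor_reading)

    def predict_next(self) -> float:
        # Toy predictive model: extrapolate the last observed trend.
        # A real twin would run physics simulations or machine-learning models instead.
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        return self.history[-1] + (self.history[-1] - self.history[-2])

    def decision_support(self, threshold: float) -> str:
        # Feed the prediction back to the physical counterpart as advice.
        return "schedule maintenance" if self.predict_next() > threshold else "operate normally"

# Hypothetical usage with a stream of temperature readings (°C).
twin = DigitalTwin()
for reading in [61.0, 63.5, 66.2, 69.1]:
    twin.ingest(reading)
print(twin.predict_next())          # extrapolated next temperature
print(twin.decision_support(70.0))  # advice fed back to the asset's operators
```

The essential design point is the closed loop: sensor data flows into the virtual model, and the model's predictions flow back as decision support for the physical system.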
The Leonardo supercomputer also plays a key role in training generative AI and creating Large Language Models (LLMs). The supercomputing system has already been used to train European open models such as Mistral (Mistral AI, 2024) and is now training several Italian LLMs, including Minerva, a project led by the Sapienza University of Rome (Ansa, 2024). Its power for training AI is very significant, and the numbers speak for themselves: training GPT-4 took several months, at a total cost estimated at hundreds of millions of dollars, whereas Leonardo could have completed the same training in 120 days.
Leonardo, however, is not alone. One of the defining features of Bologna
Technopole is its emphasis on collaboration: Technopole serves as a hub
where academia and industry converge, fostering an environment that
encourages innovation and the exchange of ideas. It also intends to play a
crucial role in higher education and scientific research. By providing access
to advanced technological resources, including the Leonardo
supercomputer, Technopole can now equip the next generation of
researchers with the skills and knowledge needed to excel in their fields.
Universities and research institutions leverage these resources for their
research projects and programs.
As it continues to develop, the future of Technopole is meant to be
transformative, not only for the Emilia-Romagna region but also on a
national and European scale. In fact, while Technopole has already
established itself as a critical node for scientific research and technological
innovation, moving forward the expansion of its infrastructure will be
pivotal. Plans for new state-of-the-art laboratories, expanded building
facilities and office spaces, and advanced computing facilities are already
underway. In April 2024, an agreement was signed between the Emilia-Romagna Region, CINECA, and INFN for further development, including the historic “Ballette” building, adding another 25,000 sq. m of new facilities
(Cineca, 2024). These developments aim to provide researchers and
companies with the necessary resources to conduct cutting-edge research
and develop innovative technologies.
One of the cornerstone projects in this expansion is the Technopole's commitment to smart buildings and to sustainable architecture and operation, which aims to set a benchmark for future redevelopments of abandoned former industrial areas. Implementing green building standards
and renewable energy sources will minimize environmental impact and
align with global sustainability goals. As Bologna Technopole evolves, it is
committed to promoting sustainable growth. This commitment is reflected
in its strategic initiatives aimed at boosting renewable energy and green
technologies. In fact, hi-tech scientific and HPC campus sites represent the
forefront of technological and research advancement, but the significant
energy demands of such sites pose a considerable challenge, particularly
given the global emphasis on the need to reduce carbon footprints and
combat climate change. It is imperative to increase the reliance on
sustainable and renewable energy resources for these campuses to ensure
environmental responsibility and long-term operational viability.
The energy consumption of Technopole is already considerable, despite Leonardo having been built with state-of-the-art technology to reduce energy consumption. Leonardo's racks are cooled with tempered water: water enters the supercomputer's cooling circuits at 37°C and exits at 47°C, and is then sent to adiabatic heat-rejection units, called dry coolers, which bring it back down to 37°C. These are not classic refrigeration units but huge “fans” connected to adiabatic batteries, which use the evaporation of water to bring the temperature down. The air conditioning of the machine room, and of all technical rooms in general, is done with chilled water: the water, brought to 19°C by the chiller, is sent to the different rooms to cool them and returns to the chiller at a temperature of 26°C.
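As a rough, back-of-the-envelope illustration of what these temperature differences mean, the heat carried away by the warm-water loop can be estimated with the standard relation Q = ṁ · c_p · ΔT. The flow rate used below is a hypothetical value chosen purely for illustration, not a published figure for Leonardo.

```python
# Illustrative heat-rejection estimate for a warm-water cooling loop.
# Assumption: a hypothetical flow rate of 100 kg/s (not an official Leonardo figure).
flow_rate_kg_s = 100.0          # mass flow of cooling water (assumed)
specific_heat_j_kg_k = 4186.0   # specific heat capacity of water
delta_t_k = 47.0 - 37.0         # temperature rise across the racks (from the text)

heat_rejected_w = flow_rate_kg_s * specific_heat_j_kg_k * delta_t_k
print(f"Heat rejected: {heat_rejected_w / 1e6:.1f} MW")  # ≈ 4.2 MW for these assumptions
```

Even under such simplified assumptions, the order of magnitude makes clear why the energy and cooling strategy is central to the site's sustainability planning.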
Nevertheless, areas such as the Technopole, which house powerful supercomputers and high-impact technological facilities operating continuously 24/7, require substantial power for both computational tasks and cooling systems. Transitioning to sustainable and renewable energy sources is therefore paramount to ensure the future-proof sustainability of Technopole,
and this very issue is at the core of its future development. Moreover,
investing in sustainable energy for scientific and HPC campuses can also
spur technological innovation and interdisciplinary research. The
integration of renewable energy systems with advanced computing
infrastructure necessitates the development of novel engineering solutions
and optimization techniques. This can lead to improvements in energy
storage, smart grid management, and energy-efficient computing
architectures. Such advancements will not only enhance the sustainability
of HPC operations but will also potentially have broader applications,
contributing to the global pursuit of sustainable development.
But most of all, the future of Technopole is intrinsically linked to its
capacity to advance scientific research and innovation across multiple
disciplines. By fostering a multidisciplinary approach, Technopole aims to
break down silos between different fields of study, encouraging
collaboration and cross-pollination of ideas. The focus on big data and AI is
particularly significant. Technopole is positioning itself as a leader in data-
driven research, leveraging AI to tackle complex problems in fields such as
medicine, environmental science, and social sciences. The establishment of
dedicated AI research centers will provide a collaborative environment
where experts from various domains can work together to develop
innovative solutions and where scientific disciplines, such as Linguistics,
Arts, and Humanities, can also take advantage of new technologies.
It is clear that the way Bologna Technopole develops is going to have a profound impact on both the local and the European landscape. By attracting investment, creating high-skilled jobs, and fostering a culture of innovation, Technopole can drive economic growth and enhance the region's competitiveness, while its presence in Bologna will stimulate the growth of related industries and services, creating a ripple effect throughout the local economy and beyond. The demand for skilled professionals in
fields such as data science, engineering, and biotechnology will provide
opportunities for the local universities and workforce and attract talent from
other regions. Additionally, Technopole's commitment to research and
science will benefit the whole European system as outreach programs and
collaborations with schools and universities will inspire the next generation
of scientists, engineers, and innovators. By promoting STEM (science,
technology, engineering, and mathematics) education and providing
opportunities for hands-on learning, Technopole can indeed help build
awareness and a digitally skilled community for the future.
Looking ahead, the long-term vision of Bologna Technopole must focus
on being at the forefront of European scientific research and technological
innovation. This vision is underpinned by several strategic goals, including
the objective to host a future post-exascale HPC center. Building strong
European partnerships and participating in pan-European and global
research initiatives will further expand this vision. Creating a dynamic
ecosystem that supports innovation, entrepreneurship, and the
commercialization of research, along with enhancing the local economy and
community through education, outreach, and job creation, should also be
central to Technopole's objectives.
Nevertheless, despite its indisputable success, Bologna Technopole still faces several challenges that must be addressed to ensure its continued growth and impact. One of the primary challenges is the need to keep pace
with the rapidly evolving field of HPC. As data volumes and complexity
increase, the demand for more advanced computational resources will grow.
This necessitates continuous investment in infrastructure upgrades and
technological advancements. At the same time, the investments needed to
complete the urban regeneration intervention cannot be further postponed,
so that the Technopole site can become fully operational and equipped with
all the necessary spaces for the demand for innovation that comes from all
over Europe.
Another crucial challenge is fostering and maintaining effective
collaboration between academia and industry. While the Technopole has
been striving to succeed in this domain, continuous efforts are needed to
ensure that partnerships remain productive and mutually beneficial. This
includes facilitating knowledge transfer, supporting innovation, and creating an environment that encourages cross-fertilization of ideas.

Looking Ahead: Lessons Learned and Future Challenges
Continued public investment and awareness are crucial to facing these challenges and to maintaining and enhancing Italy's and Europe's position at the forefront of technological innovation.
Among other things, the case of Bologna Technopole shows how
significant public investments can be in advancing HPC and AI capabilities,
and more generally innovation, throughout Europe. Through funding
programs and fostering strategic initiatives and collaborative efforts, public
investment can and does nurture innovation, enhancing research and
development while ensuring Europe's competitiveness in the global
scenario. Sustained and increased public investment is essential to continue
this momentum and address emerging challenges in the field.
As President Ursula von der Leyen said during her 2023 State of the
Union speech (European Commission, n.d.): “Europe is a leader in
supercomputing, thanks to the investments we made in recent years. We
have three state-of-the-art supercomputers in the EU. And we need to put
this power to use. The access to Europe's supercomputing infrastructure that
we are announcing today will help start-ups lead the development and
scale-up of artificial intelligence in a responsible manner, and in line with
European values.”
Innovation is a key driver of competitiveness in the global economy.
Countries that excel in innovation tend to have more dynamic economies,
higher productivity, and better job creation rates. For Europe, enhancing its
innovation capabilities is vital to maintaining its competitive edge against
other global players such as the United States and China.
Public investment plays a pivotal role in the innovation ecosystem,
particularly in areas where private sector investment may be insufficient
due to high risks and long-term horizons. Government funding can
stimulate basic research, which is the bedrock of groundbreaking
innovations, but often requires substantial resources and time before
yielding practical applications. Moreover, public investment can bridge the
gap between fundamental research and market-ready products. By funding
fundamental and applied research and development projects, governments
can help translate scientific discoveries into commercial technologies.
In an increasingly competitive global landscape, the importance of innovation and research and development cannot be overstated. For Europe, continuing public investment in these areas and in developing state-of-the-art infrastructure is crucial not only to maintain its economic stature but also to drive sustainable growth and address societal challenges. Persistent and continuous public investment in innovation and R&D is crucial to enhancing Europe's competitiveness in the global economy because, in the current evolving technological scenario, these investments are key to driving economic growth, advancing societal progress, and ensuring the long-term sustainability of innovation ecosystems. By prioritizing funding
in research and technological infrastructure, fostering collaboration, and
overcoming barriers to investment, Europe can secure its position as a
leader in science and technology. The benefits may extend far beyond
economic gains, contributing to a more sustainable, healthy, and prosperous
future for all Europeans.

References
Ansa. (2024, May 2). Ecco Minerva, la prima famiglia di LLM addestrati da zero in italiano. Agenzia ANSA. https://www.ansa.it/osservatorio_intelligenza_artificiale/notizie/approfondimenti/2024/04/23/ecco-minerva-la-prima-famiglia-di-llm-addestrati-da-zero-in-italiano_ac6cc4b0-6c65-4cef-a7e3-dfca7bc3c2c4.html
Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament. (n.d.). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
Artificial Intelligence Act | Think Tank | European Parliament. (n.d.). https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2021)698792
Cineca. (2024). Tecnopolo: al via un hub per ricerca e innovazione. https://www.cineca.it/temi-caldi/ballette-tecnopolo-hub
Comune di Bologna. (n.d.). Gemello digitale. https://www.comune.bologna.it/notizie/gemello-digitale
Destination Earth. (2024, June 10). Destination Earth. https://destination-earth.eu/
Epifani. (n.d.). National Institute for Nuclear Physics. https://home.infn.it/en/
EuroHPC JU. (2024, May 15). The European High Performance Computing Joint Undertaking (EuroHPC JU). https://eurohpc-ju.europa.eu/index_en
European Commission. (n.d.). 2023 State of the Union Address by President von der Leyen. https://ec.europa.eu/commission/presscorner/detail/ov/speech_23_4426
HPC Cineca. (n.d.). About us. https://www.hpc.cineca.it/about-us/
Leon, J. P. (n.d.). ECMWF's new data centre in Italy. ECMWF. https://www.ecmwf.int/en/newsletter/170/computing/ecmwfs-new-data-centre-italy
Mistral AI. (2024, June 10). Mistral 7B. https://mistral.ai/news/announcing-mistral-7b/
Tecnopolo di Bologna – Ex Manifattura Tabacchi. (n.d.). Programma Regionale. https://fesr.regione.emilia-romagna.it/por-fesr/documenti/asse-1/tecnopolo-bologna-17-settembre-2020
TOP500 List – June 2024. (n.d.). TOP500. https://top500.org/lists/top500/list/2024/06/
UG3.2: LEONARDO UserGuide – SCAI – User Support – CINECA Technical Portal. (n.d.).
17
ARTIFICIAL INTELLIGENCE
AND CYBERSECURITY
Navigating a Double-Edged Relationship
Pascal Steichen

DOI: 10.4324/9781003518495-22

Introduction
Artificial intelligence (AI) and cybersecurity mainly connect in three ways: the good way, where the prowess of AI benefits cybersecurity operations and enhances defenses; the bad way, enabling sophisticated cyberattacks and accelerating criminality; and, last but not least, the ugly way, the abuse or misbehavior of AI systems generating unpredictable or undetectable results, sometimes with positive, but most of the time with negative, outcomes.
Technological developments and the dependency of society on
technology, especially when looking at the dramatic evolution of the threat
landscape during the last decade, amplify the need for AI in cybersecurity.
Its prowess is so appealing that we will have to cope with the
accompanying concerns.
The potential misuse of AI by malicious actors, together with the ethical considerations it raises, are concerns that we will only be able to address by building on: 1) the recent development of multi-disciplinarity in cybersecurity, and 2) learning – and I would even be tempted to say deep-learning – the lessons and solutions of the past, and not only recommendations from digital concepts; we especially have to look at other domains, like the plastics industry.

The Cyber Threat Landscape and How It Demands AI Prowess
Over the past five to ten years, the digital threat landscape has broadened
dramatically, driven by three major developments that have escalated the
need for robust cybersecurity. First, the rise of ransomware, especially
facilitated by cryptocurrencies like Bitcoin, has made it harder to trace and
prosecute cybercriminals. Second, the rapid digitization, spurred by the
Internet of Things (IoT) and by the COVID-19 pandemic, has increased
vulnerabilities, with phishing attacks and compromised IoT devices
becoming common. Lastly, geopolitical conflicts, notably the Russian
aggression against Ukraine, have highlighted the critical role of cyberspace
in modern warfare, further emphasizing the importance of cybersecurity.
Cybersecurity is a global challenge, yet it has very regional specificities; the following overview of the threat landscape is based on my own personal perspective and the experiences I went through during a career of more than 20 years in cybersecurity. Thus, the focus is clearly on Europe and its closely related nations. In South East Asia or Africa, the evolution was probably very different. However, everything is interconnected, and globalization has also affected cybersecurity. With the growing use of AI, we will see convergence in the future.
The first leap was linked to the huge success of what we now know as ransomware (a success from a criminal perspective; for society it is a true disaster). There had been several earlier attempts that were already quite successful. In August 2010, Russian authorities arrested nine individuals connected to ransomware known as WinLock. Victims needed to send a premium-rate SMS, costing around $10, to receive a code to unlock their device. The scam hit numerous people in the region, with the group reportedly earning $16 million. The real boost for ransomware came, however, much later, along with another phenomenon of the time: Bitcoin (Nakamoto,
later, along with another phenomenon of the time: Bitcoin (Nakamoto,
2008). Using digital money such as Bitcoin and other cryptocurrencies for
the ransoms makes tracing and prosecuting the perpetrators much more
difficult.
It was around 2013/2014 that so-called crypto-ransomware first hit the
world. CryptoLocker infected an estimated 250,000 victims and saw
payments of some $27 million within two months, between October 15,
2013, and December 18, 2013 (Blue, 2013).
Since 2016, ransomware has become cyber criminals’ cash machine.
Multiple crime groups further professionalized operations known as
Ransomware-as-a-service (appearing in the early 2020s), even making it
their core business model (Hypponen, 2022). In Europe alone, ransomware
is responsible for more than one third of all cyber incidents. See Figure
17.1.
Figure 17.1 ENISA Threat Landscape 2023. Breakdown of analyzed incidents by threat type
(July 2022–June 2023).

Second, there is the influence of digitization, IoT, and the COVID-19 pandemic, three effects that accelerated each other and catapulted our society into a fully digitally interconnected world. Technology is so deeply rooted in our lives today that most of our activities and businesses are completely dependent on the Internet.
In the third quarter of 2020 alone, in the middle of the COVID-19 outbreak, the number of phishing1 attacks significantly increased. According to a report by the Anti-Phishing Working Group (APWG), phishing attacks quadrupled from July to September 2020. Massive-scale campaigns were seen and were relatively successful, even though, fortunately, only very few people fell victim to the scam schemes. Such
social engineering attacks, as they target people, rely on specific and
personalized information to be most effective. With the release, availability
and ready accessibility of Generative AI systems (such as ChatGPT), this
barrier is now broken. For scammers and other cybercriminals AI is a gold
mine, especially for phishing and other human-targeted cyberattacks.
Another “threat area” that takes advantage of the lack of awareness and
often poor protection of some individuals, especially when compared to
companies or organizations, is the IoT. What is IoT? Take conventional
objects/devices/equipment, such as umbrellas, fridges or heating systems…
fit them with sensors and other electronic chips, and connect them to each
other and to the Internet. The purpose is to exchange data to fulfill their
specific function(s) efficiently and eventually autonomously, while giving
us the comfort of making all those smart devices “work for us” without
needing our intervention or action. Being autonomous needs decision-
making, which needs AI and data, loads of data, to provide context and
understanding for the action(s) needed. IoT devices, therefore, need to be
interconnected via central platforms, which are in their turn interconnected,
and to share and process data and information.
Recent years have seen a massive proliferation of such smart devices,
although predictions have been revised recently (Nordrum, 2016). The
number of IoT devices increased 31% year-on-year to 8.4 billion2 in 2017, when it surpassed the number of human beings on planet Earth. Further forecasts are for them to almost double from 15.1 billion in 2020 to more than 29 billion in 2030 – with eight billion consumer devices to be found in China alone (Statista, 2024).
To showcase the impact of IoT on the threat landscape, the Mirai botnet
is quite illustrative. In late 2016, multiple high-volume and widely used
websites were under DDoS3 attack; big names including GitHub, Twitter, OVH, Reddit, Netflix, and Airbnb, among hundreds of other websites, were rendered inaccessible for several hours. A vast army, allegedly 1.2 million
infected devices (Williams, 2016), of hijacked, Internet-connected devices,
a so-called botnet4 – devices ranging from security cameras and home
routers to baby phones and video recorders – was the source of the attack
(Fruhlinger, 2018). IoT devices turned on their owners and broke a big
chunk of the web. What makes this story even more worrying is that the
source code, which has leaked since then and is now available online for
research purposes, is quite tiny and unsophisticated, showing how badly
protected most connected devices in fact are. Often the most basic cyber
hygiene rule is not followed: change the default password after installation.
And even long after the initial outbreak and general awareness of the Mirai
case, the botnet or similar programs could still be used effectively over a
long period of time (Milić, 2019). In Luxembourg, my team followed this
case as well, struggling to reduce the risk exposure, as, after several
months, only half of the vulnerable devices had been fixed.
Last but certainly not least, the unprovoked and illegal Russian aggression against Ukraine has elevated the cyber topic to one of the top priorities of states and corporations. Long before the aggression itself, both these nations, as many others, were already active in cyberspace. One of the most prominent examples is the NotPetya case, which brought down ports, government agencies, and countless companies around the world. From hospitals to a chocolate factory, it crippled multinational corporations, including Maersk, the pharmaceutical giant Merck, FedEx, Saint-Gobain, and Mondelēz. In each case, the damage was counted in nine figures and hindered operations for periods from weeks up to several months (Greenberg, 2018).
During the conflict, the world was again the victim of collateral damage, multiple times: from the early Viasat attack (Case Study Viasat, 2022) through the now famous Colonial Pipeline case (Easterly, 2023) to regular DDoS incidents (mainly) throughout Europe (ENISA, 2023). Luxembourg was hit very recently as well (Luxembourg, 2024).
Many may cry wolf that cyber war is at our doorstep. In reality, geopolitical tensions show that cyberspace is already being used and abused to do harm, and has been for quite some time now (CyberPeace Institute, n.d.).
Combining both phenomena described above – efficient techniques of
phishing and ransomware, and the massive proliferation of unprotected
connected devices, especially in an Industry 4.0 context – further
aggravates the situation and makes offensive cyber operations accessible
and feasible, even to the least well-resourced states (e.g., the 2014 Sony
Pictures hack was allegedly perpetrated by North Korea).

The Promise of AI in Cybersecurity


In the beginning there is the good, the benefits of AI and its tremendous
promise for enhancing cybersecurity by offering advanced capabilities in
threat detection, incident response, and risk management. By analyzing vast
amounts of data and utilizing machine-learning algorithms, AI-powered
solutions can identify anomalies, predict emerging threats, and automate
responses with unprecedented speed and accuracy. This section explores
key AI-driven applications in cybersecurity, such as behavioral analytics,
threat intelligence, and security automation, highlighting their effectiveness
in strengthening cyber resilience and reducing risk exposure. As we delve
into these benefits, it becomes clear that AI is pivotal in shaping a more
secure digital future.

AI for Security Operations


Today, the main area where AI is helpful for cybersecurity is in operational
security, ranging from threat analysis and forecast, intrusion detection,
vulnerability management, to threat hunting and behavioral analytics.5 With
the enormous amount of data, the multiplication of systems and the growing
threat landscape, detection, reaction and response times are of the essence
for modern cybersecurity tools and strategies. For the most efficient
protection and response, a blend of available AI technologies is needed and
used (Stanham, 2024):

Machine learning: to recognize patterns and learn from past incidents;
Natural language processing: to interpret human language, streamlining the analyst's experience in task execution and democratizing security decision-making across teams;
Data mining: to extract valuable patterns and insights from large datasets;
Predictive analytics: to forecast potential threats based on historical data;
Behavioral analytics: to monitor and analyze user behavior to detect anomalies;
Automated decision-making: to enable quick responses to identified threats.
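As a concrete illustration of how several of these techniques come together in practice, the short Python sketch below uses an unsupervised machine-learning model (an Isolation Forest from scikit-learn) to flag anomalous login behavior. The feature set and data are invented for the example; real behavioral-analytics pipelines rely on far richer telemetry and far larger baselines.

```python
# Minimal behavioral-analytics sketch: flag anomalous logins with an Isolation Forest.
# Requires scikit-learn and numpy; the features and values are purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB downloaded, failed login attempts]
baseline_activity = np.array([
    [9, 120, 0], [10, 95, 1], [11, 200, 0], [14, 150, 0],
    [9, 110, 0], [15, 180, 1], [10, 130, 0], [13, 160, 0],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_activity)  # learn what "normal" behavior looks like

new_events = np.array([
    [10, 140, 0],     # looks like ordinary working-hours activity
    [3, 9000, 12],    # 3 a.m. bulk download with many failed logins
])
for event, label in zip(new_events, model.predict(new_events)):
    status = "ANOMALY - investigate" if label == -1 else "normal"
    print(event, status)
```

The point of the sketch is the division of labor: the model encodes past behavior, and automated decision-making then triages new events far faster than a human analyst could on their own.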

Although the use of automation and AI-like techniques in cybersecurity is not new – it has seen steady growth since the early days (1990s), using expert systems6 and rule-based mechanisms for intrusion detection – the new developments and techniques in AI have tremendously enhanced security products and solutions. Gartner predicts for 2024 that AI will completely disrupt, in a positive way, the global cybersecurity market.
The huge global efforts and investments in cybersecurity are, of course, connected to the growth and acceleration of cyber threats, as Chief Information Security Officers (CISOs) and other defenders seek help from AI technology in managing them. The uptake of specific developments and
investments of AI-powered solutions come with a couple of additional non-
technical, market-specific challenges and opportunities, as recent studies
show. “The underpenetration of cybersecurity products and services is the
result of the below-target adoption of cybersecurity products and services
by organisations […]. Cybersecurity providers must meet the challenge by
modernising their capabilities and rethinking their go-to-market strategies”
(Bharath Aiyer, 2022).
This is exactly where AI comes in very handy and where we see those
needs and challenges are driving cybersecurity innovation strongly, as an
analysis of cybersecurity patents in the United States shows.
Alongside this, more and more companies are hiring dedicated cybersecurity personnel, especially CISO7 profiles. The more competence and capacity a company has internally, the better attacks are detected, the more clearly the scale of the problem is understood, and the bigger the need for dedicated cybersecurity solutions becomes. This trend has now reached the mid-market, and more and more small- and medium-sized companies are seeking to implement cybersecurity solutions and hire cybersecurity professionals.
Over the last couple of years, and particularly in Europe, we have been dealing with a tsunami of cybersecurity regulation that will impact, it is hoped in positive ways, companies and organizations of all types and sizes.
From the already well-known GDPR (General Data Protection Regulation),
to NIS2 (Network and Information Security Directive 2), CSA
(Cybersecurity Act), CRA (Cyber Resilience Act), CER (Critical Entities
Resilience Directive), and there is more to come. The European market sees
a real impetus to innovate and reshape its cybersecurity services and
solutions.
Last but certainly not least, a huge threat looms over the cybersecurity market and is even growing, as recent studies highlight: “The cybersecurity workforce gap has reached a record high, with 4 million professionals needed to adequately safeguard digital assets” (ISC2, 2023).
In my opinion, one of the major challenges in cybersecurity, which is an opportunity at the same time, is the talent and skills gap. It is a huge problem for employers and organizations seeking to best prepare against the growing threat, but a fantastic avenue for the younger generation to plan for and join this expanding and fascinating ecosystem that is cybersecurity. Combined with specialization in AI, IoT, and related disciplines, we will see booming and very promising economic development in cybersecurity in the near future.
Many indicators are already quite clear today, like the major rise in demand
for (log) monitoring expertise and solutions.
From the perspective of cybersecurity providers – the supply side of the market – all this looks very promising. If companies manage to embrace the power of AI in their future products and services – considering that the necessary resources are available and that, with the rapidly evolving threat landscape, there are no longer good reasons not to invest in AI to strengthen cybersecurity – the cyber future will be a bright one.
This is, however, a little too rosy a projection. A recent outlook analysis from the World Economic Forum traces a more nuanced trajectory of the global cybersecurity market. “Looming cyber inequity amid a rapidly
evolving tech landscape emphasises the need for even greater public-private
cooperation” (WEF, 2024).
A stark divergence has emerged between already cyber-resilient organizations, which continue to invest heavily and adopt paradigm-shifting technology like AI, and those that are still struggling – very often small- and medium-sized enterprises. This is, of course, connected to the contours of the threat landscape, as already mentioned, but macroeconomic trends, including the rising costs of access to innovative cyber services, tools, skills, and expertise, constitute an ever-higher entry barrier for late adopters.
A further aggravating factor is the interdependence due to the transition
to an all-interconnected world and a need to understand, align, and manage
third party and supply-chain threats and vulnerabilities.
All the above leads to a necessity for collaboration and cooperation at the public-private level; private-private and public-public interactions are also key to collectively addressing the many challenges and embracing the opportunities that AI and other emerging technologies present.

Cybersecurity is a team sport; only together can we make the future cyber secure and safe.

How Can AI Strengthen Cybersecurity Training


AI comes in two distinct mechanisms or categories: discriminative AI, closely related to expert systems, which analyzes and compares a given question or input against a huge knowledge base to classify, identify, explain, or provide an expected result; and generative AI, the more recent and novel mechanism using neural networks,8 which produces, creates, or generates something new – previously non-existent content or information – following a request or expressed wish.
GenAI (or generative AI) is the new frontier of cybersecurity, in particular for its ability to create and simulate sophisticated defense and attack scenarios – a capability that allows organizations to train, exercise, prepare, and anticipate against a wide range of potential attacks, intrusions, and other threats. Combined with the aforementioned automation prowess of AI, the integration of generative AI into organizations' cybersecurity processes is seen as a real game changer. It will provide the needed layer of intelligence that will strengthen the overall cybersecurity posture.
Such GenAI technology, however, needs huge investments and enormous amounts of data and machine-learning capability. As depicted above, this further fuels market inequities and enlarges the gap between big corporations and mid- or small-sized companies, and likewise for nations, be it the north-south divide or the big-small difference, or both.
To conclude this section, it is understood that AI is here to stay and is inseparable from cybersecurity. The 2024 edition of one of the most renowned conferences in cybersecurity, the RSAC conference in San Francisco, distinctly underscored the transformative impact of AI on cybersecurity. AI is at the forefront of the digital economy's evolution and will empower those – humans or organizations – that know how to leverage it to do the job better; others will have to face the threat of being “replaced” (Bradley, 2024).

The Threat of AI-Powered Hackers


In the ever-evolving landscape of cybersecurity, the ominous shadow cast
by the rapid advancement of AI looms large. While AI heralds immense
potential for progress, its darker side emerges with equal force. Malicious
actors harness AI's power to amplify their nefarious activities, from creating
and disseminating malware to orchestrating sophisticated social engineering
attacks. Recent months have witnessed a surge in cyber threats as hackers
exploit AI's capabilities to evade security measures with unprecedented
agility. GenAI systems like ChatGPT can become potent tools, offering
unpredictable and creative solutions prized by criminals. Spam, phishing,
and deepfake attacks escalate, leveraging AI's automation and adaptability
to bypass defenses. Yet, amidst this bleak narrative, both defenders and
adversaries grapple with the complexities of AI employment, offering a
glimmer of hope in the fight for cyber resilience.
Where there is light there is also darkness. Hand in hand with the good side of AI goes the bad: the huge potential for malicious actors to use AI to enhance their capabilities, to create and spread malware, analyze behavior to abuse it via social engineering attacks, find and exploit vulnerabilities with unprecedented efficiency, or simply propagate dis- and misinformation to influence political or societal decisions.
In recent months, we have seen such threat actors start to specialize in
using AI to empower their mischief and significantly increase the volume
and frequency of attacks and intrusions (Deep Instinct, 2024). Hackers are
scaling up their cyberattack capabilities to evade security controls and find
new vulnerabilities, all at an unprecedented pace with sometimes
devastating impacts.
The use of GenAI systems, such as ChatGPT, is particularly prominent: criminals do not have a specific result expectancy, as one would expect from a protection system; on the contrary, the sometimes unexpected and creative solutions or outcomes of a generative AI system are of high value to them. Such unpredictable or unknown behavior and attack vectors are much harder to protect against. The most common way AI is leveraged for cybercrime is, however, linked to the automated, untiring, and multitasking prowess of AI.

Spam, Phishing, and Other Email-Related Compromise


More and more attackers are using generative AI to enhance their business
email compromise and similar techniques to create better, more legitimate-
looking phishing emails as well as to generate fake personae to execute
their fraud campaigns. According to the FBI, $43 billion in global losses were identified between June 2016 and December 2021 (FBI, 2022).
On the other hand, machine-learning techniques are used to get past filters and other protection systems. Spam filters, for instance, analyze an email and, based on a set of parameters, give it a score; the higher the score, the more likely it is a scam. Such legitimate tools are being abused more and more often to fine-tune perpetrators' attacks and make them more successful.
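To illustrate what such score-based filtering looks like in its simplest form, the sketch below trains a tiny naive Bayes classifier and reports a spam probability for a new message. The training examples are invented for this example; production filters combine many more signals (sender reputation, URLs, headers) than message text alone, which is precisely why attackers probe them to learn which features drive the score.

```python
# Minimal spam-scoring sketch: a naive Bayes text classifier from scikit-learn.
# The tiny training set is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Your invoice for last month is attached",         # legitimate
    "Team meeting moved to 3pm tomorrow",               # legitimate
    "You won a prize! Click here to claim now",         # spam
    "Urgent: verify your account password immediately", # spam
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam

scorer = make_pipeline(CountVectorizer(), MultinomialNB())
scorer.fit(messages, labels)

new_mail = "Urgent prize waiting, click to verify your account"
spam_probability = scorer.predict_proba([new_mail])[0][1]
print(f"Spam score: {spam_probability:.2f}")  # higher score = more likely a scam
```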

Neutralizing Defenders’ Tools and Platforms


Expanding on the spam filters case from above, we see a trend in criminals
using “our” tools and platforms – especially those that are AI-powered and
programmed to take their own decisions to block or let through attempted
attacks, such as anti-virus or intrusion prevention systems – to tweak and
train their malware until they can evade detection. In particular, off-the-
shelf protection software is vulnerable to such abuse and might become
obsolete faster than imagined.
Another evasion technique is password guessing or brute-forcing.9 AI
comes in very handy here as well, especially to make faster, fewer, and
better guesses to increase the success rate dramatically.
Better Targeted Attacks through Deepfakes
The most frightening use of AI is its capability of generating deepfakes,
especially involving real humans who can be targeted. Not only voice, but
also facial expressions and gestures are nowadays commonly reproduced to
manipulate people to give thieves access to a system or execute an action
benefiting the malicious actor.
Machine learning allows hackers not only to customize and optimize messages, generate realistic-looking photos, and so on; it can also be harnessed to explore, analyze, and predict a target's behavior by exploiting the massive amount of data available on social networks and other public platforms. So-called spear-phishing attacks craft perfect interactions using
AI-backed social environment and contextualization capabilities. Humans
will have a hard time detecting any malicious intent or spotting a detail that
allows them to think that there is something untoward. AI can produce a
perfect illusion.
All this paints quite a bleak picture for cybersecurity. However, the dark side struggles with challenges similar to ours. The lack of expertise, and the complexity of using machine learning and dedicated generative AI systems efficiently and persistently, keep the use of AI by threat actors still low.

Facing the Unknown, the Ugly Side of AI


In the realm of AI, where innovation intertwines with risk, lurks the enigmatic territory of the unknown: the ugly side of AI. Amidst the
transformative promises of AI there are unanticipated outcomes,
particularly evident in generative AI systems like ChatGPT, where
unpredictability reigns supreme. In cybersecurity, this unpredictability
poses a grave concern, as AI may inadvertently breach systems, exploiting
undetected loopholes. Moreover, AI's capacity for learning, unbound by
human constraints, engenders ethical dilemmas, notably AI bias, with far-
reaching implications for societal discourse and democracy. This section
delves into the ethical considerations, challenges, and imperative human
oversight necessary to navigate the treacherous waters of AI integration in
cybersecurity and beyond.
With the positive impact of AI come the negative uses and abuses of AI – the ugly side of AI: the unexplainable outcomes that recent AI systems, especially those using generative techniques, produce. There have been multiple alarming media reports of ChatGPT producing unexpected, sometimes “creative,” responses to requests (Edwards, 2024). In a
cybersecurity context, this would mean that “[…] an AI may inadvertently
hack a system by finding a “solution” that its designers never intended. This
kind of unintentional hack is especially troubling because it might occur
and remain undetected” (Oberhaus, 2023).
Such considerations associated with the integration of AI in cybersecurity emphasize the importance of ethical principles, transparency, accountability, and human oversight of AI-driven (security) operations.

Emerging Threats and Challenges Induced by AI Systems


At the end of the day, an AI system is nothing more than a very sophisticated software program. Even though it is based on algorithmic procedures or neural networks, it is software, and software is vulnerable. The famous Dutch programmer, physicist, and software engineer Edsger W. Dijkstra put it quite clearly: “Defect-free software does not exist.”
And it is no different with AI software. Such defects or vulnerabilities are the prime target of cyber criminals as they enable them to work around the initial purpose of the software and, in the best case for them, make that software do what they want. Malevolent autonomous AI agents will generate devastating damage, and detecting such compromise might be very difficult, especially when protective, AI-powered cybersecurity tools are turned upside down. The concept of the enemy within will take on a very different meaning. In fact, we do not yet understand the real implications of such scenarios, as we have not seen them in real life. However, without very close attention, involving all stakeholders and making them accept the necessary responsibility, we will soon have to face the ugly and yet unknown face of AI.
Furthermore, what makes AI different from usual computer programs is its learning capacity – as humans learn from experience – the more data it processes, the more adroit, some would say intelligent, the system becomes.
Unlike humans, however, AI is not constrained by norms, values, and
assumptions, and therefore might come up with results or outcomes that
would be inconceivable by humans. This, however, depends very much on
the type, quality and content of available data that is used to train a specific
AI system. Criminals could specifically craft learning data to further
enhance AI's capability to find and abuse vulnerabilities and weaknesses in
protection software.

Ethical Considerations and Human Oversight


One of the biggest challenges – and I would even dare to call it a major threat – is AI bias.10 Where does the bias come from? Well, it comes from us. Bias can be positive, when it comes from norms, values, experience, or beliefs; on the negative side – and the word bias is most often used in such a context – the source is discrimination, prejudice, stereotypes, or other inequalities. Examples of AI bias – like the now disused machine-learning-based recruiting engine used by Amazon and others, which produced bias against women (Iriondo, 2018) – show us that when discriminatory data and algorithms are baked into AI models, the models, in many cases unintentionally, deploy biases at scale and amplify the resulting negative effects.
Bias can be found in the training data or in the algorithm, which has been created or chosen by humans who are inevitably influenced by experience, preferences, environment, or sometimes simply by the moment. Think about days when you are motivated and thinking very positively compared to bad days when everything seems to be failing and you feel a certain pessimism – all this is natural, but when it becomes part of an AI system, the persistence and automation factor can be so aggravating that the predictions or results the algorithm produces can be disturbing or even harmful (Tolga Bolukbasi, 2016). “AI can make decisions that affect whether a person is admitted into a school, authorised for a bank loan or accepted as a rental applicant” (Boutin, 2022).
Addressing the bias question inevitably goes together with ethical considerations that we have to build into AI systems by design, training them on data that is as unbiased as possible, and continuously testing and evaluating the bias level of implemented systems, for instance using blind taste tests (Uzzi, 2020).
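One simple way to continuously test the bias level of a deployed model is to compare outcome rates across groups, a check often described as demographic parity. The sketch below is a minimal, hypothetical example with invented decision data; real audits use richer fairness metrics, larger samples, and statistical testing.

```python
# Minimal bias audit: compare positive-outcome rates across groups (demographic parity).
# The decisions below are invented data for illustration only.
from collections import defaultdict

# Each record: (group label, model decision: 1 = approved, 0 = rejected)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 0), ("group_b", 1), ("group_b", 0),
]

totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    positives[group] += outcome

rates = {g: positives[g] / totals[g] for g in totals}
print("Approval rate per group:", rates)

# A large gap between groups is a warning sign worth investigating.
gap = max(rates.values()) - min(rates.values())
print(f"Demographic parity gap: {gap:.2f}")
```

Checks like this do not prove a system is fair, but they make drift toward harmful outcomes visible early enough for human oversight to intervene.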

Fake News: A Threat Not to Be Neglected by Cybersecurity Professionals
One real-world phenomenon that has unfortunately been quite prominent for a couple of years demonstrates very clearly the impact and damage of AI bias: fake news and especially deepfakes. I am confident I do not need to spell out what fake news is; we are confronted with such incorrect information, unfortunately, on a daily basis. The proliferation and extensive use of new types of “news platforms” – I prefer to describe them as platforms that massively redistribute the thoughts, ideas, and expressions of interest of individuals, camouflaged as information and truth, and that very extensively use AI to handle the massive amount of data that needs to be processed – have reshaped the media landscape and the way we consume information. We are all aware of the massive amount of information available and the tremendous pace at which it is being produced and consumed. Often there is no time to check sources or reconsider what has been read or heard in order to form our own judgment; in fact, opinions are mostly forced upon us. We have unlearned how to process and analyze information and understand what is true and what is not; we take things for granted without validation or questioning. And AI is not innocent in this.
More and more voices state that AI threatens democracy (Sarah Kreps, 2023), and when we look into the deep analysis of cases like Cambridge Analytica (Guardian, 2019), Brexit, and recent elections in the United States, there is clear evidence that AI has an impact on how we understand and see leaders and geopolitics. Since the beginning of time, we have known that communication and information are power, and that such power can change the course of history. With AI we have created a “black box” that can take its own, sometimes unexpected and unexplainable, decisions, thus completely removing the transparency that is so crucial for democracies. For instance, without a human whistle-blower (Christopher Wylie), the Cambridge Analytica case would probably never have been detected.
We have to make sure to enforce human oversight over AI. On May 21, 2024, the Council of the European Union gave its final green light to the first worldwide rules on AI (Council of the EU, 2024), a big step in the right direction. For once, regulation took the lead and showed us tech people the path to follow.

Lessons Learned from a Comparable Paradigm


The comparison of AI to oil as a revolutionary catalyst is common, highlighting its transformative impact on our economy and society. Another perspective, however, likens data to uranium, with AI as the powerful but potentially hazardous machine that processes it. This section explores these analogies, drawing parallels between AI and the evolution of another, more balanced paradigm: plastic. Both have become ubiquitous in modern life, bringing significant benefits alongside profound challenges. By examining the history and implications of plastic use, we can glean valuable lessons for managing AI's development and integration, balancing its potential with the need for responsible oversight and regulation.
Many voices we hear these days – and this is mainly related to the
“wonders” that recent solutions like ChatGPT can produce and show to the
world – promote AI as being the catalyst for the next “industrial”
revolution. “Information is the oil of the 21st century, and analytics is the
combustion engine” (Sondergaard, 2015).
Further, it is argued that AI will not only disrupt our economy, but also
our society, our way of living. We hear things like: “A year spent in
artificial intelligence is enough to make one believe in God” (Perlis, 1982).
Of course, there are also less positive voices, even pessimistic ones, which go to the opposite extreme and paint a black picture, likening data rather to another element: uranium. The processing machine that uses this uranium-like data, namely AI, is then comparable to nuclear power. “Where there is data smoke, there is business fire” (Redman, 2023).
The analogies with oil or uranium are interesting in the sense that they tell us that data is like a new fuel that powers today's machines and infrastructure, much as oil fueled the Industrial Revolution. The phase we are in nowadays is linked to the robotization of our world, and it is not far-fetched to bring AI into that picture. Data alone, without processing capabilities, is worthless; AI is the real engine of Industry 4.0. However, we are here in a very different area of AI from the technology found in platforms like ChatGPT: this is the IoT, which we discussed earlier. And IoT comes with deterministic AI focused on strengthening automation to empower global value chains. The key aspect of the fourth industrial phase, or Industry 4.0, is cyber-physical systems, where processing needs to be close to the actual activity – in a robotic arm or as part of a sensor – a field that we call edge computing.
Let us not widen the scope of our discourse too much, and go back to the oil comparison. The essence of this comparison is that AI is seen as revolutionary and as positively impacting our society. On the other hand, the pessimists among us see a more dangerous and dark impact of AI on our society. They came up with the “nuclear bomb kind of AI” scenario. Fueled
mainly by the inexplicable and hard to grasp prowess of generative AI, they
warn against the extensive use of such systems and how they could
negatively impact our society. “I visualise a time when we will be to robots
what dogs are to humans, and I’m rooting for the machines” (Shannon,
1987).
Well, as often, both sides are partly right here. AI has its good and its bad use-cases and solutions, as well as some ugly ones, as we have already explored when looking at the specific cybersecurity aspects. In my opinion we need a more balanced approach, and to ensure I am not misunderstood, I accept that oil too is not purely positive and beneficial; it clearly comes with some challenges (e.g., the CO2 footprint); nor is nuclear only negative (e.g., the waste issues) or only used in weaponization. Nuclear power has played, and is still playing, a crucial role in energy diversification. A paradigm that combines both the enormous potential and especially the multi-purpose nature of AI, but also comes with some intrinsic and underlying challenges that are less directly visible than those of uranium or oil, is plastic.
When we look at its history and development, as well as the most
recently discovered and researched implications of plastics on our
environment and health, there are some similarities with the (r)evolution of
AI technology. It is, in fact, an evolution rather than a revolution. Since its
beginnings, AI has been a bit like plastic, mostly looming in the
background – within the technicians or researchers’ communities – popping
up in public headlines once in a while with some feat, like Garry Kasparov
being beaten at chess by Deep Blue, a machine (Chess.com, 2018). Each AI
innovation, even the very smallest, has, however, continuously flowed, often unnoticed, into overall technological developments and contributed to today's digital world.
We now realize that in almost all digital processes, AI technologies
(especially expert systems and machine learning) play a crucial role or are a
core component – like with plastics, where “suddenly” we noticed that in all
packaging, or even in products themselves, for example, toothpaste
containing microbeads, there is plastic, or plastic-based elements
(Prabhakar, 2020). Since its discovery back in the mid-19th century, when Alexander Parkes created the first synthetic plastic, called Parkesine – although it was not widely adopted due to its high cost, and we had to wait until 1907 for the breakthrough that came with the invention of Bakelite by Leo Baekeland – the ubiquitousness of plastics in our manufacturing industries has reached such a level that it is challenging to find areas of modern life that have not been impacted by it (UK Science Museum, 2019). Furthermore, we were, I think, all flabbergasted by the discovery of Captain Charles Moore and his team's initial research and reports on the extreme plastic pollution of our oceans, back in the early 2000s.
A similar uproar went through the media when OpenAI released its GPT-3 (Generative Pre-trained Transformer 3) AI model in 2020 and, in late 2022, a user application built on its successor models, known as ChatGPT. In only a couple of months ChatGPT reached over 100 million users, making it the fastest-growing consumer application ever (The Guardian, 2023). It changed the way we all see and think of AI, its powers and problems; it generates not only useful content, but also fears and uncertainty about the future, and anxiety about the impact on our jobs and on human decision-making. We feel the ubiquitousness of AI coming, and fortunately we are already reacting to it. For once, regulation defines the pace of technology: the EU's AI Act is the world's first comprehensive AI law (European Parliament, 2023). Society seems to have learned the lessons of the past, especially when compared with areas, like plastics, where it took roughly a decade to see impactful regulation, and even then addressing only sub-aspects of the challenges identified, such as marine plastic pollution.
In summary, while plastics and AI are both ubiquitous in modern life,
they have distinct impacts on our environment, health, and society. Plastic
poses direct challenges related to pollution and waste management, while
AI raises broader concerns about ethics, privacy, and algorithmic
governance. Both paradigms require holistic approaches and collaborative
efforts to mitigate their negative impacts and promote sustainability and
responsibility. Such an approach needs the involvement of all relevant
stakeholders, especially focusing on a multi-sector approach, and reach
through a multitude of disciplines and expertise. In a common international
effort, we will manage to:

Assess and raise awareness about the societal impacts of AI technologies, conduct comprehensive life cycle assessments, develop educational campaigns on AI-related challenges, and implement mandatory ethical and privacy impact assessments for AI-powered processes.
Ensure responsible AI development and deployment, establish regulatory frameworks for AI governance, and implement extended producer responsibility policies covering data privacy, algorithm transparency, and product life cycle management.
Support sustainable and socially impactful AI innovations while fostering interdisciplinary research collaborations to address the complex challenges of AI systems.
Establish training programs and collaborate with media and influencers to educate businesses, policymakers, and the general public on the social implications and ethical practices of AI.
Develop metrics to monitor AI impacts, conduct regular evaluations of interventions, and establish public participation mechanisms to ensure transparency, accountability, and effectiveness in mitigating AI's negative effects.

By adopting such a holistic approach – learning from the plastics industry, and maybe developing AI technology that contributes to addressing its specific challenges – that integrates regulatory, technological, educational, and collaborative strategies, we can work towards mitigating the negative impacts of AI and embracing the many opportunities, while promoting cybersecurity and responsible practices for the future.
Reinventing the I in C.I.A.
In the dynamic landscape of cybersecurity, the rapid advancement of AI has
ushered in both promise and peril. As traditional defense mechanisms
struggle to keep pace with evolving cyber threats, a paradigm shift is
imperative. Enter “Reinventing the I in C.I.A.,” where “I” signifies the
identification of AI systems. This approach seeks to integrate advanced AI
identification mechanisms into cybersecurity principles to bolster defenses
against malicious AI activities. By discerning between benign and harmful
AI behavior, this strategy enables swifter threat responses and mitigates the
risk of AI-driven cyberattacks. This section explores the transformative
potential of AI identification in safeguarding digital ecosystems and
ensuring the secure deployment of AI innovations.
In cybersecurity, the C.I.A. concept refers to the three core principles of
information security: Confidentiality, Integrity, and Availability. These
principles serve as the foundation for designing and implementing effective
security measures to protect data and systems from unauthorized access,
alteration, or destruction.
The first principle, Confidentiality, is probably the best understood, also by non-cyber specialists. It ensures that information – be it sensitive, important or simply restricted to certain groups of people – is accessible only to authorized users and entities. Confidentiality involves implementing measures such as encryption, access controls, and user authentication to prevent unwanted access to, and use of, data. This principle aims to maintain the privacy and secrecy of information, protecting it from disclosure to unauthorized parties, including during transit via an unprotected communication channel (e.g., an email) or over an open network (e.g., the Internet).
Integrity aims to maintain the reliability and authenticity of information, keeping data accurate, consistent, and trustworthy throughout its life cycle. This principle involves
safeguarding information against modification or corruption. Traditionally,
measures such as data validation, checksums, and digital signatures are
employed to detect and prevent unauthorized changes to data. In our
specific AI context achieving trustworthiness involves a deeper analysis
than for general data exchange. In fact, the source of the creation of
information, data or content can no longer be trusted by default. This is
where the concept of identification comes in, complementing the Integrity
principle and adding novel types of measures and security techniques to
address the specific protection mechanisms in an AI-powered environment.
But more about this further below.
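As a minimal illustration of the traditional integrity measures mentioned above (checksums and digital signatures), the short sketch below – in Python, with purely illustrative data – verifies a payload against a previously stored SHA-256 checksum; any modification of the content changes the digest and is therefore detectable:

    import hashlib

    def checksum(data: bytes) -> str:
        # SHA-256 digest of the payload; any change to the data changes the digest.
        return hashlib.sha256(data).hexdigest()

    original = b"contract v1.2 - agreed terms"
    stored_digest = checksum(original)          # recorded when the data was created

    received = b"contract v1.2 - agreed terms"  # the payload obtained later, or from a third party
    if checksum(received) == stored_digest:
        print("Integrity check passed: content is unchanged.")
    else:
        print("Integrity check failed: content was modified or corrupted.")

A digital signature works on the same principle, with the digest additionally signed by the originator's private key, so that the origin, and not only the content, can be verified.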
The third principle, Availability, aims to maintain the responsiveness and reliability of systems, ensuring that data and systems are accessible and usable by authorized users or entities whenever needed. This principle involves
implementing measures to prevent disruptions or downtime that could
impact the availability of resources or systems that hold the information or
content that needs to be accessed. Redundancy, fault tolerance, and disaster
recovery planning are employed to ensure continuous access to data and
services, even in the face of unforeseen events or attacks.
Overall, the C.I.A. concept provides a framework for prioritizing and
implementing security measures to protect information and systems from a
wide range of threats and risks. By addressing confidentiality, integrity, and
availability, organizations can effectively safeguard their assets, resources
and services, and maintain the trust and confidence of stakeholders.
Identification
Previous sections showed that AI systems, when improperly managed or
maliciously exploited, pose significant risks to privacy, data integrity, and
the overall cybersecurity of our society. Inspired by the plastics industry,
where huge efforts have been put into awareness as well as the labeling of
products – for example, if plastic is present and which type, especially for
recycling purposes – the concept of identification of AI systems focuses on
recognizing and verifying AI behavior and signatures within digital
environments. This novel approach not only aims to distinguish between benign and potentially harmful AI activities – like the most frightening deepfakes or AI-driven decision systems – it will also empower consumers to evaluate the trustworthiness of content, by knowing what is AI-generated and what is not. Further, such identification measures – if integrated into
protective systems and processes – enable quicker responses to threats and
reduce the risk of AI-driven cyberattacks.
Implementing AI identification as a core component of cybersecurity
represents a paradigm shift. It transforms the traditional Confidentiality,
Integrity, and Availability model by embedding a proactive layer of AI
oversight to strengthen the trustworthiness of systems, services, and
content. This ensures that, as AI technologies advance, our defenses evolve
in tandem, offering robust protection against misuse and ensuring the secure
deployment of AI innovations.
Through clear, comprehensive and continued labeling of AI systems or
AI-generated content, users take back the control and decision-making
power as to if, when and why they can trust a system or content. One of
today's leading thinkers, Yuval Noah Harari, puts it quite clearly: we need to be able to differentiate AI suggestions from human intelligence recommendations (Harari, 2023).
Using AI-Tagging to Tag AI
One idea for implementing this identification principle is to use the power of AI itself. In particular, addressing the issue of deepfakes
with AI tagging11 involves creating a system that can identify and label AI-
generated content reliably. To achieve this, a comprehensive approach is
needed.
Before we can develop an AI-identifying AI model, we need to better
understand how AI generates content. When it comes to textual content, the
large language models everybody is currently talking and writing about, are
a good starting point. Research already exists on this topic, looking to
identify specific detection methods of machine-generated texts (Evan Crothers, 2023). More effort needs to be made, though, for other types of
content, like video, image, sound, etc.
Understanding and thus being able to identify an AI mechanism comes down to knowing how it was trained and from what data it learned. An increasingly popular approach for training deep-learning models is to use synthetic data12 in combination with real-world data, thereby accelerating and more precisely targeting the training of an AI model. Particularly in the area of computer vision the use of synthetic data for deep learning has become very popular. A study from 2019 suggests that using synthetic data is “one of the most promising general techniques on the rise in modern deep learning, especially computer vision” (Nikolenko, 2019). Another huge advantage of such techniques is the privacy guarantee: research has shown that even a few database queries, like sums or averages over subsets of the data, can generate strong privacy violations (Irit Dinur, 2002). With synthetic data, there is no real data of real people, so per se privacy cannot be directly impacted. Data are based on simulations or projections of typical or specific behavior or situations. Such trained models, as research shows, can produce very useful outcomes and results, and Gartner predicts that, by 2030, synthetic data will overshadow real data in AI models (Leinar Ramos, 2021).
“There is no silver bullet for detecting AI-written text” (Solaiman, 2022).
Neither in detecting AI-generated texts nor in solving AI-bias issues will a
specifically trained AI model bring the magic solution. A trained human
eye and contextual knowledge will continue to prevail when it comes to
identifying the trustworthiness of information or the source of that
information or content. Even though AI is fooling us already (Elizabeth
Clark, 2021) and will become better and faster, the good news is that
training people has shown promising results (Heikkilä, 2022). Initiatives
like https://roft.io/ from Daphne Ippolito et al. (Dugan, 2023) are good
ways to raise awareness and provide tutorials and training material to
recognize deepfakes and to highlight the importance of verifying content
and the confidence put in its sources, be it AI or not.
The future will tell, but I believe that such techniques will lead to new
ways of using AI in a beneficial manner, helping us to address the ugly
threats (see above) of AI-generated or -influenced content or decisions.
After the detection comes the identification phase, to be able to
persistently identify that a certain content, system or service is AI-powered
or fully AI-generated. A path I would like to explore here is how tagging or
watermarking of AI-generated content – in such a way that it makes it easy
for users to identify generated content, bias, or simply know that AI
technology was used – can restore the power and choice to believe or take
for granted what is shown to us humans. Such a tag needs to be easy to
implement or integrate into AI models and should be difficult to remove or
alter without damaging the content.
The most straightforward approach to such watermarking technology is to embed the mark during content creation. This obviously requires access to the
AI model itself and can only be performed by technology providers or
platform operators. When looking at promising examples in the area of
imagery where watermarking results are robust, we see clear signs of the
usefulness of such mechanisms (Chappelier, 2024).
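To make the embedding idea concrete, the toy sketch below (Python with NumPy; all names and data are hypothetical) writes a short identifier into the least-significant bits of an image. It is only an illustration of the principle: the robust schemes cited above are designed to survive compression, cropping, and re-encoding, which this naive approach is not.

    import numpy as np

    def embed_tag(image: np.ndarray, tag_bits: list) -> np.ndarray:
        # Toy watermark: store the tag in the least-significant bit of the
        # first len(tag_bits) pixel values of an 8-bit image.
        marked = image.astype(np.uint8).copy().ravel()
        for i, bit in enumerate(tag_bits):
            marked[i] = (marked[i] & 0xFE) | (bit & 1)
        return marked.reshape(image.shape)

    def read_tag(image: np.ndarray, n_bits: int) -> list:
        return [int(v & 1) for v in image.astype(np.uint8).ravel()[:n_bits]]

    # Example: tag a synthetic 8x8 grayscale image with the bit pattern 1, 0, 1, 1.
    img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
    tagged = embed_tag(img, [1, 0, 1, 1])
    assert read_tag(tagged, 4) == [1, 0, 1, 1]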
The second method, which is implemented after the content has been
produced and should be able to tag generated data, even from closed-source
or proprietary models, comes with the caveat that one first needs to detect
that content is AI-produced (detection mechanisms have been discussed
above). Furthermore, these methods may not be applicable to all types of
content (e.g., plain text).
Identification, or any underlying tagging system, is only as strong as the trustworthiness of its label – which will most often come in the shape of metadata – provided for checking, in our case the fact that AI was used to produce or process a given content or system. Time is especially important in our identification concept: the metadata label must include a timestamp, so that users can verify not only whether, but also when – for how long and until when – a given content or system used or uses AI.
Another technology that could come in handy here – one we have already talked about earlier, and which is often abused by threat actors in cyberattack scenarios – is blockchain, or the related, high-potential technique of tokenization.13 Blockchain, as its name suggests, consists of blocks of data linked into an immutable, digital chain. This chain is then stored in a decentralized environment, in which each block's information is verifiable by each participating computer or user. This dispersed structure is an intrinsic design feature that helps to ensure trust, validity, and transparency.
In our use case, the metadata tag – the result of the identification process, providing visibility about AI-generated content – is stored in a specifically designed blockchain. This decentralized, timestamped, and tamper-proof method – three intrinsic features of a blockchain – provides a convenient way to verify the authenticity and durability of the identification information. Each tag, or token, registered on the blockchain gives users the power to check and verify whether a certain content is AI-generated, when, by whom and, where applicable, for how long.
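To illustrate how such a registry could work in principle – as a conceptual sketch only, not a production blockchain, and with every field name hypothetical – the Python fragment below chains identification tags (content hash, an "AI-generated" flag, producer, and a timestamp) so that tampering with any registered tag invalidates the chain:

    import hashlib
    import json
    import time

    def _digest(record: dict) -> str:
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    class TagChain:
        # Append-only, hash-linked registry of AI-content identification tags.
        def __init__(self):
            self.blocks = []

        def register(self, content: bytes, ai_generated: bool, producer: str) -> dict:
            tag = {
                "content_hash": hashlib.sha256(content).hexdigest(),
                "ai_generated": ai_generated,
                "producer": producer,
                "timestamp": time.time(),  # when the tag was issued
                "prev": self.blocks[-1]["block_hash"] if self.blocks else None,
            }
            tag["block_hash"] = _digest(tag)
            self.blocks.append(tag)
            return tag

        def verify(self) -> bool:
            # Recompute every block hash and check the links:
            # any tampering with an earlier tag breaks the chain.
            for i, block in enumerate(self.blocks):
                body = {k: v for k, v in block.items() if k != "block_hash"}
                if _digest(body) != block["block_hash"]:
                    return False
                if i > 0 and block["prev"] != self.blocks[i - 1]["block_hash"]:
                    return False
            return True

    chain = TagChain()
    chain.register(b"an AI-generated press photo", ai_generated=True, producer="image-model-x")
    assert chain.verify()

In a real deployment the registry would be decentralized across many participants and the tag would travel with the content as metadata; the sketch only shows why a registered, timestamped tag cannot be silently altered.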
Further explaining this idea of an identification mechanism for AI
systems, as put forward here, would go beyond the purpose of this article,
so let us conclude. As disinformation and misinformation grow steadily,
part of the solution is to democratize good machine learning and we believe
that having mechanisms to identify AI-generated content quickly and
systematically is key. By combining advanced detection algorithms and the
above secure metadata tagging system, it will be possible to create a robust
system to identify and label AI-generated content, thereby mitigating the
impact of deepfakes. Technology, however, can never stand alone; it needs a viable and user-friendly environment as well as policy or legal frameworks that embrace each stakeholder's responsibilities. The development of tools,
such as browser extensions and mobile apps, the integration of distributors
and media into content platforms, raising awareness, and educating the
general public will all contribute to and secure the success of such a
mechanism. AI-tagging or AI-watermarking, or whatever we may call it, the
implementation of the identification principle, is, of course, not foolproof,
but, in my opinion, it could be a powerful tool in the fight against malicious
and misleading uses of AI.

International Cooperation and Multistakeholder Policy Development
International cooperation and multistakeholder policy development are
crucial in addressing the global opportunities and challenges posed by rapid
technological advancements. As globalization has interconnected our
economies and societies, the need for coordinated global responses to issues
like AI and cybersecurity has become evident. The international community
has already taken significant steps, such as the UN's resolution on AI for
sustainable development and the European Union's AI Act, demonstrating a
collective commitment to responsible governance. By fostering
collaboration across nations and sectors, we can ensure the safe and ethical
development of technologies that impact us all.
Since the second half of the twentieth century, the world's economies and markets have become truly international. Progress in
transport, communication, automation, and digital platforms has driven
globalization and made all our nations interdependent. Opportunities in one
part of the world rapidly become opportunities across the globe. Challenges
faced by some develop into global issues that need to be tackled by all of
us. For both the opportunities and challenges, as they are often linked to
technology – the pace of technological development accelerates decade by
decade, year by year – global solutions and approaches need to be identified
and implemented together, via international cooperation and coordination.
Be it on the legal, policy, or recommendations and guidelines levels, we
need global rules and practices, especially for such impactful areas as AI
and cybersecurity.
Fortunately, the international community is already aware of the geopolitical implications of the use, and especially the abuse, of AI – credit for this awareness also goes to the heavy mediatization of some of the more catastrophic scenarios, such as AI overruling humans – and multiple initiatives to address the challenges and embrace the opportunities have been launched.
One of the recent landmarks is UN Resolution A/78/L.49, on seizing the opportunities of safe, secure and trustworthy AI systems for sustainable development, which was adopted by the General Assembly on March 21, 2024. The leitmotif of the Resolution clearly shows that the international
community is willing to take its responsibility, “[…] to govern this
technology [AI] rather than let it govern us” (United Nations, 2024).
Alongside the aforementioned European AI Act, we already have two
very promising policy frameworks to help shape future developments and
regulate the usage of AI. The text recently adopted by the European
Parliament also highlights how interlinked AI is with cybersecurity, hence
the need to bring both communities and experts closer together and join
forces to address the many connected challenges, some of which have been
highlighted in this article. It is not only the technical communities that must cooperate: a broad multistakeholder approach is needed.
The domain of cybersecurity, where pertinent UN guidance by several
Groups of Governmental Experts – relevant reports are from 2010 (UN,
2010), 2013 (UN, 2013), and 2015 (UN, 2015) – and the more recent Open-ended Working Group on security of and in the use of information and communications technologies have developed rules, norms and principles of responsible behavior for States in cyberspace, shows how international
cooperation must and can function. OSCE (Organization for Security and
Co-operation in Europe) and ASEAN (Association of Southeast Asian
Nations) have followed the call, and as first movers have implemented the
UN principles regionally. Sixteen confidence-building measures have been
put forward by OSCE, whereas ASEAN has developed a common
Cybersecurity Cooperation Strategy.
Both OSCE and ASEAN have also elaborated guidance on AI: the OSCE
Policy paper on AI and freedom of expression in political competition and
elections and the ASEAN Guide on AI Governance and Ethics. These
examples further spotlight the recognized importance of a multistakeholder
and multidisciplinary approach (OSCE, ASEAN, 2021, 2024).
For tech topics such as AI and cybersecurity, it is, in my opinion,
however, the OECD recommendations that will have the most impact in the
long run, as they have a more specific market perspective on these topics.
Such a view is often preferred, especially as actors from the private sector
are key when it comes to AI or cybersecurity.
For instance, OECD's landmark guidance from 2002 (OECD Guidelines for the Security of Information Systems and Networks – Towards a Culture of Security) has shaped how cybersecurity has been implemented in many economies throughout the world, in particular in Luxembourg.
recommendations guided the establishment, definition and mission of the
Luxembourg House of Cybersecurity, back in 2010. Nowadays, the OECD
has replaced this document with the Recommendation on Digital Security
Risk Management for Economic and Social Prosperity, still emphasizing
the need for a risk-based approach, a concept to be found increasingly in
legislation and other policy frameworks (e.g., in the EU: GDPR, AI Act,
etc.).
In AI, the OECD has continued this “tradition” and published, as a legal instrument, the Recommendation of the Council on Artificial Intelligence as early as 2019. Following the fast pace of AI, a revision has
been available since May 2024. Last but not least, a real multistakeholder
initiative is the OECD AI Policy Observatory (OECD.AI), a one-stop shop
for AI policymakers and other actors. At the heart of its work is a Network
of Experts contributing to the priorities of the Observatory, which are at the
core of current concerns regarding the use and abuse of AI:

AI risks and accountability;
Researching potential AI futures;
Defining and tracking AI incidents;
“AI compute” capacity and the effects of AI on climate change.

To conclude this section, cybersecurity and AI are profoundly multi-domain and intrinsically dual-use technologies, which interconnect our civil
society increasingly with the military domain. One example: in 2023, the
Luxembourg government organized the first Autonomous Weapon Systems
(AWS) Conference in order to develop a Luxembourg position on AWS and
lead the way, in close cooperation with the International Committee of the
Red Cross (ICRC), on regulating AWS at international level. The unlawful
and unprovoked aggression of Russia against Ukraine shows how dearly
needed such an initiative is, unfortunately.

Conclusion
History has shown multiple times that technology has never really solved
problems. Yet it has helped people address some existing issues better,
faster or more easily, while introducing, almost always, new challenges,
often greater and more complex than before. While AI and cybersecurity, both technological phenomena, follow the same principle, the recent interaction between the two communities is mutually beneficial. AI will help to address some cybersecurity challenges, while cybersecurity contributes to safer and more trustworthy AI deployment.
In the last couple of years, cybersecurity has evolved into a
multidisciplinary field and exchange with experts from other domains has
become common practice. As the projections tell us, AI is here to stay, and sooner or later it will become ubiquitous, requiring its community to join forces with others to address the multitude of challenges. In the long run, in my opinion, we will all have to contribute to, and be concerned by, developments in AI. Time is of the essence: The Singularity Is Nearer, as the title of futurist Ray Kurzweil's forthcoming book suggests. We have to work together, now, all of us – private companies, research and academia, public entities, individuals, and nations – to address the challenges, embrace the opportunities and make our AI future cyber safe.
The major challenge, however, is neither technological (malicious AI systems) nor economic (AI taking our jobs), nor even legal (the impact of AI on intellectual property); it is the societal impact of AI that will shape the future of humanity. And this is not my lone voice telling you this: a brilliant human thinker, Professor Yaser S. Abu-Mostafa, comes to the same conclusion, and his lecture makes everything crystal clear (Abu-Mostafa, 2023). The biggest repercussion on humanity – and we can already see glimpses of it – is what AI will do to our children. The human is a social beast; AI is not! Let us not confine our children in a digital-only and AI-directed bubble à la The Truman Show.
We have been able to develop artificial intelligence; we shall make sure that human intelligence prevails as well.

Notes
1. Phishing is a form of scam where attackers deceive people into
revealing sensitive information or installing malware such as
ransomware. It targets humans not machines and as such is part of the
category of cyberattacks called social engineering.
2. Throughout this article we will use, unless otherwise noted, the system
most commonly used in modern English: the “short scale” system,
where billion is defined as one thousand million (10^9).
3. DDoS stands for “Distributed Denial of Service,” which is a form of
cyberattack that focuses on disturbing the service of an infrastructure
or application. “Distributed” comes from the fact that the sources of
the attack are multiple, distributed throughout the Internet and rapidly
switching, over the course of the attack. This is to avoid easy
identification and termination of the perpetrators’ activities.
4. A botnet is a group of Internet-connected devices, each of which runs
one or more bots (a bot, web robot, or Internet bot is a software
application that runs automated tasks, usually with the intent to imitate
human activity, such as messaging). Botnets are mainly used
maliciously to perform cyber-attacks like DDoS attacks, steal data,
send spam, or allow the attacker to access the device, which is mostly
not owned by the attacker, and its connection.
5. Behavior analytics is a technique that helps you gain insights into
various behavioral patterns to make data-driven decisions.
6. In artificial intelligence (AI), an expert system is a computer system
emulating the decision-making ability of a human expert. These
systems are designed to solve complex problems by reasoning through
bodies of knowledge, represented mainly as if–then rules.
7. Chief Information Security Officer (CISO) is the profile in charge of
cybersecurity. In Europe we still very often lack the “chief” function in
such job descriptions, but further explanation is beyond the scope of
this article.
8. In machine learning, a neural network (also artificial neural network or
neural net) is a model inspired by the structure and function of
biological neural networks in animal brains.
9. In cryptography, a brute-force attack consists of an attacker
systematically checking all possible passwords and passphrases until
the correct one is found.
10. AI bias, also called machine learning bias or algorithm bias, refers to
the occurrence of biased results due to human biases that skew the
original training data or AI algorithm – leading to distorted outputs and
potentially harmful outcomes.
11. AI tagging is a machine learning process where algorithms recognize
the content of unstructured data, assigning relevant metadata tags,
markers, or labels making it searchable by key terms. AI tagging scans
media such as images, videos, and documents, and quickly identifies
and tags valuable moments that content owners may not even know
they have.
12. Training any AI model requires carefully labeled and purposeful
datasets that contain thousands to tens of millions of elements.
Collecting and labeling this data in the real world is time-consuming
and expensive. Data generated by computer simulations, so-called synthetic data, comes in diverse types and, used in conjunction with real-world data, tremendously speeds up the training process of AI models.
13. Tokenization is the process of creating a digital representation of either
a real thing or data. It can therefore be used to protect sensitive data
through a cryptographic token that identifies and is intrinsically linked
with the data or thing to “protect.”
References
Abu-Mostafa, Y. (2023, May 24). Watson lecture: Artificial intelligence:
The good, the bad, and the ugly.
Bharath Aiyer, J. C. (2022, Oct. 27). New survey reveals $2 trillion market
opportunity for cybersecurity technology and service providers.
McKinsey & Company.
Blue, V. (2013, Dec. 22). “CryptoLocker's crimewave: A trail of millions in
laundered Bitcoin.” ZDNet.
Boutin, C. (2022). There's more to AI bias than biased data. NIST.
Bradley, T. (2024, May 17). AI Is the Past, Present and Future of
Cybersecurity.
Case Study Viasat. (2022, June). Cyber Peace Institute.
https://cyberconflicts.cyberpeaceinstitute.org/law-and-policy/cases/viasat
Chappelier, V. (2024, Jan. 22). Robust image watermarking with Stable
Signature + IMATAG's BZH.
Chess.com. (2018, Oct. 12). Kasparov vs. Deep Blue | The match
that changed history.
Council of the EU. (2024, May 21). Artificial intelligence (AI) act: Council
gives final green light to the first worldwide rules on AI.
CyberPeace Institute. (n.d.). Cyber attacks in times of conflict.
Deep Instinct. (2024). Generative AI in cybersecurity: Friend or foe? Deep
Instinct.
Dugan, L. a.-B. (2023). Real or fake text?: Investigating human ability to
detect boundaries between human-written and machine-generated text.
Proceedings of the 2023 AAAI conference on artificial intelligence.
Easterly, J. (2023, May 7). The attack on colonial pipeline: What we’ve
learned & what we’ve done over the past two years.
https://www.cisa.gov/news-events/news/attack-colonial-pipeline-what-weve-learned-what-weve-done-over-past-two-years
Edwards, B. (2024, Feb. 21). ChatGPT goes temporarily “insane” with
unexpected outputs, spooking users.
Elizabeth Clark, T. A. (2021, Jul. 7). All that's ‘human’ is not gold:
Evaluating human evaluation of generated text.
ENISA. (2023, Dec. 6). Warfare and geopolitics are fuelling denial-of-
service attacks.
European Parliament. (2023, Dec. 19). EU AI Act: First regulation on
artificial intelligence.
Evan Crothers, N. J. (2023, May 8). Machine generated text: A
comprehensive survey of threat models and detection methods.
FBI. (2022, May 4). Business email compromise: The $43 billion scam.
Fruhlinger, J. (2018, Mar. 9). The Mirai botnet explained: How teen
scammers and CCTV cameras almost brought down the Internet.
https://www.csoonline.com/article/564711/the-mirai-botnet-explained-how-teen-scammers-and-cctv-cameras-almost-brought-down-the-internet.html
Greenberg, A. (2018, Aug. 22). The untold story of NotPetya, the most
devastating cyberattack in history.
https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
The Guardian. (2019, Mar. 20). What is the Cambridge Analytica scandal?
(C. Wylie, Interviewer).
Harari, Y. N. (2023). AI and the future of humanity. Frontiers Forum.
Heikkilä, M. (2022, Dec. 19). How to spot AI-generated text. MIT
Technology Review.
Hypponen, M. (2022). Mikko Hypponen, Finnish computer security expert,
speaker and author, in an interview with Forcepoint, breaks down the
rise and fall of cybercrime unicorns. (Forcepoint, Interviewer).
Iriondo, R. (2018, Oct. 11). Amazon scraps secret AI recruiting engine that
showed biases against women.
Irit Dinur, K. N. (2002, Dec. 30). Revealing information while preserving
privacy.
ISC2. (2023, Oct. 30). ISC2 reveals growth in global cybersecurity
workforce, but record-breaking gap of 4 million cybersecurity
professionals looms. ISC2.
Leinar Ramos, J. S. (2021). Maverick research: Forget about your real data
– Synthetic data is the future of AI. Gartner.
Luxembourg. (2024, Mar. 21). Disruption to several government websites.
Milić, J. (2019, Sept. 10). Mirai Botnet continues to plague IoT space.
Nakamoto, S. (2008, Oct. 31). Bitcoin: A peer-to-peer electronic cash
system.
Nikolenko, S. I. (2019, Sep. 25). Synthetic data for deep learning.
Nordrum, A. (2016, Aug. 18). Popular Internet of Things forecast of 50
billion devices by 2020 is outdated. IEEE Spectrum.
Oberhaus, D. (2023, Apr.). Prepare for AI hackers. Harvard Magazine.
Perlis, A. J. (1982). Epigrams in programming. ACM's SIGPLAN.
Prabhakar, M. (2020, Aug. 20). Myth buster: Toothpaste still contains
plastic ingredients!
Redman, T. (2023). People and data. Kogan Page.
Sarah Kreps, D. K. (2023). How AI threatens democracy. Journal of
Democracy, 34(4), 122–131.
Shannon, C. (1987). Omni Magazine.
Solaiman, I. (2022, Dec. 19). How to spot AI generated text. (M. Heikkilä,
Interviewer).
Sondergaard, P. (2015, Feb. 3). The interview: Peter Sondergaard, global
head of research at Gartner. (J. Kennedy, Interviewer).
Stanham, L. (2024, May 10). The role of AI in cybersecurity.
Statista. (2024). Number of Internet of Things (IoT) connected devices
worldwide from 2019 to 2023, with forecasts from 2022 to 2030.
The Guardian. (2023, Feb. 2). ChatGPT reaches 100 million users two
months after launch.
Tolga Bolukbasi, K.-W. C. (2016, Jul. 21). Man is to computer programmer
as woman is to homemaker? Debiasing word embeddings.
UK Science Museum. (2019, Oct. 11). The age of plastic: From Parkesine
to pollution.
UN. (2010, Jul. 30). Report of the group of governmental experts on
developments in the field of information and telecommunications in the
context of international security (A/65/201).
UN. (2013, Jun. 24). Report of the group of governmental experts on
developments in the field of information and telecommunications in the
context of international security (A/68/98).
UN. (2015, Jul. 15). Report of the group of governmental experts on
developments in the field of information and telecommunications in the
context of international security (A/70/174).
United Nations. (2024, Mar. 21). General Assembly adopts landmark
resolution on artificial intelligence.
Uzzi, B. (2020, Nov. 4). A simple tactic that could help reduce bias in AI.
Harvard Business Review.
WEF. (2024). Global cybersecurity outlook 2024. WEF.
Williams, C. (2016, Oct. 21). IoT gadgets flooded DNS biz Dyn to take
down big name websites. The Register.
18
ARTIFICIAL INTELLIGENCE
AND EARTH OBSERVATION-
BASED CAPABILITY TO GET
FAST ACCESS TO LARGE-SCALE
TRANSBOUNDARY
INFORMATION
Marco Chini

DOI: 10.4324/9781003518495-23

Introduction
The process of gathering information on the physical, chemical, and
biological processes of the Earth requires the utilization of equipment that is
able to sense and measure from a distance. This process is referred to as
Earth Observation (EO). The collection of geospatial data involves the
utilization of satellites, aircraft, and other platforms in order to acquire
information regarding various aspects of the surface and atmosphere of the
Earth. EO spans a wide range of applications, including the management of
agriculture, the planning of urban areas, the monitoring of the environment,
the response to and the management of emergency situations, among other
fields of study. The study of how geographical factors, which include both
human and physical characteristics, have an impact on political processes
and international relations is referred to as geopolitics. An examination of
the influence that geographical factors, such as borders, resources, and
physical characteristics, have on political decisions and behavior is a
necessary step in this process. The confluence of EO and geopolitics occurs
when remote sensing technology delivers data that informs and influences
geopolitical decisions and plans. Illustrative examples from several fields
that demonstrate the connections between EO and geopolitics include:
resource management, which involves the surveillance and evaluation of
natural resources, such as water, minerals, and energy reserves; monitoring
environmental changes, such as deforestation, climate change, and natural
catastrophes, which can be used to make informed decisions on
environmental security and resilience; border monitoring, which is of
extreme interest in areas with major border conflicts and security concerns;
management of natural disasters, where the exploitation of EO data fosters
and facilitates international cooperation during the emergency phase;
planning and monitoring of infrastructure development, which facilitates
strategic and economic advancement; marine security, which concerns the
surveillance of marine operations in order to take action regarding territorial
waterways, piracy, and illegal fishing; and climate change trends, which are extremely important for taking action on climate change mitigation and adaptation.
Satellite Earth Observation
Satellite EO offers a number of advantages in comparison to other remote
sensing platforms, such as aircraft and drones, among others. These benefits
extend to a wide range of features, including global coverage, data
consistency, and operating efficiency. In contrast to geostationary satellites,
which provide continuous monitoring of specific regions with restricted
spatial resolutions, satellites in near-polar orbits are able to frequently capture
information over the entire surface of the Earth with a high level of
granularity. They are able to view regions that are difficult to access or
indeed inaccessible, such as dense forests, open seas, and dangerous
regions, where other platforms may have restrictions on their ability to
operate. In addition, satellites are able to collect data over the same places
in a systematic way thanks to their regular revisit times and predictable
orbits, providing a consistent long-term data record. These characteristics
are critical for monitoring land surfaces, tracking changes over time, and
investigating long-lasting natural and anthropogenic processes. Satellites,
once deployed, are able to cover large areas without incurring further costs
of deploying or maintaining them. This is a significant advantage in terms
of operational costs. A multitude of different satellite constellations orbit
the Earth, each equipped with sensors that capture images of the surface
using a variety of spectral bands (visible, infrared, and microwave), at
different spatial resolutions, and with a high temporal resolution. To be
more specific, certain satellite constellations have the capability to access a
certain location several times a day while maintaining a high level of spatial
detail. The capability of this technology to capture and offer synoptic views
over huge areas in a short amount of time and across geographical borders
is another extremely important element that sets it apart from other similar
technologies. Thanks to the several advantages that these platforms provide,
satellite imaging is rapidly becoming the most widely used and powerful
instrument for interacting with, representing, and building space. As a
result, remote sensing has become the method by which people discover the
world and acquire a sense of place and space, making it an integral part of
our lives (Shim, 2014).
In an effort to respond to the growing demand from many end users and
operational service providers from different sectors, the availability of EO
satellite systems has increased significantly in the last two decades.
Important satellite constellations equipped with advanced and diversified
sensors continue to be developed and already provide a capacity to monitor
planet Earth more frequently and more precisely and with a higher level of
detail than ever before. The data that these satellites record enable a
multitude of applications due to a large spectrum of measurement
frequencies, ranging from infrared to thermal, optical and radar, reaching
spatial resolutions of up to tens of centimeters and with an unprecedented
frequency of measurements. It is important to note that these developments are not limited to the well-known governmental and intergovernmental space agencies such as ESA and NASA; many new countries and, increasingly, private companies have also started their own
space programs. Hundreds of new satellites have been launched in the
recent past and the availability of a large amount of new EO data is
transforming the way the products are provided, the way operational EO-
based services are deployed in many domains, and the way end users are
accessing the data and the products derived therefrom. In this context, the
Copernicus program, which is the European Union's EO program,
coordinated and managed by the European Commission in partnership with
the European Space Agency (ESA), is having a disruptive effect, as it is
able to provide global, continuous, autonomous, high-quality, wide-ranging EO capacity. Providing accurate, timely and easily accessible information
helps improve the management of the environment, understand and mitigate
the effects of climate change, and ensure civil security. The Copernicus
program not only aims to make use of the quasi-continuous stream of data
generated by its multi-sensor EO satellites, but also to combine those
datasets with information from airborne or in situ installed instruments in
order to produce more prompt and precise information, to run reliable
operational services, and to generate new knowledge. The Copernicus
program integrates all this information from different data sources to
monitor the status of the Earth and, more broadly, to support service
providers, public authorities, and international organizations in their
mission to improve people's quality of life. A great strength of the Copernicus
program is that the data and information it generates is available free of
charge for the private and public sector, thereby boosting the development
of new downstream services and improving existing services. The
Copernicus space component is composed of different Sentinel satellites,
some already orbiting and others being launched soon, as well as
contributing missions from other space agencies. With its abundance of
complementary spaceborne sensors, the mission's main objective is to
systematically measure key parameters from land, ocean, and the
atmosphere. In addition to this, Copernicus also offers its own core services
to specific end users and the public in general, covering different
interconnected themes such as the atmosphere, marine, land, climate,
emergencies, and security. What makes the Copernicus satellite fleet truly a game changer – boosting the adoption of satellite data to develop new downstream services and making a valuable contribution to many operational applications in different public and private sectors – is its capability to acquire data systematically, with a high revisit frequency, and to provide continuity to data generated by previous satellite missions.

Examples of Satellite-Based EO Information to Take Action
The use of EO data for the purpose of operational monitoring of natural
phenomena has been developed as a result of the growing number of
satellite missions as well as the efforts of national and international
initiatives conducted by both corporate and institutional entities. The
quantification of land use and urban areas, the identification and evaluation
of conflicts and the results of such conflicts, and the creation of proxies for
a range of human variables, such as poverty, are all areas in which these
data are proving to be of great utility. Furthermore, the capability to
evaluate geospatial data at a variety of spatial and temporal resolutions
makes it possible to disaggregate the data to gain more insightful
information (Levin et al., 2018).
According to Levin et al. (2018), conflict areas exhibit a rapid decline in
nighttime lighting as a result of infrastructure damage, a rapid decrease in
Flickr photos as tourists avoid them, and an increase in world news
coverage. In order to validate their hypothesis, they integrated remotely
sensed and big data metrics, conducting their analysis at a monthly
temporal resolution. This allowed them to identify seasonal patterns, given
the seasonal nature of many human activities. By integrating the Visible
Infrared Imaging Radiometer Suite (VIIRS) monthly nighttime light series
and big data (data mining of news events and Flickr photos) in the Arab
Spring case, they discovered that both Flickr photos and nighttime lights
served as sensitive indicators for the depletion of economic and human
capital. Additionally, they observed a positive correlation between news and
actual conflict deaths.
Eklund et al. (2021) examined the correlation between war and fire in the
Kurdistan Region of Iraq, as well as the influence of conflict on the
biophysical environment. Since 2014, the natural environment of the
Kurdistan Region of Iraq has been adversely impacted by repeated
conflicts, which have coincided with a significant rise in the occurrence of
fires. The researchers investigated the spatial and temporal dimensions of
conflict, fire, and land use by integrating several satellite-derived datasets,
such as land use and land cover, active fires, and precipitation. The primary
dataset used for the categorization was the Normalized Difference
Vegetation Index (NDVI) obtained from the MODIS (Moderate-resolution
Imaging Spectroradiometer) satellite. The NDVI data had a spatial
resolution of 250 m and a temporal resolution of 8 days. The NDVI time
series was analyzed to derive 15 phenological variables that have been
shown to be important for mapping the vegetation in this area and to
consider the seasonal changes in NDVI. The VIIRS Active Fire product,
which offers daily worldwide fire data at a resolution of up to 375 m,
together with the land surface and air temperature (both at a spatial
resolution of 0.1°), derived from the ERA5-Land dataset, was also used to
investigate the influence of climate on fire and land dynamics. The study
revealed a correlation between conflict events and fires in terms of their
timing. It also found that certain conflicts were associated with specific
types of land use and land cover. For example, the conflict involving the
Islamic State was mainly concentrated in southern areas with bare
vegetation, while the conflict involving Turkey primarily occurred in
northern mountainous regions with natural vegetation and rough
topography.
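For readers less familiar with this index, NDVI is computed per pixel from the red and near-infrared reflectance bands as (NIR - Red) / (NIR + Red). A minimal sketch in Python with NumPy, assuming two co-registered reflectance arrays, is:

    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        # NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1: dense, healthy
        # vegetation scores high, while bare soil and water score near or below zero.
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        denom = nir + red
        out = np.zeros_like(denom)
        np.divide(nir - red, denom, out=out, where=denom != 0)
        return out

Phenological variables such as those used in the study above are then derived from the time series of these per-pixel values, for example the timing of green-up, the peak value, and the length of the growing season.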
Woods et al. (2021) examined the intricate connection between armed
conflict, political economics, geopolitics, and forest destruction in
Myanmar. They also retrieved Land Use and Land Cover Change data from
Landsat satellite data. They combined a multi-level analysis using EO data
with a multi-location methodology based on political ecology. They also
conducted interviews in several study locations. Their comprehensive
research revealed three distinct periods of significant deforestation
increases at both the national and state/region level, while there were
noticeable differences across areas. The rate and geography of deforestation
were primarily impacted by the territorial jurisdictions of military
authorities, national political economic reforms, timber regulations, and
proximity to national boundaries and their respective geopolitical linkages.
Another study in Myanmar was conducted by Bennett and Faxon (2021).
They used night-light time series derived from EO data to examine the
progress inside the country and its ethnic minority regions. An examination
of DMSP-OLS (Defense Meteorological Satellite Program – Operational Linescan System) data from 1992 to 2013 and VIIRS data from 2013 to 2020 reveals
a general increase in brightness in Myanmar, especially during the 2010s,
which corresponds to the country's ongoing but slowed-down process of
liberalization. Nevertheless, the growth in light expansion was not
uniformly distributed. Due to the limited availability of official data from
Myanmar, the use of EO imagery was crucial for obtaining information on
socio-economic trends, notwithstanding its incompleteness. In order to
acquire a deeper understanding of these politically intricate regions, Bennett
and Faxon (2021) integrated EO-based nighttime light analysis with
qualitative investigations, particularly long-term ethnography. They
emphasized that the significance of shifting pixels in political contexts may
be better understood by using critical remote-sensing techniques. Gaining
an understanding of the causes behind changes detected using EO requires
information derived from direct experiences and interpersonal connections,
which has to be accounted for by the international development industry in
adopting data-driven solutions, machine learning, and algorithmic
governance. If satellites tracking socio-economic progress fail to detect
specific people, they may overlook potential policy answers. “The
geopolitical analysis of remote sensing must take into account not only the
information that satellites are capable of capturing, but also the information
that they are unable to capture” (Bennett & Faxon, 2021).
Due to the existing and expected needs for information and energy
technology, the mining of rare earth elements is seeing a fast increase.
Chinkaka et al. (2023) performed an empirical investigation using the
Google Earth Engine (GEE) platform to analyze changes in mining surface
footprints from 2005 to 2020 at two rare earth mines situated on opposite
sides of the Myanmar-China border. The analysis revealed that mining
operations had a 130% rise on China's side and a 327% increase on
Myanmar's side over the specified timeframe. Specifically, the researchers
used surface reflectance images captured by Landsat 5 TM (Thematic
Mapper) in 2005 and 2010, as well as Landsat 8 OLI (Operational Land
Imager) images in 2015 and 2020. These images were utilized to generate
NDVI in order to improve the identification of different land cover
categories. An automated machine-learning technique was used to classify
land cover into mine and non-mine categories, achieving a high level of
accuracy. Quantifying these changes over time may assist researchers and
policymakers in gaining a deeper comprehension of the evolving
geographies and geopolitics of rare earth mining, the environmental
dynamics in mining areas, and the distinctive characteristics of mineral
extraction in border regions.
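Chinkaka et al. (2023) ran their workflow on the Google Earth Engine platform; the fragment below is not their code, but a generic sketch of the same kind of two-class supervised classification, here using scikit-learn on hypothetical per-pixel features (for instance, reflectance bands plus NDVI), with randomly generated placeholders standing in for real training samples:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Hypothetical training samples: rows are pixels, columns are features
    # (e.g., six reflectance bands plus NDVI); labels are 1 = mine, 0 = non-mine,
    # normally digitized from reference polygons rather than generated randomly.
    X_train = rng.random((2000, 7))
    y_train = rng.integers(0, 2, 2000)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)

    # Classify a whole scene flattened to (n_pixels, n_features), then reshape
    # back to the image grid to obtain a mine / non-mine map for one acquisition date.
    scene = rng.random((256 * 256, 7))
    mine_map = clf.predict(scene).reshape(256, 256)

Repeating such a classification for each acquisition date and comparing the mapped mine areas is what allows changes in the mining footprint to be quantified over time.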
Kariyeva and van Leeuwen (2012) emphasized the effectiveness of
satellite-based phenological analysis as a tool for evaluating landscape
dynamics in Central Asia following institutional changes. Undoubtedly, the
dissolution of the USSR in 1991 led to significant alterations in land cover
and land use as a consequence of economic and political upheavals in the
area. Vegetation indices derived from EO data provided valuable
information about vegetation productivity and the timing of vegetation
growth cycles. Kariyeva and van Leeuwen (2012) examined the impact of
socio-economic and bioclimatic determinants by analyzing the changes in
regional land surface over time. They utilized geographic time-series data
of NDVI, collected twice a week from 1981 to 2006. The data was utilized
to calculate annual phenological metrics for sites in Uzbekistan and
Turkmenistan, which are distinguished by land cover types including
irrigated agriculture, riparian zones, and arid desert regions. The study
identified significant differences in the seasonal trajectories of NDVI and
the variations in vegetation response between the periods before and after
the collapse of the USSR. These disparities were driven by factors such as
land management, climate, and socio-economic conditions. By utilizing
comparable vegetation indicators derived from EO data, it was feasible to
carry out an analogous experiment in Cuba, a country located thousands of kilometers away. It is worth noting that Cuba has a significantly smaller
territorial area compared to Uzbekistan and Turkmenistan. This emphasizes
the easy scalability of EO-based analysis on a global scale, as well as the
significant portability of EO-based techniques. In the study conducted by
Machado (2018), the author examined the alterations in agricultural land
usage in Cuba following the collapse of the Soviet Union. Following this
event and the cessation of Soviet subsidies, the government swiftly shifted
from extensive, centralized agricultural activities to small-scale farming and
cooperatives in order to tackle local food security concerns. Machado
(2018) utilized many data sources, including remote sensing data, to
understand the land-use consequences of this significant agrarian
transformation. He utilized the Human Footprint Index, which was
calculated based on high-quality Landsat imagery with a resolution of 30 m.
The Human Footprint Index comprises multiple layers, including
agriculture, pastureland, population density, and roadways, among other
characteristics. These layers are normally used to create a metric to assess
the impact of human activity on a landscape. The investigation, conducted
from 1993 to 2009, established a connection between the empirical data and
the evolving geopolitics and escalating impacts of climate change, which
are of great significance to Cuba and other Caribbean islands.
Satellite technology is also utilized to monitor and manage situations related to infectious disease epidemics. Peckham and Sinha (2017) conducted an
extensive analysis of the Ebola virus epidemic in West Africa from 2014 to
2015. They used data from public and commercial satellites to examine
how satellite imagery influences people's perspectives on diseases and
affects their reactions to infectious threats. The recent availability of high-
resolution satellite imagery is driving the need for enhanced surveillance of
illnesses throughout time and space, while also encouraging a more
comprehensive understanding of the evolution of pathogens within
ecosystems. The researchers analyzed the diverse uses of satellite imagery
in monitoring and mapping areas with high concentrations of Ebola cases in
Guinea, Liberia, and Sierra Leone. The remote-sensing data, which was
obtained by optical sensors and processed using automatic algorithms,
contributed to the overall understanding of the epidemic situation. In
response to the crisis, the UN's Operational Satellite Applications
Programme (UNOSAT) utilized high-resolution optical imagery from three
DigitalGlobe satellites, specifically WorldView-2, WorldView-1, and
Quickbird. The purpose was to create an Atlas of Ebola Care Facilities in
Guinea, Liberia, and Sierra Leone. This atlas aimed to aid emergency
humanitarian assistance efforts. In addition, the United Nations utilized
satellites to observe remote rural populations, gathering mapping data that
aided with on-site logistical operations and contact tracing (CERN, 2014).
In a report, the ESA highlighted challenges faced by health agencies in
monitoring previous epidemics due to the limited availability of current
ground information and how this information is crucial in effectively
allocating personnel and identifying probable routes of infection (ESA,
2003). Satellite images played a crucial role in assisting response teams in
finding their way to the most severely affected towns and villages.
Additionally, these images were vital for conducting long-term research on
the emergence of the disease, as they helped identify environmental
indicators that could potentially contribute to the occurrence of Ebola in a
particular region.
Oil, a highly traded commodity, is often subject to stringent production monitoring. Notwithstanding this attention, illegal extraction and trade of
oil and its byproducts take place globally and generate substantial profits
beyond the oversight and control of governments. Terrorists and rebels
often choose this unorthodox method as a means of generating income.
Therefore, comprehending the spatial and temporal variations in production
can assist in determining the extent of activities, technical capability, and
sources of revenue. In order to achieve this objective, Do et al. (2018)
employed satellite multispectral imaging and ground-truth data to
successfully create a record of oil production in regions under the control of
terrorist organizations. Their approach was based on the assumption that oil
extraction results in the release of natural gas, specifically methane, which
initially dissolves in crude oil in constant ratios. In the study region, the gas
was typically burned off, resulting in the emission of radiation that was
detected by VIIRS satellite sensors. This demonstrates the potential of
satellite EO data to monitor the operations of extractive companies in
conflict areas where there is a scarcity of reliable administrative data
sources. These results can provide assistance in various domains of public
policy, decision-making, and military operations.
EO satellite data can be utilized to examine the impact of migrations on
environmental variables at a large scale. It is widely recognized that the
growth of cities often leads to the formation of urban heat islands, which in
turn raises the land surface temperature. Shatnawi and Qdais (2019)
observed and forecast the variations in land surface temperature in the
northern region of Jordan from 2000 to 2016 in order to assess the potential
influence of the increasing influx of refugees from Syria and Iraq, which
began in 2003, on these factors. The researchers employed satellite imagery
to produce maps of land surface temperatures, land use, and land cover,
employing the normalized difference building index and the normalized
difference vegetation index. The authors integrated the EO data and in situ
meteorological data from eight stations in the study area, together with
population density, into a non-linear autoregressive exogenous artificial
neural network. This machine-learning model was used to forecast land
surface temperatures for the next decade. The machine-learning model that was developed showed a satisfactory level of accuracy in simulating and forecasting the land surface temperature of the research area, suggesting that there was
an expected increase in land surface temperature as a result of the ongoing
urban sprawl trend.
Transitioning to renewable energy is crucial in combating climate
change. Multiple studies have examined the impact of economic,
institutional, technological, and socio-cultural aspects on this shift.
However, they have not adequately addressed the consequences of conflicts
on the implementation of renewable energy. The assessment of this specific
factor is especially difficult because there is a lack of reliable data in
unstable regimes and regions. Fischhendler et al. (2022) were among the first to utilize EO data to analyze the spread of renewable energy sources
in the Gaza Strip, in terms of both time and space. The research to detect
the deployment and distribution of solar energy in the Gaza Strip was
undertaken using WorldView-2 and WorldView-3, which are very high-
resolution multispectral/panchromatic optical satellites with a spatial
resolution of 0.45 and 0.30 m, respectively. The utilization of deep
machine-learning techniques and neural networks helped speed up the
process of extracting information. The implemented methodology
facilitated the gathering of data across a broad geographical region and the
analysis of historical occurrences without the need for direct physical
access to the study area. According to the study, Gaza has emerged as a
notable adopter of renewable energy even in times of conflict, generating
roughly 25% of its energy from solar power.
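For readers unfamiliar with how such mapping is automated, the sketch below shows, in simplified form, how a small convolutional network can be trained to label solar panels pixel by pixel in multispectral image tiles. The architecture, band count, and dummy training data are assumptions made for illustration and do not reproduce the model used by Fischhendler et al. (2022).

```python
import torch
import torch.nn as nn

# A deliberately small fully convolutional network for per-pixel solar-panel
# detection in very-high-resolution imagery (generic sketch, not the cited model).
class PanelSegmenter(nn.Module):
    def __init__(self, in_channels: int = 8):  # e.g. WorldView multispectral bands
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # one logit per pixel: panel vs. background
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PanelSegmenter()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch standing in for image tiles and hand-labelled panel masks
images = torch.rand(4, 8, 128, 128)
masks = (torch.rand(4, 1, 128, 128) > 0.95).float()

for _ in range(3):  # a few illustrative optimization steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
print(f"training loss: {loss.item():.4f}")
```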
The integration of EO data with the automatic identification system
(AIS) plays a significant role in achieving a comprehensive characterization
of vessel traffic patterns. This integration allows a distinction to be drawn
between areas covered by both cooperative systems (AIS) and non-cooperative
systems (EO imagery) and areas covered only by non-cooperative systems. This
distinction aids in identifying ships that may be engaged in illegal activities,
such as illegal fishing or oil spills (Rudolph, 2024). Detecting and tracking
ships that are not permanent targets is more feasible with synthetic aperture
radar (SAR) images, because maritime surveillance based on optical satellite
imagery depends heavily on clear skies and daylight (Pelich et al., 2019, 2021).
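The core of the AIS and EO integration can be illustrated with a simple matching step: every ship detected in a SAR image is compared against recent AIS position reports, and detections with no nearby report are flagged as potential "dark" vessels. The sketch below ignores the time dimension and uses a purely spatial match radius; the Detection and AisReport classes and the 2 km threshold are illustrative assumptions, not part of the cited systems.

```python
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class Detection:            # ship position extracted from a SAR image
    lat: float
    lon: float

@dataclass
class AisReport:            # position broadcast by a cooperative vessel
    lat: float
    lon: float
    mmsi: str

def haversine_km(a_lat, a_lon, b_lat, b_lon):
    """Great-circle distance between two points in kilometres."""
    dlat, dlon = radians(b_lat - a_lat), radians(b_lon - a_lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a_lat)) * cos(radians(b_lat)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def flag_dark_vessels(detections, ais_reports, max_km=2.0):
    """Return SAR detections with no nearby AIS report ("dark" vessels)."""
    dark = []
    for det in detections:
        matched = any(
            haversine_km(det.lat, det.lon, r.lat, r.lon) <= max_km for r in ais_reports
        )
        if not matched:
            dark.append(det)
    return dark

# Hypothetical example: two SAR detections, one of which reports via AIS
sar = [Detection(35.10, 14.20), Detection(35.60, 14.90)]
ais = [AisReport(35.11, 14.21, "123456789")]
print(f"{len(flag_dark_vessels(sar, ais))} potential dark vessel(s)")
```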
It is important to highlight that governments and private companies
around the world are quickly advancing SAR technology with very high
spatial resolutions. Unlike optical satellites that depend on reflected
sunlight, SAR uses microwave signals that bounce off the Earth's surface.
A significant advantage of operating at microwave frequencies is that the
atmosphere is largely transparent at these wavelengths, so acquisitions are
not disturbed by clouds, rain, or fog. This chapter has mainly focused on
applications that require high spatial resolution. Such applications have not
fully exploited SAR data because of its more complex visual interpretation and
its coarser spatial and spectral resolution compared with optical sensors. Historically, SAR
data have mostly been utilized to promptly gather extensive information on
areas impacted by natural disasters, such as floods, earthquakes, and
volcanic eruptions, since SAR is not limited by weather conditions (Chini et
al., 2017, 2019; Pierdicca et al., 2018).
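As a minimal illustration of why SAR is so useful for rapid flood mapping, the sketch below applies a simple global backscatter threshold to a simulated scene: calm open water reflects the radar signal away from the sensor, so flooded pixels appear dark. This is only a first-order approach, not the hierarchical split-based method of Chini et al. (2017); the threshold value and the synthetic scene are assumptions made for the example.

```python
import numpy as np

def flood_mask(sigma0_db: np.ndarray, threshold_db: float = -15.0) -> np.ndarray:
    """Return a boolean flood mask from a calibrated SAR backscatter image (dB).

    Smooth open water scatters the radar signal away from the sensor, so flooded
    pixels typically show low backscatter; a simple global threshold captures
    this first-order effect.
    """
    return sigma0_db < threshold_db

# Hypothetical scene: land around -8 dB, with a flooded patch near -18 dB
rng = np.random.default_rng(1)
scene = rng.normal(-8.0, 1.5, (200, 200))
scene[50:120, 30:90] = rng.normal(-18.0, 1.0, (70, 60))

mask = flood_mask(scene)
print(f"Flooded area: {mask.mean():.1%} of the scene")
```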

AI and EO
The provision of large quantities of satellite data to the end users or
customers has no operational value per se. The challenge is to rapidly
process, analyze, and transform the raw EO data in order to generate
intuitive, understandable, and quantitative measurements. The information
derived from EO data must offer sufficient value to enable evidence-based
decision-making in different areas that are relevant for society. It goes
without saying that this effort largely benefits from recent advances in
artificial intelligence (AI) models as they make it possible to efficiently
explore large datasets and to bring new insights in many fields of
application. Indeed, AI has shown remarkable efficacy as a suite of tools,
occasionally surpassing human capabilities in performing complex
computational tasks in many different domains of society. Owing to these
impressive achievements, AI is becoming the preferred paradigm in various
application domains, and with the growing availability of EO data and
computing resources, its use is now gaining momentum in the field of EO as
well. At the same time, EO data introduce novel complexities for deep-learning
algorithms, as the analysis of satellite images gives rise to distinctive
challenges and poses complex scientific questions (Zhu et al., 2017).
EO data are typically acquired in multiple modalities spanning a wide range
of the electromagnetic spectrum, from optical to microwave.
These data are collected by sensors with distinct properties, including active
or passive modes, and varying geometries and information content. While
bringing together various sources of information can provide additional
benefits, it is essential to first develop innovative AI models that can
effectively align and integrate images captured from different viewpoints
and imaging modalities (Chen et al., 2023; Zhu et al., 2017). With the
increasing availability of long-term archives of EO data, the importance and
the value of the time variable in the EO field are growing. The Copernicus
program ensures continuous acquisition of images from various frequencies
for many years and has led to the systematic exploration of the temporal
variable as a valuable source of information that was previously untapped.
Moreover, the increasing number of EO satellite constellations is expanding
the dimension of our datasets, resulting in the emergence of the big data
challenge. This challenge becomes much more intricate when considering
the fundamental purpose of satellite missions, which is to acquire data and
provide products and services globally. To develop global applications, AI
models often tackle the challenge of transferring knowledge across the
entire Earth surface, despite the scarcity of annotated data and labels.
Indeed, one major concern when applying AI models at global scale relates
to the requirement of having access to training data. The quantity and
quality of training data are essential for AI models. To mitigate the negative
effects of a limited annotated training dataset, active self-learning
convolutional neural network approaches have been adopted (Li et al.,
2019). Recent studies suggest that increasing the quantity of training data
can matter more than improving its quality (Bonafilia et al., 2020). Indeed,
to generalize, AI methods need a training dataset that is representative of
different events in space and time, and such a dataset can often only be
assembled by accepting noisy labels, for example labels produced by other
automatic mapping algorithms.
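A minimal sketch of this idea is shown below: a classifier trained on a small hand-labelled set generates confident "pseudo" labels for unlabelled pixels, which are then folded back into the training set, much as noisy labels from another automatic mapping algorithm would be. The random features, the random-forest model, and the confidence threshold are assumptions for illustration, not the setup of Li et al. (2019) or Bonafilia et al. (2020).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical pixel features (e.g. band values) with a small hand-labelled set
X_labelled = rng.normal(size=(200, 6))
y_labelled = rng.integers(0, 2, 200)
X_unlabelled = rng.normal(size=(5000, 6))

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_labelled, y_labelled)

# Self-training step: keep only confident predictions as "noisy" labels and
# add them to the training set, mimicking labels from another automatic mapper.
proba = model.predict_proba(X_unlabelled)
confident = proba.max(axis=1) > 0.8
X_train = np.vstack([X_labelled, X_unlabelled[confident]])
y_train = np.concatenate([y_labelled, proba[confident].argmax(axis=1)])

model.fit(X_train, y_train)
print(f"added {confident.sum()} noisy labels to {len(y_labelled)} manual ones")
```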
EO data are also utilized for the retrieval of geophysical variables, such
as soil moisture, rather than for object classification, and AI models
frequently rely on physically based models and domain knowledge as prior
information to estimate these variables. This suggests that AI models
without expert input may not always be the optimal solution (Rivera et al.,
2015; Zhu et al., 2017). In general, AI is increasingly becoming a
significant factor in EO. AI applications, specifically machine-learning
techniques, encompass a wide range of tasks, starting with basic image
processing and extending to advanced data comprehension and knowledge
exploration. AI approaches have become potent tools for evaluating EO data
and have resulted in significant advancements across all EO domains (Zhang &
Zhang, 2022).
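As a final illustration of how physical models can serve as prior knowledge in retrieval, consider the following minimal sketch: a toy forward model stands in for a radiative transfer simulator and generates observation-parameter pairs, from which a Gaussian process learns the inverse mapping back to soil moisture. The toy_radiative_transfer function and its coefficients are invented for the example and are not the emulator of Rivera et al. (2015).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_radiative_transfer(soil_moisture: np.ndarray) -> np.ndarray:
    """Stand-in for a physically based forward model: maps a geophysical
    variable (soil moisture) to a simulated observation (e.g. backscatter in dB)."""
    return -25.0 + 30.0 * soil_moisture - 8.0 * soil_moisture**2

# Build a training set by running the forward model over sampled parameter values
sm_train = np.linspace(0.05, 0.45, 30).reshape(-1, 1)
obs_train = toy_radiative_transfer(sm_train).ravel()

# The statistical model learns the inverse mapping (observation -> soil moisture),
# so the physical model acts as prior knowledge encoded in the training data.
retrieval = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)
retrieval.fit(obs_train.reshape(-1, 1), sm_train.ravel())

retrieved, std = retrieval.predict(np.array([[-17.5]]), return_std=True)
print(f"retrieved soil moisture: {retrieved[0]:.3f} +/- {std[0]:.3f}")
```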
References
Bennett, M. M., & Faxon, H. O. (2021). Uneven frontiers: Exposing the
geopolitics of Myanmar's borderlands with critical remote sensing.
Remote Sensing, 13, 1158.
Bonafilia, D., Tellman, B., Anderson, T., & Issenberg, E. (2020).
Sen1Floods11: A georeferenced dataset to train and test deep learning
flood algorithms for Sentinel-1. In: Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition (CVPR)
Workshops, Seattle, WA, USA: IEEE.
CERN. (2014, November 3). UNOSAT joins the fight against Ebola. CERN
Bulletin, Nos. 45–46.
https://cds.cern.ch/journal/CERNBulletin/2014/45/News%20Articles/1
958288
Chen, H., Yokoya, N., & Chini, M. (2023). Fourier domain structural
relationship analysis for unsupervised multimodal change detection.
ISPRS Journal of Photogrammetry and Remote Sensing, 198, 99–114.
Chini, M., Hostache, R., Giustarini, L., & Matgen, P. (2017). A hierarchical
split-based approach for parametric thresholding of SAR images:
Flood inundation as a test case. IEEE Transactions on Geoscience and
Remote Sensing, 55(12), 6975–6988.
Chini, M., Pelich, R., Pulvirenti, L., Pierdicca, N., Hostache, R., & Matgen,
P. (2019). Sentinel-1 InSAR coherence to detect floodwater in urban
areas: Houston and Hurricane Harvey as a test case. Remote Sensing,
11(2), 107, pp. 1–20.
Chinkaka, E., Klinger, J. M., Davis, K. F., & Bianco, F. (2023). Unexpected
expansion of rare-earth element mining activities in the Myanmar–
China border region. Remote Sensing, 15, 4597.
Do, Q. T., Shapiro, J. N., Elvidge, C. D., Abdel-Jelil, M., Ahn, D. P.,
Baugh, K., Hansen-Lewis, J., Zhizhin, M., & Bazilian, M. D. (2018).
Terrorism, geopolitics, and oil security: Using remote sensing to
estimate oil production of the Islamic State. Energy Research & Social
Science, 44, 411–418.
Eklund, L., Abdi, A. M., Shahpurwala, A., & Dinc, P. (2021). On the
geopolitics of fire, conflict and land in the Kurdistan region of Iraq.
Remote Sensing, 13, 1575.
ESA (European Space Agency). (2003). Solving the Ebola enigma:
Satellites will provide clues.
http://www.esa.int/Our_Activities/Observing_the_Earth/Solving_the_
Ebola_enigma_satellites_will_provide_clues
Fischhendler, I., Herman, L., & David, L. (2022). Light at the end of the
panel: The Gaza strip and the interplay between geopolitical conflict
and renewable energy transition. New Political Economy, 27(1), 1–18.
Kariyeva, J., & van Leeuwen, W. J. D. (2012). Phenological dynamics of
irrigated and natural drylands in Central Asia before and after the
USSR collapse. Agriculture, Ecosystems and Environment, 162, 77–
89.
Levin, N., Ali, S., & Crandall, D. (2018). Utilizing remote sensing and big
data to quantify conflict intensity: The Arab spring as a case study.
Applied Geography, 94, 1–17.
Li, Y., Martinis, S., & Wieland, M. (2019). Urban flood mapping with an active
self-learning convolutional neural network based on TerraSAR-X
intensity and interferometric coherence. ISPRS Journal of
Photogrammetry and Remote Sensing, 152, 178–191.
Machado, M. R. (2018). What's going on with land-use in Cuba?: Disparate
data sets and the Cuban agricultural transition. Journal of Land Use
Science, 13(4), 439–446.
Peckham, R., & Sinha, R. (2017). Satellites and the new war on infection:
Tracking Ebola in West Africa. Geoforum, 80, 24–38.
Pelich, R., Chini, M., Hostache, R., Matgen, P., & López-Martinez, C.
(2021). Coastline detection based on Sentinel-1 time series for ship-
and flood-monitoring applications. IEEE Geoscience and Remote
Sensing Letters, 18(10), 1771–1775.
Pelich, R., Chini, M., Hostache, R., Matgen, P., Lopez-Martinez, C., Nuevo,
M., Ries, P., & Eiden, G. (2019). Large-scale automatic vessel
monitoring based on dual-polarization Sentinel-1 and AIS data. Remote
Sensing, 11(9), 1078.
Pierdicca, N., Anniballe, R., Noto, F., Scalia, T., Bignami, C., Stramondo,
S., & Chini, M. (2018). Earthquake damage mapping: An overall
assessment of ground surveys and VHR image change detection after
L’Aquila 2009 earthquake. Remote Sensing of Environment, 210, 166–
178.
Rivera, J. P., Verrelst, J., Gomez-Dans, J., Muñoz-Marí, J., Moreno, J., &
Camps-Valls, G. (2015). An emulator toolbox to approximate radiative
transfer models with statistical learning. Remote Sensing, 7(7), 9347–
9370.
Rudolph, T. A. (2024). Seeing like an algorithm: The limits of using remote
sensing to link vessel movements with worker abuse at sea. Maritime
Studies, 23(13), 1–14.
Shatnawi, N., & Qdais, H. A. (2019). Mapping urban land surface
temperature using remote sensing techniques and artificial neural
network modelling. International Journal of Remote Sensing, 40(10),
3968–3983.
Shim, D. (2014). Remote sensing place: Satellite images as visual spatial
imaginaries. Geoforum, 51, 152–160.
Woods, K. M., Wang, P., Sexton, J. O., Leimgruber, P., Wong, J., & Huang,
Q. (2021). Integrating pixels, people, and political economy to
understand the role of armed conflict and geopolitics in driving
deforestation: The case of Myanmar. Remote Sensing, 13, 4589.
Zhang, L., & Zhang, L. (2022). Artificial intelligence for remote sensing
data analysis: A review of challenges and opportunities. IEEE
Geoscience and Remote Sensing Magazine, 10(2), 270–294.
Zhu, X. X., Tuia, D., Mou, L., Xia, G. S., Zhang, L., Xu, F., & Fraundorfer,
F. (2017). Deep learning in remote sensing: A comprehensive review
and list of resources. IEEE Geoscience and Remote Sensing Magazine,
5(4), 8–36.
PART V
Economic, Social, and Cultural
Dimensions
19
HARNESSING ARTIFICIAL
INTELLIGENCE FOR A MORE
EQUITABLE WORLD
Joe Cerrell

DOI: 10.4324/9781003518495-25

Introduction
How can we use artificial intelligence (AI) to build a world where all lives
have equal value? That's a question that my colleagues and I at the Gates
Foundation have been asking ourselves more and more over the last two
years.
To give you an idea of the speed at which this technology is evolving,
toward the end of 2022, a team from OpenAI came to see Bill Gates at his
office in Seattle. They showed him an impressive early version of ChatGPT, but
Bill was hesitant. He asked Sam Altman and the OpenAI team to come
back when this version could pass the Advanced Placement, or AP, exam on
Biology. He figured he might see them again in three years. But the team
came back in just two months with an updated version of the tool, and
showed Bill how it had passed the exam, with a nearly perfect score. He
said it was the most impressive demo he had ever seen – and this coming
from Bill Gates who has seen a lot of product demos in his career. Such an
impressive tool left Bill and many of us with the question of how AI could
be put to use to create a more equitable world.
Yet it has been surprising to us how few other people have been publicly
asking this. There have been hundreds, if not thousands, of articles written
about how convenient AI can make our lives – and even more on the risks.
It seems to me that most people are more interested in generating images of
the Pope in a puffer jacket (Reuters, 2023) or watching Dwayne “The
Rock” Johnson eating, well, rocks (Wired, 2023), than finding solutions to
some of our biggest shared challenges.
Recently, a computer scientist named Tristan Harris, co-founder of the
Center for Humane Technology, was speaking at an event with members of
Congress in the United States (NPR, 2023), where he asked an important
question about AI: What are we innovating toward?
We, as humans, are excellent innovators. We love to create and build new
things. But we don’t always use that innovation for the good of all. We have
a history of innovations that take more away from society than they give to
it. If left unchecked, AI has significant potential to negatively affect equity.
So here at the foundation, our answer to Tristan's question is definitive: we
can and must innovate toward a better world. What does this better world
look like? Is it a world where AI is harnessed to improve our working lives?
Perhaps one where AI helps run factories, allowing employees to work fewer hours?
It is around these sorts of questions that the future of AI has largely been
framed. But there are important questions missing from the conversation. I
would add: how can AI help us put an end to global poverty? Or create a
world where no child dies of a preventable disease? Or one where the
farmers with the most tenuous of existences have the ability to adapt to
climate change? AI is exciting to us because it can help us achieve these
goals. AI will not change our mission to build a more equitable society. Our
hope is that AI can make that mission real – and that we can achieve this
more quickly.

The Gates Foundation's View on AI


Innovation is central to our mission at the Gates Foundation: we believe
that all lives have equal value and that the power of innovation can make
that belief a reality. Working with our partners we scan the horizon for new
discoveries, from drought-resistant seeds to mRNA vaccines, that can make
an impact toward health and development goals. AI has the potential to
supercharge innovation in many of the areas where the Gates Foundation
works. As Bill Gates wrote in a recent Gates Notes post, “AI is about to
accelerate the rate of new discoveries at a pace we’ve never seen before”
(Gates, 2023a).
In health, AI has the potential to accelerate drug discovery by processing
huge sets of data. AI software that already exists can infer data pathways,
search for targets on pathogens, and design drugs accordingly. Future
software will even make it possible to predict the side effects of certain
drugs and set dosing levels. In health centers, AI-powered tools can boost
productivity by automating routine tasks like scheduling appointments and
keeping medical records. In agriculture, AI can generate valuable insights
into crops and livestock. Scientists can use AI to develop better seeds for
local conditions, and farmers in poor countries can choose the best seeds for
the soil and weather in their area (Gates, 2023b). As extreme weather and
climate change put agricultural sectors in low-income countries at risk,
these insights become increasingly important.
Some of these AI-powered innovations are already a reality. Take
ultrasound machines, for instance, which are not practical in low-income
settings. They are bulky and expensive, take special training to operate, and
the images need to be read by an expert radiologist. The foundation is
supporting the development of an AI-powered ultrasound machine – health
workers can simply plug a probe into a mobile phone or tablet, swipe it
around the mother's stomach, and AI software will read the images and
provide information about the pregnancy (Gates Foundation, 2024).
To ensure that we are not missing any key developments in AI, the
foundation appointed a Global AI Task Force in March 2023. The Task
Force sets a strategy for the use of AI in each of the foundation's program
areas, from health and agriculture to inclusive financial systems and water,
sanitation, and hygiene (Suzman, 2023). The Task Force also supports large-
scale investments in AI-powered innovation like last year's edition of Grand
Challenges, the foundation's flagship innovation program, which I’ll come
on to later in this chapter.
However, simply supporting AI-powered innovation is not enough, on its
own, to deliver on our mission to create a better and more equitable world.
Throughout history, innovation has rarely benefited the poorest in society
first. With AI, we must actively resist this trend. It will not be enough to
just support innovators – we have to create an environment that is favorable
toward equitable use of AI, rather than one that is favorable toward
inequity.
To this end, the foundation is designing a set of principles around the use
of AI, which can serve as guidelines for our work. Our CEO, Mark Suzman,
started this process in 2023 by formulating a set of “first principles,” which
include: the use of AI technology needs to promote greater equity and
opportunity for resource-poor communities, and low-income countries
should not only be seen as beneficiaries, but also as collaborators and
partners in program design and uses (Suzman, 2023). In health, for instance,
following these principles means ensuring that AI tools are applied to drug
discovery for diseases that disproportionately affect the world's poorest,
such as HIV, TB, and malaria. Applying these principles may sound
straightforward, but there are serious barriers to achieving this vision for the
use of AI, which I now turn to.

Barriers to Equitable Progress around AI


There have been few moments in history where we have had the chance to
reshape the way our world works.
To name some examples – the industrial revolution was one of these
chances, as was the creation of the global financial system after World War
II, and the development of the internet. These major technological and
political advances have had huge effects on the way our world operates and
have certainly brought humanity closer to achieving a level playing field.
But they have fallen short of the mark in many ways. After years of
progress toward health equity and poverty reduction, we are now seeing a
widening gap between countries and communities on nearly all metrics of
global development (Suzman, 2023). We have made mistakes, failing to
regulate what we should have, or to think carefully about where we were heading.
These are mistakes that we cannot make again with AI – it is simply too
big an advancement to get wrong. There is a multitude of risks associated
with AI – a list I won't go into here – but it is undeniable that, if managed
badly, AI could have catastrophic effects on our global society. The AI
revolution will be a truly transformative one – think
of it as a deeply intensified and escalated industrial revolution. There is, of
course, a world of difference between the industrial and AI revolutions, but
I use the example to signal the complete change in the way the world works
– and the foundation believes AI will change so much of our lives. It will
not happen overnight, but it will happen faster than previous technological
revolutions. Within five, ten, twenty years, we will see huge changes to our
world.
However, I want to focus on the optimistic scenario. At the Gates
Foundation, we take pride in seeing the best in humanity and our potential
for growth and developing positive solutions to the shared challenges we
face. If the global community is serious about creating equitable AI tools in
the future, we need to ensure that there are forums for open, honest
discussions about the barriers that exist. It is only with such conversations
that we will make progress. To that end, I want to discuss two key barriers
that stand in the way of equitable progress around AI and how we might
overcome them.

First Barrier: Access to Co-creation and Decision-Making


The first of these barriers lies in the co-creation process. Over the course of
history, we have seen how a lack of inclusion in decision-making and
design can lead directly to inequitable outcomes. The first color films used
technologies that did not accurately capture skin tones of people who
weren’t white because the film companies only used models with white skin
as testing controls. People with darker skin tones were never considered.
And so, for decades they were not represented accurately in the films and
photographs that shape our collective culture and consciousness. Or take
crash-test dummies, which continue to be designed to
represent the average height and build of a white American man, with
catastrophic consequences for women whose bodies are shaped differently
and react differently to the position and placement of seatbelts and airbags.
Almost unbelievably, the first ever female crash test dummy was only
introduced in 2022 by a Swedish professor, Astrid Linder (Kelly, 2022).
Over the last five years we have seen this again with the debate around
inclusion, or lack thereof, of lower- and middle-income countries (LMICs)
in the decision-making process around climate change issues, and during
the COVID-19 pandemic. There are certainly optimistic tales that arose
from the pandemic – think about the speed of innovation or the
collaboration between countries to counter a common threat. Sadly,
however, there are also stories of tragedy. The way the developed world
handled vaccination against COVID-19 was deeply inequitable, with
Europe and the United States amassing more doses than they would ever
need, compared to LMICs, many of which were left without access to any
vaccination tools. The result? COVID-19 death tolls roughly four times higher
in LMICs than in rich nations (Oxfam International, 2022).
Our response to the development of AI must be different. Deliberate
technological design and collaboration with a wide variety of people who
stand to benefit are both critical components of positive innovation, which
we measure in lives saved and opportunities provided for people to achieve
their full potential. Simply put, potential gains will only be realized if the
technology is implemented with all beneficiaries participating equitably in
its development.
Whilst the histories of cinema and crash-test dummies clearly articulate the
problem, the use of AI for development requires specific considerations due
to the nature of how and where large language models (LLMs) have been
created. LLMs are built around the analysis of large datasets combined with
algorithms and statistical models to find and utilize patterns. It only requires
some basic thinking to start to identify the problems here, when considering
how we use AI for development. Most communities, especially in LMICs,
have not had a seat at the table to inform these models. This means the
statistical representations in LLMs will naturally be less accurate when
applied to these same countries, compared to richer nations, where data is
often better represented. We must make sure that we do not allow AI to just
learn from databases that reflect the discrimination and inequity of the past.
Indigenous knowledge, oral history, and the lived experience of poorer
people all represent data that is just as important to capture as the existing
datasets that we have on record in the developed world. These forms of
knowledge must be accurately captured in the databases AI learns from. If we
don’t follow this path, we exacerbate the risks around developing AI tools
that propagate pre-existing biases, which then perpetuate the cycle of
inequity. Diverse data is fundamental for these algorithms to deliver
equitably and successfully. Only with diverse data inputs can we be more
confident in creating equitable AI tools that will benefit everyone, rather
than the pre-existing status quo.
To give a more practical example of the importance of diverse datasets,
an AI model – let's imagine one that helps to plan logistics around
immunization – used in poor countries would need to be trained on different
diseases than in rich countries. More than that, these models would need to
work in different languages and factor in a host of different challenges, such
as patients who live very far from clinics or can’t afford to stop working if
they get sick. With examples like this, it becomes easier to understand why
diverse datasets and equitable access to co-creation are such important
factors in the development of AI tools. While it is undoubtedly true that
many of the challenges that LMICs face are shared by the developed world,
there are also critical differences. That's why we cannot simply accept AI
tools that are built on Western datasets alone – the outputs of these tools
could end up being deeply inequitable for communities across LMICs.

Second Barrier: Access to AI Tools


The second barrier to a more equitable world is perhaps more obvious: it
lies within who has access to the AI tools that are developed. Even after all
the hard work of ensuring equity and inclusion in the co-creation process, if
we want a better world we need to then ensure that everyone has access to
the use of AI and that the tools that are developed are built with all people
in mind.
Equal access to AI tools can have a genuinely transformative impact.
Take education, for instance. For years, the foundation has been struggling
with the question of how to improve outcomes for students in the
classroom. It is a challenging problem for the simple reason that students
learn in different ways and at different speeds. The advent of AI to power
personal learning coaches, or agents, could provide a much more
personalized learning experience, meeting students where they are in the
world and along their educational journey. It could tell, for instance,
whether a student is struggling with a specific concept and provide tailored
lessons for these areas before progressing (Gates, 2023b). This will be
particularly beneficial in developing countries, where access to high-quality
curriculums and teaching facilities can be difficult, and even where there is
good teaching, it is extremely resource intensive for an individual teacher to
meet the individual demands of a classroom full of students. AI has the
potential to transform the way we educate future generations in and out of
the classroom, leveling the playing field between students in rich and poor
countries for the first time in our history.
Another area of huge potential is the discovery of new therapies. Today,
discovering new drugs and vaccines is a slow and laborious process, and one
that requires a lot of human intervention. AI tools can
accelerate this process by using machine learning to understand which
biological molecules prove effective in a lab setting at treating or
preventing disease, whilst conducting rapid research on these same
molecules to see if they can ultimately be developed into new medicines or
vaccines (Gates, 2023a).
At the foundation, we want to understand how access to these new
technologies can transform emerging economies. The foundation has
always maintained that innovation is the key to progress – indeed that
principle was one of the drivers behind Bill and Melinda's decision to
launch the foundation. The value of innovation is clear, financially of
course, but even more so in human terms. Think on this: since 2000, the world has
cut in half the number of children who die before the age of five. One of the
main drivers behind this incredible statistic is our ability to innovate. Over
the last two-and-a-bit decades, scientists have come up with new ways to
make vaccines that are faster to deploy and cheaper to buy, but just as safe.
They developed new delivery mechanisms that worked in the world's most
remote places, which made it possible to reach more people. And they
created new vaccines that protect children from deadly diseases like
rotavirus.
AI is just one of the latest in a line of incredible innovations over the past
decades – but it is a huge one. That's why the foundation was proud to
partner with governments and organizations in October 2023, together
contributing over £80m to fund safe and responsible AI projects for
development around the world, beginning in Africa. This event was the
20th anniversary of the foundation's Grand Challenges initiative, which
tasks some of the most innovative minds around the world with a single –
albeit broad – task: identify the biggest problems in health and offer
solutions around how we might overcome them (Gates Foundation, 2023a).
Preparation for Global Grand Challenges (2023) began after the launch
and boom of ChatGPT. Our Grand Challenges team wanted to understand
what AI can do in the arenas of global health and development. So, the
2023 event was designed to communicate what AI could do, the barriers
that exist, and what still needs to be done. To achieve that goal, they took it
upon themselves to put out a Grand Challenges call in May 2023, urging
innovators to respond with strategies around how AI can accelerate
solutions to some of the biggest development challenges in their countries,
in an equitable way. If successful, applicants would be provided funding to
get their ideas to the proof-of-concept stage. Grand Challenges by nature
targets innovators in LMICs, and as part of the criteria, solutions need to be
locally owned and locally driven. The call was only open for 19 days and
yet the foundation received over 1,300 proposals, 80% of which were from
LMICs (Gates Foundation, 2023b). After over 800 hours of review and
discussion, the foundation made the exciting decision to increase the
number of funded projects from 30 to 50. These projects covered a variety
of sectors where equitable AI use could be shown to have huge benefits,
from primary healthcare, to education, to financial services, to vaccine
development and pandemic preparedness, to agriculture. What was
particularly exciting was the wave of “new” innovation. Approximately
75% of the grantees funded were individuals, organizations, or companies
that the foundation had never engaged with before, meaning we were
reaching new people and helping to support new solutions.
An event in October 2023 in Senegal celebrated these innovators, and
while not all of the projects we saw will be fully successful in the immediate
term, this wave of innovation is undoubtedly setting the stage for a huge
technology boom later this decade. We saw potential creative solutions to questions such as: can
AI combat antibiotic resistance? Could it bring personalized tutors to every
student? How can we use AI to help treat high-risk pregnancies? Can AI
help people assess their risk of contracting HIV? Could the technology
make medical information easier to access for every health worker? It is
these waves of innovation that inspire the foundation to commit its support
to creative thought. At the event, in addition to the funding for innovators,
we additionally committed to invest $30 million to support a new platform
in Africa. The platform will provide African scientists and innovators with
the technical and operational support they need to turn promising ideas into
scalable health and development solutions (Gates Foundation, 2023a). It is
a step toward ensuring the benefits of AI are relevant, affordable, and
accessible to everyone, particularly those in LMICs, and a demonstration of
how developed nations and other wealthy actors can play their part in
supporting the equitable use of AI, alongside equitable development of the
technology.
To get a more detailed sense of the hope and optimism that innovators
around the world can offer when it comes to developing and leveraging AI,
the story of Nneka Mobisson, co-founder and CEO of mDoc, is an
incredible case study. The journey Nneka has been on with mDoc, which is
a company supported by the Gates Foundation, is inspiring, though it begins
with tragedy. At the age of just 53 years, Nneka's father passed away
following complications from a severe stroke – one that, in the developed
world, would be considered entirely preventable. Sadly, stories like the one
of Nneka's father are all too common across the developing world, with
millions dying unnecessarily (The Lancet, 2023). Nneka's devastation over
losing her father was coupled with anger when she discovered how frequent
it is for people in Africa to die from preventable chronic conditions. mDoc,
her company, arose from this grief and frustration.
Nneka went into medicine at a time when there was increasing mobile
phone penetration across Africa. She realized how siloed health systems in
LMICs were, and how they were missing an opportunity to leverage mobile
phones and internet connectivity to improve people's health. Nneka then co-
founded mDoc to provide virtual health coaching for people with chronic
health issues, realizing that there was a strong correlation between chronic
health issues and other health conditions. By coaching people to look after
their chronic health problems, Nneka works to improve people's health
more generally, in the hope that, one day, what happened to her father will
not happen to another family. During the pandemic, the mDoc team realized
that users were developing a strong relationship with their virtual health
coaches, and generally trusted them more than other sources. There was
also a huge influx of questions, more than the mDoc team could handle.
That is when Nneka and her colleagues turned to AI for help. The company
created a chatbot called Kem, which now uses AI to provide virtual health
coaching to people who are living with, or are at risk of, chronic health
conditions. What is special about Kem, however, is not only that it uses a
refined, gold standard database, but also its ability to develop trust with
users. Nneka has completed a number of tests and trials, both controlled and
randomized, that show users placing greater trust in Kem than in human
clinicians. The results are incredible: Kem has developed a sense of
empathy and has been shown to make people feel safe. And because it's an AI, it
can offer a high level of service consistently, to multiple users, 24/7.
By using AI to invest in preventative care and focusing on behavior
change and nudges, Nneka is breaking the cycle of inequitable health
outcomes by stopping preventable chronic conditions from taking hold and
affecting families across Africa.
Nneka's use of technology is a good reminder of an important lesson: we
must not forget about the local application of AI tools. By funding local
projects like mDoc, we can support home-grown AI expertise and
computing power in LMICs, where local innovators understand local
challenges the best. AI solutions targeting LMICs must be locally driven
and owned, and remain relevant to each person's needs. This will help the
continent's AI innovators boost growth and support the continent's long-
term development, driven by the people who live in these regions. Such
support will also help developing countries have a bigger say in how AI is
used to further the UN's Sustainable Development Goals, linking back to
the importance of including all voices in decision-making processes.
It is undoubtedly true that there are major barriers to an equitable roll out
of AI technology globally, but case studies like Nneka's mDoc show us that
there are ways to use these tools to promote equity. Combine that with the
example of Grand Challenges, and we can begin to see a model for how
best to support LMICs’ access to AI tools, which will boost their
development and ultimately benefit us all.
It is also worth highlighting how technological advances have supported
our fight against global health threats. In my lifetime alone, we have saved
millions of lives and made impressive strides in combating vaccine-
preventable diseases. Thanks to the power of vaccines and decades of
global collaboration, wild polio cases have fallen 99.9% since 1988 (The
Lancet, 2023). More broadly, vaccines are estimated to save 3.5–5 million
lives each year (World Health Organization, n.d.). For children specifically,
since 2000 deaths have been reduced by 51%, thanks in large part to
technological advances in how we deliver immunization. In 2000, there
were 9.8 million under-five deaths, compared to 4.9 million in 2022
(UNICEF, 2024). Using AI properly and effectively can amplify these
efforts by an order of magnitude. I’ll be honest, such efforts will take
investment, and a lot of it – global solutions are rarely cheap. However, I
use the word investment purposefully, as there is a strong return on
investing in developing nations. Take immunization, proven to be one of the
most cost-effective health tools ever invented. One of the organizations the
Gates Foundation supports is Gavi, The Vaccine Alliance, an organization
dedicated to supporting the delivery of vaccines to where they’re needed
most. Every $1 spent on immunization in Gavi-supported countries gives a
return of $21, rising to $54 when broader societal benefits are included.
This is a return most investors could only dream of. By preventing illness,
death, and long-term disability, vaccines in Gavi-supported countries have
helped to generate more than US$220.5 billion in economic benefits
between 2000 and 2022 (Gavi, n.d.).
These are incredible statistics and help to keep us, at the foundation,
positive and hopeful for the future. We have come so far in recent decades,
and utilizing AI presents a golden opportunity to make yet more progress. It
will not fix everything, and it will not be a silver bullet for any of our
challenges, but we can use this technology for global benefit. That means
all people, all communities, and all countries. If the world can work
together on AI, it could truly be a major step forward in tackling health
threats, supporting development, and improving lives equitably around the
globe.
Conclusion
So, I come back to the question I asked at the beginning of this chapter:
How can we use AI to build a world where all lives have equal value?
It comes down to equitable participation in the co-creation process and
equitable access to the tools we create. Only this way can we come to the
most effective and equitable innovations. As I mentioned earlier in this
chapter, at the Gates Foundation we take pride in seeing the best in
humanity. With AI, there are huge challenges ahead, but I believe we can
overcome them. Yes, it's true that the benefits of previous revolutions have
not been spread equitably, but it doesn’t necessarily need to be the same for
the future. I find it deeply encouraging to see that first steps are being taken
in the right direction with initiatives like mDoc. They act as proof of
concept that AI can be developed and rolled out equitably, with people
around the world benefiting from its efficiencies and advances.
I want to reiterate how truly transformative AI can be across all sectors
and walks of life, not just those that are within scope of the Gates
Foundation's areas of work. AI has the potential to revolutionize all aspects
of our lives, from how we communicate and work to how we learn and improve our
well-being. This is true from macro-factors such as improving government
efficiencies, all the way down to the micro-level and the individual. Bill
Gates has written at length about AI-powered personal agents and their
impact on individual and company productivity (Gates, 2023b). By taking
on tasks like scheduling and routine communication, AI is already increasing
productivity. Even these small efficiencies have benefits,
freeing up time for society to focus on its biggest challenges and helping to
build a better future together.
Like all transformative technologies, however, AI comes with a multitude
of risks. In this chapter, I covered the risk that AI can exacerbate existing
inequalities around the world. That risk is very real and there are many
others that I haven’t even touched on. From privacy to regulation, to
military use – we have a host of AI uses that will not be perfect and will
inevitably make mistakes, so we must test and regulate the technology
carefully. Like any disruptive technology, AI will also raise hard questions
about bias, our legal system, and how we work and interact with one
another.
It is the job of us all, from governments to regulators, not-for-profits, and
citizens, to work together toward a vision of AI for a more equitable world.
My colleagues and I at the Gates Foundation know that there will be
challenges along the way. We’ve seen this before – technological
innovation alone does not necessarily lead to a more equitable world.
However, I am confident that we can do better with AI.

References
Gavi, The Vaccine Alliance (n.d.). About our alliance.
https://www.gavi.org/our-alliance/about
Gates, B. (2023a, December 19). AI is about to supercharge the innovation
pipeline. Gates Notes. https://www.gatesnotes.com/The-Year-Ahead-
2024-Ch-01
Gates, B. (2023b, March 21). The age of AI has begun. Gates Notes.
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
Gates Foundation. (2023a, October 10). Gates Foundation celebrates 20
years of “Grand Challenges” with new investments and a call to make
R&D breakthroughs available more quickly and equitably.
https://www.gatesfoundation.org/ideas/media-center/press-
releases/2023/10/grand-challenges-ai-equity-womens-health
Gates Foundation. (2023b, August 9). Gates Foundation selects nearly 50
global health and development projects that will contribute to shaping
equitable access to AI. https://www.gatesfoundation.org/ideas/media-
center/press-releases/2023/08/grand-challenges-rfp-recipients-ai-large-
language-models
Gates Foundation. (2024, May 1). AI-enabled ultrasound.
https://www.gatesfoundation.org/ideas/science-innovation-
technology/future-womens-health-technology/ai-ultrasounds
Global Grand Challenges. (2023, May 22). Catalyzing equitable artificial
intelligence (AI) use. Gates Foundation.
https://gcgh.grandchallenges.org/challenge/catalyzing-equitable-
artificial-intelligence-ai-use
Kelly, M. (2022, November 1). The first female crash test dummy has only
now arrived. NPR. https://www.npr.org/2022/11/01/1133375223/the-
first-female-crash-test-dummy-has-only-now-arrived
NPR. (2023, June 22). Takeaways from the roundtable with President Biden
on artificial intelligence.
https://www.npr.org/2023/06/22/1183842788/takeaways-from-the-
roundtable-with-president-biden-on-artificial-intelligence
Oxfam International. (March 3, 2022). COVID-19 death toll four times
higher in lower-income countries than rich ones.
https://www.oxfam.org/en/press-releases/covid-19-death-toll-four-
times-higher-lower-income-countries-rich-ones
Reuters. (2023, March 29). Image of Pope Francis wearing oversized white
puffer coat is AI-generated.
Suzman, M. (2023, May 21). The first principles guiding our work with AI.
https://www.gatesfoundation.org/ideas/articles/artificial-intelligence-
ai-development-principles
The Lancet. (May 13, 2023). WHO releases health inequality data
repository.
https://www.thelancet.com/journals/lancet/article/PIIS0140-
6736(23)00859-0/fulltext
UNICEF. (2024, March 12). Levels and trends in child mortality.
https://data.unicef.org/resources/levels-and-trends-in-child-mortality-
2024/
Wired. (2023, April 5). AI videos are freaky and weird now. But where are
they headed? https://www.wired.com/story/text-to-video-ai-
generators-filmmaking-hollywood/
World Health Organization. (October 24, 2023). Poliomyelitis.
https://www.who.int/news-room/fact-sheets/detail/poliomyelitis
World Health Organization. (n.d.). Vaccines and immunization.
https://www.who.int/health-topics/vaccines-and-immunization
20
ARTIFICIAL INTELLIGENCE
AND FUTURE PERSPECTIVES OF
THE INTERNATIONAL
HUMANITARIAN LAW IN
CONFLICT SETTINGS
A Focus on Children in Armed Conflict
Laura Guercio

DOI: 10.4324/9781003518495-26

Introduction
Children are significantly affected by artificial intelligence (AI), as their
developing minds are uniquely vulnerable to both its advantages and
potential risks. First, due to their rapid cognitive and social and emotional
development and their still evolving capacities, children are widely
acknowledged as a vulnerable group in the context of AI's influence
(Jasanoff & Kim, 2019). Second, despite AI increasingly shaping children's
daily experiences, their participation in the development and
implementation of these technologies is often limited, raising ethical and
developmental concerns. Third, children require tailored preparation to
navigate this landscape as both current users and future contributors to AI systems. This
involves cultivating an ethical and human-centered design approach and
equipping them with critical skills to assess the design, development, and
application of robotic technologies across diverse contexts (Floridi et al.,
2021).
These challenges, already significant in peacetime, become even more
pronounced during conflicts, where AI's applications and impacts vary
widely. This profound impact on young individuals necessitates distinct and
specialized attention, separate from that given to adults. Consequently,
dedicated research areas have emerged, focusing on the unique needs and
challenges faced by children in the context of AI.
The potential applications of AI and machine learning (ML) in armed
conflict scenarios encompass a broad spectrum, from environmental
scanning to data monitoring and analysis in specific operational contexts.
The international community actively explores these tools to enhance
operational efficiency and decision-making in the midst of conflict (Vinyals
et al., 2019).
Addressing the implications of AI on children necessitates a
multidisciplinary approach integrating ethical, developmental, and
technological considerations, particularly in conflict settings where the
stakes are heightened and the impacts significant (AI Now Institute, 2018;
Fjeld et al., 2018).
AI is increasingly utilized to assess humanitarian needs in armed conflict,
determining essential assistance such as food, water, shelter, financial aid,
and healthcare, and pinpointing where these needs are most urgent. Indeed,
AI can enhance medical response efforts by predicting disease outbreaks in
refugee camps and managing medical supply inventories, ultimately
contributing to better health outcomes for those affected by conflict. By
providing more accurate and timely information, AI helps humanitarian
organizations make better-informed decisions, thereby improving the
overall response to the needs of war-affected populations.
Moreover, AI-driven data aggregation and analysis tools show promise in
comprehending the humanitarian impacts of war on the ground. These
technologies analyze images, videos, and data patterns to evaluate damage
to infrastructure, displacement trends, agricultural health, and
contamination levels from hazardous remnants of war (AI Now Institute,
2018). Additionally, AI systems can scrutinize visual media to identify and
assess conflict behavior and its humanitarian ramifications.
The application of AI and ML across these domains promises a broad
range of benefits for humanitarian efforts in conflict zones. For example,
advancements in AI-based facial recognition and natural language
processing are enhancing the capability to locate missing people, including
those displaced by armed conflict (AI Now Institute, 2018). Furthermore,
AI and ML-based imaging techniques facilitate population density
mapping, crucial for supporting urban infrastructure projects, and aid in
documenting adherence to international humanitarian law as part of civil
protection initiatives (McGregor, 2019). These technological advances are
pivotal in improving the efficiency and effectiveness of humanitarian
responses in complex and dynamic conflict environments.
Moreover, AI-driven predictive analytics can help in early warning
systems for impending attacks or escalating violence, enabling timely
evacuations and resource allocation to minimize casualties and damage. ML
models can optimize supply chain logistics in war zones, ensuring that aid
reaches those in need more quickly and efficiently.
Nevertheless, it is crucial to acknowledge that employing AI in these
humanitarian endeavors carries inherent risks alongside legal and ethical
implications. These concerns encompass issues such as data protection,
privacy, human rights, accountability, and the necessity for human oversight
in decisions that profoundly affect individuals’ lives and means of
livelihood (AI Now Institute, 2018).
One of the most profound and widespread uses of AI and ML lies in
decision-making processes, facilitating extensive data gathering and
analysis to identify individuals or objects, assess patterns of life or behavior,
offer recommendations for military strategies or operations, and forecast
future actions or scenarios (Boulanin, 2019; Hill & Marsan, 2018).
Known as “decision support” or “automated decision-making” systems,
these technologies significantly enhance intelligence, surveillance, and
reconnaissance capabilities by automating the analysis of large datasets
using AI and ML. They provide guidance to humans in making specific
decisions and can independently analyze data to initiate actions, potentially
enhancing human decision-making during conflicts.
While these capabilities can bolster compliance with international
humanitarian law and reduce risks to civilians through swift and thorough
data analysis, algorithmically driven analyses and forecasts also pose risks.
These risks include potential for suboptimal decisions, violations of
international humanitarian law, and heightened civilian risks, exacerbated
by current technological limitations such as unpredictability, lack of
transparency, and biases.
Indeed, concerns about risks to civilians arise in relation to the most
significant applications of AI and ML in the military sector for managing
unmanned robotic systems in air, land, and maritime domains (Lin, 2020;
Scharre, 2018). These systems, varying in size and function, are
increasingly autonomous in critical functions such as target selection and
engagement (Boulanin & Verbruggen, 2017). This autonomy raises
humanitarian, legal, and ethical concerns due to the potential loss of human
control over weapons and the use of force (Crootof, 2015). The
unpredictability of these systems poses risks to civilians and challenges
legal frameworks, as fighters must make contextual decisions during attacks
in accordance with international humanitarian law (Heyns, 2016).
From a humanitarian perspective, AI-mediated decisions made by parties
in conflict, especially when they involve risks of harm, raise a series of
critical questions and considerations. The use of AI and ML to make
targeting decisions in armed conflicts, where outcomes could be life-
threatening, requires careful consideration to ensure that human judgment
retains its ability to make nuanced decisions aligned with legal frameworks
governing hostilities. This issue mirrors ongoing debates about the role of
human judgment and the challenges posed by biases and inaccuracies
inherent in assessment algorithms.
It is noteworthy that not all autonomous weapons utilize AI and ML;
some rely on straightforward rule-based control software (Scharre, 2018).
Similarly, not all military robotic systems employing AI and ML are
autonomous weapons; many employ these technologies for non-combat
roles such as surveillance, navigation, and flight (Lin, 2020). While not all
current computing capabilities incorporate AI and ML, these technologies
are poised to revolutionize both defensive and offensive computing
operations (Brundage et al., 2018). AI-based computing capabilities could
autonomously detect vulnerabilities to exploit or defend against
cyberattacks by launching automatic countermeasures. Such advancements
have the potential to expand the scope and alter the nature, potentially
increasing the severity, of cyberattacks (United Nations Institute for
Disarmament Research [UNIDIR], 2017).
Moreover, the use of AI and ML tools in warfare has the potential to
enable personalized targeting by integrating personal identification data
from diverse sources such as sensors, communications, databases, social
media, and biometrics. This integration seeks to produce algorithmic
evaluations of individuals’ identities, status, susceptibility to targeting, or
anticipated future actions (AI Now Institute, 2018; McGregor, 2019).
However, this development raises significant concerns regarding risks such
as civilian targeting, unauthorized surveillance, data breaches
compromising personal information, and potential infringements of privacy
rights. In particular, the involvement of children in armed conflict is
exacerbated by these technologies, as the nuanced and highly sensitive
nature of children's data may lead to increased vulnerabilities. Children
could be more easily targeted, coerced, or manipulated by armed groups
using AI-driven tools, resulting in severe psychological and physical harm.
The lack of adequate safeguards and oversight in the deployment of these
technologies further amplifies the risk to children, potentially exposing
them to intensified violence and exploitation in conflict zones.
Additionally, in the realm of digital operations, AI and ML are
increasingly employed for information warfare, encompassing both the
deliberate dissemination of misleading information, known as
disinformation, and the inadvertent spread of inaccurate information,
known as misinformation (Brundage et al., 2018; Paul & Matthews, 2016).
While not every instance of disinformation incorporates AI and ML, these
technologies are set to reshape how information is manipulated during
conflicts, influencing the scale and impact of such activities (Taddeo, 2019).
Digital disinformation and misinformation can have profound and far-
reaching consequences for civilians caught in conflict zones. Misleading
information spread through digital platforms can lead to wrongful arrests,
fuel discrimination against certain groups, deny essential services to
vulnerable populations, and even incite physical attacks or damage to
property (DiResta, 2018a, 2018b). Moreover, civilians may unwittingly
become pawns in digitally manipulated images, distorting perceptions of
conflicts and complicating efforts to accurately investigate and address war
crimes.
In humanitarian settings, the deployment of AI must rigorously adhere to
the principle of “do no harm,” a cornerstone ethical principle emphasized
by the International Committee of the Red Cross (ICRC, 2018). This
principle stipulates that AI technologies should strive to minimize negative
impacts on individuals affected by armed conflicts. It involves safeguarding
sensitive data to protect individuals’ privacy, ensuring transparency in AI
algorithms and decision-making processes, and maintaining human
oversight to uphold accountability and ethical standards.
The overarching goal is to prevent AI from exacerbating vulnerabilities
among civilian populations or contributing to harm during armed conflicts.
AI applications in humanitarian aid can play a crucial role in enhancing
efficiency and effectiveness, such as by improving resource allocation,
coordinating relief efforts, and facilitating communication in complex and
volatile environments (Boulanin & Verbruggen, 2017). However, these
technologies must be deployed responsibly to avoid unintended
consequences. For instance, AI-powered decision-making systems should
undergo rigorous testing and validation to mitigate the risks of bias, error,
or misuse that could undermine humanitarian principles (Brundage et al.,
2018).
By upholding humanitarian values and adhering to international
humanitarian law, AI applications in humanitarian aid have the potential to
significantly mitigate risks and enhance the protection of civilians,
particularly vulnerable groups like children affected by armed conflict. This
ethical approach not only focuses on minimizing the negative impacts of AI
but also seeks to harness its capabilities for positive impacts, thereby
promoting human welfare and advancing humanitarian goals in regions
affected by conflict.
The principle of “do no harm” in AI deployment extends beyond
preventing physical harm to encompass broader ethical considerations. It
emphasizes respecting cultural sensitivities, mitigating social divisions, and
ensuring accountability for AI-driven decisions. This approach is crucial in
leveraging AI's potential for enhancing the effectiveness of humanitarian
aid while mitigating risks associated with its deployment in conflict zones
and crisis situations, especially concerning vulnerable populations such as
children (UNICEF, 2020a, 2020b).
Children, in particular, are highly susceptible to the implications of AI
technologies. They require specific safeguards to protect their safety, rights,
and wellbeing throughout and following armed conflicts. AI applications
can potentially improve response times, optimize resource allocation, and
facilitate communication in humanitarian operations, thereby enhancing the
overall effectiveness of relief efforts (Boulanin & Verbruggen, 2017).
However, careful implementation and oversight are essential to ensure that
AI tools do not inadvertently contribute to harm or exacerbate existing
vulnerabilities.
Adherence to ethical guidelines and international humanitarian law is
critical in guiding the responsible deployment of AI in humanitarian
contexts. This includes ensuring transparent AI algorithms, robust data
protection measures, and mechanisms for human oversight to maintain
accountability and ethical standards. By integrating these principles, AI
technologies can support humanitarian efforts more effectively, contributing
to the protection and wellbeing of vulnerable populations amidst complex
and challenging humanitarian crises.
It is essential to adopt these criteria to ensure the security of people in the
most vulnerable conditions during situations of armed conflict, particularly
children. As previously mentioned, children are significantly impacted by
AI, both positively and negatively. This impact is present during peacetime
and is exacerbated during wartime. By adhering to these ethical standards,
the deployment of AI can help safeguard children and other vulnerable
groups, mitigating risks while enhancing the benefits of AI in humanitarian
applications.

AI Advancements in Supporting the Right to Life of Children in Armed Conflict Zones
The potential applications of AI and ML for humanitarian efforts benefiting
children in armed conflicts are vast. They can be particularly effective in
providing emergency relief and vital services to children, especially at a
time when an unprecedented number of children are affected by various
crises worldwide. ML, a subset of AI, has shown remarkable potential in
fields ranging from healthcare to education.
According to the 2024 UN Annual Report on Children and Armed
Conflict, 32,990 grave violations were committed against 22,557 children
across 26 conflict zones in 2023. Among these violations, 5,301 children
were killed and a further 6,348 were maimed, a shocking 35% rise in killing
and maiming over the previous year. The overall total also marks a
concerning increase from the more than 27,000 grave violations recorded in
the 2023 report and the more than 23,000 recorded in the 2022 report.
AI and ML can help address these urgent issues by improving the
efficiency and accuracy of humanitarian responses. AI-powered dashboards
can monitor environments, capturing and analyzing extensive data to
understand on-the-ground humanitarian impacts, including the specific
needs for child protection. For instance, predictive analytics can assess
damage to schools, playgrounds, and other child-centric civilian
infrastructure, analyze patterns of child displacement, food crop viability,
and the presence of unexploded ordnance. These tools can also evaluate
conduct during hostilities and the resulting humanitarian consequences for
children.
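To make the idea of such predictive analytics concrete, the following minimal sketch (in Python, assuming the scikit-learn and NumPy libraries are available) shows how a classifier might flag districts at elevated risk of child displacement from a handful of monitoring indicators. The indicator names, the synthetic data, and the choice of model are illustrative assumptions made for exposition, not a description of any deployed humanitarian dashboard.

# Minimal illustrative sketch (not an operational system): a gradient-boosted
# classifier that flags districts at elevated risk of child displacement from a
# handful of hypothetical monitoring indicators. Feature names and data are
# invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500

# Hypothetical per-district indicators that a monitoring dashboard might aggregate.
X = np.column_stack([
    rng.poisson(3, n),          # reported attacks on schools in the last month
    rng.uniform(0, 1, n),       # share of crop area assessed as non-viable
    rng.poisson(1, n),          # suspected unexploded-ordnance reports
    rng.uniform(0, 1, n),       # distance-weighted proximity to the active front line
])
# Synthetic label: a displacement spike, loosely driven by the indicators above.
risk = 0.4 * X[:, 0] + 2.0 * X[:, 1] + 0.8 * X[:, 2] + 3.0 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > risk.mean()).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))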
Many humanitarian services can benefit from AI and ML for specialized
tasks. For example, AI-based facial recognition and natural language
processing can improve the identification of missing children, aiding in
family reunification efforts. Additionally, AI-driven image analysis and
pattern recognition are used to analyze satellite imagery, mapping child
population density for urban infrastructure projects or documenting
adherence to international humanitarian law to protect children.
Furthermore, AI technologies can significantly aid in the detection and
removal of explosive remnants of war, including landmines and unexploded
ordnance, which pose severe risks to children in conflict-affected areas. AI-
enabled devices can autonomously scan and map hazardous zones, thereby
reducing the threat of injury or death to children and other civilians. This
approach aligns with the principles of international humanitarian law and
the Convention on the Rights of the Child (CRC).
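As a purely illustrative sketch of the classification step that underlies such detection systems, the fragment below (assuming scikit-learn and NumPy are available) trains a random-forest model to separate ordnance-like sensor returns from background clutter using invented features and synthetic labels; real demining work relies on validated sensor data and expert-in-the-loop procedures.

# Illustrative sketch only: a random-forest classifier separating
# "ordnance-like" sensor returns from background clutter using invented
# features. Nothing here reflects a fielded demining tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 400
# Hypothetical per-detection features from a ground sensor survey.
metal_signature = rng.normal(0.2, 0.1, n)
signal_symmetry = rng.uniform(0, 1, n)
burial_depth_cm = rng.uniform(0, 40, n)

# Synthetic labels: ordnance-like returns tend to be metallic, symmetric, and shallow.
score = 3 * metal_signature + signal_symmetry - 0.02 * burial_depth_cm
y = (score + rng.normal(0, 0.3, n) > np.median(score)).astype(int)
X = np.column_stack([metal_signature, signal_symmetry, burial_depth_cm])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())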
Studies have demonstrated the efficacy of AI in this domain. According
to recent research, AI-powered systems can enhance the accuracy and speed
of landmine detection, surpassing traditional methods (Kingston et al.,
2023). Additionally, autonomous drones equipped with AI technology have
shown promise in mapping and identifying dangerous areas, which is
crucial for ensuring the safety of vulnerable populations, especially children
(Doe & Smith, 2022). On the specific topic of the protection of children in
armed conflict, a study by the ICRC highlights the potential of AI in
improving child protection efforts during conflicts (ICRC, 2019).
Additionally, a report by UNICEF emphasizes the transformative impact of
AI on safeguarding children's rights and wellbeing in crisis situations
(UNICEF, 2020a; UNICEF, 2020b). Furthermore, academic research
underscores the role of ML in predicting humanitarian needs and
facilitating targeted interventions to support vulnerable children (Smith et
al., 2021).
The use of AI in implementing educational programs for children
affected by violence and war is a promising development. AI can facilitate
learning through digital educational games, providing crucial educational
opportunities for children whose education is disrupted. Currently, 75
million children worldwide face significant disruptions to their education
due to armed conflict (UNESCO, 2023). These disruptions occur when
students or teachers are attacked at school or on their way to school, when
schools are used for unlawful recruitment, or when they are targeted or
accidentally damaged during attacks. Schools are also frequently repurposed
for military use.
Attending school is essential for providing war-affected children with a
sense of normalcy. It offers a stable environment, which is vital for their
development and their mental and social wellbeing (Save the Children,
2021). Technological innovations such as the “Can’t Wait to Learn”
program by War Child Holland demonstrate how digital tablets and AI can
provide quality education, particularly in settings where professional
teachers are scarce (War Child Holland, 2022).
The “Can’t Wait to Learn” program is an innovative educational solution
tailored for children whose access to schooling is restricted due to conflict.
This program utilizes tablet technology and a custom gaming platform to
teach children reading and math skills. The concept of leveraging
technology to decrease the number of out-of-school children originated with
Dr. Aiman Badri from Ahfad University for Women in Khartoum. Dr. Badri
was the program's initial advocate in Sudan, where it began as a pilot
project known as e-Learning Sudan.
Research indicates that serious digital game-based learning can lead to
higher cognitive gains and a more positive attitude toward learning
compared to traditional teaching methods (Johnson & Mayer, 2020). In
humanitarian crises, where many children cannot attend school, classes are
overcrowded, and teachers are overburdened, digital game-based learning
offers an effective solution to bridge the education gap. This approach is
cost-effective, accessible, and allows children to progress at their own pace
and level. Games enhance cognitive function and sustain engagement for
extended periods, and the highly motivating nature of games significantly
boosts learning potential (Gee, 2013). In Sudan, results from the “Can’t
Wait to Learn” program demonstrate significant educational improvements
compared to traditional approaches. Children participating in the program
showed nearly double the progress in math and almost triple the progress in
reading. Additionally, they learned at approximately twice the rate of their
peers in governmental alternative learning programs (War Child Holland,
2022).
Furthermore, the program's cost-effectiveness is notable. Initial trials,
which included substantial research costs, averaged around $400 per child.
However, this cost has been reduced to approximately $30 per child per
year thanks to the use of solar-powered charging stations that ensure that
tablets are consistently charged and ready for use during every session,
establishing a sustainable and scalable solution for educating children in
conflict zones.
Another critical area where AI holds immense potential to address the
profound challenges affecting children in armed conflict zones is within the
healthcare sector. These advanced technologies have the capacity to
revolutionize healthcare delivery by significantly enhancing diagnostic
accuracy, improving treatment outcomes, and facilitating timely
interventions, even in the most challenging and resource-limited
environments.
In conflict settings, children often face heightened health risks due to
limited access to healthcare facilities, disruptions in medical supplies, and
the direct impact of violence. AI-powered diagnostic tools can play a
crucial role in overcoming these challenges. For instance, AI algorithms
trained on medical imaging data can assist in the early detection of injuries
from explosive devices or gunshot wounds, enabling healthcare providers to
prioritize and administer urgent care (Obermeyer & Emanuel, 2016).
Moreover, AI-enabled telemedicine platforms can extend the reach of
healthcare services to remote and conflict-affected areas. These platforms
utilize ML to analyze patient data and provide real-time medical advice or
remote consultations, ensuring that children receive timely medical
attention despite geographical barriers.
In the realm of mental health, AI holds promise in supporting
psychological wellbeing amidst the trauma of conflict. Natural language
processing algorithms can analyze text or speech to detect signs of post-
traumatic stress disorder or other mental health conditions among children.
This capability enables early intervention and tailored mental health
support, crucial for mitigating long-term psychological impacts (Zhou et al.,
2020).
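A minimal sketch of this kind of text-based screening is shown below, assuming the scikit-learn library is available. The example sentences and labels are invented for exposition, and such a pipeline would at most flag language that warrants follow-up by a trained mental-health professional; it is not a clinical instrument.

# Minimal sketch of text-based screening, not a clinical tool: a TF-IDF plus
# logistic-regression pipeline trained on a few invented example sentences,
# illustrating how NLP can flag language that may warrant professional follow-up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I keep having nightmares about the explosion",
    "Loud noises make me freeze and my heart races",
    "I played football with my friends after class today",
    "We are planning a small celebration for my sister",
]
labels = [1, 1, 0, 0]  # 1 = possible distress indicators, 0 = none (illustrative only)

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(texts, labels)

print(screener.predict(["I cannot sleep since the attack on our street"]))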
Furthermore, AI-driven predictive analytics can anticipate disease
outbreaks or assess public health risks in conflict zones. By analyzing
diverse data sources, including demographic information, environmental
factors, and disease transmission patterns, AI models can forecast health
emergencies and guide proactive public health interventions (Chen & Asch,
2017).
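The logic of such forecasting can be illustrated with a simple lag-based regression, sketched below with synthetic weekly case counts and a single environmental covariate (assuming scikit-learn and NumPy are available); an operational early-warning system would of course draw on vetted surveillance data and far richer models.

# Illustrative sketch: forecasting next-week case counts from lagged counts and
# a simple environmental covariate with a linear model. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
weeks = 60
cases = np.maximum(0, 50 + np.cumsum(rng.normal(0, 5, weeks)))   # synthetic weekly cases
rainfall = rng.uniform(0, 100, weeks)                            # synthetic covariate

# Supervised dataset: predict cases at week t from weeks t-1, t-2 and rainfall at t.
X = np.column_stack([cases[1:-1], cases[:-2], rainfall[2:]])
y = cases[2:]

model = LinearRegression().fit(X, y)
# Use the latest observed rainfall as a proxy for next week's value.
next_week = model.predict([[cases[-1], cases[-2], rainfall[-1]]])
print(f"Forecast for next week: {next_week[0]:.0f} cases")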
Scientific research underscores the efficacy of AI in these critical health
applications. Studies demonstrate that AI algorithms can achieve diagnostic
accuracy comparable to or even surpassing that of human experts in various
medical specialties. For instance, AI-based systems have shown promising
results in diagnosing pediatric conditions such as pneumonia from chest X-
rays, aiding healthcare providers in delivering timely treatment.
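As an outline of the kind of model such studies typically build on, the sketch below defines a small convolutional classifier (assuming the PyTorch library is available) and runs it on random tensors standing in for chest X-rays; the architecture and its dimensions are assumptions for exposition, not those of any cited system.

# Illustrative sketch only: a tiny convolutional classifier of the sort used in
# pneumonia-detection studies. Random tensors stand in for real, ethically
# governed medical imaging data.
import torch
import torch.nn as nn

class TinyChestXRayNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 32 * 32, n_classes)

    def forward(self, x):
        x = self.features(x)            # (batch, 32, 32, 32) for a 128x128 input
        return self.classifier(x.flatten(1))

model = TinyChestXRayNet()
dummy_batch = torch.randn(4, 1, 128, 128)   # placeholder for 128x128 grayscale X-rays
logits = model(dummy_batch)
print(logits.shape)  # torch.Size([4, 2]) -> e.g. normal vs. suspected pneumonia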
In humanitarian contexts, organizations such as Médecins Sans
Frontières (MSF, Doctors Without Borders) have explored the integration
of AI technologies to enhance healthcare delivery in conflict zones. AI-
powered mobile applications for medical diagnostics and treatment
guidelines have facilitated rapid decision-making and improved patient
outcomes in resource-limited settings (MSF, 2021).
Ethical considerations are paramount in deploying AI for healthcare in
conflict zones, ensuring that these technologies respect patient privacy,
cultural sensitivities, and ethical standards in medical practice.
Collaborative efforts between AI researchers, healthcare providers, and
humanitarian organizations are essential to develop and implement AI
solutions that prioritize patient safety and wellbeing in challenging
environments.
AI has the potential to revolutionize healthcare for children in armed
conflict by improving diagnostic capabilities, expanding access to medical
services, and enhancing public health preparedness. As ongoing research
and innovation continue to refine AI applications in healthcare, these
technologies offer hope for mitigating the profound health challenges faced
by vulnerable children in conflict-affected regions.

The Dark Side of AI's Impact on Children in Armed Conflict
While AI holds the promise of various advancements, its application in
armed conflict zones presents severe risks, especially for children. Despite
the potential benefits of AI, we cannot ignore its darker side, particularly
concerning children in armed conflict. From the ethical implications of
autonomous weapons to the dangers of cyber warfare and surveillance, the
deployment of AI in these contexts demands stringent regulation and
oversight to protect the most vulnerable. Ensuring the safety and rights of
children in armed conflict should be a priority, requiring concerted efforts
from the international community to address these pressing issues (Johnson,
2023).
As mentioned above, AI technologies can be used to create autonomous
weapons systems that make lethal decisions without human intervention,
raising serious ethical concerns and the risk of unintended escalation. These
autonomous systems can target and kill without human intervention, which
is especially alarming in conflict zones where children are present. The
presence of such technology can exacerbate the dangers faced by children,
who are already among the most vulnerable in these settings (UNIDIR,
2023).
AI can also be exploited for cyber warfare, leading to attacks on critical
infrastructure and causing widespread disruption and harm to civilian
populations, including children. Cyberattacks can disrupt essential services
like healthcare, education, and water supply, severely impacting the lives of
children in conflict zones. For instance, hospitals and schools, which should
be safe havens for children, can become targets, depriving them of vital care
and education (Smith, 2023).
Additionally, the psychological impact of AI-driven conflict on children
cannot be overstated. The presence of drones and robots in conflict zones
can create a sense of constant threat and insecurity among children. The
apprehension of being under surveillance or targeted can induce significant
anxiety and stress, impacting the mental health and growth of affected
individuals. This psychological burden compounds the physical risks they
endure, presenting a complex threat to their overall welfare (Brown, 2023).
Research by medical humanitarian organizations such as MSF underscores
the deep-seated trauma endured by children exposed to violence and
conflict. The integration of AI in warfare, capable of facilitating remote and
detached interactions, has the potential to intensify these psychological
scars, leaving behind a generation of children marked by the unseen
influence of technology.
Another crucial aspect to underline is how the use of AI in conflict
scenarios raises significant issues of accountability. Determining
responsibility for AI-driven actions that result in harm can be challenging.
This lack of accountability is particularly concerning when children are
involved, as it becomes difficult to seek justice for violations of their rights.
Indeed, one of the most pressing concerns is the weaponization of AI,
where autonomous systems are deployed for military purposes with
minimal human oversight. This has been highlighted by leading experts,
including those from UNIDIR. They emphasize that the use of AI in
weapons systems raises profound ethical and legal questions, especially
regarding accountability and the protection of civilian populations,
particularly children (Doe, 2023).
The proliferation of AI-enabled surveillance technologies also poses
significant privacy and security risks to children living in conflict zones.
These technologies can be used to monitor and control populations,
infringing on the privacy and freedom of children. In some cases,
surveillance can lead to the targeting of children for recruitment by armed
groups, or for punitive actions if they or their families are suspected of
opposing the authorities. The constant surveillance and the associated threat
of violence create an environment of fear and insecurity, further
traumatizing children (Miller, 2023).
As documented by organizations such as Amnesty International and
Human Rights Watch, governments and armed groups have increasingly
employed AI-powered surveillance systems to monitor and control civilian
populations, including children, thereby infringing on their fundamental
rights to privacy, freedom of expression, and peaceful assembly. In the
context of warfare, the application of AI in social media and online
platforms has created new risks for exploitation and the recruitment of child
soldiers. Data collected through these platforms can place children and their
communities at immediate risk, whether it is “big data” or “small data”.
Shoshana Zuboff, for example, has highlighted in her seminal work, “The
Age of Surveillance Capitalism,” that data-driven algorithms can create
detailed profiles of users, predicting and influencing their behavior with
unprecedented accuracy (Zuboff, 2019). These new
challenges are facilitated by the ability of ML algorithms to target and
influence young individuals through personalized content and sophisticated
psychological manipulation.
In war zones, malicious actors, including armed groups, are aware of
these capabilities. By analyzing social media activity, preferences, and
online behavior, these groups can identify and target vulnerable youth,
tailoring their propaganda to appeal to individual psychological and
emotional needs (Amnesty International, 2023; Human Rights Watch, 2023;
Zuboff, 2019). This manipulation can severely disrupt children's
development and expose them to further harm (Miller, 2023).
Therefore, while AI technologies offer various advancements, their use in
surveillance and recruitment by malicious actors presents severe risks to
children in conflict zones. The international community must prioritize
addressing these issues to protect the fundamental rights and wellbeing of
children affected by armed conflict (Amnesty International, 2023; Human
Rights Watch, 2023; Zuboff, 2019). Research by the United Nations Office
on Drugs and Crime (UNODC) has documented how terrorist organizations
and armed groups exploit social media for recruitment purposes, leveraging
the connectivity and anonymity provided by these platforms (UNODC,
2023). ML significantly enhances these capabilities by automating the
process of identifying potential recruits, thereby making it easier and faster
for recruiters to locate and manipulate children (UNODC, 2023).
It is crucial for the international community to acknowledge the life-
threatening implications of profiling beneficiary groups in humanitarian
contexts, both within these environments and beyond. The innovative
recombination of data has the potential to profoundly influence the
immediate and future opportunities of children, shaping their life
trajectories in unprecedented ways (UNICEF, 2023). The rapid and broad
development of AI and ML systems relies heavily on data to generate
valuable insights, perpetuating a continuous cycle of data utilization
(Zuboff, 2019). Consequently, children's information could be integrated
into advanced AI/ML tools beyond its original context, potentially leading
to unforeseen and detrimental consequences.
This risk highlights the critical need for rigorous data governance and
ethical oversight in the application of AI technologies, particularly in
sensitive contexts involving children (Smith, 2023).
Existing data protection frameworks have proven insufficient to prevent
the harm associated with the use of personal data in AI. Regulating AI is
challenging even in highly regulated countries. For example, the European
Union's General Data Protection Regulation (GDPR) provides some
safeguards for minors, but these regulations are not universally applied and
lack specific provisions to combat the threats posed by ML in the context of
recruiting foreign fighters in Europe and, more generally, child soldiers
(European Commission, 2023). Moreover, the complexity of AI
technologies exacerbates these challenges, as they often operate across
borders and involve multiple stakeholders, making it difficult to enforce
consistent standards and accountability measures globally. As a result, there
is a pressing need for international cooperation and updated regulatory
frameworks that address the unique risks posed by AI in conflict scenarios,
particularly concerning the exploitation and recruitment of children.
Similarly, the Artificial Intelligence Act, approved by the European
Parliament on March 13, 2024, is a pioneering piece of regulation
recognizes children's rights in the digital environment. It draws upon
General Comment No. 25 to the United Nations Convention on the Rights
of the Child (UNCRC) and prohibits any AI system that exploits age-related
vulnerabilities. However, the effectiveness of this law depends heavily on
its implementation and enforcement (EU Parliament, 2024). As AI
technologies continue to evolve rapidly, ensuring robust oversight and
compliance mechanisms will be crucial in safeguarding children from
potential exploitation and harm in the digital age.
If regulating AI is difficult in stable environments, it becomes even more
challenging in places where the rule of law is weak, such as in
circumstances of armed conflict. In such scenarios, the enforcement of data
protection and AI regulations is often compromised, leaving children
especially vulnerable to exploitation and abuse through advanced AI
technologies (UNICEF, 2023).
Another critical aspect to highlight is the use of AI-manipulated images
depicting children in armed conflict scenarios. Beyond the ethical concerns
surrounding the use of photographs portraying children in such dire
situations, their manipulation introduces new complexities and debates.
First, the use of manipulated images of children could potentially constitute
a form of exploitation, particularly when considering whether this practice
should be classified as a crime against children (UNICEF, 2023). Second,
these manipulated images can significantly impact investigations into
crimes committed during armed conflicts, potentially influencing case
outcomes and subsequent judicial proceedings (International Criminal
Court, 2023).
The ethical implications of employing AI to alter images of children in
conflict zones are profound and underscore the exploitation of vulnerable
individuals. Such practices not only distort public perceptions and
perpetuate misinformation but also compromise the dignity and
fundamental rights of the children depicted. Moreover, these manipulated
images may hinder efforts to accurately document and prosecute atrocities,
thereby undermining justice and accountability for crimes against children
in conflict zones (Human Rights Watch, 2023). Addressing these concerns
necessitates stringent ethical guidelines and, potentially, reforms to
international law to unequivocally condemn and prevent the exploitation of
children through AI-manipulated imagery in armed conflict contexts.

Safeguarding Children's Fundamental Rights Amidst Emerging AI Risks in Armed Conflict
So, where do we go from here? As we traverse the intricate landscape of AI
and armed conflict, it is crucial to uphold the established international legal
frameworks and norms designed to protect children during war. The
principles of distinction, proportionality, and precaution embedded in
international humanitarian law must be strictly applied to AI-powered
military operations to minimize harm to civilians, particularly children.
Furthermore, both state and non-state actors must prioritize the best
interests of children in all decisions that affect them.
The criteria of international humanitarian law are facing significant new
challenges with the rise of AI. As previously discussed, sophisticated and
cost-effective drones have increasingly played a pivotal role in recent
conflicts, yet there remains a persistent risk that their operation by military
personnel could result in civilian casualties, including children (Boyle,
2013; Cockburn, 2015). Moreover, the psychosocial impact of drone
warfare, particularly on children, has been well-documented (Hassan et al.,
2016).
Additionally, as previously noted, cyber tools have demonstrated their
capacity to support and disseminate misinformation and disinformation
campaigns, which aim to erode trust, influence public opinion, manipulate
political outcomes, or exacerbate hostility between communities (Farkas &
Schou, 2018; Wardle & Derakhshan, 2017). These campaigns can lead to
tangible harm, including the victimization of children and young people.
Given their active engagement with digital platforms, children are
especially vulnerable to the pervasive influence of misinformation and
disinformation in their daily lives (Livingstone et al., 2017).
In light of these developments, the intersection of AI with military and
cyber technologies poses novel challenges for upholding the principles of
international humanitarian law. As these technologies evolve, there is a
critical need to reassess and strengthen legal frameworks to ensure their
responsible and ethical use in armed conflict, safeguarding civilian lives
and upholding human rights standards globally.
Analyzing the current international legal framework, it is clear that while
there are robust principles in place, such as those in international
humanitarian law and the UNCRC, significant challenges remain. These
frameworks were established long before the advent of AI, sophisticated
drones, and cyber warfare, raising questions about their adequacy in
addressing contemporary issues (Borg, 2020; Schmitt, 2013).
International humanitarian law's principles of distinction, proportionality,
and precaution are fundamental, but their application becomes complex
with AI's autonomous decision-making and the remote nature of drone
warfare (Casey-Maslen, 2012). Ensuring accountability and compliance is
increasingly difficult when actions are mediated by technology that can
obscure human responsibility (Crootof, 2015).
The CRC emphasizes the best interests of the child, but the rapid
evolution of technology often outpaces regulatory measures, leaving gaps in
protection (Slye, 2019). For instance, the psychosocial impact of drone
warfare and the pervasive threat of cyber misinformation and
disinformation are areas where current legal instruments may lack specific
provisions (Alston, 2012).
The concept of the best interest of the child is a cornerstone principle in
international law, notably enshrined within the UNCRC. This principle
mandates that in all decisions affecting children – whether made by public
or private social welfare institutions, courts, administrative bodies, or
legislative authorities – the primary consideration must be the child's best
interests.
Research underscores that decisions and actions should prioritize factors
that contribute most effectively to a child's wellbeing, development, and
rights (UNICEF, 2021). This holistic approach considers the child's
physical, emotional, and psychological needs, as well as their evolving
capacities and preferences based on age and maturity (Ruck et al., 2016). It
emphasizes protection from harm, provision of adequate care, and fostering
opportunities for optimal growth within a supportive environment
(UNICEF, 2021).
Determining the best interests of the child necessitates careful evaluation
of diverse factors, including cultural, social, economic, and legal contexts.
Moreover, it respects the child's right to participate in decisions that affect
their life, ensuring their views are heard and given due weight according to
their evolving capacities.
Incorporating AI into this framework poses both opportunities and
challenges. AI technologies can potentially enhance decision-making
processes by providing insights into child development, personalized
education, and healthcare (Anderson, 2019). However, concerns arise
regarding data privacy, algorithmic biases, and the ethical implications of
AI-driven decisions on children's rights and wellbeing (Floridi et al., 2018).
Upholding the best interest of the child in the age of AI requires vigilant
adherence to established principles, robust regulatory frameworks, and
ongoing research to mitigate risks and maximize benefits for children in a
rapidly evolving technological landscape. In this regard, neither
international humanitarian law nor international criminal law seems fully
equipped to respond to these new challenges.
Given these gaps, it is imperative to question whether the current
international legal framework is fully equipped to respond to these
challenges. There may be a need for new regulations or the adaptation of
existing laws to ensure they remain relevant and effective in the face of
technological advancements. This includes developing clearer guidelines on
the use of AI in military operations, enhancing protections against the
misuse of cyber tools, and ensuring that the best interests of the child are
upheld in all technological contexts (Haque, 2017).
While the current international legal framework provides a strong
foundation, its ability to adequately address the modern challenges posed by
AI, drones, and cyber tools is questionable. Continuous evaluation and
adaptation of these laws are necessary to protect civilians, especially
children, in the evolving landscape of armed conflict.

Conclusion
In today's rapidly evolving digital landscape, the integration of AI and ML
technologies offers unprecedented opportunities for innovation and
advancement across various sectors. However, when applied in conflict
zones, these technologies also pose significant risks, particularly to
vulnerable populations such as children. As we strive to harness the
potential of AI responsibly, it is crucial to adopt a comprehensive approach
that prioritizes ethical considerations, regulatory frameworks, collaboration,
and ongoing evaluation.
First and foremost, establishing robust regulatory frameworks is crucial
to effectively mitigate the risks posed by AI in conflict settings. These
frameworks must be meticulously crafted to specifically target digital
recruitment methods and the exploitation of children, ensuring
comprehensive coverage across diverse legal jurisdictions. For example, the
Optional Protocol to the CRC on the involvement of children in armed
conflict (OPAC) expressly prohibits the recruitment and deployment of
children (i.e., aged under 18) in hostilities. However, there is an urgent
necessity to adapt these protections to encompass the digital landscape
(UNICEF, 2023). Such adaptations should include measures to address AI-
driven tactics used for recruitment and exploitation, thereby safeguarding
children from emerging threats posed by technological advancements in
warfare and conflict contexts.
Second, the development and deployment of AI technologies must be
guided by rigorous ethical standards. Fundamental principles such as
transparency in algorithmic decision-making, responsible handling of data,
and prioritizing user safety over engagement metrics are paramount and
should be adhered to by technology companies and policymakers alike
(Floridi, 2021). It is imperative to mandate ethical auditing processes that
systematically evaluate the potential impacts of AI applications on children
and vulnerable populations. These audits are essential safeguards to ensure
that AI technologies do not unintentionally contribute to harm or
exploitation. By implementing such measures, stakeholders can uphold
ethical integrity while harnessing the transformative potential of AI in ways
that protect and promote the wellbeing of all individuals, particularly those
most at risk in conflict and crisis situations.
Moreover, fostering collaboration among governments, technology
companies, civil society organizations, and international bodies is crucial.
This collaborative effort should focus on sharing best practices, data, and
resources aimed at detecting and preventing the misuse of AI in conflict
zones. Establishing platforms for ongoing dialogue and coordination is
essential to develop effective strategies and responses to the evolving
challenges posed by AI technologies (UNESCO, 2022). Stakeholders must
enhance transparency, accountability, and the ethical use of AI, thereby
mitigating risks and promoting the responsible deployment of technology in
sensitive geopolitical contexts.
Education and awareness initiatives play a pivotal role in empowering
stakeholders to understand the implications of AI technologies in conflict
settings. By building digital literacy and promoting a deeper understanding
of AI ethics, we can enhance the capacity of communities and individuals to
protect themselves against exploitation (Milani, 2020).
Investment in research and innovation is equally vital. Advances in AI
ethics, cybersecurity, and digital forensics are essential for developing
technologies that can detect and mitigate risks associated with AI in conflict
zones (European Commission, 2021). Interdisciplinary research initiatives
should be supported to foster collaboration between experts in technology,
law, psychology, and human rights.
Furthermore, international cooperation and diplomatic efforts are
essential to establish comprehensive global norms and standards for the
responsible use of AI. Multilateral agreements and partnerships play a
pivotal role in harmonizing legal frameworks and enhancing mechanisms
for monitoring compliance across borders (OECD, 2023). Fostering
collaboration among nations can ensure that AI technologies are deployed
in ways that uphold ethical principles, respect human rights, and mitigate
potential risks in diverse geopolitical contexts. Such concerted efforts are
crucial for promoting international stability, safeguarding global security,
and fostering innovation that benefits societies worldwide.
Last, robust monitoring and evaluation mechanisms are essential to
assess the effectiveness of regulatory measures and ethical guidelines
concerning AI. Continuous evaluation is crucial for identifying emerging
risks and adapting strategies accordingly, ensuring that our approach
remains responsive to the rapidly evolving technological landscapes (WHO,
2023). By systematically reviewing the implementation of regulations and
ethical standards, stakeholders can proactively address challenges, improve
accountability, and enhance the responsible deployment of AI technologies.
This ongoing assessment fosters transparency and trust among stakeholders
while promoting the long-term sustainability and ethical integrity of AI
applications globally.
While AI and ML bring immense potential for innovation, their
deployment in conflict zones necessitates a cautious and coordinated
approach. It is crucial to acknowledge that we cannot halt technological
advancements but must rather focus on harnessing them to enhance our
lives without compromising ethical standards or endangering vulnerable
populations, especially children affected by armed conflicts. Integrating
robust regulatory frameworks, promoting ethical guidelines, and fostering
collaboration among diverse stakeholders are pivotal steps in ensuring that
AI technologies contribute positively to humanitarian efforts and global
stability. By investing in research and development, raising awareness about
potential risks, and supporting international cooperation, we can maximize
the benefits of AI while safeguarding human rights and minimizing adverse
impacts in conflict-affected areas. This approach could leverage innovation
responsibly, striving for advancements that improve societal wellbeing and
uphold the dignity and rights of all individuals, particularly those most
vulnerable in challenging circumstances.
Technological advancement cannot be halted, but innovation can be
harnessed responsibly: studies emphasize that the responsible deployment
of AI and ML can significantly improve humanitarian efforts and global
stability (Floridi, 2021; UNESCO, 2022).

References
AI Now Institute. (2018). AI now report 2018. New York University.
Retrieved July 6, 2024, from https://ainowinstitute.org/wp-content/uploads/2023/04/AI_Now_2018_Report.pdf
Alston, P. (2012). The law of armed conflict and the use of force. Oxford
University Press.
Anderson, A. (2019). Virtual reality, augmented reality and artificial
intelligence in special education: A practical guide to supporting
students with learning differences. Routledge.
Borg, S. (2020). The impact of artificial intelligence on international
humanitarian law. Cambridge University Press.
Boulanin, V. (Ed.). (2019). The impact of artificial intelligence on strategic
stability and nuclear risk. Vol. 1, Euro-Atlantic perspectives.
Stockholm International Peace Research Institute (SIPRI).
Boulanin, V., & Verbruggen, M. (2017). Mapping the development of
autonomy in weapon systems. Stockholm International Peace Research
Institute.
Boyle, M. J. (2013). The costs and consequences of drone warfare.
International Affairs, 89(1), 1–29.
Brown, L. (2023). Psychological burden and physical risks for children in
conflict zones. Journal of Child Welfare and Protection, 10(2), 75–89.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B.,
Dafoe, A., Scharre, P., Zeitzoff, T., Filar, B., & Anderson, H. (2018).
The malicious use of artificial intelligence: Forecasting, prevention,
and mitigation. arXiv preprint arXiv:1802.07228.
Casey-Maslen, S. (2012). Weapons under international human rights law.
Cambridge University Press.
Chen, I. Y., & Asch, S. M. (2017). Machine learning and prediction in
medicine—Beyond the peak of inflated expectations. New England
Journal of Medicine, 376(26), 2507–2509.
Chesney, R., & Citron, D. (2019). Deepfakes and the new disinformation
war: The coming age of post-truth geopolitics. Foreign Affairs
[online]. Retrieved July 6, 2024, from
https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war
Cockburn, P. (2015). The rise of drones in modern warfare. I.B. Tauris.
Crootof, R. (2015). The killer robots are here: Legal and policy
implications. Cardozo Law Review, 36(6), 1837–1915.
DiResta, R. (2018a). The digital Maginot Line: Defending democracy in the
age of information warfare. Wilson Center.
DiResta, R. (2018b). The digital disinformation crisis and democracy. In B.
P. O’Connor & E. Frumkin (Eds.), Cybersecurity, privacy and digital
politics (pp. 69–89). Routledge.
Doe, J. (2023). Ethical and legal questions in the use of AI in weapons
systems. Journal of International Law and Ethics, 14(1), 45–60.
Doe, A., & Smith, J. (2022). Autonomous drones for hazardous area
mapping: A review of AI applications in conflict zones. International
Journal of Humanitarian Innovation, 15(3), 210–233.
European Commission. (2021). Shaping Europe's digital future: European
Commission's strategy for data. EU Commission.
Farkas, J., & Schou, J. (2018). Post-truth, fake news and democracy:
Mapping the politics of falsehood. Routledge.
Ferrara, E. (2020). Disinformation and social bot operations in the run up
to the 2017 French presidential election. Wilson Center.
Fjeld, J., Nordli, A., & Woelfel, J. (2018). Child-computer interaction. In
The handbook of human-computer interaction. Wiley.
Floridi, L. (2021). AI ethics: Still a long way to go. Minds and Machines,
31(3), 369–375.
Floridi, L., Cowls, J., Beltrametti, M., & Chatila, R. (2018). Artificial
intelligence, regulation, and the role of ethics. Science and
Engineering Ethics, 24(2), 393–407.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum,
V., & Luetge, C. (2021). AI4People—An ethical framework for a good
AI society: Opportunities, risks, principles, and recommendations.
Minds and Machines, 31(4), 629–673.
Gee, J. P. (2013). Good video games and good learning: Collected essays
on video games, learning, and literacy. Peter Lang Publishing.
Haque, A. (2017). Law and policy of AI in military operations. Routledge.
Hassan, J., McManus, M., & Luce, K. (2016). Drones and psychological
trauma: The impact on children in conflict zones. Journal of Child
Psychology, 57(4), 303–318.
Heyns, C. (2016). Autonomous weapons in armed conflict and the right to a
dignified life: An African perspective. South African Journal on
Human Rights, 32(1), 46–71.
Hill, S., & Marsan, N. (2018). Artificial intelligence and accountability: A
multinational legal perspective. In Big data and artificial intelligence
for military decision making, meeting proceedings STO-MP-IST-160.
NATO.
International Committee of the Red Cross (ICRC). (2018). New
technologies and warfare. https://www.icrc.org/en/law-and-policy/new-technologies-and-warfare
International Committee of the Red Cross (ICRC). (2019). Artificial
intelligence and machine learning in armed conflict: A human-
centered approach. ICRC Publishing.
Jasanoff, S., & Kim, S. H. (2019). Dreamscapes of modernity:
Sociotechnical imaginaries and the fabrication of power. University of
Chicago Press.
Johnson, A. (2023). Ensuring the safety and rights of children in armed
conflict. International Journal of Human Rights, 15(2), 123–135.
Johnson, L., & Mayer, R. E. (2020). Digital game-based learning:
Effectiveness and future directions. Educational Technology Research
and Development, 68(2), 85–97.
Kingston, L., Roberts, T., & Lee, H. (2023). AI in humanitarian demining:
Enhancing safety and efficiency. Journal of Conflict Resolution and
Technology, 47(1), 56–78. Cambridge University Press.
Lin, P. (2020). Ethics of artificial intelligence and robotics. Stanford
Encyclopedia of Philosophy.
Livingstone, S., Stoilova, M., & Kelly, A. (2017). Cyberbullying and
cyberhate: The role of children's digital skills. European Journal of
Communication, 32(2), 116–127.
McGregor, L. (2019, March 28). The need for clear governance frameworks
on predictive algorithms in military settings. ICRC Humanitarian Law
& Policy Blog. Retrieved July 6, 2024, from https://blogs.icrc.org/law-and-policy/2019/03/28/need-clear-governance-frameworks-predictive-algorithms-military-settings/
Médecins Sans Frontières. (2021). AI in humanitarian medicine: Enhancing
healthcare in conflict zones. MSF Publishing.
Milani, P. (2020). AI and human rights: Raising ethical awareness. Journal
of Cyber Policy, 5(1), 59–78.
Miller, A. (2023). The impact of surveillance and violence on children in
conflict zones. Journal of Child Psychology and Psychiatry, 25(3),
300–315.
Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future—Big data,
machine learning, and clinical medicine. New England Journal of
Medicine, 375(13), 1216–1219.
OECD. (2023). AI principles: OECD's global policies. OECD Publishing.
Paul, C., & Matthews, M. (2016). The Russian “Firehose of falsehood”
propaganda model: Why it might work and options to counter it.
RAND Corporation.
Ruck, M. D., Lansford, J. E., & Bornstein, M. H. (2016). The evolving
capacities of the child. Developmental Science, 19(4), 635–646.
Save the Children. (2021). The impact of armed conflict on children's
education. Save the Children Publishing.
Schmitt, M. N. (2013). Tallinn Manual on the international law applicable
to cyber warfare. Cambridge University Press.
Slye, R. (2019). The convention on the rights of the child and modern
technology. International Journal of Children's Rights, 27(1), 59–85.
Scharre, P. (2018). Army of none: Autonomous weapons and the future of
war. W. W. Norton & Company.
Smith, J. (2023). The impact of cyber attacks on essential services in
conflict zones. Journal of Cybersecurity and Conflict Studies, 8(3),
150–165.
Smith, J., Doe, A., & Johnson, R. (2021). Machine learning in humanitarian
response: Predicting and addressing needs in conflict zones. Journal of
Humanitarian Studies, 34(2), 123–145. Oxford University Press.
Taddeo, M. (2019). The ethical implications of the digital transformation of
warfare. Oxford University Press.
UNESCO. (2022). Ethics of artificial intelligence: Issues and perspectives.
UNESCO Publishing.
UNESCO. (2023). Global education monitoring report: education in
conflict zones. UNESCO Publishing.
United Nations. (2024). Children and armed conflict. Report of the
secretary-general (A/78/842-S/2024/384). United Nations.
United Nations Institute for Disarmament Research (UNIDIR). (2017). The
weaponization of increasingly autonomous technologies: autonomous
weapon systems and cyber operations. UNIDIR.
UNICEF. (2020a). The impact of armed conflict on children.
https://www.unicef.org/children-under-attack
UNICEF. (2020b). AI for children: Protecting children's rights in the age of
AI. UNICEF Publishing.
UNICEF. (2021). The state of the world's children 2021: children, food and
nutrition. UNICEF Publishing.
UNICEF. (2023). Optional protocol to the convention on the rights of the
child on the involvement of children in armed conflict. UNICEF
Publishing.
United Nations Office on Drugs and Crime (UNODC). (2023). Exploitation
of social media by terrorist organizations and armed groups: A
comprehensive analysis. United Nations Office on Drugs and Crime.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A.,
Chung, J., & Zaremba, W. (2019). Grandmaster level in StarCraft II
using multi-agent reinforcement learning. Nature, 575(7782), 350–
354.
War Child Holland. (2022). Can’t wait to learn: Digital education for
conflict zones. War Child Holland Publishing.
Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an
interdisciplinary framework for research and policy making. Council
of Europe.
WHO. (2023). Ethical considerations in public health surveillance. WHO.
Zhou, X., Hetrick, S. E., Cuijpers, P., Qin, B., Barth, J., Whittington, C. J.,
Cohen, D., Del Giovane, C., Liu, Y., Michael, K. D., & Zhang, Y.
(2020). Comparative efficacy and acceptability of psychotherapies for
post-traumatic stress disorder in children and adolescents: A
systematic review and network meta-analysis. JAMA Psychiatry,
77(1), 46–57.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a
human future at the new frontier of power. Public Affairs.
21
A 21ST-CENTURY ODYSSEY
Using Artificial Intelligence to Navigate the Seas of International Development Cooperation Toward the Promotion of the Rule of Law

Giulio Venneri*

DOI: 10.4324/9781003518495-27

Introduction
Imagine a world where new technologies support likeminded governments,
international organizations, and other global and local actors, helping them
redefine the means of international aid and development cooperation,
raising its effectiveness and overall impact, for the benefit of citizens in
beneficiary countries. While ameliorating people's lives across the most
advanced and pioneering societies, the same technological advancements
might now also make it easier to spread knowledge, know-how, and good
practices, sharing achievements that are ready for broad consumption, even
to those who are structurally lagging behind in the global race for
technological progress. Many hope that the drive belt for the transmission
of such progress might now become more effective and spin at a faster
pace, thanks to the diffusion of new AI-based protocols, algorithms, and
blockchain technologies that might be easy to export and easier to install.
This might match the ambition and commitment of likeminded donors to
trigger positive and irreversible change across societies around the globe.
Predictions have already been made about the expected impact of AI and
machine learning protocols on the sustainable development goals (SDGs).
There are reasonable expectations that new technologies have not only
already impacted but will continue to impact positively the efforts toward
many of the targets of the 2030 Agenda for Sustainable Development
(Jaber, 2023; Pigola et al., 2021; United Nations, 2015). At the same time,
the diffusion of AI might still widen gaps and increase inequality by acting
as an inhibitor in relation to some of these SDGs, as some have cautioned in
their predictions (Thamik et al., 2024; Vinuesa et al., 2020). In the
uncertainty of a mixed scenario, it has also been flagged that, tentatively
and optimistically, achieving the goal of “Peace, justice, and strong
institutions” would fall in the cluster of those SDGs that seem to be more
responsive to increased automation or augmenting human inputs through
AI-based technologies (Chui et al., 2018).
Ensuring impactful capacity-building and effective technical assistance
remains an issue for donors committed to triggering positive change across
countries in transition, as they strive to foster genuine political commitment
to reforms among beneficiaries (Nastase et al., 2020). Many recipients of
international aid and assistance are characterized by weak, unstable, or
contested state organizations, which undermine their capacity to change,
despite the clear appeal of technological novelties. At best, in many such
contexts, prudence and skepticism toward innovations seem to persist. In
the worst-case scenario, unhealthy dynamics leading to state capture and the
presence of strong oligarchies continue to hinder equal opportunities,
enjoyment of rights and freedoms, including the principle of equality of all
citizens before the law, as well as equitable access to the benefits of socio-
economic progress, due to the prevalence of corruptive practices that
protect the interests of the few (Eikenberry et al., 2023).
* The views and analysis expressed in this chapter are the author's only and
are in no way intended to represent the views or positions of the European
Commission.
The lack of healthy checks and balances in many countries continues to
hinder the achievement of democratic stability, with governments tending
toward authoritarianism, reluctant to loosen their grip on the institutions
that are supposed to safeguard the rule of law (Freedom House, 2024;
Repucci & Slipowitz, 2022). This affects the law enforcement sector and,
even more so, the judiciary, with the latter normally expected to serve as an
independent pillar of the state, as per traditional Montesquieuan theory on
the tripartite separation of powers.
Efforts to promote checks and balances that would empower the proper
functioning of democratic institutions are hampered by autocracies that
have been consolidating their ability to resist change (IDEA, 2023). In
relation to this, a series of dilemmas inevitably appears for those
likeminded donors and organizations that are in the business of providing
technical assistance, budget support and other forms of aid that address the
rule of law area. In particular, one might wonder to what extent rule of law
cooperation can support the strengthening of institutions in hybrid, non-
democratic regimes, if there are no guarantees that reforms will be
implemented thoroughly and exhaustively, thus ensuring ultimately
irreversible change. With the specific rise of new technologies on the
horizon, the additional dilemma for likeminded donors is to what extent AI
instruments that enhance the pervasiveness of control and efficiency in
processing large data in the hands of non-democratic or hybrid regimes with
weak rule of law can actually have further negative impacts on the
promotion of human rights and fundamental freedoms.
AI is not a distant concept but an imminent contributing factor in the
quest to navigate the ocean of international relations and development
cooperation, which in the current state of affairs is characterized by growing
uncertainty, instability and persistent inequality, all linked to increasingly
volatile geopolitical dynamics and violent contestation against international
law principles, the UN system, and international security (Abizaid et al.,
2023; Silini & Molina, 2024). Old and new emerging threats to the
multilateral order are on the rise. Since the success of ChatGPT triggered a
new flow of investments into AI technology research, there are expectations
that new technologies will increasingly be a factor also in shaping
international affairs and geopolitics, although the expected impact of AI in
some sectors might have been overstated for the time being. Even one of
the most delicate areas of global affairs, international development aid, will
not be exempt from this technology's grasp.
As pointed out by some observers (Konovalova, 2023; Moore, 2023), AI
is already a factor in international affairs and diplomacy when it comes to
the functioning of the relevant administrations. This is starting to affect both
headquarters and field offices. Chatbots are being programmed to support
the daily workload in diplomacy and foreign affairs, while maintaining a
protected institutional perspective (European Commission, 2024b). There is
a clear possibility that data crunching, summaries and analyses,
identification of alternative options for action, and even the design of
institutional communication will increasingly be processed through AI-
based tools. The heavy lifting of the standard policy advice work might be
soon mandated to computers. The prospect here of course is not to have
machines that will substitute the lower ranks. Based on ongoing
observations across national administrations and international bodies
dealing with international relations, the most plausible forecast is instead
that an advantageous application of AI to the information flows that run
across foreign ministries, international organizations, development
cooperation agencies and field offices, will require constant guidance,
institutionalized and controlled feeding, as well as systematic adaptation.
This is to preserve the institutional perspectives that need to be
programmed, updated, and consistently validated at the political level. If AI
chatbots become an additional resource, there will be grounds – in terms of
working time freed up – not to substitute, but rather to elevate the lower ranks
in the hierarchy, empowering them to take a more sophisticated strategic
approach to their role. They will be better positioned to inform the policy
cycle across all phases, and to do so more effectively and strategically.
The inspiration for this chapter comes from a series of current questions
related to the prospective impact of AI on competition among donors, as
well as on relations between donors and recipients of international aid. It is
interesting to look in particular at sectors where likeminded donors are
investing the most in the attempt to stabilize states that are still in the
making. These sectors include the rule of law, democracy, and the
protection of fundamental rights, with the judiciary being a cornerstone to
achieve order, legal certainty, and equality. At the core of this positive
agenda lies the ambition among likeminded donors to contribute to the
establishment of impartial, efficient, and professional judicial systems.
These must then be consolidated by striking a balance between the right
degree of independence and accountability. Furthermore, improving the
quality of the justice delivered to citizens and economic operators entails
obtaining an adequate degree of transparency. If this bundle of features is
achieved in a judicial system, prosecutorial structures and the court system
will protect the rule of law, ensuring its deep entrenchment and irreversible
stabilization. The increased predictability that stems from legal certainty
and the quality of enforcement is also a clear catalyst for increased flows of
foreign direct investment. The latter makes justice reform also a potential
trigger of economic opportunities, societal rehabilitation, and overall
growth.
If AI increasingly becomes a component of capacity-building, one that
truly eases the transmission of knowledge and good practices, how will it
sculpt the future of global development aid? Will AI enable faster positive advancements and help bridge the gap between the most advanced societies and those still undergoing profound socio-economic transition and affected by weak state institutions? Will the growing
digitalization of the public administration, and of the institutions and
structures operating in the rule of law sector and democracy, make it easier
for likeminded donors to export their citizen-centered models of governance
in more efficient and effective ways? These questions are fascinating. One
might be inclined to a certain optimism, thinking of the possibility that
sophisticated algorithms might make it easier to trigger more sustainable
development and maybe even reverse the recent trends of rising autocracies
and dangerously shrinking space for civic engagement and democratic
dynamics across many countries in the world.
A further question is whether there will be substantial changes in the relations between donors and beneficiaries, should AI truly become a more effective vector for the transfer of the knowledge and practices that underpin capacity-building, compared to previous “slower” and less capable technological solutions. One might
argue that the buy-in from beneficiaries to accept change might be more
obtainable thanks to the appeal of AI packaging, as a novelty that intrigues
administrations worldwide. In recent times, governments across beneficiary
countries have been more inclined to accept software solutions, rather than
lengthy and potentially more intrusive processes of twinning and
transferring of knowledge from trainers and practitioners deployed by
donors across recipient administrations (Erforth et al., 2023 ; Heeks, 2018).
At the same time, if AI-empowered applications become the most
transferred good in development cooperation, there will still be risks
triggered by the loss in socialization that such technology might generate.
Significantly stripping away the human component from technical
assistance might soon become an important variable for likeminded donors
to assess. The acquisition of AI technologies with manipulative intentions
might also facilitate the perpetration of historical biases against
marginalized social categories. Moreover, migrating software instead of
good practice, knowledge and soft skills, might trigger a further loss of
accountability. In addition to this, a final dilemma remains. What avenues
and empowering modalities of international development cooperation can
truly induce positive and irreversible developments across beneficiaries and
guarantee that less fortunate citizens obtain better socio-economic
opportunities, welfare, justice, human rights protection, and a corruption-
free environment?
These are some of the many sparks that are starting to ignite the
considerations of geopolitical actors and thus force them to dig deeper into
understanding the transformative power of AI in geopolitics and the
development agenda, particularly in the statebuilding business. They
prompt a critical examination of how technology is redefining the rules of
international relations and redirecting the political compass in this new
digital age, also when it comes to technical assistance in the rule of law
sector and the justice system in particular, as well as in the promotion of
democratic governance.

The Market of Global Development Aid: Impact of AI on Likeminded Actors
The current reality is that global actors are competing in an AI race in the
shifting quicksand of geopolitical alliances (Bradford, 2023). It is a race
where technology and development aid efforts are intertwined in an
intricate dance of seeking influence and building alliances. Consider today's
geopolitical landscape: a complex mosaic of nations and geopolitical
alliances, inter-governmental organizations, private businesses, local and
international civil society actors, each dancing to rhythms that are rarely in
sync due to conflicting agendas, and yet all part of a wider and
unpredictable choreography. In this context, to continue the musical
metaphor, AI is steadily emerging as the new melody to pursue, capable of
disrupting the status quo and setting a new tempo for international
engagement and the consolidation of spheres of influence (Bradford, 2023;
Taneja & Zakaria, 2023).
Change triggered by the global diffusion of AI to the sphere of
geopolitical dynamics and international development will arrive to a greater
extent in the near future. Yet, part of the impact is already here, and no one
can remain unprepared. Those geopolitical actors who do not keep pace will
inevitably be left behind, ending up being governed by, instead of
governing, AI, with all its wider implications. Everyone is preparing for this
change one way or the other. In this sense, the European Union (EU) has
shown great foresight and leadership. One can only commend the push to
bring into force the EU AI Act (European Commission, 2021). The final
proposal was explicitly presented by the Commission as “part of the efforts
of the European Union to be a global leader in the promotion of trustworthy
AI at international level” (European Commission, 2023b, p. 6). EU external
assistance has a known focus on exporting rule-based EU policies, which is
at its strongest when dealing with fundamental EU values such as the rule
of law.
When it comes into force, the AI Act will also most likely influence external assistance. It catalogues AI applications into risk classes that attract different regulatory responses: the top class, for instance, covers China-style remote identification and social scoring systems, which are outlawed on privacy and non-discrimination grounds. The next class covers AI systems affecting other fundamental rights, with applications in justice and law enforcement prominently listed; for these, ex-ante evaluation and detailed human oversight are required. Once a regulation to this effect is in force within the EU, it is clear that EU bodies will be unable to finance external assistance for the deployment of AI that does not also address human-centered AI management. In addition to this, the
issue of future ownership and the sustainability of investments must be
looked at even more carefully than in the past. Even before the advent of
AI, software solutions donated to beneficiaries sometimes proved to be too
expensive or too complicated to maintain, leading to their abandonment
once the donor assistance came to an end.1 This is a lesson learned from a
number of interventions by likeminded donors, including in particular in the
judiciary. This has happened in a few contexts, for example in relation to
software for the establishment of an integrated and nationwide case-
management system (CMS) or in the case of databases for the digitalization
of evidence, but also in projects related to access to justice and focused on
the dissemination of court rulings or the digitalization of jurisprudence.2
Hiccups linked to sustainability occurred even before AI was introduced,
clearly offering some lessons. These must be applied with a certain rigor in
the new era of development cooperation, which seems likely to hinge much
more than before on digital capacity-building.
The EU has also established its own AI agency, which includes in its
mandate the promotion of “responsible stewardship of AI” also at the
international level (European Commission, 2024a, p. 5). On its side, the
Council of Europe (CoE) has also adopted the first framework convention
in this area, with the main stated ambition being to ensure that AI systems
do not trigger developments that end up undermining the rule of law,
democratic stability, and the overall enforcement of fundamental rights
(Council of Europe, 2024). The Convention was developed along with the
AI Act by largely the same actors (most CoE members are also EU
members) and negotiators confirmed that discussions in Strasbourg and the
evolution of the text of the Convention impacted the wrap-up of the AI
Act.3
As Ziller (2024) pointed out, this duplication aims in good part to boost the export of the AI Act's ideas, since the Framework Convention can be ratified by non-EU and even non-CoE members. Among others, Argentina, Canada,
Mexico, and the United States contributed to the negotiations that led to the
final text. Thanks to this early broader involvement of likeminded
governments, well beyond CoE members, there is a clear prospect that this
Convention might successfully help the diffusion of shared norms and
standards on a very large scale. As happened with the CoE Convention 108+ on data protection, this new Framework Convention too can create the right space and ground for more fruitful value-oriented cooperation. It is important to note, however, that some likeminded
non-CoE countries that contributed actively to shaping the Convention
allegedly pushed for a minimalist approach that would largely exclude the
application of the Convention to the private sector, despite it mostly
outlining general principles.4 Unlike the AI Act, implementation relies exclusively on inserting similar AI management requirements into national legal systems, which creates a strong base for European assistance. It is worth adding that, while the funding for such assistance comes mostly from the EU, the CoE has some edge as a technical assistance provider, especially for judicial systems. The European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment, produced by the Council of Europe's European Commission for the Efficiency of Justice (CEPEJ), dates from 2018 and recently acquired a new Assessment Tool to support implementation (CEPEJ, 2023), which is currently being tested.
Nevertheless, one must reflect on the reality that global development
cooperation is a market. In it, each competitor plays the game to pursue its
own objectives, while following different logics, strategies, and even rules.
On the one hand, there are donors with precise agendas. That constitutes the
supply side. On the other hand, there are potential beneficiaries, once again
with their corresponding agendas, wishes, long-term plans, and
expectations. That constitutes the demand side. In a market characterized by
profound political asymmetries, horizontally among donors and vertically
between donors and beneficiaries, a key challenge concerns finding
likeminded partners to cooperate with. As some put it somewhat
pessimistically, we are in a new, troubled era of the “World War of Aid”
(Pellicciari, 2022, 2023). There is clearly a sort of unfair competition
between donors, stoked by the different ways those who can afford to
provide aid go about their respective missions, in the business of global
development cooperation.
In this scenario, the EU, as the biggest5 provider of international aid
globally, finds itself in a peculiar position. The Union follows a specific
normative agenda enshrined in its founding Treaties (Cohen-Tanugi, 2021;
Manners, 2002), hence offering a certain degree of political predictability
for its actions in support of development. Every article pertaining to the
EU's external projection is accompanied by a priority reference to the
promotion of the founding values of the Union.6 In this sense, the EU is a
unique agent in international affairs and geopolitics, because as much as
one may like to speak about tactical interests and strategic autonomy, there
is always the notion that its core values should remain embedded in its work
and guide it.
What does that mean when it comes to AI perspectives on development
cooperation? On the one hand, there is an opportunity that is already
becoming a reality. When applied to the specific field of humanitarian aid
and relief, technological progress empowers delivery modalities that can
reach where aid is needed in the most rapid and effective ways, with AI
protocols and algorithms as a tool to make the life of crisis response
operators much easier (Björkegren, 2023). The visible advantages that
apply to contingent needs and short-term objectives in the field of
humanitarian assistance and aid for relief might however not apply with the
same logic to medium- and long-term efforts in support of the broader goals
of the development agenda, through capacity-building and technical
assistance for legislative change, institutional adaptation, and cultural
progress.
Conversely, there is also an unavoidable double challenge. First, a
likeminded actor in development cooperation can be as normative and as
focused on its values as it wishes. However, for an effective exchange to
happen, there also need to be advantages on the side of the donor (Dreher et
al., 2024). The EU may go abroad rightly promoting projects with
conditionality clauses attached, such as human rights, the green agenda,
connectivity, workers’ rights, public procurement standards, sound public
financial management, and so on, but it is by its nature less acquainted than
other international actors with the practices of transactional diplomacy. This
comes more naturally to an individual government than to a donor that is
the product of complex multilateral architecture. In addition to that, many
other regimes do not play the game by respecting a noble vision stemming
from a set of positive rules to respect, defend, and promote (Morris, 2018).
Therefore, the game is already asymmetrical for many actors contending for
a share of the market.
Second, a challenge in development cooperation that might soon be on
the horizon is that beneficiaries might be tempted to ask likeminded donors
to simply provide AI software and finance the server infrastructure
necessary to run it, thus circumventing for good the exposure that more
traditional programs for capacity-building and technical assistance entail,
especially when they include a strong human component. This can
increasingly happen in very sensitive fields, such as security sector reform,
justice reform, management of migratory flows, or democracy
development. Beneficiaries can import technical processes that trigger
increased efficiency in various policy areas, but without accepting the
culture of change that can favor greater societal fairness. There can be a
growing temptation for beneficiaries to be more reluctant to accept a human
presence from donors, closing the door to field deployments of national
experts and practitioners from the donor's administration, or project
operators belonging to international organizations, development agencies,
or consultancy firms to which some activities are outsourced. From the
perspective of likeminded donors, this is something that must be mitigated
by finding ways to preserve the human element in the field. The personal
exchanges are those that make it possible to have better information on
local political dynamics, and a better understanding of the technical and
institution-building needs, which would otherwise escape the donor's scope,
hence increasing the chances of promoting positive values. There is a risk
that the simplicity of transferring solutions through AI protocols ends up
exacerbating the already existing trend of focusing on technical solutions
without due attention to the normative context. It can happen that donors
themselves are induced into thinking that technology transfer will be a sort of silver bullet, a key that opens wider doors to increased cooperation and triggers greater change.
Accordingly, if the EU is to pursue its ambition to be a meaningful and recognized global actor, acting as a stabilizing force for the planet in the current volatile geopolitical landscape, it will be precisely because of its push to be the beacon of a set of positive fundamental values, such as the rule of law, democracy, and human rights protection, stemming from its basic rules (Amadio Viceré & Venneri, 2023). This is clearly tricky, because it places a
burden on the EU when competing with less likeminded donors, which are
prone to be more transactional, and cooperating with beneficiaries less
inclined to be receptive on the side of the values (Shiferaw & Di Ciommo,
2023). The unfair competition at the level of donors is evident: many
regimes for instance do not look at environmental concerns when they
propose to governments in developing countries to build infrastructure. It is
sufficient for them to focus on the purely strategic interest and the
opportunity to strike commercial deals, ensure the flow of commodities for
their supply chains, occupy geopolitical space, and strengthen political ties
(Lomøy, 2021). In the same vein, autocracies that try to present themselves
as generous donors obviously do not engage in policy areas where they
have no standards to export, such as in the rule of law and democracy
fields. On the contrary, the EU must maintain a normative approach in its
external action, which in most cases of crises and post-conflict in
developing countries can clash, in terms of the normative understanding of
values, history, and expected developmental trajectories, with the vision of
the leadership in hybrid and non-democratic regimes (Staeger, 2015). In
other words, the challenge for the EU and other likeminded donors is
extremely complicated, because it must balance its strategic interests with
the duty of upholding its fundamental values even when facing a high
degree of unfair competition.
In this sense, the adoption of the AI Act shows that the EU is pushing for
a sustainable and meaningful adoption of AI, starting with likeminded
partners and beyond. This must be combined with the duty to counter the
ill-founded use of AI. It is increasingly apparent how AI can be used to
manipulate. In this area, the challenge politically for the EU is to find the
best possible and most balanced arrangements in terms of rules, preserving
its authenticity and spirit of being a global actor that genuinely wants to
share and enlarge the scope of welfare, progress, prosperity, and ultimately
fundamental rights, democracy, and rule of law. At the same time, the EU
cannot avoid the need to protect its strategic action and interests. An actor
like the EU will always be in an asymmetric dichotomy with more
authoritarian state actors around the globe. Is it a cost worth taking on? Of
course it is, because we must not only think that the EU is in competition
with other regimes abroad in exerting and expanding influence, whether
positively or negatively. It is not only about competing in geopolitical
regions away from Europe. The game is also being played in Europe. The
EU has reached a point where it has a visible need to better preserve and
nurture its own democracy, rule of law-based order, stability, internal
arrangements and balances, in addition to maintaining the commitment to be
the beacon of these values in a pro-ethical, globalized, and now even AI-
influenced context.
Following these reflections on challenges stemming from asymmetries in
donors’ perspectives, there are a few key points for future reflection linked
to the increased use of AI technologies. First, how will AI applications
affect the field work for development, such as twinning and capacity-
building practices? How can AI enhance engagement, while avoiding
excessive de-personalization in these areas? And most importantly, how can
actors like the EU ensure that appealing AI applications do not move it
away from its core values? AI is a very potent tool, and yet it is also very
dependent on what information it is fed (Mock et al., 2023), as well as on
who has access to and control over this information, particularly when it is
sensitive personal information. These powers thus come with a heavy
responsibility.

The AI Factor in the Promotion of Fair and Accessible Justice
Justice reform is an area where likeminded donors have invested
substantially in the past few decades, in the context of the overall efforts to
promote the entrenchment of the rule of law. The EU (including bilateral
actions by its Member States), the United States, but also smaller
contributors like Australia, Canada, Norway, Switzerland, or the United
Kingdom, naturally proud of their consolidated legal traditions and stable
institutional arrangements, have embarked on supporting justice reform
processes in many countries in transition. Statebuilding efforts in this sector
have taken many forms and developed for the most part on three strands of
work (Dawson & Swiss, 2020). First, donors have often provided avenues
for legislative development, including reforms of criminal codes and codes
of criminal procedure. Second, good practices for redesigning institutional
mandates or for the creation of new institutions have been exported,
including the establishment of specialized prosecution offices and courts to
tackle some specific criminal phenomena, such as organized crime,
terrorism, and high-level corruption. Institution-building actions in this
domain have also targeted the management of the judiciary (e.g., court
maps, human resources, relationships between the different actors) as well
as of cases (even before the introduction of electronic CMSs, there was a
strong focus on how to promote efficient management of cases, courtrooms,
proceedings, etc.). Third, international support has often been extended in
the form of training and mentoring of the relevant domestic practitioners by
international experts. Twinning and peer-learning exercises have enabled
operators in the sector to share their expertise in beneficiary institutions,
while interacting on a daily basis with their peers in the context of short-,
medium-, and long-term deployments (European Commission, 2023a).
Participation in study visits, exchanges, job-shadowing programs, and
attendance at specialized conferences to facilitate exchanges with peers
have also been a regular feature of technical assistance, as a way to ensure
knowledge and “cultural” transfer and create professional networks. For
instance, the European Commission has been supporting this approach for
decades with a dedicated framework, the Technical Assistance and
Information Exchange (TAIEX). Alongside these three means of
intervention, in many uncertain contexts, likeminded donors have even
acted upfront as catalysts to consolidate the initial political and societal
consensus necessary to put in motion these complex reform processes.
Interventions have often taken place through a combination of these
actions. Objectives have ranged across many different areas. Among others,
these have included: the reform of self-governing bodies, in particular with
a view to consolidating the independence of the judiciary; building the
capacities of domestic justice training centers and schools of magistrates to
strengthen the professionalism, impartiality, and ethics of judges and
prosecutors or their specialist knowledge of a certain legal branch, as well
as a merit-based approach to judicial careers; the strengthening of the
capacities to guarantee free legal aid, in order to make justice more
accessible and fair, boosting also the principle of equality before the law;
the modernization of the approach by courts to open justice to boost overall
transparency and accessibility; the revision of the criteria for the appraisal
of judges and prosecutors to provide incentives for greater effectiveness and
dedication in the exercise of judicial functions. Although to varying degrees
and extents, in all the abovementioned areas of reform and intervention,
there has recently been an increased use of e-tools, new software, and
innovative applications.
A thorough process for the digitalization of justice has been ongoing for a few decades; the first attempts date back to the dawn of the spread of computers across public offices in the 1980s and 1990s. More recently, the
process was substantially boosted in the context of the COVID-19 crisis.
The prolonged lockdowns triggered by the pandemic substantially
accelerated investments in the digitalization of justice systems, including in
the criminal sector. This has been particularly visible in the whole CoE
area, with new standards and practices being developed by experts (CEPEJ,
2019, 2021a, 2021b, 2024), and more specifically within the EU. In
October 2020, EU Member States found consensus on the idea that “…
[d]igital tools can help to better structure proceedings and to automate and
accelerate the handling of standardised and uniform tasks, thereby
increasing the effectiveness and efficiency of judicial proceedings”
(European Council, 2020, p. 3). This was done with a view to strengthening
the resilience of the judiciary, while seizing the opportunity to foster
efficiency and access to justice (European Commission, 2020a). In the EU,
the drive toward achieving e-justice culminated in the adoption of a
Regulation, with the stated objective to facilitate access to justice, along
with streamlining judicial procedures for better efficiency and cost-
effectiveness (European Union, 2023). The ambition to upscale the
digitalization of justice systems was also incorporated in the key objectives
of the latest European judicial training strategy (European Commission,
2020b), which is currently being implemented.
Developments facilitated by the recent advent of AI have impacted a
sector where digital solutions were already on the rise (UNESCO, 2024),
although still slowly when compared to some other public sectors, such as
health, education, security, or infrastructure. New AI-based technologies
now have the potential to revolutionize traditional access to justice through
fast delivery of output for free legal aid. In this area, e-tools are already
being tested in a number of countries and pilot projects implemented in
developing countries. Observers also argue that “Large Language Models
(LLMs) like OpenAI's GPT-4 have the potential to reshape how legal aid
centers and courts operate” (Goodson & Rongfei, 2024, p. 1). AI algorithms
can help streamline judicial processes, facilitating the generation of
documents, along with their more effective management and accessible
archiving, as well as providing greater transparency and objectivity in the
random allocation of cases. Chatbots can review and analyze jurisprudence
in the blink of an eye. Furthermore, analytical assistance and sophisticated
data processing can guarantee greater accuracy and increased fairness in the
allocation of public resources across the justice sector, thus fostering more
rigorous evidence-based judicial reforms. Yet, all of this will depend on the
pace of digitalizing the archives of court rulings and judicial decisions.
Other innovative applications that are already widely in use across courts
include, for example, the automatic transcription of hearings. This can undoubtedly save an incredible quantity of clerical resources across courts, which can then be reinvested more usefully elsewhere. Other opportunities
stemming from the prospect of increased automation in courts include the
possibility to obtain better and faster reviews of previous cases, including
the selection of significant parts of previous cases for reference. This can
assist a more informed use of precedents and improve the readability of
legal documents and court decisions. It can also become quicker for judges
to check a prospective decision and its implications against the backdrop of
existing case law. A specific prospect currently under review is the possibility of using AI-based tools to help reduce the backlog of cases, in particular minor administrative cases, such as those related to unpaid utility bills, where there is a large volume of similar cases.
Can all these applications already in use in the most advanced systems be
exported easily and effectively, making the life of likeminded donors
easier? On the one hand, the marketing of all this technological
advancement might be easier to do. As previously mentioned, there is a
tendency among beneficiaries to accept, with less internal resistance, means
of technical support that are mostly centered on the transfer of software or
technical instruments. This is customarily seen by beneficiaries as a less
intrusive intervention in what undoubtedly remains a very sensitive sector,
whose changing features and configurations might ultimately trigger a shift
in internal balances among the pillars of the state. On the other hand,
becoming a service provider, or even a supplier of software and hardware, might not be fully compatible with the long-term ambitions of likeminded
donors that operate in sectors such as the judiciary to promote positive,
irreversible change toward independence, impartiality, and fairness of the
system. These objectives entail a change in gear and culture across the
sector, affecting the work of judges, prosecutors, legal assistants, and even
national advocacy and the relevant academic and research sectors. Working
on soft aspects linked to professionalism, capacity, and integrity requires a
certain degree of socialization and, on the side of the donor, a certain
investment in terms of depriving the home ranks of valuable and senior
expertise, which is sacrificed abroad for mentoring and capacity-building
purposes.
In between these two elements, there is a third crucial aspect that must be
considered from the perspective of a likeminded donor. A dehumanized or,
to put it in less dramatic terms, a hyper-technological judiciary might be
risky if the installation happens in a context that was chronically
characterized by a lack of independence and frequent (and successful)
attempts at undue external influence on magistrates, both in their exercise
of judicial functions and in the act of self-regulating the sector. Providing
technological advancement in such contexts, without a parallel substantial
investment in building greater resilience in the sector based on
comprehensive reform and the triggering of cultural shifts, within and
around the judiciary, might not generate the desired, sustainable, and
irreversible results. Technology might even be hijacked and manipulated,
making it easier for ill-intentioned actors to make private gains to the
detriment of the public good. Another element to be considered is, once
more, the issue of costs and future ownership. Certain technological
transfers might require considerable financial investment, which recipient
governments often cannot afford. These are particularly demanding in multiethnic and multilingual countries. Launching the
installation of new technologies without the prospect of sustainability could
generate increasing aid-dependency. This dependency will be heavier, the
more sophisticated, comprehensive, and energy-hungry the AI systems are.
In addition to the infrastructure gap, the digital skills gap remains the
elephant in the room. The administrations most interested in the miracles of
digitalization are also those with the most severe human capital problems.
As a cooperation expert put it, “technical assistance projects to implant
good AI management practices (i.e. evaluation, human oversight,
transparency, etc.) may find it hard to succeed in a skill-arid landscape”.7
Improving beneficiaries’ general understanding of AI is therefore a major
need, which has started to be addressed by some initiatives from overseas.
For example, the US-financed National Judicial College has started a
Massive Open Online Course (MOOC)8 on AI and the Rule of Law, which
has been developed in cooperation with UNESCO's Judges Initiatives and
regional partners (UNESCO, 2024). This should also go hand in hand with
the promotion of the ethical use of AI. There is in fact another sustainability risk in the picture. Sustainability issues are not only linked to the likelihood that complex systems can be used effectively by local beneficiaries; the even worse scenario, which must be prevented, is that these systems may end up being misused for ill-intended purposes in a system where data protection and, more broadly, the enforcement of fundamental rights remain weak.
Finally, there is a reflection to be had on the issue of accountability that
completes the already complex equation highlighted above. Promoting the
use of applications based on AI technology to the fullest can be as dangerous, in a context of weak judicial independence, as drifting from scarce
to no accountability. The pace and accuracy in producing a certain output –
whether a legal analysis, the review of pre-existing jurisprudence, the
search for evidence across databases and other types of investigative
assistance, or even simple support with legal drafting – cannot justify
circumventing ultimate human validation and hence the accountability for
decisions. The opportunity to control and manipulate automatization across
the judiciary, inserting a specific bias, might ultimately make life easier for
those who object to the independence of the judiciary and seek to create
interference that undermines the basic principle of equality before the law.

Conclusion: Balancing Strategy and Ethics in AI for Development
The world stands at a crossroads, where the need for robust ethical
guidelines and regulatory frameworks is not just important, it is imperative.
The global regulatory mosaic is as diverse as the nations and international
organizations that are currently attempting to shape it. To avoid a patchwork
of AI governance that could restrain the positive impact of such
applications on the global development agenda, international actors must
strive for cooperation on a global scale (Klein & Patrick, 2024). Nor can the
digital divide be overlooked. Often at the receiving end of aid, developing
nations may not have the same access to AI technologies and expertise as
donor countries, not to mention adequate resources to guarantee the
sustainable adoption of new tools and models. This gap could widen
disparities rather than close them, a challenge that likeminded actors must
address by investing further in education and infrastructure, while
facilitating possible economies of scale that might bring greater benefits to
the countries that are joining the race from an unfavorable position.
Geopolitical actors must nurture an ecosystem where innovation thrives and
AI solutions can be adapted to specific local requirements and the political
situation, crafted to fit the unique tapestry of each community's needs.
At times of highly volatile and unstable – not to say worsening –
geopolitical dynamics, AI can be a double-edged sword. It has the power to
elevate or to manipulate, offering assistance with one hand, while imposing
influence with the other. Likeminded agents must remain vigilant, ensuring
that using AI to channel development remains a tool for greater
empowerment, not a mere leash to impose control and establish new forms
of dependence. Actors like the EU and other likeminded donors must
champion a new digital diplomacy, one that through the promotion of an
inclusive and multifaceted approach honors the values of the rule of law,
democracy, and human rights.
The road ahead will be forged by partnerships that cross borders and
sectors. Organizations like the EU, but also the United Nations with its
SDGs or the CoE, will need to rise to the task and provide the much-needed
political leadership by which we can align AI's potential to humanity's
North Star. AI equips the world with the means to address some of its most
stubborn challenges. Yet, the true measure of this journey lies not only in
the sophistication of the tools, but in the depth of its wisdom, the breadth of
its vision, and the strength of its foresight.
This chapter has contributed to the academic debate by analyzing how
the prospect of increased use of AI and its applications can affect the global
market of development cooperation and impact technical assistance in the
rule of law area and justice reform in particular. It first offered some
reflections on the dynamics behind global development aid, shedding light
on the market created by the interactions between donors and potential
beneficiaries, as well as the horizontal competitive dynamics among donors
themselves. Using the EU as an example of a likeminded donor, it also
looked at the struggles that a highly normative actor might experience when
operating in such a market with the commitment to respect a value-driven
agenda, while facing the unfair competition of hybrid and authoritarian
regimes that have tried to increase their share in providing development aid
and support.
These considerations were then linked to the applications of AI in global
development aid practices. The analysis that emerges underscores the need
for a multifaceted approach to AI governance and implementation that
involves experts and actors from all affected fields. The chapter thus
advocates a consciously strategic, yet ethically grounded, integration of AI
applications, particularly when embarking on capacity-building endeavors
and technical assistance initiatives in the rule of law sector and in the
context of building impartial, fair, efficient, professional and independent
judiciaries, characterized also by the right degree of accountability. If it is
true that exporting judicial efficiency and easing access to justice through
new technologies can generate cost-effective cooperation, the savings that these
innovations can trigger must still be properly reinvested, in particular to
strengthen inclusivity (McCoubrey, 2022).
Furthermore, AI can facilitate the technological transfer of knowledge and processes functional to development, including in a sensitive area such as justice reform. A clash of cultures between donor and recipient can potentially be avoided. AI can be a vehicle for new modalities of
capacity-building and knowledge transfer. But a significant problem
remains, as the recipient will have to accept instruments that are designed
from specific perspectives, programmed and tailor-made by operators that
have received an institutional mandate intrinsic to one administration and
one legal culture. The question is whether the software's margins for
learning will ever be so great that AI-generated tools will adapt more easily
to the needs of operators responding to different legal systems and
administrative cultures. When this happens, as humans we will need new
mental paradigms to keep up with the machine.

Notes
1. Interview 1, conducted in person with an official of the European
Commission, March 5, 2024.
2. Interview 2, conducted via phone with a task manager of an EU
Delegation to a third country, March 15, 2024.
3. Interview 3, conducted in person with an expert of a Member State of
the Council of Europe, May 6, 2024.
4. Interview 4, conducted in person with an expert of a Member State of
the Council of Europe, May 15, 2024.
5. Details available at
https://euaidexplorer.ec.europa.eu/explore/recipients_en
6. Cf. Articles 8, 21, and 49 of the Treaty on the European Union,
respectively on the Neighbourhood Policy, the Common Foreign and
Security Policy, and the Enlargement process, to be read in conjunction
with Article 2, which lists the founding values of the EU.
7. Interview 5, conducted via phone with a cooperation expert from an
EU Member State, March 25, 2024.
8. More information on the MOOC can be found at
https://www.judges.org/ai_and_law/english/

References
Abizaid, J., Miscik, J., McRaven, W., Axler, A., Jevremovic, D., Zhang, A.,
Norwood, P., Mohandas, S., Lovegrove, S., Roscini, D., Nzewi, T.,
Tompkins, S., Stone, T., Petersen, C., Bunzel, T., and Bailey, R.
(2023). The geopolitics of artificial intelligence. Lazard.
https://www.lazard.com/research-insights/the-geopolitics-of-artificial-
intelligence/
Amadio Viceré, M. G., & Venneri, G. (2023). The European Union's
engagement with the Southern Mediterranean: Integrating the
European neighbourhood policy (1st ed.). European Administrative
Governance. Palgrave Macmillan. https://doi.org/10.1007/978-3-031-
31205-2
Björkegren, D. (2023). Artificial intelligence for the poor: How to harness
the power of AI in the developing world. Foreign Affairs.
https://www.foreignaffairs.com/world/artificial-intelligence-poor
Bradford, A. (2023). Digital empires: The global Battle to regulate
technology (1st ed.). Oxford Academic.
https://academic.oup.com/book/46736?login=true
CEPEJ. (2019). Toolkit for supporting the implementation of the guidelines
on how to drive change towards cyber-justice. Document No.
CEPEJ(2019)7final. Council of Europe, European Commission for the
Efficiency of Justice. https://rm.coe.int/cepej-toolkit-cyberjustice-en-
cepej-2019-7/168094ef3e
CEPEJ. (2021a). Guidelines on electronic court filing (e-filing) and
digitalisation of courts. Document No. CEPEJ(2021)15final. Council
of Europe, European Commission for the Efficiency of Justice.
https://rm.coe.int/cepej-2021-15-en-e-filing-guidelines-digitalisation-
courts/1680a4cf87
CEPEJ. (2021b). 2022 – 2025 CEPEJ Action plan: Digitalisation for a
better justice. Document No. CEPEJ(2021)12final. Council of Europe,
European Commission for the Efficiency of Justice.
https://rm.coe.int/cepej-2021-12-en-cepej-action-plan-2022-2025-
digitalisation-justice/1680a4cf2c
CEPEJ. (2023). Assessment tool for the operationalisation of the European
ethical charter on the use of artificial intelligence in judicial systems
and their environment. Document No. CEPEJ(2023)16final. Council
of Europe, European Commission for the Efficiency of Justice.
https://rm.coe.int/cepej-2023-16final-operationalisation-ai-ethical-
charter-en/1680adcc9c
CEPEJ. (2024). Use of generative artificial intelligence (AI) by judicial professionals in a work-related context: Information note.
Document No. CEPEJ-GT-CYBERJUST(2023)5final. Council of
Europe, European Commission for the Efficiency of Justice, Working
Group on Cyberjustice and Artificial Intelligence.
https://rm.coe.int/cepej-gt-cyberjust-2023-5final-en-note-on-
generative-ai/1680ae8e01
Chui, M., Harryson, M., Manyika, J., Roberts, R., Chung, R., van Heteren,
A., & Nel, P. (2018). The AI frontier: Applying AI for social good.
McKinsey Global Institute.
Cohen-Tanugi, L. (2021). Europe as an international normative power: State
of play and perspectives. Revue Européenne Du Droit, 2(3), 91–97.
https://geopolitique.eu/en/articles/europe-as-an-international-
normative-power-state-of-play-and-perspectives/
Council of Europe. (2024). Framework convention on artificial intelligence
and human rights, democracy and the rule of law. 133rd Session of the
Committee of Ministers.
https://search.coe.int/cm/Pages/result_details.aspx?
ObjectId=0900001680afb11f
Dawson, A., & Swiss, L. (2020). Foreign aid and the rule of law:
Institutional diffusion versus legal reach. British Journal of Sociology,
71(4), 761–784. https://onlinelibrary.wiley.com/doi/full/10.1111/1468-
4446.12752
Dreher, A., Lang, V., & Reinsberg, B. (2024). Aid effectiveness and donor
motives. World Development, 176(1), 1–20.
https://www.sciencedirect.com/science/article/pii/S0305750X2300319
4
Eikenberry, K., Liberi, D. M., Patterson, A. W., Taylor, W. B., Berg, L.-
A., Leroux-Martin, P., Platz, I., Steadman, L. E., Trenkov-Wermuth,
C., Yager, S., Zack, J., Cochran, N., Farfán-Méndez, C., Gaston, E.,
Khisa, M., Lettieri, M., Rwengabo, S., & Zhernakov, M. (2023). Elite
capture and corruption of security sectors. US Institute of Peace.
https://www.usip.org/sites/default/files/2023-02/20230217-elite-
capture-corruption-security-sectors.pdf
Erforth, B., Chux, D., & Teevan, C. (2023). Digitalisation for
transformation: New frontiers for Africa–Europe cooperation. In B.
Erforth, D. Chux, & C. Teevan (Eds.), Africa–Europe cooperation and
digital transformation (1st ed., pp. 1–16). Routledge Publishing.
https://www.routledge.com/Africa-Europe-Cooperation-and-Digital-
Transformation/Daniels-Erforth-Teevan/p/book/9781032228235
European Commission. (2020a). Digitalisation of justice in the European
union: A toolbox of opportunities. Document COM/2020/710 final.
European Union Publication Office. https://eur-lex.europa.eu/legal-
content/EN/ALL/?uri=COM:2020:710:FIN
European Commission. (2020b). Ensuring Justice in the EU: A European
Judicial Training Strategy for 2021-2024. Document No.
COM/2020/713 final. Official Journal of the European Union.
https://eur-lex.europa.eu/legal-content/EN/TXT/?
uri=CELEX%3A52020DC0713
European Commission. (2021). Proposal for a regulation of the European
Parliament and of the Council laying down harmonised rules on
artificial intelligence (Artificial Intelligence Act) and amending certain
union legislative acts. Document No. COM/2021/206 final. Official
Journal of the European Union. https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=CELEX:52021PC0206
European Commission. (2023a). 25 years of twinning: Promoting the rule
of law. European Commission Website. https://neighbourhood-
enlargement.ec.europa.eu/funding-and-technical-
assistance/twinning/twinning-25-years-moving-forward-together-eu-
expertise/25-years-twinning-promoting-rule-law_en
European Commission. (2023b). Artificial intelligence: Questions and
answers. Spokesperson Service of the European Commission.
https://ec.europa.eu/commission/presscorner/api/files/document/print/e
n/qanda_21_1683/QANDA_21_1683_EN.pdf
European Commission. (2024a). Commission Decision of 24.1.2024
establishing the European Artificial Intelligence Office. Document No.
C(2024) 390 final. Official Journal of the European Union. https://eur-
lex.europa.eu/eli/C/2024/1459/oj
European Commission. (2024b). Artificial intelligence in the European
Commission (AI@EC): A strategic vision to foster the development
and use of lawful, safe and trustworthy artificial intelligence systems
in the European Commission. Document No. C(2024)380 final.
Official Journal of the European Union.
https://commission.europa.eu/system/files/2024-
01/EN%20Artificial%20Intelligence%20in%20the%20European%20C
ommission.PDF
European Council. (2020). Council conclusions on ‘Access to justice –
seizing the opportunities of digitalization’. Document No. C 342 I/01
in Official Journal of the European Union, Publication Office of the
European Union. https://eur-lex.europa.eu/legal-
content/EN/TXT/PDF/?uri=CELEX:52020XG1014(01)&from=EN
European Union. (2023). Regulation of the European Parliament and of the
Council on the digitalisation of judicial cooperation and access to
justice in cross-border civil, commercial and criminal matters, and
amending certain acts in the field of judicial cooperation. Document
No. 2023/2844. European Union Publication Office (2023). https://eur-
lex.europa.eu/eli/reg/2023/2844
Freedom House. (2024). Freedom in the world 2024: The mounting damage
of flawed elections and armed conflicts.
https://freedomhouse.org/sites/default/files/2024-
02/FIW_2024_DigitalBooklet.pdf
Goodson, N. & Rongfei, L. (2024). Transforming legal aid with artificial
intelligence: Training LLMs to ask better questions for legal intake.
Stanford Law School.
https://law.stanford.edu/2024/03/15/transforming-legal-aid-with-ai-
training-llms-to-ask-better-questions-for-legal-intake
Heeks, R. (2018). Information and communication technology for
development (ICT4D). 1st ed. Routledge Publishing.
https://www.routledge.com/Information-and-Communication-
Technology-for-Development-ICT4D/Heeks/p/book/9781138101814
International IDEA. (2023). The global state of democracy 2023: The new
checks and balances. International Institute for Democracy and
Electoral Assistance – International IDEA.
https://www.idea.int/democracytracker/sites/default/files/2024-02/the-
global-state-of-democracy-2023-the-new-checks-and-balances.pdf
Jaber, M. (2023). IoT and machine learning for enabling sustainable
development goals. Frontiers in Communications and Networks, 4(1),
1–16. https://doi.org/10.3389/frcmn.2023.1219047
Klein, E., & Patrick, S. (2024). Envisioning a global regime complex to
govern artificial intelligence. Carnegie Endowment for International
Peace. https://carnegie-production-
assets.s3.amazonaws.com/static/files/Klein_Patrick_AI_Regime_Com
plex-1.pdf
Konovalova, M. (2023). AI and diplomacy: Challenges and opportunities.
Journal of Liberty and International Affairs Institute for Research and
European Studies, 9(2), 699–715.
http://dx.doi.org/10.47305/JLIA2392699k
Kramer, X. E. (2022). Digitising access to justice: The next steps in the
digitalisation of judicial cooperation in Europe. Revista General de
derecho Europeo, 56(1), 1–9.
Lomøy, J. (2021). Chinese aid – A blessing for Africa and a challenge to
Western donors. CMI INSIGHT, 2, 1–6.
https://www.cmi.no/publications/7750-chinese-aid-a-blessing-for-
africa-and-a-challenge-to-western-donors
Manners, I. (2002). Normative power Europe: A contradiction in terms?
Journal of Common Market Studies, 40(2), 235–258.
McCoubrey, S. (2022). E-justice: Digital transformation to close the justice
gap. United Nations Development Programme – UNDP.
https://www.undp.org/sites/g/files/zskgke326/files/2022-
06/E%20justice-Report%2005.pdf
Mock, M., Edavettal, S., Langmead, C., & Russell, A. (2023). AI can help
to speed up drug discovery—But only if we give it the right data.
Nature, 621(1), 467–470. https://doi.org/10.1038/d41586-023-02896-9
Moore, A. (2023). How AI could revolutionize diplomacy. Foreign Policy.
https://foreignpolicy.com/2023/03/21/ai-artificial-intelligence-
diplomacy-negotiations-chatgpt-quantum-computing/
Morris, S. (2018). What happens when development cooperation becomes
development competition. Brookings.
https://www.brookings.edu/articles/what-happens-when-development-
cooperation-becomes-development-competition/
Nastase, A., Rajan, A., French, B., & Bhattacharya, D. (2020). Towards
reimagined technical assistance: The current policy options and
opportunities for change. Gates Open Research, 4(180), 1–18.
https://doi.org/doi:10.12688/gatesopenres.13204.2
Pellicciari, I. (2022). Re-framing foreign aid history and politics: From the
fall of the Berlin wall to the COVID-19 outbreak. Routledge.
Pellicciari, I. (2023). World war aid: Interventionist aid and war in
Ukraine. Routledge.
Pigola, A., da Costa, P. R., Carvalho, L. C., Silva, L. F. d., Kniess, C. T., &
Maccari, E. A. (2021). Artificial intelligence-driven digital
technologies to the implementation of the sustainable development
goals: A perspective from Brazil and Portugal. Sustainability, 13(24),
1–28. https://www.mdpi.com/2071-1050/13/24/13669
Repucci, S., & Slipowitz, A. (2022). The global expansion of authoritarian
rule. Freedom House. https://freedomhouse.org/report/freedom-
world/2022/global-expansion-authoritarian-rule
Shiferaw, L. T., & Di Ciommo, M. (2023). Trouble in paradise: The EU-
Africa partnership in a geopolitical context. European Centre for
Development Policy Management. https://ecdpm.org/work/trouble-
paradise-eu-africa-partnership-geopolitical-context
Silini, G., & Molina, L. (2024). The geopolitical effects of artificial
intelligence: The implications on international relations. Mondo
Internazionale. https://mondointernazionale.org/focus-allegati/the-
geopolitical-effects-of-artificial-intelligence-the-implications-on-
international-relations
Staeger, U. (2015). Africa–EU relations and normative power Europe: A
decolonial pan-African critique. Journal of Common Market Studies,
54(4), 981–998.
https://onlinelibrary.wiley.com/doi/full/10.1111/jcms.12350
Taneja, H., & Zakaria, F. (2023). AI and the new digital cold war. Harvard
Business Review. https://hbr.org/2023/09/ai-and-the-new-digital-cold-
war
Thamik, H., Figueroa Cabrera, J. D., & Wu, J. (2024). The digital
paradigm: Unraveling the impact of artificial intelligence and internet
of things on achieving sustainable development goals. In S. Misra, K.
Siakas, & G. Lampropoulos (Eds.), Artificial intelligence of things for
achieving sustainable development goals (1st ed., pp. 21–40).
Springer. https://doi.org/10.1007/978-3-031-53433-1
UNESCO. (2024). AI and the rule of law: Capacity building for judicial
systems. The United Nations Educational, Scientific and Cultural
Organization. https://www.unesco.org/en/artificial-intelligence/rule-
law/mooc-judges
United Nations. (2015). Transforming our world: The 2030 agenda for
sustainable development. Document No. A/RES/70/1. United Nations.
https://sdgs.un.org/sites/default/files/publications/21252030%20Agend
a%20for%20Sustainable%20Development%20web.pdf
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch,
S., Felländer, A., Langhans, S. D., Tegmark, M., & Fuso Nerini, F.
(2020). The role of artificial intelligence in achieving the sustainable
development goals. Nature Communications, 11(233).
https://doi.org/10.1038/s41467-019-14108-y
Ziller, J. (2024). The council of Europe framework convention on artificial
intelligence vs. the EU regulation: Two quite different legal
instruments. CERIDAP Journal - Interdisciplinary Research Center on
Public Administration. https://doi.org/10.13130/2723-9195/2024-2-11
22
THE CHATGPT ARTIFICIAL
INTELLIGENCE MODEL
Challenges and Solutions
Maria Amata Garito and Marco Breda

DOI: 10.4324/9781003518495-28

Historical Evolution of Knowledge Organization: Models of Knowledge Storage
Never before in history have knowledge storage patterns undergone such
radical changes owing to the Internet, its search engines, and especially
generative artificial intelligence (GAI) conversational systems, such as the
latest generative pre-trained transformer (ChatGPT) models referred to in
this chapter. Organizing knowledge has always been a basic need of human
beings; lists, catalogs, and knowledge maps are present in all cultures.
Already in the Greek world we find encyclopedic works by authors such as Aristotle, and in the Roman world Pliny the Elder, who in
the first century A.D. wrote the Naturalis Historia, a 37-volume work that
served as a basis for many encyclopedias produced over the following
centuries. The first and perhaps most representative encyclopedia, as we
understand it now—ordered alphabetically and not by subject, as was the
case during the Renaissance—is Diderot and d’Alembert's (1751–1772)
Encyclopédie, ou Dictionnaire raisonné des sciences, des arts et des métiers, which
saw the collaboration of intellectuals such as Voltaire and Rousseau. These
were linearly structured works that stored and transmitted knowledge in a
unidirectional manner. The scientific nature and veracity of what was
written were guaranteed by the authors and their approach to science and
knowledge.
Today, with the Internet, knowledge is not only stored according to
stratification patterns through vertical metaphors (from roots to branches,
from foundations to roofs) as was the case with oral and written culture. If
books and their layering in the various encyclopedias just mentioned, which
systematically store knowledge, are a good example of vertical
stratification, contemporary knowledge is stored on the Internet following
horizontal metaphors, requiring the ability to navigate, orient oneself, lose
one's way, meet other travelers, and reconstruct meanings. Today, the global
plaza of the Internet, the largest public space humanity has ever had, does
not occupy a marginal position in the contemporary anthropological
mutation, but is at its core. The Internet is characterized by being a network
both external and internal to man, made up of interconnected knowledge
that operates on data. This has profoundly changed the very processes and
mechanisms of production, storage and transmission of knowledge (Garito,
2015).
The ability of a user to interact with digital content using a mouse, to
delve into digitized knowledge, and to modify it, is a form of power and
control over such content. This interaction externalizes various forms of
language: written, spoken, and multimedia. On the Internet, everyone can
become not only a user but also a distributor of knowledge, either
individually or collectively. The storage and access to knowledge involve
collective communication among individuals from various parts of the
world, characterized by cultural, political, religious, ideological, and social
differences, as never before in human history. In this collective cyberspace,
everyone finds suitable ways to communicate, despite the respective
differences. The network is a boundless space that still thrives in the chaos
of individual behaviors. Users are simultaneously consumers and
distributors of content; they are producers of knowledge, knowledge that is
often very subjective, making it difficult for a non-expert user to evaluate
its veracity. Take, for example, the largest encyclopedia the world has ever
had, Wikipedia. It is a free, collaborative, multilingual online encyclopedia
supported and hosted by the Wikimedia Foundation. Its English edition alone
has more than 6.8 million entries, and editions exist in over 330 languages
(Wikicount, 2024). It is the most extensive and most
consulted general reference work freely accessible on the Internet.
Wikipedia's primary feature is that it allows anyone to collaborate and post
content on pages of their interest, using an open editing and publishing
system based on the MediaWiki platform. In other
words, it is maintained by volunteers, meaning the pages can be modified
by anyone, and there is no editorial committee or any prior control over the
content posted; nevertheless, Wikipedia relies on a set of policies and
guidelines that regulate the quality and reliability of the information posted.
Each modification is subject to the scrutiny of other users who can verify,
correct, and improve the content, thus ensuring a form of collective control.
In particular, it relies on the citation of sources as evidence of the reliability
of its content, and it therefore classifies itself as a tertiary source.
Registration is not mandatory but
recommended. Due to its open nature, problems such as vandalism and
inaccuracies can occur. The enrichment of the encyclopedia is solely due to
the continuous input of contributions both in terms of new content and
revision of the same by volunteer users. Therefore, we cannot completely
trust the content we find on Wikipedia, nor all the
content we find on the Net. In fact, the Internet is also a source of fake
news, fake information, and fake knowledge.
What is even more concerning in this context is the way major search
engines retrieve information and provide answers to those seeking to surf
the vast Web. The primary function of Google, the most widely used search
engine for finding information on the Internet, does not operate in complete
transparency. The information and documents provided by the engine do
not consistently meet the true requests of users. The documents that appear
first in the list of results in response to a user's query are those that are
published by the same search engine, which prioritizes information sources
that have invested heavily in advertising on Google itself, relative to the
keywords used by the user. As a result, less knowledgeable individuals tend
to click on the results that appear first and are, in fact, accessed more
frequently. Moreover, the search results published by Google immediately
below the sponsored results are dynamically determined by the collective
activity of users, based on the number of citations on other sites and the
weight of these references, to the detriment of the scientific accuracy of the
answers. Thus, the response provided to the user is not a direct function of
consistency with respect to the request formulated by the user, but is
conditioned by advertising investment, as well as a greater number of clicks
and inbound links that other network users have determined with their
actions. Page hierarchy thus constitutes a form of subjective power, and
collective action a new form of power. Policymakers should act globally to
prevent Google and other search engines from relying on these forms of
business-driven ranking that deceive the user.
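The link-based ranking just described can be illustrated with a minimal sketch of the PageRank-style computation that underlies it. The four-page link graph below is invented for illustration, and real search engines combine this signal with many others, including the paid placement discussed above.

import numpy as np

# Hypothetical link graph: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = sorted(links)
n = len(pages)
idx = {p: i for i, p in enumerate(pages)}

# Column-stochastic matrix: column j spreads page j's score over its outlinks.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[idx[dst], idx[src]] = 1.0 / len(outs)

d = 0.85                        # damping factor
rank = np.full(n, 1.0 / n)      # start with a uniform score
for _ in range(50):             # power iteration toward the fixed point
    rank = (1 - d) / n + d * M @ rank

print(dict(zip(pages, rank.round(3))))   # heavily linked-to pages (here C) rank highest

The point of the sketch is that ranking emerges from the collective linking behavior of other sites, not from the accuracy of a page's content; a well-cited falsehood can therefore outrank a poorly linked truth.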
Ways must be found to defend users on the Net. The most vulnerable
users, those with limited cultural knowledge and experience, struggle to
distinguish between true and false information, and between relevant and
sponsored content. They surf unprotected in the vast sea of information,
amidst falsehoods and truths. Users must learn to critically evaluate
information sources and distinguish between reliable and unreliable content.
The democratization of knowledge is one of the greatest achievements of
the Internet, but it requires collective and individual responsibility to
maintain the integrity and reliability of information.

The Age of AI: Challenges and Risks


With the advent of conversational mechanisms based on GAI, such as
ChatGPT, we are witnessing yet another significant shift in the model of
storing and organizing knowledge and managing access to information,
leading to a revolution in knowledge. These advanced AI systems are
capable of understanding natural language and generating coherent,
contextual responses to complex questions, radically transforming the way
we interact with technology and access knowledge. ChatGPT (OpenAI,
2023), developed by OpenAI, represents a significant example of these new
technologies. Using large language models trained on vast amounts of data,
the conversational system is able to produce texts that are almost
indistinguishable from those written by humans. This not only improves the
user experience, making interactions more natural and intuitive, but can also
broaden access to information and offer real-time support far more
efficiently. The mode of human-machine interaction is thus
transformed from the current model, which is based primarily on text
searches and predefined commands, to a conversation that, by its very
nature, is entirely adaptable to the needs of the individual user, making the
interaction personalized and fluid. These conversational mechanisms can
answer questions, provide detailed explanations, assist in solving complex
problems, and even perform creative tasks such as writing articles or
stories. For example, in the field of education, ChatGPT can act as a virtual
tutor, offering explanations and personalized assistance to students (Garito,
1991).
But this is only the simplest of applications. To fully express its potential,
the conversational engine can be, and increasingly will be, embedded in a
complex logic where it operates synergistically with other algorithms,
instilling AI into every aspect and sub-aspect of the process considered. In
general, the conversational engine can be integrated with data analysis
systems, machine learning, and other advanced technologies such as virtual
reality and robotics, to offer even more powerful, specialized, and tailored
solutions. Indeed, we are witnessing the birth of a veritable new era, the
Age of AI. The combination of GPTs and beyond with advanced
algorithmic techniques is already amplifying, and promises to further
amplify, AI capabilities. This will lead to systems that can support us in
solving complex problems in ways we can now, perhaps, only imagine.
New possibilities will open up in all areas where humans operate and even
those where they do not yet. The advanced integration of AI technologies,
in addition to further developing fields currently already dominated by
humans, will open up new opportunities in as yet unexplored areas. It could
be used to discover and develop new branches of knowledge and
application, extending the impact of technological innovation well beyond
the current boundaries of human activities: boundaries dictated by
technological limitations, information-processing capacity, pattern
discovery, reasoning, and available time, the latter conditioned by the human
lifespan and the concomitant difficulty of transmitting knowledge. This could
lead us to solutions to problems that previously seemed
insurmountable.
humans in addressing global challenges such as climate change, resource
scarcity and pandemics with a more integrated and informed approach.
Intelligent systems, working together with humans, could not only solve
current problems but also anticipate and prevent future crises. This
development could turn AI into a real accelerator of innovation of human
possibilities. This synergistic approach represents one of the main areas of
study and research on AI. Scientists and engineers are working to overcome
the technical challenges of integrating different technologies, by developing
new algorithm models. The future of AI therefore lies in the ability to create
integrated algorithmic ecosystems, where different technologies work
together to overcome current limitations and open new frontiers of
innovation.
OpenAI, along with others, is active in this area. In March 2023, it
released to the public an update to ChatGPT with the GPT-4 model; OpenAI has
not disclosed the size of GPT-4's training corpus or its parameter count
(OpenAI, 2023a), though it is understood to be substantially larger than its
predecessor GPT-3, which was trained on roughly 570 GB of filtered text and
has about 175 billion parameters. The scale and complexity of the GPT-4 model
represent a significant advancement over previous models, greatly improving
the system's ability to understand and
generate text. However, the knowledge bases used to train ChatGPT's
intelligent system are still reliant on content found on the Web. This implies
that despite the sophisticated training techniques and the vastness of the
data on which the model was trained, responses provided by ChatGPT 4 can
be influenced by fake news and misinformation found online—as indeed
stated in its interface as an initial message: “ChatGPT can make mistakes.
Consider verifying important information.” AI has no inherent ability to
automatically distinguish between true and false information without
external verification. This underscores the importance of using ChatGPT
and similar tools critically, always verifying information obtained through
reliable and authoritative sources.
In May 2024, OpenAI introduced a further model called GPT-4o
(OpenAI, 2024), which offers similar performance to GPT-4, but with
improvements in terms of speed and image-text comprehension capabilities.
This advanced model enables greater multimodal interaction, such as the
ability to discuss images and text more naturally and quickly. GPT-4o is
designed to further improve the accessibility and usefulness of AI tools for
a wider audience.
To better probe ChatGPT's limitations and potential, we asked the
Chatbot a simple question: “What are the negative aspects of ChatGPT?”,
to which the Chatbot responded by highlighting, precisely, some of its
inherent limitations. In the example, we stopped at the first answer, a rather
unusual approach in the use of a conversational engine, just as it is unusual
to ask for self-judgment. This is because our goal was not to perform a
scientific test of the engine's abilities, but nonetheless to take inspiration
from its response and make some considerations. The answer given by
ChatGPT 4o on June 17, 2024, is reported here.

Response:
The use of ChatGPT, like any other technology, has some downsides
that are important to consider. Here are the main ones:
1. Disinformation and errors:
False positives: ChatGPT can generate responses that seem
plausible but are actually incorrect or misleading.
Lack of updates: Information may be out of date, especially if
changes have occurred since the model's training date.
2. Over-reliance:
Substitution of human judgment: Users may over-rely on
ChatGPT for decisions that require human judgment and
intuition.
Risk of mental laziness: It may reduce users’ habit of actively
seeking information or thinking critically.
3. Privacy issues:
Data retention: There is a risk that sensitive information shared
with the model may be retained or used improperly.
Tracking: Interactions could be monitored and analyzed, leading
to privacy issues.
4. Bias and impartiality:
Reproduction of biases: ChatGPT can reproduce biases present
in training data, which reflect social and cultural biases.
Discrimination: May inadvertently generate discriminatory or
offensive content.
5. Social and occupational impact:
Replacement of human labor: Can contribute to job losses,
especially in sectors involving text production or customer
service.
Digital divide: It may widen disparities between those who have
access to these technologies and those who do not.
6. Security issues:
Abuse of technology: Can be used to create malicious content
such as phishing, misinformation or textual deepfakes.
Interference in decision-making processes: Can be used
unethically to influence public opinions and political stances.
7. Technical limitations:
Lack of contextual understanding: ChatGPT may lack deep and
contextual understanding of complex topics.
Generic answers: May provide answers that are too generic or
irrelevant in specific contexts.

Understanding these downsides is crucial for informed and responsible
use of technology, balancing the benefits it can offer with potential
risks and limitations.

ChatGPT thus provided an ethical response that truly safeguards user
interests, encompassing all major issues that have recently emerged as
significant. Notably, it addresses categories such as bias and impartiality,
social and occupational impact, security issues, and technical limitations,
with a detailed and structured discussion supplemented by specific
examples. It is therefore clear, even by the engine's own account, that it can
be a useful and creative tool, capable of significantly enhancing human
capabilities and supporting them in various tasks. At the same time, however,
it demands conscious and shrewd use from its users, so that they can avoid the
disadvantages inherent in its limitations and steer clear of problematic
situations. It is, in essence, a powerful but
dangerous tool, to be used expertly, and with several precautions. The key
point, however, is understanding how to do so.
Indeed, the ability to generate text autonomously brings with it issues
related to the reliability and quality of the information produced, the
potential for spreading misinformation, and intellectual property rights.
These issues are difficult to capture, since texts are generated in real time
and are not available for a priori checking; hence the need for
verification and control mechanisms to guarantee the ethics and accuracy of
the information. In fact, even a new and powerful tool such as a Chatbot,
designed with intelligent systems capable of generating articulated texts in
response to user-supplied commands and directions (prompts), does not
automatically allow the user to be reassured about the veracity of the
responses obtained.
These technologies offer unprecedented opportunities for personalized
learning, efficient information retrieval, creative assistance, and many other
functions, but they also pose significant difficulties. To overcome these,
action must be taken on three main fronts: technical, information quality,
and user control.
From a technical standpoint, we can take action to improve the quality of
ChatGPT's responses. This can be done by training it with industry-specific
data and up-to-date information, formulating clear and detailed questions,
and giving the Chatbot precise instructions. Furthermore, it is crucial to
develop robust infrastructure to support generative AI, improving
algorithms to reduce biases, and implementing advanced security
mechanisms for efficient data management. Developers must continuously
update and refine systems to meet evolving needs, reduce unconvincing
responses, and protect against potential attacks. Regarding information
quality, it is essential to implement robust verification and preparation
mechanisms for the information that feeds GPT engines. Universities and
academic institutions can play a crucial role in this area by embracing
international collaboration and creating shared knowledge spaces on the
Internet. This collaborative effort will ensure that the information used to
train AI models is accurate, reliable, and unbiased, thus building a
verifiable knowledge base that supports both the technological
advancements of generative AI and the educational needs of users
worldwide.
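As a concrete illustration of formulating clear and detailed questions and giving the Chatbot precise instructions, the following minimal sketch assumes the OpenAI Python SDK (v1.x), an API key in the environment, and a hypothetical excerpt of vetted institutional text; it grounds the model's answer in supplied material and instructs it to admit when the material is insufficient.

from openai import OpenAI

client = OpenAI()   # reads the API key from the environment

vetted_excerpt = "(text supplied by a trusted, up-to-date institutional source)"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    temperature=0,   # favor conservative, reproducible answers
    messages=[
        {"role": "system",
         "content": ("You are a research assistant. Answer only on the basis of the "
                     "reference text provided. If the text does not contain the answer, "
                     "say so explicitly instead of guessing.")},
        {"role": "user",
         "content": (f"Reference text:\n{vetted_excerpt}\n\n"
                     "Question: Summarize the key obligations described, indicating the "
                     "passage each point comes from.")},
    ],
)
print(response.choices[0].message.content)

Precise instructions and grounding of this kind reduce, but do not eliminate, the risk of plausible-sounding errors; the answer still requires the human verification discussed below.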
Finally, essential for responsible use of generative technologies is the
control that the user must exercise over the responses, without immediately
accepting them as truth. Promoting digital literacy and critical thinking
skills among users is crucial. As GAI becomes increasingly integrated into
various sectors, people must be equipped to navigate the digital information
landscape with discernment. This involves understanding the limitations of
AI-generated content and using these tools in a way that complements
human judgment and experience. But how do we accomplish all this?
First and foremost, it is essential to know what we are dealing with.
These systems do not truly have an awareness of what they are saying, nor do
they really perform reasoning, even though the underlying algorithms, such
as the Transformer, are capable not only of understanding the submitted
text, but also of generating responses with an extremely appropriate flow of
words. This result is achieved thanks to the relationships learned during the
training phase on a vast amount of text, through a mechanism known as
multi-head self-attention. This revolutionary algorithm marks a turning
point in the ability to capture even deep linguistic inference patterns,
managing changing meanings, double meanings, nuances and different
linguistic registers. The Transformer can produce speech that is virtually
indistinguishable from human speech, not only from a linguistic point of
view, which is its strong point, but also in terms of the concepts expressed.
However, it is important to remember that, despite the apparent humanity of
the generated texts, it is still a machine, however phenomenal, operating
generatively, based solely on relationships learned from the data, without
true conscious understanding. It is therefore critical that generative AI
systems be developed and presented to users with a high degree of
transparency, making it clear that they are machines that operate based on
learned data. Despite efforts to eliminate informational bias from the
training data, a certain amount of residual informational noise is inevitable.
This implies that the responses generated, which are not the result of human
thought, only make concepts emerge from the deep word relationships
learned by the model on the basis of the many examples submitted to it.
This is not very different from what a human does, except that there is a
complete lack of control linked to a sentient mechanism: the machine
speaks well, very well, following the most common linguistic flow seen so
many times in texts. This mostly works, but the Chatbot does not really
know what it is saying. If opinions, ideologies, and prejudices are present in
the training data and not managed by behavioral rules, the Chatbot's responses
may contain inferences that are incorrect, misunderstood, or distorted by
them.
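A minimal numerical sketch can make the mechanism concrete. The example below implements a single head of scaled dot-product self-attention, the building block of the multi-head mechanism mentioned above, with random toy weights standing in for what a real Transformer learns during training.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # how relevant each token is to each other token
    weights = softmax(scores, axis=-1)        # one attention distribution per token
    return weights @ V                        # each token becomes a weighted mix of the others

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens with 16-dimensional embeddings
heads = [self_attention(X, *(rng.normal(size=(16, 8)) for _ in range(3)))
         for _ in range(2)]                   # "multi-head": several heads run in parallel
output = np.concatenate(heads, axis=-1)       # concatenated back to shape (5, 16)

Everything the model "knows" is encoded in weight matrices of this kind, estimated from text; nothing in the mechanism checks the resulting statements against the world, which is why the human verification described below remains indispensable.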
The self-assessment test explicitly highlights the need for an iterative and
reflective approach when interacting with ChatGPT. Stopping at the first
answer is only a starting point; it is essential to engage in a human-machine
conversation in which the user applies critical thinking. This means
carefully evaluating what the Chatbot says, refuting if necessary, and never
treating answers as absolute truths without further verification. Such an
approach includes formulating follow-up questions, verifying information
through external sources, and using multiple models or systems to compare
responses. This practice helps identify inconsistencies and gain a more
complete and accurate understanding of the problem or topic at hand.
Interaction with a generative engine should be viewed as a continuous and
dynamic discussion, where human thought plays a fundamental role.
Interactive reworking with the Chatbot is necessary, accompanied by
verification of each statement made by the machine. The proper use of a
generative engine implies that the user does not passively accept the
answers provided, but analyzes them critically, trying to understand the
underlying principles. This approach not only improves the quality of the
responses obtained but, by fostering a conscious and responsible use of AI
technologies, also supports the user's own reasoning. In this way, the full potential
of the generative systems can be exploited, increasing productivity and
quality in our work, while reducing the risks associated with the
dissemination of incorrect or misleading information.
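As one small, purely illustrative sketch of the cross-checking habit described above, the function below compares two independently obtained answers and flags low lexical agreement for closer human review; the overlap measure is a crude heuristic and is no substitute for verifying claims against primary sources.

import re

STOPWORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "is", "are", "that", "it"}

def content_words(text):
    # Keep only informative words so trivial phrasing differences matter less.
    return {w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOPWORDS}

def agreement(answer_a, answer_b):
    a, b = content_words(answer_a), content_words(answer_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical answers returned by two different models to the same question.
answer_1 = "The treaty entered into force in 2016 after ratification by 55 parties."
answer_2 = "The treaty entered into force in 2016, once 55 parties had ratified it."

score = agreement(answer_1, answer_2)
if score < 0.5:
    print(f"Low agreement ({score:.2f}): check both answers against primary sources.")
else:
    print(f"Answers broadly agree ({score:.2f}); spot-check the key facts anyway.")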
More generally, it is now more important than ever to disseminate AI
training, different from that already required for the Internet. Users must be
prepared to recognize the limitations of generative systems and develop
skills to critically evaluate the information obtained. The democratization of
knowledge through tools such as ChatGPT is a great opportunity, but it
requires shared responsibility to ensure the integrity and reliability of the
information disseminated. Digital education should include courses and
resources that teach users how to evaluate sources, recognize bias, and
understand the basic principles of how AI works.

Toward a New Architecture of Knowledge


Today, ChatGPT has significant advantages and disadvantages. While it can
be a useful and creative tool, we have seen how users must be aware of its
limitations and must use it with caution and judgment. First and foremost, it
is essential to quickly identify new rules and models on which to base the
development of virtual societies. “The virtual spaces of the Internet need to
be designed and organized, even urbanistically; network architects have yet
to find universal styles, languages and symbols. The new network architects
have not yet truly created new styles to represent the real in the virtual”
(Garito, 2015). There is no one who can guarantee the veracity of the
content that is freely posted by all human beings from different parts of the
world. The Web is full of false knowledge, and no one is able to find
methods and patterns of interaction with the Web to direct and inform users
of the scientific falsehoods that populate the network.
Until not so many years ago, the people who communicated true
knowledge were teachers, experts, scientists, those who had done research,
written books. Their expertise and knowledge were transferred to students
of various age groups all over the world. Schools and universities have
always been the places where knowledge is transferred from experts to non-
experts. Governments worldwide have consistently ensured that citizens are
trained by competent individuals, both to face life's challenges and to
acquire the knowledge and skills necessary for the younger generation to
mature and enter the workforce with appropriate abilities. In this, political
powers have always been careful to maintain the high value of education at
various stages of life. Today, many citizens acquire false knowledge and
false training from the chaotic world of the Internet and generative AI, and
no one is able to guide the young and old through this chaos of information
to prevent them from acquiring false knowledge and false skills.
Universities, as the only place where the processes of knowledge
production and transfer coexist, can play an essential and irreplaceable role
in creating Internet content. However, to make this happen, it is necessary
to implement a process of change in the role and structure of universities,
creating new international alliances and new real and virtual spaces, in
order to establish shared networks of knowledge among different
universities worldwide and support the convergence of academic
institutions.
Common spaces of knowledge should be neither homogeneous nor
uniform. It is important not to come together to clone each other or, worse,
to “McDonaldize” education and training systems. Instead, we should
ensure a new balance between unity and diversity: unity of values and
tradition that memory delivers, and the diversity of cultures and languages.
Therefore, it is necessary to create new international alliances and build a
global network of public and private universities that combine their
knowledge on the Internet while respecting cultural, political, religious, and
economic differences. A network of universities that, thanks to the Internet,
expands and guarantees humanity the veracity of information and
knowledge available on the Web. Each university can participate in building
this web of relationships and contribute to the construction of the common
space of globalized and interconnected knowledge, which has truthfulness
of content as its main objective (Garito, 2015).
The global network for the inclusion of content by institutions of various
educational levels is not a utopia. Today, several leading universities are
making what has been stated above a reality. The value that the global
consortium of universities can bring is the value of diversity as wealth for
all. An opportunity like the one just described becomes a universal value
and foregoes the provision of scientific content on the Web by teachers,
leading instead to the development of a new Web. A Web that guarantees
humanity about the scientific accuracy of the knowledge found on the
network, and thus builds the new knowledge bases on which to train GAI
systems such as ChatGPT (Garito, 2023).
The new organization of knowledge impacts practically every field in
which information, data analysis, and communication play a crucial role.
Particularly in the fields of geopolitics and diplomacy, the adoption of
advanced AI and GAI technologies can radically transform the way
strategic decisions are made and international relations are managed. AI's
ability to analyze large volumes of complex data enables it to predict future
scenarios with greater accuracy, supporting diplomats in preparing effective
strategies for dealing with geopolitical and economic crises. For example,
AI systems can collect and analyze information from diverse sources, such
as economic indicators, financial news, and sentiments expressed on social
media, to provide estimates of how a crisis might evolve and what its global
repercussions might be. In addition, advanced analytical logics can identify
multivariate correlations between political events and public opinion
movements that would not be immediately apparent to the human eye. This
type of exploratory analysis is crucial for understanding a country's internal
dynamics and predicting possible developments, allowing diplomats to
intervene early and prevent crisis escalation.
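A minimal, purely illustrative sketch of this kind of multivariate analysis is shown below: a handful of invented indicators (inflation, protest events, media sentiment) are combined into an escalation-risk estimate with a simple logistic regression. The feature set, the toy data, and the model choice are all assumptions for illustration, not a description of any operational system.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical observations: [inflation %, protest events per month,
# media sentiment in -1..1], and whether a crisis escalated within six months.
X_train = np.array([
    [4.0,   2,  0.3],
    [12.5, 18, -0.6],
    [6.1,   5,  0.1],
    [22.0, 40, -0.8],
    [3.2,   1,  0.5],
    [15.4, 25, -0.4],
])
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Current (invented) indicators for a country being monitored.
current = np.array([[10.8, 14, -0.5]])
risk = model.predict_proba(current)[0, 1]
print(f"Estimated escalation risk: {risk:.0%}")   # a prompt for analysts, not a verdict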
Systems such as ChatGPT, then, represent a significant qualitative leap,
enabling the generation of new content such as real-time reports and
briefings, visual presentations that explain complex scenarios, and content
for diplomatic campaigns. These generative capabilities allow us to move
from simple data interpretation to the creation of actual useful artifacts in
real time, improving communication and understanding between nations. In
preparation for international negotiations, AI can simulate possible
reactions of counterparts to different proposals by analyzing historical and
behavioral data. This allows diplomats to anticipate obstacles and prepare
with appropriate countermeasures, increasing the likelihood of success in
negotiations. Furthermore, generative systems facilitate intercultural
communication by providing instant translations and analysis of cultural
context, improving mutual understanding and building trust and respect in
sensitive situations. Adopting these advanced technologies is essential in
order to maintain a leading role on the global stage, ensuring that progress
is driven by solid ethical principles and rigorous oversight. Failure to do so
would mean missing crucial opportunities in an increasingly interconnected
and technologically advanced world.
We have already seen how it is necessary to operate on three fronts to
obtain quality information with conversational GAI. Alongside technical
measures and the quality of the information available to the Chatbot, which
we have already discussed, the aspect of user responsibility within
conversational systems still deserves further exploration. Among various
possible strategies that can be applied, it is important to study and identify
basic procedures for correctly interacting with the conversational system
without passively accepting everything it proposes but approaching it
critically. The point is not that the system must agree with us; rather, its
responses should reflect our own thinking, enriched repeatedly over the course of the discussion.
Another critical issue for the organization of knowledge concerns
security in the use of generative AI, particularly the vulnerability to
cyberattacks. These attacks can manipulate results and distort reality by
exploiting the weaknesses of AI systems. To take countermeasures, it is
important to distinguish between misinformation, disinformation, and
malinformation. Misinformation refers to incorrect information spread
without the intent to deceive, such as unintentional errors. Disinformation,
on the other hand, implies the intentional spread of false information to
harm and manipulate, influencing public opinion or destabilizing
institutions. Malinformation concerns real information that is distorted and
used to harm.
Cyberattacks can target specific functions, compromising national and
international security, or influence public opinion and destabilize social
equilibria. Some attacks manipulate input data so that the AI model
produces incorrect results without being detected; for example, small,
imperceptible changes in the data can deceive an image recognition model
(evasion attacks). Others inject harmful data into the model's training set,
compromising the quality and reliability of future predictions; this type of
attack can slowly degrade the model's performance (poisoning attacks).
Related attacks craft adversarial examples, inputs specifically
designed to deceive the AI model into making incorrect predictions
(adversarial attacks). Others seek to extract sensitive information from the
AI model, such as personal data used during training, threatening user
privacy (model inversion attacks).
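The evasion and adversarial attacks mentioned above can be illustrated with a minimal sketch in the spirit of the fast gradient sign method: a small, structured perturbation pushes a toy linear classifier's score across its decision threshold. The model, the input, and the perturbation budget are invented for illustration; real attacks target far larger image or language models.

import numpy as np

w = np.array([1.2, -0.8, 0.5])   # weights of a toy classifier, assumed known to the attacker
b = -0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # probability of the "positive" class

x = np.array([0.4, 0.1, 0.2])    # a legitimate input, scored just above 0.5
y = 1.0                          # its true label

# For this model, the gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w

epsilon = 0.2                            # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)    # step in the direction that increases the loss

print(f"clean score: {predict(x):.2f}, adversarial score: {predict(x_adv):.2f}")
# The score drops below 0.5, so the predicted class flips despite the modest change.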
To address these challenges, it is necessary to implement adequate
control and mitigation measures, continuously monitoring and validating
the results generated by AI. It is crucial to have robust cybersecurity
systems, use advanced encryption techniques, multi-factor authentication,
and constant surveillance of network activities. User education on AI
security and associated risks is fundamental, as is international cooperation
to develop security standards and share information on emerging threats.
The revolutionary potential of AI in organizing knowledge must also be
evaluated in relation to unequal access to technologies among nations.
There is a real risk of exacerbating inequalities and negatively influencing
global power dynamics. Developing nations often lack the digital
infrastructure needed to adopt and develop AI technologies, leading to
economic and social marginalization. AI-induced automation could
disproportionately affect workers in developing countries, with a significant
impact on employment. The geopolitical implications of the technological
gap are concerning, with the risk of a new form of digital colonialism and a
loss of informational sovereignty for developing countries. Addressing this
challenge requires international cooperation, investments in digital
infrastructure and skills, and the development of frameworks to regulate the
use of AI in an ethical and inclusive manner. Professional retraining
policies and income support programs are essential to accompany workers
in the transition toward an AI-driven economy, ensuring sustainable and
inclusive development for all.

Conclusion
In conclusion, the advent of GAI technologies such as ChatGPT represents
a transformative shift in the way knowledge becomes accessible and
disseminated. While these technologies offer unprecedented opportunities
for personalized learning, efficient information retrieval, and creative
assistance, they also pose significant challenges. Some interventions can be
made at a technical level and on the quality of the information provided, but
others must be undertaken by users. Issues related to the accuracy of
information, potential biases, and the risk of misinformation underscore the
need for robust mechanisms for preparing information that feeds into the
engines and critical user engagement. Universities and academic institutions
play a crucial role in this new landscape. By embracing international
collaboration and creating shared digital knowledge spaces, they can ensure
the quality and reliability of information used to train AI models. This
collaborative effort is essential to build a verifiable knowledge base that
supports both the technological advancements of AI and the educational
needs of users worldwide. Furthermore, it is crucial to promote digital
literacy and critical thinking skills among users. As AI becomes
increasingly integrated into various sectors, people must be equipped to
navigate discerningly through the digital information landscape. This
involves understanding the limitations of content generated by AI and using
these tools in a way that integrates human judgment and experience.
Ultimately, the synergy between human intelligence and AI has the
potential to enhance our collective knowledge base and problem-solving
capabilities. By leveraging the strengths of both, we can create a more
informed, connected, and innovative society.

References
Diderot, D., & d’Alembert, J.-B. R. (Eds.). (1751–1772). Encyclopédie, ou
dictionnaire raisonné des sciences, des arts et des métiers
[Encyclopedia, or a reasoned dictionary of the sciences, arts, and
trades]. Briasson, Le Breton, & Fanchet.
Garito, M. A. (1991). Artificial intelligence in education: Evolution of the
teaching—learning relationship. British Journal of Educational
Technology, 22(1), 41–47.
Garito, M. A. (2015). The university in the XXI century, between tradition
and innovation. McGraw-Hill.
Garito, M. A. (2023). Alliances for knowledge: A strategy for building the
future of university in the digital society. Advances in Social Sciences
Research Journal, 10(7), 412–419.
OpenAI. (2023). ChatGPT [Large language model]. Retrieved June 14,
2024, from https://www.openai.com/chatgpt
OpenAI. (2023a). GPT-4 technical report. Retrieved from arxiv.org.
OpenAI. (2024). Introducing GPT-4o and more tools to ChatGPT free
users. Retrieved from openai.com
Wikicount. (2024). Wikipedia article count. Retrieved June 14, 2024, from
https://wikicount.net/
23
TRADITIONAL KNOWLEDGE,
ENVIRONMENTAL
CHALLENGES, AND ARTIFICIAL
INTELLIGENCE
Ethical Generative AI Use and Sustainable
Approaches
Giovanna Gnerre Landini

DOI: 10.4324/9781003518495-29

Introduction. Indigenous Peoples’ Rights, Data and Environmental Sustainability: Essential Aspects for Contextualization
The protection of the environment and of biodiversity has become a cross-
cutting issue of utmost importance and is one of the main goals of
multilateral dialogue, as well as national, regional, and international policy
development. Global efforts to prevent the irreversibility of environmental
degradation and biodiversity loss – exacerbated by the impact of climate
change – must adopt an inclusive approach, adhere to the principle of
sustainability and rely on the most accurate data possible not only in
environmental but also in human terms. This ensures that all individuals, as
stakeholders, are entitled to participate in actions aimed at protecting their
fundamental human rights, including the right to self-determination and the
right to live in a healthy and safe environment. Indigenous peoples are
among those most exposed to, and affected by, the negative consequences
of environmental degradation. Considering their special relationship with
the lands and territories where they live, indigenous peoples are also key
actors and irreplaceable partners in the pursuit of environmental
sustainability, providing a significant contribution to the management of
resources and biodiversity, and ecosystem conservation. Their millennial
experience, rooted in traditional practices and knowledge, passed down
through generations, reflects historical continuity with their ancestral lands
and adds immense value to their traditional knowledge systems (TKS).
These systems, which are multi- and interdisciplinary in nature, serve as
essential foundations of tangible and intangible cultural heritage, primarily
transmitted orally in native languages. Protecting indigenous traditional
knowledge is therefore crucial to preserving customs and ways of life that
contribute significantly to the pursuit of common sustainable development,
both environmental and human, for the benefit of all humanity, as is also
recognized in the achievement of the goals of the “2030 Agenda for
Sustainable Development” (OHCHR, 2017; UN General Assembly, 2015).
These knowledge and cultural systems, due to structural inequalities and
social exclusions that have continued over time, are faced with constant
challenges relating to their preservation, a condition also exacerbated by the
impact of globalization processes and the sudden development of pervasive
systems of digital and information technology.
The growing digitalization of content raises questions about the
intersection between the use of generative artificial intelligence (AI),
indigenous data sovereignty, cultural heritage preservation, and
participation in decision-making processes. Indigenous peoples have always
been data collectors and knowledge holders, managing these autonomously
and successfully for millennia. However, historical discrimination against
them has led not only to their exclusion from decision-making processes
regarding their interests and rights but also to the establishment and
persistence of distorted narratives and the misuse of their data knowledge
system. The lack of indigenous participation in the management of data –
both produced by them or by third parties but related to them – and in the
various phases of the process – such as creation, collection, access, analysis,
interpretation, management, storage, dissemination, and reuse – inevitably
results in the violation of their fundamental human rights. AI itself could
indeed prove to be a particularly valuable tool and ally for indigenous
peoples when it comes to safeguarding the integrity of their cultures and
identities, native languages, traditional practices, and knowledge, unique
forms of heritage that make up global cultural diversity. AI and open data,
when used ethically, can serve as powerful sources of knowledge. They can
help preserve and document this traditional knowledge and its related
information through, for instance, the digitization, cataloging, and archiving
of stories, cultural practices, and ancestral wisdom, with the direct
involvement of those to whom this knowledge belongs, enhancing the
development and application of AI (Williams & Shipley, 2021) with long-
term ethical perspectives.
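As a small, purely illustrative sketch of what such cataloging could look like in practice, the record structure below shows how provenance, community control, and prior informed consent might be stored alongside a digitized item; the field names and values are assumptions for illustration, not an existing standard or dataset.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class TraditionalKnowledgeRecord:
    title: str
    community: str                 # the community that holds the knowledge
    language: str                  # native language of the original material
    medium: str                    # e.g. oral-history audio, transcript, photograph
    consent_obtained: bool         # prior and informed consent documented
    access_policy: str             # who may view or reuse, as decided by the holders
    custodians: list = field(default_factory=list)
    recorded_on: date = field(default_factory=date.today)

record = TraditionalKnowledgeRecord(
    title="Seasonal river management narrative",
    community="(hypothetical community)",
    language="(native language)",
    medium="oral-history audio with transcript",
    consent_obtained=True,
    access_policy="community-restricted; external reuse only with renewed consent",
    custodians=["(named knowledge holder)"],
)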
It is, however, realistic to consider the risk that AI could be misused or
lead to misappropriation, given that nearly all aspects of indigenous
peoples’ data used in developing generative AI systems are typically
collected, controlled, and processed by external (i.e., non-native) actors
(Tapu & Fa’agau, 2022).
The inclusion of an indigenous perspective in discussions on AI ethics is
essential, as the ongoing lack of indigenous representation in multilevel
decision-making processes on these timely topics will inevitably lead to a
discriminatory data management system. This would conflict with the
decades-long efforts undertaken by international organizations, starting
from the United Nations, to guarantee the full enjoyment of rights by
indigenous peoples. As can be expected, the partiality of data drawn from
TKS without the participation of their holders has a detrimental effect on
the data itself. This poses the risk not only of generating distorted narratives
but also of perpetuating harm to the integrity of indigenous communities
that could potentially be irreversible regarding distinctive cultural identities.
The goal of ethical AI must therefore be to evolve by countering practices
of codifying and cataloging data that is conceived, collected, and anchored
in knowledge systems that are reinterpreted, or merely reported, from
secondary sources. This leads to undue cultural appropriation or even
reputational damage and the undermining of intergenerational and intra-
community transmission processes. AI's use and rapid growth increasingly
raise questions about how to implement AI systems while respecting
inclusivity and diversity. A failure to address these two aspects will
inevitably lead to digital redlining and discrimination, especially against the
most vulnerable groups in our societies, including indigenous peoples. This
will further entrench unfair systems, bias, and a lack of transparency.
Therefore, after proclaiming commitments to these goals, it is crucial not to
hesitate in finding solutions that contribute to the development of AI
systems that are participatory and inclusive of all the social components
involved, starting, for example, by strengthening the international framework
for the protection of human rights so that it can meet these new challenges.

The Rights of Indigenous Peoples and Their Key Role in Environmental Management: From Identification to Implementation
Indigenous communities generally live in highly vulnerable contexts, both
environmentally and socially, and this vulnerability is fully reflected within
the communities themselves, which are recognized as being among the
most disadvantaged groups in society, bearing a legacy of inequality and
exclusion (World Bank, 2023; United Nations Department of Economic and
Social Affairs, 2020). The perpetuation of this condition of marginalization
inevitably contributes to limiting the capabilities of indigenous peoples to
effectively participate in decision-making processes concerning their rights
and interests; among the reasons is their non-dominant social and
economic position (Human Rights Council, Expert Mechanism on the
Rights of Indigenous Peoples, 2011). Their condition is further exacerbated
by the fact that, as acknowledged in numerous resolutions adopted by the
International Telecommunication Union (ITU, 2018), a significant number
of these communities lack easy access to new technologies.1
There are over 476 million indigenous people living in 90 countries
around the world, across all continents, representing about 6% of the global
population, with more than 5,000 distinct groups and therefore diverse
cultures (UN data2). When referring to indigenous peoples and their rights,
it has to be considered that more than 5,000 groups share common
experiences, while also acknowledging the cultural and identity value that
such heterogeneity brings in terms of perspectives and values. This
diversity stems from the unique relationships each community has with its
lands and the specific environments in which it lives.
Indigenous lands, which make up about 20% of the Earth's territory,
contain 80% of the remaining biodiversity (Recio & Hestad, 2022, p. 1).
Despite this natural resource wealth, indigenous peoples make up at least
15% of the world's poorest population (UN data3). In the face of
challenging living conditions in territories where ecosystems and social
balances are increasingly compromised, “their diverse forms of knowledge,
which are deeply rooted in their relationship with the environment and
cultural cohesion, have allowed many of these communities to maintain a
sustainable use and management of natural resources, protect their
environment and strengthen their resilience, whilst facing new and complex
circumstances” (Magni, 2017, p. 438).
The crucial significance attributed to the relationship between indigenous
peoples, the environment, and protection of human rights – which gives rise
to a value system that underpins traditional knowledge and practices – is
clearly gleaned from the content of the definition of indigenous peoples
developed by José Martínez Cobo, Special Rapporteur of the United
Nations Sub-Commission on the Prevention of Discrimination and
Protection of Minorities and contained in his report titled “Study on the
Problem of Discrimination against Indigenous Populations” (Martínez
Cobo, 1987). By offering an expansive interpretation of the principles set
forth in the Universal Declaration of Human Rights (United Nations, 1948)
and considering the provisions of other binding and programmatic human
rights instruments (OHCHR4), Martínez Cobo formulated a noteworthy
definition of “Indigenous communities, peoples, and nations.” Though not
legally binding, this definition is still widely used as a working definition. It
highlights key characteristics of indigenous communities, summarized
below, that allow for their distinctive identification:

Historical continuity with pre-colonial societies that developed on their territories;
Non-dominant group in society;
Distinct social, economic, or political systems and distinct language,
culture, and beliefs;
Commitment to preserving, developing, and transmitting to future
generations their ancestral territories and ethnic identity, along with
their cultural patterns, social institutions, and legal systems;
Self-identification as indigenous (group consciousness).

The recognition of historical continuity with pre-colonial societies,
rooted in ancestral cultural systems, is fundamental to the determination and
protection of indigenous identities. This is intrinsically related to the right
to self-determination – as contained in Article 1 common to both the
International Covenant on Civil and Political Rights (ICCPR) and the
International Covenant on Economic, Social, and Cultural Rights (ICESCR)
(UN, 19665) – and to the right to self-identification, which Convention No.
169 “Indigenous and Tribal Peoples” of the International Labour
Organization (ILO, 1989) defines as a fundamental criterion (Article 1,
para. 2).
The value that indigenous peoples attribute to group consciousness is
intrinsically linked to the exercise of the right to internal self-determination.
The historical relationship between indigenous communities and their
ancestral lands naturally leads to forms of control and decision-making
related to development, natural resource use, and management and
conservation measures. These aspects are closely tied to the issue of legal
protection of indigenous property rights. This distinctive relationship, along
with the recognition of its significance for the affirmation of fundamental
rights, is also reflected in the spiritual, cultural, and social value that
indigenous peoples assign to their lands, territories, and resources, which is
crucial for their survival as communities (Daes, 2001).
The United Nations and other international and intergovernmental
organizations have de facto recognized the inseparability between ancestral
lands and cultural identity: indigenous peoples are the holders of unique
knowledge systems and practices for the sustainable management of their
lands, territories, and natural resources. In identifying their de jure status,
many national legal systems often struggle to recognize the inseparability
between the right to ownership over lands and resources, and the cultural
and spiritual value these lands and resources hold for the community.
Regional courts, particularly within the Inter-American system (Von
Bogdandy et al., 2024) and the African system (Mbaku, 2023), have
addressed this issue by offering broad interpretations of regional human
rights instruments, such as the American Convention on Human Rights (UN
Treaty Collection, 1969) and the African Charter on Human and Peoples’
Rights (African Union, 1981). This interpretation has led to progressive
jurisprudence, in particular within the Inter-American Human Rights
System, on indigenous peoples’ rights, emphasizing the urgency to formally
recognize the special relationship and interdependence between indigenous
peoples and lands, resources, and culture, specifically in relation to the
collective dimension of territorial property rights.6
ILO Convention No. 169, although it does not provide a definition of
indigenous peoples, is an extremely important legally binding instrument
(binding for the States Parties that have ratified it) in recognizing key
criteria for identifying the beneficiaries of this instrument based on the
principle of non-discrimination, including the self-identification criteria
together with traditional ways of life. It not only recognizes indigenous
peoples’ rights of ownership and possession over the lands
they traditionally inhabit (Article 14), but also ensures that “the rights of the
peoples concerned to the natural resources pertaining to their lands shall be
specially safeguarded,” including the right to participate in the use,
management, and conservation of these resources (Article 15, para. 1).
Furthermore, the recognition of the right to consultation and participation in
decision-making processes, particularly on matters that may affect their
lives (Articles 6, 17, and 22), as well as the requirement for prior
consultations in the case of programs involving the exploration and
exploitation of such resources (Article 15, para. 2), are a milestone of the
Convention.
All these rights must be read in close connection with Part VI of the
Convention, concerning “Education and Means of Communication”:
indigenous community members must have the opportunity to acquire
education on an equal footing with the rest of the national community
(Article 26), with educational programs developed and implemented in
cooperation with them, incorporating their histories, knowledge,
technologies, and value systems and aspirations (Article 27).
During the United Nations Conference on Environment and
Development (Earth Summit, Rio de Janeiro, 1992), indigenous peoples
were formally identified as key players in the sustainable management of
resources and in environmental and biodiversity conservation (“a vital role
in environmental management and development because of their knowledge
and traditional practices,” Principle 22, Rio Declaration7). The Action Plan
adopted, Agenda 21, highlighted how the empowerment process of
indigenous peoples, considered as a major group, must include specific
measures such as the “Recognition of their values, traditional knowledge
and resource management practices with a view to promoting
environmentally sound and sustainable development” (Chapter 26,
Recognizing and Strengthening the Role of Indigenous People and Their
Communities8).
States are therefore invited to adopt measures to support: 1) the capacity-
building of indigenous communities (Chapter 26.3, para. a(VII)) and 2) the
involvement of indigenous peoples “at the national and local levels in
resource management and conservation strategies and other relevant
programmes established to support and review sustainable development
strategies, such as those suggested in other programme areas of Agenda 21”
(Chapter 26.3, para. c).
The Convention on Biological Diversity (CBD, 1992) (Secretariat of the
Convention on Biological Diversity, 2011), adopted in Rio and ratified to
date by 196 countries,9 explicitly recognizes the role of indigenous
communities in the conservation and sustainable use of biological diversity.
The Convention is based on three main objectives: 1) Conservation of
biodiversity, which emphasizes in-situ conservation of natural habitats and
resources; 2) Sustainable use of biodiversity components, recognizing
biodiversity's intrinsic value and the global concern for its protection; 3)
Fair and equitable sharing of benefits arising from the use of genetic
resources. The definition of genetic resources is mainly based on the
wording contained in the CBD and interpretative documents: “genetic
resources means genetic material of actual or potential value” and “genetic
material means any material of plant, animal, microbial or other origin
containing functional units of heredity” (Article 2).
The definition of in-situ conservation as “conservation of ecosystems and
natural habitats and the maintenance and recovery of viable populations of
species in their natural surroundings and, in the case of domesticated or
cultivated species, in the surroundings where they have developed their
distinctive properties” (Article 2) indirectly refers also to indigenous
communities.
The articles of the CBD that specifically involve indigenous communities
are primarily four: Article 8(j) (In-situ Conservation), Article 10(c)
(Sustainable Use of Components of Biological Diversity), Article 17 para. 2
(Exchange of Information), and Article 18 para. 4 (Technical and Scientific
Cooperation).
Article 8(j) is pivotal for respecting and protecting indigenous traditional
knowledge, particularly visible in resource management practices. It states
that: “Each Contracting Party shall, as far as possible and as appropriate: (j)
Subject to its national legislation, respect, preserve and maintain
knowledge, innovations and practices of indigenous and local communities
embodying traditional lifestyles relevant for the conservation and
sustainable use of biological diversity and promote their wider application
with the approval and involvement of the holders of such knowledge,
innovations and practices and encourage the equitable sharing of the
benefits arising from the utilization of such knowledge, innovations and
practices.”
The content emphasizes the importance of the adoption by states of
coherent and relevant measures with the direct involvement of indigenous
communities (holders) in decision-making processes.
In the other three articles, the Contracting Parties commit “to protect and
encourage customary use of biological resources in accordance with
traditional cultural practices that are compatible with conservation or
sustainable use requirements” (Article 10 para. c); to facilitate the exchange
of information, including respect for traditional and indigenous knowledge
(Article 17 para. 2); and to promote scientific cooperation that includes
traditional and indigenous technologies (Article 18 para. 4).
The Nagoya Protocol to the CBD on Access and Benefit-sharing
(Secretariat of the Convention on Biological Diversity, 2011a), adopted in
2010 and ratified to date by 142 states,10 refers to the measures to be taken
to regulate access to traditional knowledge, held by indigenous and local
communities, when associated with genetic resources (WIPO, 2020, p. 18),
in order to strengthen the ability of such communities to benefit from such
knowledge and experience (particularly Articles 7, 10, and 16). A key role
is played not only by direct involvement in the processes but also by the
exercise of prior and informed consent by owners, bearers, or custodians of
TKS.
A further relevant document, adopted in 2004 in the framework of
activities supporting the full implementation of the CBD, is the Akwé: Kon
Guidelines, drawn up within the Convention framework in consultation
with indigenous representatives.11 These Guidelines provide a framework
for codifying the positive cultural, environmental, and social impact that
can result from the effective application of Article 8(j) of the CBD. They
also propose adequate medium-to-long-term procedures and concerted
methodologies that can play a role in the participation of indigenous
communities in pursuing the goals of the Convention, alongside
governments and other decision-makers. Including indigenous peoples and
their knowledge heritage in the mechanisms for implementing the
Convention, while underestimating the cultural and social implications of
their traditional technologies, would mean using an irreplaceable and fragile
heritage merely for functional purposes, to the detriment of their identity
and in violation of their fundamental rights.
The United Nations Declaration on the Rights of Indigenous Peoples
(UNDRIP), adopted by the UN General Assembly on September 13, 2007,
with an affirmative vote of 144 states,12 is the result of more than 20 years
of negotiations involving also indigenous representatives around the world.
Although not legally binding, it represents an extraordinarily important
instrument that reaffirms the international commitment to respecting and
protecting indigenous communities and strengthens the positive obligations
on states.

Traditional Knowledge between Culture, Intangibility, and Intellectual Property
The recognition of the transformative impact of culture for achieving
sustainable development (MONDIACULT, 2022) is essential to ensure that
development is inclusive and equitable, taking into account all the
stakeholders involved. The defense of cultural diversity, as recognized in
the UNESCO Universal Declaration on Cultural Diversity adopted in 2001
(UNESCO, 2002, pp. 61–64), is an ethical imperative, inseparable from
respect for human dignity and rights (Article 4). This principle is
particularly relevant when adopting measures to protect the cultural heritage
of indigenous communities, whose ways of life are deeply connected to the
lands where they live, especially in areas rich in biodiversity and where
indigenous communities play a key role in the management of natural
resources.
In the UNESCO Convention concerning the Protection of the World
Cultural and Natural Heritage (1972), there is no direct reference to
indigenous communities. However, the Operational Guidelines for the
Implementation of the Convention acknowledge the role of “local
communities” as stakeholders and as rights-holders, thus compensating for
the absence of a direct reference in the legal instrument. Furthermore, the
Guidelines refer to the close interdependence between biological and
cultural diversity, which cannot be addressed without the effective inclusion
of these communities in conservation processes (UNESCO World Heritage Centre,
2012). In the preamble to the UNESCO Convention for the Safeguarding of
the Intangible Cultural Heritage (2003), indigenous communities are
recognized as having a particular role in the production, safeguarding,
maintenance, and recreation of intangible cultural heritage. Notably, Article
2 of the Convention provides a broad definition of “intangible cultural
heritage” stating that it “means the practices, representations, expressions,
knowledge, skills – as well as the instruments, objects, artefacts and cultural
spaces associated therewith – that communities, groups and, in some cases,
individuals recognize as part of their cultural heritage (…)” (UNESCO,
2022a, 3-22). This definition includes several characteristics intrinsic to
indigenous communities, such as the oral transmission of knowledge across
generations and the enhancement of a sense of identity and continuity.
The UNESCO Convention on the Protection and Promotion of the
Diversity of Cultural Expressions (2005) also recognizes the importance of
traditional knowledge as a source of both intangible and tangible wealth,
highlighting the need for its protection and promotion (Preamble). It
highlights the need for indigenous communities to be free to create,
disseminate and access their traditional cultural expressions to benefit from
them for their development (Preamble and Article 7) (UNESCO, 2023a).
Cultural diversity is based precisely on the intangibility of invaluable
cultural heritage which, even in the case of environmental management,
belongs to those who hold it and who transmit it in native languages,
typically through oral tradition.
The UNESCO Recommendation concerning the Promotion and Use of
Multilingualism and Universal Access to Cyberspace, adopted in October
2003, marked an important step in recognizing the urgency of addressing
the crucial issue of the survival of languages, particularly the most
vulnerable, in cyberspace. The Recommendation calls on states to adopt
appropriate measures to this end, promoting “best practices relating to
multilingualism and multilingual resources and applications, including
innovations in language computerization” (principle 5) (UNESCO, 2004,
71).
The period from 2022 to 2032 has been proclaimed the Decade of
Indigenous Languages (UN General Assembly Resolution, 202013) with the
aim of drawing attention to the critical state of many endangered indigenous
languages. The Global Action Plan, adopted for the Decade's
implementation, includes measures to enhance digital empowerment and
develop language technologies (Output no. 3) recognizing that indigenous
languages should not be seen merely as communication tools but as
extensive, complex systems of knowledge (UNDP, 202114).
The preservation of TKS is clearly connected to the promotion of
intangible cultural heritage. Recognizing traditional knowledge as a holistic
and collective heritage means emphasizing its custodial and
intergenerational character (CBD COP 14 Decision 14/13, Glossary,
201815). However, there is no single, binding definition of “traditional
knowledge” at the international level. The World Intellectual Property
Organization (WIPO) defines it as: “a broad description of subject matter,
generally includes the intellectual and intangible cultural heritage, practices
and knowledge systems of traditional communities, including indigenous
and local communities (traditional knowledge in a general sense or lato
sensu) (…) In international debate, traditional knowledge in the narrow
sense refers to knowledge as such, in particular the knowledge resulting
from intellectual activity in a traditional context, and includes know-how,
practices, skills, and innovations.”16
The definition provided by the Secretariat of the Convention on
Biological Diversity describes traditional knowledge as “knowledge,
innovations and practices of indigenous and local communities around the
world. Developed from experience gained over the centuries and adapted to
the local culture and environment, traditional knowledge is transmitted
orally from generation to generation (…).”17
This multifaceted characterization of the right to self-identification, as
previously analyzed, is enriched by another fundamental and transversal
element: the right to traditional knowledge that serves as a tool for self-
preservation, safeguarding cultural identities and the irrevocable ancestral
connection between indigenous peoples and their lands and territories.
Protecting this right, and its contents, can be seen as a way of preserving
the identity integrity of indigenous communities, provided that it guarantees
them control over, and governance of, their knowledge.
The UN Declaration on the Rights of Indigenous Peoples moves in this
direction. Article 31 emphasizes that indigenous peoples have: “the right to
maintain, control, protect and develop their intellectual property over such
cultural heritage, traditional knowledge, and traditional cultural
expressions.”
Indirect references to “indigenous intellectual property” are found in
many other articles of the Declaration. These include: the right of
indigenous peoples to “practise and revitalize their cultural traditions and
customs” (Article 11); the right to manifest, practice, develop, and teach
their spiritual and religious traditions, customs, and ceremonies (Article
12); the right to revitalize, use, develop, and transmit to future generations
their histories, languages, oral traditions, philosophies, writing systems, and
literatures (Article 13); the right to the dignity and diversity of their
cultures, traditions, histories, and aspirations (Article 15). These provisions
must be read in conjunction with Articles 18 and 19, which address
indigenous representation, free, prior, and informed consent (FPIC), and
participation in decision-making.
The protection of indigenous intellectual property related to TKS is
central to the development of frameworks based on the sustainable and
ethical use of AI. Regarding indigenous intellectual property, the current
intellectual property law system cannot be fully applied to traditional
knowledge because these forms of knowledge do not meet the standard
criteria set by that system, such as originality (even though these
are ancient knowledge systems), fixation, and identifiable authors for
copyright protection (WIPO, 2023). For indigenous peoples, traditional
knowledge is a collective and community asset rather than an individual
one. Additionally, it often does not fulfill the novelty and inventive step
requirements for patent protection, as it is practiced and transmitted
primarily for cultural and identity purposes rather than economic ones.
The issue is precisely about controlling the intangibility of information
(Karjala & Paterson, 2017) transmitted orally. This makes it complex to
protect indigenous property, including intellectual property, and cultural
heritage through a formalized system that would, among other things,
provide a form of permanent ownership capable of counteracting
misappropriation (such as use and dissemination by third parties, including
through the digitalization of data belonging to indigenous peoples collected
in the field without their consent and/or supervision).
Proposals have been made to modify the current intellectual property
framework to better protect indigenous intellectual property, including the
adoption of ad hoc tools or alternative regimes that combine various
approaches (Anderson, 2010, Chapter III). The possible establishment of
sui generis systems to complement the current intellectual property
protection framework underlies numerous negotiations aimed at creating
supportive frameworks (WIPO, 2022).
According to the ad hoc Open-ended Inter-sessional Working Group on
Article 8(j), a sui generis system should: (a) recognize and register, as
appropriate, the ownership of traditional knowledge by the indigenous and
local community that is the holder of said knowledge; (b) control access to,
disclosure and use of traditional knowledge; (c) exercise the right to require
free prior informed consent for any use of traditional knowledge; (d)
exclude improper use by third parties; (e) ensure that they derive fair and
equitable benefits from the wider application of their knowledge; and (f)
generate protection mechanisms at the international and national
government levels, and within relevant customary law (CBD COP 7,
200718).
The WIPO Intergovernmental Committee on Intellectual Property and
Genetic Resources, Traditional Knowledge and Folklore (WIPO, 2023a)
has been negotiating for some time to make supplementary protection
systems effective. Within the Committee, a panel of representatives from
indigenous and local communities has been established to gather
informational materials, including proposals from the directly affected
parties, on how to proceed with effective protection of traditional
knowledge associated with biodiversity. It is clear that the strengthening of
protection systems can only be achieved if knowledge holders are
consistently empowered to participate meaningfully in decision-making
processes. This involves promoting capacity-building initiatives and
respecting the right to FPIC. Ensuring that indigenous communities have a
role in shaping policies and protections is essential for the effectiveness and
fairness of these systems.
In May 2024, after decades of negotiations, WIPO Member States
adopted the historic “Treaty on Intellectual Property, Genetic Resources,
and Associated Traditional Knowledge” (Diplomatic Conference to
Conclude an International Legal Instrument Relating to Intellectual
Property, Genetic Resources and Traditional Knowledge Associated with
Genetic Resources, May 24, 2024) (WIPO, 2024). This is the first binding
instrument addressing the interconnection between these three aspects, with
explicit reference to indigenous communities. The aim of the Treaty, once it
enters into force, is to establish a new international legal obligation for
patent applicants whose inventions are based on genetic resources and/or
associated traditional knowledge. They will be required to disclose the
origin or source of these resources and knowledge, as well as the
communities that provided the knowledge. One of the key objectives is to
prevent patents from being erroneously granted for inventions that are not
new or inventive (Article 1). The Treaty specifies that it will not have
retroactive effect (Article 4), meaning that applications filed before its entry
into force in a contracting party's territory will not be affected. Its content
emphasizes the need to balance various interests surrounding the protection
of intellectual property. For this very reason, to ensure the effectiveness of
the Treaty in favorably and unequivocally protecting TKS, it would be
advisable to adopt an action plan aimed at supporting full compliance with
its provisions by the WIPO members that become parties to it.
In 2019, the UN Permanent Forum on Indigenous Issues (UNPFII) – a
high-level advisory body established in 2000 with the mandate to deal with
indigenous issues “related to economic and social development, culture, the
environment, education, health and human rights” (UN Economic and
Social Council Resolution 2000/2219) – dedicated its eighteenth session to
the theme “Traditional knowledge: generation, transmission and protection”
to explore possible measures to address the challenges posed by complex
societies. In the final documents (UNPFII, 2019), however, the impact of
new technologies on TKS, data sovereignty, and systems of collection,
management, and archiving was not explored in depth.
What has been analyzed so far is a fundamental basis on which to build
an analysis of the impact of AI on TKS and indigenous data.
Generative AI can play a crucial role in supporting the preservation of
indigenous cultural heritage and TKS. Digitalization and recording of
indigenous data could aid in the transmission and revitalization of
traditional knowledge, fostering a sense of identity and belonging,
especially among younger generations, and establishing systems for
equitable benefit-sharing from such knowledge. However, the use of new
digital technologies continues to present significant challenges, including
the high risk of misuse, misappropriation, and dissemination of information
without full participation and awareness from the knowledge holders.
With the advancement of generative AI, the value of open access to
knowledge as a public good becomes more prominent. The interpretation
and implementation of the international instruments adopted over the past
decades for the protection of indigenous peoples enter a new path of
affirmation of fundamental rights while addressing new implications related
to self-determination, self-identification, and governance.

Sovereignty and Governance: How to Cope with the Protection of Indigenous Data
The affirmation of generative AI systems that are sustainable over time,
more inclusive, and ethically and culturally safe also clearly has an impact
on measures to protect the TKS of indigenous peoples. This means
strengthening the identification of indigenous data (defining it univocally
while acknowledging its intrinsic heterogeneity) and the sovereignty
exercised over this data.
It must be considered that much of this data is not in the possession of
indigenous peoples (Carroll et al., 2020, 3), a situation that can inevitably
lead to its distortion and misuse. As widely promoted at different levels, it
is therefore essential to emphasize the participatory inclusion of
stakeholders (and rights-holders) in decision-making processes. This also
applies to identifying appropriate frameworks for accessing and using this
data, without neglecting the importance of the benefit-sharing that arises
from it and ensuring that it takes place without discrimination.
Data used by AI consists of quantitative and qualitative information
drawn from publicly available sources or specific datasets, collected
through observations, surveys, and reporting. For this data to be
transformed into information, it must possess some characteristics,
including accuracy and reliability. However, these characteristics do not
automatically ensure that the data, upon which models learn and create new
content, is managed in a way that respects intellectual property rights, in
particular those of more vulnerable groups. Addressing these challenges
requires a careful and thoughtful approach that includes indigenous
perspectives and rights at every stage of the process.
Analyzing the concept of sovereignty in this context is not merely a
theoretical exercise: the increasing digitalization of content and data raises
critical questions about the intersection of ownership, indigenous data, and
AI. For indigenous peoples, the exercise of sovereignty over traditional
lands – rooted not just in geographical configuration – is inseparable from
the principle of territorial continuity that defines their identities. This
principle represents the essential connection between community, collective
ancestral ties to the lands and natural resources, and traditional knowledge.
Culture itself embodies a bond of territorial belonging for the community,
and the demand for recognition of sovereignty as a community must be
understood as a request to respect cultural rights and traditional knowledge
that are unrepeatable outside their territorial context. For all the above-
mentioned reasons, indigenous sovereignty, anchored in the internal
dimension of self-determination, which includes the rights of all peoples to
pursue freely their economic, social, and cultural development without
outside interference, can coexist with the classical notion of state
sovereignty.20
This coexistence emphasizes the importance of self-determination for all
peoples and suggests that recognizing indigenous sovereignty is not only a
matter of legal acknowledgment but is essential to support the preservation
of cultural identities and TKS.
As digitalization progresses, it becomes increasingly vital to integrate
these perspectives into the frameworks governing the use of AI and data
management, particularly where human rights protection measures are concerned.
Indigenous data sovereignty (ID-SOV), a relatively recent concept,
defines the right of indigenous peoples to own, control, access, and possess
data that derive from them, and which refer to their members, knowledge
systems, customs or lands and territories (Kukutai & Taylor, 2016). ID-
SOV also refers to the right of indigenous communities to exercise their
rights and self-determination, as outlined in Article 1 of the two 1966
International Covenants (ICCPR and ICESCR), by governing (data
governance) and controlling all data – whether generated internally or
externally – relating to their culture, identity, and customs. This includes
involvement as partners in decision-making processes and ensuring prior
and informed consent. The exercise of the right to self-determination is
intrinsically linked to the right to self-government in matters relating to
their internal and local affairs (United Nations Declaration on the Rights of
Indigenous Peoples, Article 4). This relationship is guided by their unique
cultures and TKS, and economic and social conditions, particularly in the
control and management of their ancestral lands, territories, and resources.
Nonetheless, it is important to be aware that the indigenous data
ecosystem is made up of a variety of actors: such data is generated not only
by indigenous communities and organizations but also by governments,
public sector entities, international governmental and non-governmental
organizations, research institutions, and commercial entities (Kukutai &
Taylor, 2016).
The Asia Indigenous Peoples Pact (AIPP), a regional organization
established in 1992 by indigenous peoples’ movements, has highlighted
critical aspects regarding the inherent flaws of AI, particularly algorithmic
errors and biased data, which have extremely negative impacts on
indigenous communities. These issues include: exclusion and difficulties in
accessing AI platforms due to linguistic barriers, which not only hinder
access to information but also further jeopardize cultural heritage and
knowledge systems; negative narratives due to misinformation processes
that can exacerbate discrimination against indigenous peoples; invisibility
as a lack of recognition by the state of the right to self-determination,
leading to challenges related to data sovereignty, biopiracy, and the patent
exploitation of traditional knowledge and practices (AIPP, 202321).
Open data, as a source of knowledge for generative AI, can offer
opportunities, but it may also perpetuate discriminatory
practices, especially if the underlying data infrastructures are based on
cultural assumptions that lead to systematic misrepresentation of indigenous
peoples (Rainie et al., 2019). This increases the risk of misuse,
appropriation, manipulation, and harmful storage of data, jeopardizing the
fundamental rights of indigenous peoples.
Therefore, the approach should amplify the multilayered actions taken by
indigenous communities at different levels. The aim is to establish a
coherent pathway that unites the promotion and protection of indigenous
peoples’ rights with their full participation in data governance processes,
ensuring that participatory inclusion in the most important initiatives for the
definition of digitalization and data use policies does not remain an abstract
measure but translates into tangible outcomes.
The CARE Principles for Indigenous Data Governance, published by the
Global Indigenous Data Alliance in 2019, complement other data-oriented
standards (such as the FAIR Guiding Principles for Scientific Data
Management and Stewardship; Wilkinson, et al., 2016). These principles
emphasize the role of data in promoting innovation, governance, and
indigenous self-determination. Developed through consultations with
indigenous representatives, experts, NGOs, and governments, the CARE
Principles focus on several key aspects (Carroll et al., 2020): 1) Collective
Benefit: ensuring inclusive and equitable benefit for indigenous
communities; 2) Authority to Control: recognizing interests and rights
related to traditional knowledge and indigenous data governance; 3)
Responsibility: those working with indigenous data must share how the data
is used, with clear evidence of these efforts; and 4) Ethics: indigenous rights
should be considered throughout the data lifecycle to minimize harm and
maximize benefits, involving ethical processes that recognize rights and
include indigenous representatives in governance for sustainable data use.
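To make these principles more concrete from a data-engineering perspective, the minimal Python sketch below shows one hypothetical way a data catalog could attach CARE-oriented governance metadata to an indigenous dataset and refuse uses that fall outside what the community has authorized. The field names, the example record, and the access rule are illustrative assumptions, not part of the CARE specification or of any existing system.

# Illustrative sketch only: hypothetical CARE-oriented metadata for a dataset
# catalog entry, with a simple access check. All field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class CareMetadata:
    collective_benefit: str        # how the community benefits from use of the data
    controlling_authority: str     # community body exercising authority to control
    responsibility_contact: str    # who reports back on how the data is used
    ethics_review: bool            # rights considered across the data lifecycle
    fpic_granted: bool             # free, prior and informed consent on record
    permitted_uses: list = field(default_factory=list)

def may_access(record: CareMetadata, requested_use: str) -> bool:
    """Grant access only if consent and an ethics review exist and the
    requested use is one the community has explicitly permitted."""
    return (record.fpic_granted
            and record.ethics_review
            and requested_use in record.permitted_uses)

example = CareMetadata(
    collective_benefit="language revitalization materials for community schools",
    controlling_authority="community data governance council (hypothetical)",
    responsibility_contact="data-steward@example.org",
    ethics_review=True,
    fpic_granted=True,
    permitted_uses=["education", "archiving"],
)

print(may_access(example, "education"))       # True
print(may_access(example, "model_training"))  # False: use not permitted by the community

Even such a simple gate makes the "Authority to Control" and "Ethics" principles operational at the point where data is actually requested, rather than leaving them as declarations of intent.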
In 2020, the Indigenous Protocol and Artificial Intelligence (A.I.)
Working Group developed the position paper “Indigenous Protocol and
Artificial Intelligence” (Lewis, 2020). These Guidelines emphasize the
importance of inter-community diversity, which AI systems must consider,
rooted in the principle of locality: to create ethically sustainable AI systems
it is necessary to consider that the knowledge systems of indigenous
peoples are specifically rooted in the lands, territories and resources in
which they live. Among the key principles from the Guidelines: (1)
Relationality and Reciprocity: to understand the world and transfer it into
AI systems, it is necessary to enhance relational interdependencies between
humans and non-humans; (2) Responsibility, Relevance, and Accountability:
AI systems developed for indigenous communities must be accountable to
those communities; (3) Development of Governance Guidelines from
Indigenous Protocols: these should integrate existing guidelines; (4)
Recognition of the Cultural Nature of Computational Technology:
acknowledge that “Every piece of technology is an expression of cultural
and social frameworks for understanding and engaging with the world”
(Lewis, 2020, 22); (5) Application of Ethical Design to the Extended Stack:
incorporating ethical considerations across all levels of technology
development; and (6) Respect and Support Data Sovereignty: affirming the
right of indigenous communities to maintain control over how their data is
requested, collected, and analyzed, including a reevaluation of ownership
and property concepts.
The goal of consultations, participatory processes, and cooperation is also
to strengthen the right of indigenous peoples to FPIC. Participation in
decision-making must be based on good faith from all the parties involved.
As stated by the UN Expert Mechanism on the Rights of Indigenous
Peoples (EMRIP),22 the right to FPIC “is a manifestation of indigenous
peoples’ right to self-determine their political, social, economic and cultural
priorities; it constitutes three interrelated and cumulative rights of
indigenous peoples: the right to be consulted; the right to participate; and
the right to their lands, territories and resources” (EMRIP Study “Free,
Prior and Informed Consent: A Human Rights-Based Approach,” 2018,
para. 1423).
This is a key right for indigenous peoples, also included in the UN
Declaration on the Rights of Indigenous Peoples (Article 19). It must be
interpreted literally as “consent” given voluntarily and without coercion,
sought well in advance with information clearly shared in all its aspects
(including potential risks), to enable a collective decision by the rights-
holders. This right is generally associated with issues related to the
environment and the management of lands, territories, and resources.
In 2016, under the framework of the CBD, the Mo’otz Kuxtal Voluntary
Guidelines were adopted. This document provides States Parties, other
governments, organizations, and indigenous communities with a common
reference tool for building equitable partnerships and promoting positive
engagement between potential users and holders of traditional knowledge.
In order to prevent illegal appropriation of traditional knowledge, the
Guidelines also support the development of mechanisms and initiatives to
ensure FPIC, as well as the approval and involvement of indigenous
peoples in accessing their knowledge, practices, and innovations, and in the
sharing of derived benefits (Secretariat of the Convention on Biological
Diversity, 2019).
An aspect worth highlighting in concluding this section, as it is
closely linked to the capacity to act in a context of ethical and shared
values, is that in many cases the failure to adopt indigenous approaches
to data collection and analysis is due to the digital divide. This refers not
only to the lack of access to digital tools but also to a lack of digital skills
(UNESCO, 2023b, p. 14). This combination inevitably leads to the
invisibility of unconnected individuals and, consequently, to the
marginalization of their positions, perspectives, and awareness, which
generates algorithmic biases. Economic, social, and digital empowerment
can only occur by closing these gaps, thus facilitating inclusive and
ethically sustainable processes. This is a cross-cutting issue of the 2030
Agenda for the achievement of the Sustainable Development Goals (SDGs),
where “digital can play three crucial roles: as a core channel, and platform,
for the delivery of vital services; as a tool to deliver data and insights to
achieve each of the SDGs, and as a catalyst for wider progress” (ITU &
UNDP, 2023, p. 5).
A further mention should be made of multi-stakeholder action,
which complements the points raised and is essential for overcoming digital
divides through a human-centered approach. In the 2021 Report of the
UN Secretary-General, “Our Common Agenda” (UN, 2021), a call to
accelerate the implementation of already adopted tools, including the SDGs,
and to improve digital cooperation took center stage.
Digital Compact (GDC) was presented, with key goals such as protecting
data and introducing accountability criteria for discrimination and
misleading content. The GDC, as adopted on September 22, 2024, during
the Summit of the Future in New York,24 serves as a global reference
framework, recognizing the radical transformation of our world by digital
technology and AI. Its aim is to establish the foundations for shared global
cooperation, focusing on essential issues such as ensuring the full
enjoyment of human rights and strengthening international data governance
and cooperation to address and close both inter- and intra-state digital
divides, while enhancing the potential of digital technologies by mitigating
the risks that may arise. Among the basic general objectives identified are
inclusive participation for all states and stakeholders, closing digital
divides, and fostering responsible, equitable, and interoperable data
governance approaches. Regarding the elimination of divides and the
achievement of the SDGs, the Global Compact emphasizes digital literacy
and the enhancement of digital skills through targeted and personalized
capacity-building efforts for vulnerable groups, including indigenous
peoples, with a “meaningful engagement in the design and implementation
of programmes” (GDC, point 13.c).

Conclusion. How to Strengthen and Integrate the Indigenous Approach at International Level
Traditional knowledge represents the intangible heritage of biodiversity.
Recognizing indigenous practices and innovations as essential tools for
achieving environmental sustainability enhances the protection and
promotion of indigenous skills, benefiting all of humanity. This can only be
accomplished through actions that ensure meaningful involvement of
indigenous peoples in decision-making processes. Additionally, it fosters a
tangible defense of the fundamental rights of communities whose identity
and survival are rooted in the connection between the materiality of their
living environments and the immateriality of the knowledge they possess.
Every functional consideration pertaining to data use, open data sources,
and all the major facets of the use of generative AI should be guided by an
ethical approach centered around the defense of fundamental rights. Data
disaggregation must be seen as a crucial component for a rights-based
approach to the data used to train AI systems and to prevent bias, making
visible not only potential inequalities and discrimination but also the
stakeholders themselves. In light of measures aimed at pursuing the SDGs
also in the digital realm, the importance of data collection and
disaggregation is evident in order to create an inclusive, open, safe, and
secure digital space that respects privacy and data protection. This allows
the application of a human rights–based approach to data, in order to make
visible the individuals to whom these measures are directed and to
coherently assess the status of goal implementation (OHCHR, 2018).
Disaggregated data can reveal the barriers faced by vulnerable
communities, for example in participatory inclusion. This also applies to
monitoring the implementation of indigenous peoples’ rights and to
strengthening the ethical use of new technologies, including generative AI.
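As a purely illustrative sketch of why disaggregation matters for AI systems, the short Python example below uses invented numbers to show how an aggregate accuracy figure can look acceptable while hiding a much worse outcome for a smaller group; only the disaggregated view makes the disparity, and therefore the affected stakeholders, visible.

# Hypothetical illustration with invented data: an aggregate metric can hide
# group-level disparities that only disaggregation reveals.
from collections import defaultdict

# (group, prediction_correct) pairs from a hypothetical model evaluation
results = [("group_a", True)] * 90 + [("group_a", False)] * 10 \
        + [("group_b", True)] * 12 + [("group_b", False)] * 8

overall = sum(correct for _, correct in results) / len(results)
print(f"aggregate accuracy: {overall:.2f}")  # 0.85: looks acceptable on its own

# Disaggregate by group to see whom the model actually serves
by_group = defaultdict(list)
for group, correct in results:
    by_group[group].append(correct)

for group, outcomes in sorted(by_group.items()):
    accuracy = sum(outcomes) / len(outcomes)
    print(f"{group}: accuracy {accuracy:.2f} (n={len(outcomes)})")
# group_a: 0.90 (n=100); group_b: 0.60 (n=20) -- the smaller group fares far worse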
UNESCO, which has provided a clear regulatory and programmatic
framework to support member states in protecting indigenous knowledge
and cultural expressions, has worked on developing guidance tools in
response to the challenges posed by AI systems. In addition to those already
mentioned, these include the Recommendation on the Ethics of AI, the first
global standard on the subject, adopted in November 2021 by 193 states
(UNESCO, 2022). With regard to indigenous peoples, the Recommendation
highlights the need for “multi-stakeholder and adaptive governance and
collaboration,” enabling stakeholder participation throughout the entire AI
system lifecycle, and specifically respecting indigenous peoples’ right to
self-governance over their own data (Principles, para. 47). It also recognizes
the responsibility of states to ensure that harms caused by AI systems are
investigated and corrected by implementing reparative mechanisms. These
mechanisms should also involve public and private sector companies and
aim to ensure that human rights and the rule of law are upheld, also in the
digital world (Policy Area 2: Ethical Governance and Stewardship, para.
55).
As for the role of indigenous representatives in demanding international
negotiations on development, environment, and sustainability, they have
been involved since the 1990s, as key stakeholders, in the global consultation
processes that led, among other outcomes, to the adoption of the
2030 Agenda. Goal 17 of the Agenda specifically emphasizes the
importance of strengthening multilateral partnerships to mobilize and share
knowledge, expertise, technology, and resources in order to support the
achievement of common SDGs (UN General Assembly, 2015, pp. 26–27).
However, given the complexity introduced by new technologies, which
confront the whole of humanity with daily challenges, it is essential to
strengthen and redirect such measures, particularly supporting the effective
participation of the most vulnerable groups.
There are several international initiatives aimed at creating ongoing
dialogue on current issues and strengthening indigenous peoples’
engagement on specific topics. Among these, within the CBD framework is
the International Indigenous Forum on Biodiversity (IIFB25), established in
1996. The IIFB brings together indigenous representatives, indigenous non-
governmental organizations, activists, and scholars to formulate strategies
and guidance to support the implementation of the CBD, with a focus on
Article 8(j). Similarly, under the UN Framework Convention on Climate
Change (UNFCCC, 1992) and for its implementation, the Local
Communities and Indigenous Peoples Platform (LCIPP26) was established
by the Conference of the Parties to the UNFCCC at COP21 in 2015. This
platform provides an inclusive space for exchanging experiences and good
practices, bringing together people along with their knowledge systems,
technologies and practices to collaboratively address and respond to climate
change (Decision 1/CP.21, para. 13527).
Since its establishment, the UNPFII has enhanced its role within the
United Nations as a forum for building dialogue between indigenous
representatives, civil society, and state representatives.
These are truly significant initiatives to continue the process of
strengthening the capacity-building of indigenous peoples and their active
representation in international fora. A different perspective emerges when
indigenous peoples are seen as co-partners in decision-making processes,
rather than only as beneficiaries of governmental or international
organizations’ actions.
However, it may not be enough simply to involve indigenous
communities in these processes through the creation of targeted forums;
integrated and co-developed solutions should also be
strengthened. The focus should be on fostering an ever-
greater intersection between indigenous diplomacies, with their deep and
ancestral roots, and the diplomatic actions undertaken by states. The United
Nations and other multilateral organizations in regional contexts certainly
represent ideal spaces to strengthen this intersection.
Within the UNPFII activities, for example, some UN Member States,
actively engaged and particularly sensitive to issues relating to the
protection of indigenous rights, have included indigenous representatives in
official delegations participating in the Forum. Encouraging an increasingly
widespread participation of indigenous representatives within official state
delegations would significantly strengthen the capacity of indigenous
peoples to effectively integrate and reconcile their own priorities with those
identified by the states where they live. This goal could be achieved without
compromising the value of their identities, fostering and acting in co-
partnership, and revitalizing inter- and intra-state collaboration on issues
to which different actors are called upon to contribute. Such inclusion could help
alleviate the sense of otherness that may persist in dialogue aimed at
achieving common goals, and it represents a crucial aspect in consolidating
Indigenous Diplomacy in the contemporary world. Moreover, meaningful
engagement of indigenous peoples in decision-making processes, as
previously mentioned, must be built on relationships based on key,
inalienable principles such as mutual understanding and respect, trust, and
credibility. By adhering to these principles, the crucial role played by
indigenous communities can be effectively recognized as an integral part of
the multilevel paths undertaken to affirm fundamental ethical values that
must not be set aside in the face of the challenges posed by digital
technology and AI.

Notes
1. ITU website, Digital Inclusion of Indigenous Peoples,
https://www.itu.int/en/ITU-D/Digital-Inclusion/Indigenous-
Peoples/Pages/default.aspx
2. The United Nations website delves into the topic of the protection of
indigenous peoples. In particular on this data, see
https://www.un.org/en/fight-racism/vulnerable-groups/indigenous-
peoples
3. UN website, Protecting the Rights of Indigenous Peoples in Voluntary
Isolation and Initial Contact,
https://www.un.org/en/observances/indigenous-day
4. OHCHR website, The Core International Human Rights Instruments
and their monitoring bodies, https://www.ohchr.org/en/core-
international-human-rights-instruments-and-their-monitoring-bodies
5. Covenants adopted and opened for signature, ratification and accession
by General Assembly resolution 2200A (XXI) of December 16, 1966.
Texts: https://www.ohchr.org/en/instruments-listings
6. Inter-American Commission on Human Rights (IACHR), Report No.
40/04, Case 12.053, Maya Indigenous Communities of the Toledo
District (Belize), October 12, 2004, para. 114. I/A Court H.R., Case of
the Mayagna (Sumo) Awas Tingni Community v. Nicaragua. Merits,
Reparations and Costs. Judgment of January 31, 2001. Series C No.
79.
7. Report of the United Nations Conference on Environment and
Development, Declaration GA A/CONF.151/26 (Vol. I), August 1992,
DSD (Division for Sustainable Development)/UNDESA (UN
Department of Economic and Social Affairs).
8. Report of the United Nations Conference on Environment and
Development, Declaration GA A/CONF.151/26/Rev.I (Vol. I, II, III),
https://sustainabledevelopment.un.org/content/documents/Agenda21.p
df
9. UN Treaty Collection website, Convention on Biological Diversity
(1992). List of Contracting Parties,
https://treaties.un.org/pages/ViewDetails.aspx?
src=TREATY&mtdsg_no=XXVII-8&chapter=27
10. UN Treaty Collection website, Nagoya Protocol on Access to Genetic
Resources (2010). List of Contracting Parties,
https://treaties.un.org/pages/ViewDetails.aspx?
src=TREATY&mtdsg_no=XXVII-8-b&chapter=27&clang=_en
11. DEC. VII/16 F Voluntary guidelines for the conduct of cultural,
environmental and social impact assessment regarding developments
proposed to take place on, or which are likely to impact on, sacred
sites and on lands and waters traditionally occupied or used by
Indigenous and local communities (CBD, COP 7, 2004). Akwé: Kon
Guidelines. https://www.cbd.int/doc/publications/akwe-brochure-
en.pdf
12. United Nations Res. A/RES/61/295,
https://www.ohchr.org/sites/default/files/Documents/Issues/IPeoples/U
NDRIPManualForNHRIs.pdf
13. Resolution adopted by the General Assembly on December 18, 2019,
A/RES/74/135.
14. See UNDP website, 10 things to know about Indigenous peoples.
https://stories.undp.org/10-things-we-all-should-know-about-
indigenous-people
15. Decision adopted by the conference of the parties to the convention on
biological diversity 14/13. Glossary of relevant key terms and concepts
within the context of Article 8(j) and related provisions,
CBD/COP/DEC/14/13 November 30, 2018, p. 6.
https://www.cbd.int/doc/decisions/cop-14/cop-14-dec-13-en.pdf
16. See WIPO website, Glossary,
https://www.wipo.int/tk/en/resources/glossary.html#49
17. Secretariat of the Convention on Biological Diversity. Definition of
traditional knowledge. https://www.cbd.int/traditional/intro.shtml
18. CDB, Ad hoc Open-ended Inter-sessional Working Group on Article
8(j), Development of elements of sui generis system for the protection
of traditional knowledge, innovations and practices to identify priority
elements. UNEP/CBD/WG8J/5/6, 2007 (para. 13).
https://www.cbd.int/doc/meetings/tk/wg8j-05/official/wg8j-05-06-
en.pdf
19. Text of the resolution, UN website:
https://www.un.org/esa/socdev/unpfii/documents/about-us/E-RES-
2000-22.pdf
20. On the principle of sovereignty, see Samantha Besson, Sovereignty
(April 2011). Max Planck Encyclopedias of International Law, Oxford
Public International Law (https://opil.ouplaw.com).
21. Asia Indigenous Peoples Pact (AIPP), https://www.forus-
international.org/en/custom-page-detail/100091-indigenous-peoples-
ai-is-inherently-ridden-with-algorithm-bias-and-poisoned-data
22. A mechanism established by the Human Rights Council in 2007
(Resolution 6/36) with the aim to assist UN Member States in
achieving the goals of the United Nations Declaration on the Rights of
Indigenous Peoples. See: https://www.ohchr.org/en/hrc-
subsidiaries/expert-mechanism-on-indigenous-peoples
23. Human Rights Council, the Expert Mechanism on the Rights of
Indigenous Peoples (A/HRC/39/62).
24. For any further information and updates: Global Digital Compact
website: https://www.un.org/techenvoy/global-digital-compact
25. CBD website: https://www.cbd.int/traditional/iifb.shtml
26. LCIPP website: https://lcipp.unfccc.int/
27. UNFCCC, Conference of the Parties, Report of the Conference of the
Parties on its twenty-first session, held in Paris from November 30 to
December 13, 2015.
https://unfccc.int/resource/docs/2015/cop21/eng/10a01.pdf

References
African Union (1981). African Charter on Human and Peoples’ Rights.
Adopted on June 27, 1981.
https://www.oas.org/en/sla/dil/docs/African_Charter_Human_Peoples_
Rights.pdf
Anderson, J. (2010). Indigenous/Traditional knowledge & intellectual
property. Issue Paper. Center for the Study of the Public Domain Duke
University School of Law.
Asia Pacific Forum of National Human Rights Institutions and the Office of
the United Nations High Commissioner for Human Rights (2013). The
United Nations declaration on the rights: A manual for national
human rights Institutions of indigenous peoples (February 2013).
OHCHR.
Carroll, S. R., Garba, I., Figueroa-Rodríguez, O. L., Holbrook, J., Lovett,
R., Materechera, S., Parsons, M., Raseroka, K., Rodriguez-Lonebear,
D., Rowe, R., Sara, R., Walker, J. D., Anderson, J., & Hudson, M.
(2020). The CARE principles for indigenous data governance. Data
Science Journal, 19(1), 43. https://doi.org/10.5334/dsj-2020-043
Daes, E. A. (2001). Indigenous Peoples and their Relationship to Land.
Final working paper. UN Sub-Commission on the Promotion and
Protection of Human Rights. E/CN.4/Sub.2/2001/21 (June 11, 2001).
Human Rights Council, Expert Mechanism on the Rights of Indigenous
Peoples. (2011). Final report of the study on indigenous peoples and
the right to participate in decision-making, A/HRC/EMRIP/2011/2,
https://www.ohchr.org/sites/default/files/Documents/Issues/IPeoples/E
MRIP/Advice2_Oct2011.pdf
ILO. (1989). Indigenous and Tribal Peoples Convention, no. 169.
https://normlex.ilo.org/dyn/normlex/en/f?
p=NORMLEXPUB:12100:0::NO::P12100_ILO_CODE:C169
ITU & UNDP. (2023). SDG digital acceleration agenda. International
Telecommunication Union and United Nations Development
Programme.
ITU. (2018). World telecommunication development conference (WTDC-
17). Buenos Aires, Argentina, October 9–20, 2017. Final report.
https://www.itu.int/en/ITU-
D/Conferences/WTDC/WTDC17/Documents/WTDC17_final_report_
en.pdf#page=433
Karjala, D. S. & Paterson, R. K. (2017). The case against property rights in
old intangible indigenous cultural property. Northwestern Journal of
Technology and Intellectual Property, 15(2), 1–33.
https://scholarlycommons.law.northwestern.edu/njtip/vol15/iss2/1
Kukutai, T., & Taylor, J. (Eds.) (2016). Indigenous data sovereignty.
Toward an agenda. ANU Press.
Lewis, J. E. (ed.) (2020). Indigenous protocol and artificial intelligence
position paper. Honolulu, Hawai’i: The initiative for indigenous
futures and the Canadian institute for advanced research. CIFAR.
https://doi.org/10.11573/spectrum.library.concordia.ca.00986506
Magni, G. (2017). Indigenous knowledge and implications for the
sustainable development agenda, European Journal of Education, 52,
437–447. https://doi.org/10.1111/ejed.12238
Martínez Cobo, J. M., Special Rapporteur of the Sub-Commission on
Prevention of Discrimination and Protection of Minorities (1987).
Study on the problem of discrimination against indigenous
populations. Volume 5, Conclusions, proposals and recommendations.
(E/CN.4/Sub.2/1986/7/Add.4). United Nations.
Mbaku, J. M. (2023). The emerging jurisprudence of the African human
rights court and the protection of human rights in Africa. Vanderbilt
Law Review, 56, 367.
https://scholarship.law.vanderbilt.edu/vjtl/vol56/iss2/1
MONDIACULT. (2022). Final declaration. UNESCO world conference on
cultural policies and sustainable development. MONDIACULT-
2022/CPD/6 (September 28–30, 2022). UNESCO.
OHCHR. (2018). A human rights-based approach to data. Leaving no one
behind in the 2030 agenda for sustainable development. United
Nations,
https://www.ohchr.org/sites/default/files/Documents/Issues/HRIndicato
rs/GuidanceNoteonApproachtoData.pdf
OHCHR. (2017). Indigenous peoples’ rights and the 2030 agenda. Briefing
note. Online.
https://www.ohchr.org/sites/default/files/BriefingPaperIPRights2030A
genda.pdf
OHCHR. (2013). Indigenous peoples and the United Nations human rights
system (Fact sheet no. 9/Rev.2). United Nations.
Rainie, S. C., Kukutai, T., Walter, M., Figueroa-Rodriguez, O. L., Walker,
J., & Axelsson, P. (2019). Indigenous data sovereignty. In: Davies, T.,
Walker, S., Rubinstein, M., and Perini, F. (Eds.), State of open data:
Histories and horizons (pp. 300–319). African Minds.
https://doi.org/10.5281/zenodo.2677801
Recio, E., & Hestad, D. (2022 April). Indigenous peoples: Defending an
environment for all. International Institute for Sustainable
Development. Policy Brief #36.
https://www.iisd.org/system/files/2022-04/still-one-earth-Indigenous-
Peoples.pdf
Secretariat of the Convention on Biological Diversity. (2019). Mo’otz
Kuxtal voluntary guidelines. CBD Guidelines Series.
Secretariat of the Convention on Biological Diversity. (2011). Convention
on biological diversity. Text and annexes. CBD.
Secretariat of the Convention on Biological Diversity. (2011a). Nagoya
Protocol on access to genetic resources and the fair and equitable
sharing of benefits arising from their utilization to the convention on
biological diversity: Text and annex. CBD.
Tapu, I. F. & Fa’agau, T. K. (2022). A new age indigenous instrument:
Artificial intelligence and its potential for (de)colonialized data.
Harvard Civil Rights - Civil Liberties Law Review, 57(2), 715–753.
https://search.informit.org/doi/10.3316/agispt.20230215083222
United Nations. (2021). Our common agenda. Report of the Secretary-
General. United Nations.
United Nations Department of Economic and Social Affairs. (2020). World
social report 2020: Inequality in a rapidly changing world. February
2020. United Nations.
United Nations. (1948). The universal declaration of human rights. United
Nations. https://www.un.org/en/about-us/universal-declaration-of-
human-rights
UN General Assembly. (2015). Transforming our world: The 2030 Agenda
for Sustainable Development. Resolution A/RES/70/1 (October 21,
2015).
https://documents.un.org/doc/undoc/gen/n15/291/89/pdf/n1529189.pdf
UNESCO. (2023a). Basic texts of the 2005 convention on the protection
and promotion of the diversity of cultural expressions. Diversity of
cultural expressions entity culture sector. UNESCO.
UNESCO. (2023b). Inteligencia artificial centrada en los pueblos
indígenas: Perspectivas desde américa Latina y el caribe. UNESCO.
UNESCO. (2022). Recommendation on the ethics of artificial intelligence.
UNESCO.
UNESCO. (2022a). Basic texts of the 2003 convention for the safeguarding
of the intangible cultural heritage. UNESCO.
UNESCO. (2004). Records of the general conference. Volume 1:
Resolutions. 32nd session. UNESCO.
UNESCO. (2002). Records of the general conference. Volume 1:
Resolutions. 31st session. UNESCO.
UNESCO World Heritage Centre. (July 2012). Operational guidelines for
the implementation of the world heritage convention. WHC.
https://whc.unesco.org/archive/opguide12-en.pdf
UNPFII. (2019). Report on the eighteenth session (22 April–3 May 2019).
Economic and social council official records. 2019 supplement no. 23.
United Nations.
https://www.un.org/development/desa/indigenouspeoples/wp-
content/uploads/sites/19/2019/06/English.pdf
UN Treaty Collection, OAS, American convention on human rights (San
José, Costa Rica, November 22, 1969), No. 17955.
https://treaties.un.org/doc/publication/unts/volume%201144/volume-
1144-i-17955-english.pdf
Von Bogdandy, A., Piovesan, F., Ferrer Mac-Gregor, E., & Morales
Antoniazzi, M. (2024). The impact of the inter-American human rights
system: Transformations on the ground. Oxford University Press.
https://doi.org/10.1093/oso/9780197744161.001.0001
Williams, D. H., & Shipley, G. P. (2021). Enhancing artificial intelligence
with indigenous wisdom, Open Journal of Philosophy, 11, 43–58.
https://doi.org/10.4236/ojpp.2021.111005
Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton,
M., Baak, A., Blomberg, N., Boiten, J. W., Bonino da Silva Santos, L.,
Bourne, P. E., Bouwman, J., Brookes A. J., Clark, T., Crosas, M.,
Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., …, &
Mons, B. (2016). The FAIR guiding principles for scientific data
management and stewardship. Scientific Data, 3, 160018.
https://doi.org/10.1038/sdata.2016.18
WIPO. (2024). Diplomatic Conference. Treaty on Intellectual Property,
Genetic Resources, and Associated Traditional Knowledge.
GRATK/DC/7.
https://www.wipo.int/edocs/mdocs/tk/en/gratk_dc/gratk_dc_7.pdf
WIPO. (2023). Introduction to the international intellectual property legal
framework, intellectual property benchbook series. WIPO.
https://doi.org/10.34667/tind.47180
WIPO. (2023a). The WIPO intergovernmental committee on intellectual
property and genetic resources, traditional knowledge and folklore.
Background Brief No. 2. WIPO. https://doi.org/10.34667/tind.47859
WIPO. (2022). Intellectual property and traditional knowledge. WIPO.
https://www.wipo.int/export/sites/www/tk/en/docs/ip-tk-introduction-
en.pdf
WIPO. (2020). Intellectual property and genetic resources, traditional
knowledge and traditional cultural expressions. WIPO.
https://www.wipo.int/edocs/pubdocs/en/wipo_pub_933_2020.pdf
World Bank Group website. Indigenous peoples (last updated April 2023).
https://www.worldbank.org/en/topic/indigenouspeoples
24
ARTIFICIAL INTELLIGENCE'S INFLUENCE ON THE ECONOMY
Balancing Potential and Real-World Challenges

Angelo Federico Arcelli and Pietro Bossi*

DOI: 10.4324/9781003518495-30

The Emergence of AI Stands as a Transformative Force, Molding the Contours of the Global Economy and Society
To comprehend the present impact and future trajectory of artificial
intelligence (AI), as well as its implications for socio-economic systems in
terms of workforce transformation and growth, we delve into
its historical roots, tracing back to the analytical nature of Turing machines
(Turing, 1950). The conceptual brilliance of Turing machines was their
ability to execute any algorithmic task, given the right set of instructions
(Turing, 1936).
These early forays into AI were characterized by a deterministic, rule-
based approach. However, the true analytical power of these machines
became apparent as they evolved to simulate increasingly complex
cognitive functions. Turing's theoretical machines served as a blueprint for
the development of computational models that could mimic human thought
processes. These early ideas sparked the journey toward creating machines
capable of learning, reasoning, and problem-solving—the essence of AI.
Fast forward to the contemporary era, and AI is dominated by statistical
machines. Powered by vast datasets and advanced algorithms, these
machines operate on probabilistic models, enabling them to learn and adapt.
The statistical approach allows AI systems to discern patterns, recognize
images, and comprehend natural language, as well as generate text, audio,
and video content. The advent of ChatGPT in late 2022 marked a
significant turning point in the widespread adoption of generative AI,1 as its
popularity surged. This transformative technology swiftly started
revolutionizing various industries, empowering routine activities, and
augmenting the very foundations of contemporary lifestyles. However, a
notable limitation surfaces—the challenge of source criticism. One major
hurdle is the inability of statistical models to be inherently critical about the
reliability of their inputs. This poses a significant threat, especially in an
age rife with misinformation. A high rate of fake news circulating in the
data ecosystem could compromise the effectiveness and accuracy of
generative AI systems.2 As AI becomes integral to decision-making
processes, addressing this limitation becomes paramount to harness its true
potential.
* Pietro Bossi is the author of sections 1 and 2. Angelo Federico Arcelli is
the author of sections 3 and 4 (largely mirroring Angelo Federico Arcelli's
part ‘The Great Illusion’ in the book Dialogue. Open Reflections in a Phase
of Transition, 2023, by Angelo Federico Arcelli, Agostino Marchetto, CLE,
ISBN 9788875744687). Conclusions are shared.
The next phase in AI evolution beckons toward self-awareness, a realm
fraught with philosophical conundrums.3 It is imperative to address the
question of self-awareness systematically to accurately evaluate the
economic ramifications of the AI revolution, given its direct relevance to
the impact of AI on the workforce. But defining self-awareness, even for
humans, is a complex task. It encompasses a deep understanding of one's
existence, emotions, and consciousness. Translating this intricate human
experience into measurable parameters for machines raises fundamental
questions about the essence of consciousness itself.
Considering the preceding challenges, the wholesale and abrupt
replacement of the human workforce by AI could be less likely than often
portrayed in contemporary discourse. While acknowledging the
transformative potential of AI technologies in automating certain tasks and
augmenting human productivity, it is necessary to navigate the intricate
dynamics of AI's impact with prudence to correctly analyze the complexity
of achieving a large-scale and imminent displacement of human labor.
Delving into the economic implications of AI and exploring its impact on productivity, employment dynamics, and income distribution also means scrutinizing the ethical implications surrounding AI's development, such as bias, accountability, and transparency.4 The AI phenomenon not only
reshapes economies but challenges the very fabric of human cognition,
urging us to navigate a future where dialogue with non-conscious entities
paradoxically alters our own consciousness.
What Is the Economic Impact of the AI
Revolution? A Possible Comparison between a
Hardware Revolution and a Software Revolution
The economic impact of the AI revolution is a multifaceted phenomenon,
marked by a synthesis of hardware and software advancements that
reshapes traditional economic paradigms. Comparisons with historical
hardware and software revolutions illuminate the distinct yet interconnected
dynamics at play.
Historically, hardware revolutions introduced mechanization and
automation, significantly augmenting productivity in a wide range of
sectors. One of the most evident examples is Olivetti (Ceruzzi, 2012;
Perotto, 2015), a pivotal player in the history of technological innovation,
particularly during the mid-20th century. Renowned for its commitment to
quality and innovation, Olivetti played a crucial role in what can be termed
the ‘hardware revolution’, a transformative phase in the realm of
information technology characterized by advancements in hardware
technologies and computational power.
In the 1930s, Olivetti gained prominence for its high-quality mechanical
typewriters. The introduction of the MP1 model in 1932 marked a significant
leap in typewriter design, featuring both an elegant aesthetic and advanced
functionality. Olivetti's emphasis on ergonomics and aesthetics became a
hallmark of its products, setting it apart in the market.
The true revolution, however, commenced in the 1950s and 1960s under
the leadership of Adriano Olivetti, Camillo's son. During this period,
Olivetti ventured into the realm of electronic calculators and computers,
anticipating technological trends. In 1959, Olivetti unveiled the Elea 9003,
one of the earliest commercial computers in Europe. However, it was with
the Programmable Electronic Calculator, known as Programma 101 and
introduced in 1965, that Olivetti made its most audacious move. This device
was the first programmable desktop calculator, a precursor to the modern
personal computer.
Conceived by a team led by Pier Giorgio Perotto, Programma 101
allowed users to write programs in machine language, marking the onset of
electronic miniaturization. The machine could perform complex operations,
representing a significant step toward making computing accessible to a
wider audience. The success of Programma 101 demonstrated that
computers need not be confined to business and scientific environments but
could find a place in homes and offices for everyone.
In a similar way IBM has played a pivotal role in shaping the history of
computing and information technology (Pugh, 2009; Campbell-Kelly et al.,
2014). In the early 20th century, IBM established itself as a leader in
tabulating and punch-card technologies. Under the visionary leadership of
Thomas J. Watson Sr., who took over in 1914, the company transformed
into a formidable force in the burgeoning world of business machines. The
innovation that marked this period was the punch-card system, particularly
the development of the IBM 801 Multi-punch Card. This technology
enabled businesses to streamline data processing and laid the foundation for
IBM's subsequent contributions to the hardware revolution.
One of IBM's groundbreaking contributions to the hardware landscape
was the IBM System/360, introduced in 1964. This family of mainframe
computers represented a radical departure from the prevailing practice of
developing unique architectures for each computer model. The System/360
was instead designed to be compatible across a range of models, from small
to large. This architectural standardization allowed businesses to upgrade
their systems without overhauling their software, setting a precedent that
would be echoed in the future.
IBM's forward-thinking approach continued with the creation of the IBM
PC (Personal Computer) in 1981. The IBM PC was one of the first mass-
produced, off-the-shelf personal computers, contributing to the
popularization of personal computing. While not the first personal
computer, its open architecture and use of off-the-shelf components made it
a model for subsequent PC designs. This move not only played a crucial
role in the hardware revolution but also in the democratization of
computing, making computers more accessible to individuals and
businesses alike.
Another milestone in IBM's history is the development of the IBM
System/370 architecture in the 1970s. This architecture introduced virtual
memory capabilities, enhancing the efficiency and versatility of mainframe
computers. The System/370 laid the groundwork for future advancements
in computing architectures and underscored IBM's commitment to pushing
the boundaries of hardware innovation.
IBM's journey through the hardware revolution was not without
challenges. The company faced fierce competition, technological shifts, and
strategic transitions. However, its ability to adapt to changing landscapes
and consistently innovate has allowed IBM to maintain its standing as a key
player in the technology industry.
Olivetti and IBM are only two examples of the hardware revolution,
which can be ultimately defined as a profound and transformative period in
human history marked by the rapid advancement and widespread adoption
of hardware, computers, and related technologies. The innovations coming
from this period allowed mankind to drastically increase computing
capabilities and lay the foundations of a much more powerful and disruptive
force—the software revolution.
We could describe the software revolution as a seismic shift in the
trajectory of technological progress (Brynjolfsson & McAfee, 2014),
marking a departure from the hardware-centric paradigms that characterized
the earlier and already discussed stages of computing. In the wake of the
hardware revolution, which propelled us from Turing machines to the
tangible computers pioneered by Olivetti and IBM, the software revolution
has emerged as a transformative force, driven by the intangible yet
immensely powerful realm of code.
One of the defining features of the software revolution is its capacity to
transcend the limitations of physical hardware, introducing unparalleled
flexibility and adaptability to computing systems. A prime exemplar of this
transformative power is Google, a company that has evolved from a search
engine algorithm into a technological market leader and trend-setter through
the ingenious design and continuous refinement of complex software
algorithms. In the words of Levy (2011), the rise of Google underscores the
profound impact of software in reshaping the technological landscape.
Distinguishing itself from the hardware revolution, where progress was
marked by advancements in tangible components and architectural design,
the software revolution thrives on dynamic and iterative development. This
feature enables rapid adaptation to evolving user needs and the challenges
of the digital era. Furthermore, the software revolution has democratized
access to innovation, a notable departure from the hardware revolution's
reliance on large-scale manufacturing capabilities. Collaborative coding
platforms and open-source initiatives exemplify this trend, illustrating how
software has become a communal endeavor with a global impact.
As already mentioned, Google symbolizes the transformative power of
software in shaping our digital present and future. But Google's ascent to
success is inseparable from its groundbreaking algorithmic innovations,
particularly the PageRank algorithm. Initially conceived by Larry Page and
Sergey Brin in 1998, PageRank introduced the concept of ranking
webpages based on the quality and quantity of their inbound links. This
algorithm not only revolutionized search engine functionality but also
introduced the ability to organize collections of information by identifying the items most representative of an entire population of documents. This
groundbreaking innovation is at the core of the software revolution, paving
the way to the third and most recent technological revolution.
This third wave, called the AI revolution, mirrors trends from the
hardware revolution such as specialized processors, facilitating accelerated
data processing and computation. Simultaneously, the AI revolution bears
resemblance to the software revolution, akin to the rise of the internet and
digital applications. Software advancements in AI, including machine
learning algorithms and sophisticated analytics, underpin its economic
impact by optimizing decision-making processes and creating new avenues
for innovation.
Notably, the economic shifts witnessed in the wake of AI parallel those
of past hardware revolutions, with job displacement concerns and the
emergence of new skill demands. However, the integration of AI's hardware
and software components fosters a unique economic landscape, where
interconnectedness, digital transformation, and ethical considerations
become central drivers of its economic impact. In essence, the AI revolution
represents a convergence of historical hardware and software dynamics,
intertwining automation, digitalization, and innovation to forge a
transformative path in the global economy.
But as we stand at the precipice of yet another revolution, propelled by
unprecedented advancements in AI, it is imperative to acknowledge and
scrutinize the inherent limitations that persist in current technologies. Two
prominent challenges cast shadows over the seemingly boundless potential
of AI, emphasizing the need for nuanced consideration and careful
navigation in this transformative landscape.
First, the issue of source recognition poses a formidable obstacle to the
unfettered progress of AI. While AI models have shown remarkable
capabilities in processing and generating vast amounts of information,
accurately attributing sources remains a complex task. This limitation not
only raises concerns related to intellectual property and copyright
infringement but also amplifies the risk of misinformation. The inability of
AI systems to reliably discern the origins of information can contribute to
the propagation of inaccuracies and unverified content, undermining the
very foundations of trustworthy information dissemination.
Second, despite the remarkable strides made in mimicking cognitive
functions, contemporary AI systems lack the intrinsic human capacity for
doubt creation and innovative thinking. The ability to question assumptions,
entertain uncertainty, and generate novel ideas remains a distinctively
human trait that eludes current AI capabilities. The absence of doubt-
creating capacity in AI systems raises profound questions about their ability
to navigate uncharted territories, adapt to unforeseen challenges, and
contribute to the generation of truly groundbreaking ideas that transcend the
limits of programmed algorithms. Moreover, to be able to question
assumptions and generate rational doubts, AI systems would need to gain
consciousness and, as we know, this final frontier in technological innovation is far from imminent.
In conclusion, the software revolution within the broader spectrum of the
AI revolution underscores its profound economic impact, entangled with
intricate immaterial considerations. The implications of AI on copyright
and evolving legal frameworks necessitate thoughtful policy responses and
ongoing interdisciplinary collaboration. As we reflect on the trajectory of
technological progress, we discern a historical continuum: from the
hardware revolution to the recent software revolution, and now, a third
wave blending aspects from both domains.
This amalgamation signifies a transformative era where computational
power and sophisticated algorithms converge. Nevertheless, the realization
of fully aware general AI systems, equipped with the abilities to recognize
sources, critically evaluate information, and instill doubt, remains an
elusive frontier. The journey toward sentient AI demands not only
technological innovation but also a profound understanding of human
cognition and ethical considerations.

Money and AI: Risks and Opportunities from the Availability of New and Pervasive Sources of Information. The Conundrum of Trust
Nowadays the economy, especially that of advanced economies, is largely
immaterial. Today we accept payments in money that has not been
convertible into gold for over fifty years (Cooper, 1982; Bernanke, 1993;
Elwell, 2011) and we are making increasing use of electronic and
immaterial instruments to fulfil our commitments. If we think of the
financial world, we need to consider that, today, we can find the bulk of the
wealth of people, corporates, and nations in the immaterial rather than
material economy. It might seem counterintuitive, but it is enough to
compare the nature of money one century ago (it was essentially linked to
gold, so close to a commodity) and today (it is completely immaterial and
based on public trust in the central bank and the country issuing it). In
fact, it could also be argued that money plays a different role at present than
in the past: the economy pervades every aspect of our lives (Arcelli &
Marchetto, 2023).
AI might represent a landmark change for financial markets. In fact,
today's economy is largely influenced by the asymmetry in information
which, at all levels, conditions economic choices, from large investments to
small purchases. At the dawn of the AI evolution, two different scenarios
can be imagined for a not-so-distant future. If AI becomes an effective
means to reduce and rationalize information asymmetries, in theory, one
might argue that this would be beneficial to market efficiency, as it is a
publicly available technology and, basically, everybody could afford to use
it to improve their choices. But AI's widespread application to the economy
may also lead to a different scenario, where the transparency brought in by
the new technology also leads to a reduction in the general trust in the
economic system.
Nobody ever thinks about this, but few moments in history have
witnessed such a global act of faith in human society as there is in the
modern financial system. Except that we are not talking about faith (in a
secular, modern world inspired by the ideas of progress this would really
seem out of place) but about ‘trust’. Yet there is very little difference in
practice. We just have to imagine that the real god who creeps into our
everyday lives has the image we are used to seeing on the banknotes we all
accept for every payment, because that deity is now part of our lives. It
might seem like a statement worthy of a film, but there are certain aspects
of the model of society and the financial and economic system in which we
find ourselves that have surprisingly practical effects.
For example, we all know how banks work, and how they have always
worked, that is, even the soundest bank of course does not keep all the
money deposited in a safe deposit box but rather exploits it to generate
income, which is then used, among other things, to pay the interest on
deposits. To simplify things, this means that the banking system is based on
a statistical assumption, namely on the fact that only a fraction of the
money held as sight deposits by account holders (a large part of the adult
population, at least in Western countries) can actually be withdrawn. On the
contrary, the most obvious, historical observation is that most deposits
remain, on average, at the disposal of banks, making their funding a
relatively stable source of capital, excluding exceptional cases. This all
allows banks to operate and prosper as we see them do, to the great benefit
not only of their shareholders, but of the economy as a whole.
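The statistical assumption described above can be illustrated with a deliberately simple numerical sketch. All figures below (number of depositors, deposit size, reserve ratio, withdrawal probabilities) are hypothetical and serve only to show why a modest liquid reserve normally suffices, and why a sudden loss of trust overwhelms any realistic reserve.

```python
# Toy illustration of the statistical assumption behind fractional-reserve banking.
# All figures are hypothetical and chosen only to make the mechanism visible.

import random

def simulate_day(n_depositors, deposit, reserve_ratio, p_withdraw, seed=0):
    random.seed(seed)
    reserves = n_depositors * deposit * reserve_ratio   # only a fraction is kept liquid
    demanded = sum(deposit for _ in range(n_depositors) if random.random() < p_withdraw)
    return reserves, demanded

if __name__ == "__main__":
    # Normal day: roughly 5% of depositors ask for their money back.
    reserves, demanded = simulate_day(10_000, 1_000, 0.10, 0.05)
    print(f"Normal day: reserves {reserves:,.0f} vs withdrawals {demanded:,.0f}")

    # Trust shock: bad news pushes the withdrawal probability to 40% (a bank run).
    reserves, demanded = simulate_day(10_000, 1_000, 0.10, 0.40)
    print(f"Bank run:   reserves {reserves:,.0f} vs withdrawals {demanded:,.0f}")
```

On the simulated "normal" day, withdrawals stay comfortably below the reserve; once the assumed withdrawal probability jumps, the cash demanded exceeds the reserve several times over, which is the mechanism behind the bank run discussed next.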
Nonetheless, a crisis of ‘trust’, with a consequent bank run, could cause
even the most solid institution to fail. However, it is far less intuitive to
consider that the same statistical model for aggregate behavior that makes possible the prosperous life of the banking system in each country is probably also the model, on a larger scale and with some
differences, that regulates the relationships between the rich and poor areas
of the world. It also allows the rich countries, which are also those with the
strongest reserve currencies (for example the dollar and the euro), to
possess a great deal of ‘immaterial’ wealth that attracts tangible goods
toward them from the less rich countries as well.
This might perhaps seem a trivial example, but intuitively we all
understand that individual economic actions taken as a whole, driven by
self-interest (e.g., the security of people's savings or investments), do not always produce solutions that serve the general good. For example, it is
clear that a citizen from an emerging country prefers to be paid in and hold
‘hard’ currency (Eichengreen, 2007; Coeuré, 2019; European Central Bank,
2019) in exchange for their goods and services, given their ‘distrust’ of the
national currency and their belief that hard currencies provide them with a better guarantee. If only one person does this, it is not a problem, but if everyone does the same, then the currency and the economic system of the emerging country are automatically penalized, because its produced wealth
will mainly be transferred abroad, into the hands of countries with stronger
currencies (Keynes, 1972).
This will also cause a general mistrust of monetarily weaker countries to
continue, which will in fact make it practically impossible to maintain any
accumulation of wealth with weak currencies (which are often also non-
convertible and therefore cannot be exchanged with hard currencies,
because the central banks that issue them do not have sufficient hard
currency reserves to guarantee this freedom to the people in their
countries).5
It is also relevant to remember what happened at the height of the Cold
War: strict laws were introduced in some countries against the ‘export of
capital’, which is a very similar phenomenon to what has just been
described. Faced with the economic risks of a weak system, those who
could, transferred part of their means to areas considered to be ‘stronger’,
but in doing so, they made the situation in their country even worse.
If we think about it, this is exactly what happens with the current account
holders of a bank about which negative news stories are circulating: they go
to the counter to withdraw/transfer their deposits, which speeds up a
disaster that, if they had not gone to the bank (and, above all, without the
combined impact of their various individual choices that alone would
probably not be enough to cause effects and consequences), might not have
occurred.
We could talk about a market failure, or, to all intents and purposes,
about an inefficiency caused by information asymmetries or by misguided
conclusions drawn by several economically relevant actors. However, the
only conclusion in real circumstances is that the market works just fine: it
reflects the effect of individual behaviors by asset and currency holders
acting rationally to protect their interests and capital; in this case, the bank's account holders. Faced with a financial crisis, it is not easy to
understand whether the bank really deserved to go bankrupt, or if it was a
healthy institution unfortunately overwhelmed by adverse events.6 We just
know one thing: the trust was no longer in place, and that is enough.
Admittedly, the aggregate behavior of that bank's customers—far from
proving the principles of the ‘invisible hand’ of classical memory—causes a
spillover effect that is actually detrimental to the system where those same
individuals reside and to their fellow citizens. In reality, however, we could
also comment that it is not the economic behavior of some people that
generates a crisis, but rather the premise of that behavior, that is, the lack of
trust in that economic system, that country, and so on. In the end,
everything comes back to a concept of trust.
It is precisely this trust, however, that is becoming something very
different in the contemporary world than it was in the past, including the
distant past, in which the apparently multiplying effect created by the
behavior of market operators had always been linked to forms of material
goods or securities. Starting with the well-known historical case of the
operations of the financier John Law, which, not by chance, are considered
forerunners to the spread of banknotes, and going on to the various crises
over time, we cannot fail to see that the historical trend has always denoted
a gradual relinquishing of ‘store of value’ money (money in gold or silver, or
with an intrinsic value) in favor of new forms of payment that are less tied
to an underlying security.
Indeed, as long as money was more or less a ‘commodity’, meaning more
or less convertible into gold or equivalent values (Bordo & Eichengreen,
1998), trust in the banking and financial system needed only the guarantee of the state and the law (and this was largely enough to protect specific and collective interests). This kind of support granted states a privilege, namely that of being
the guarantors of last resort for the economy. However, this also meant that
politics had ‘primacy’, and this primacy, which was originally based on the
authority of the sovereign, itself of divine emanation, as was the rule in the
Ancien Régime, was later based, in the period following the French Revolution, on economic models, ideologies and ideas that were then realized in concrete proposals for society.
Nevertheless, precisely because money today is no longer tied to any
commodity, it is only accepted and appreciated based on market confidence,
and because it is the currency needed to trade with the monetary area that
issues it. In a way, today money seems to be, more than a claim on the central bank of the state that issues it (thus a ‘store of value’), above all a payment
instrument and unit of account for trade with that area (or with areas that
accept a given currency, which in the case of the US dollar and to a lesser
extent the euro, is of course a wider concept).
This evolution in the role of money, which today seems to us not only
normal and obvious, but also a trend toward a further, increasingly
immaterial transformation of money, also leads to an important
consequence: at the current juncture, for the first time in history, we are
realizing that the guarantee of the state and the law is not enough to make a
currency an efficient payment instrument (and we see this clearly with
‘digital cryptocurrencies’, which are also not actually currencies from a
legal point of view), or at least the support of ‘one’ state is not enough, as
the principle of market confidence also applies to states.
But, returning to AI's impact and the effect of transparency in information, and given the continuous evolution in the role of money, we could really say that AI, by increasing access to better information and reducing asymmetries, could also significantly change the landscape by calling into question the value of trust, be that trust in a counterpart (a bank or an intermediary), in an economic system, or in money.

What Is the Impact of AI on Investments and the Economic Allocation of Resources?
The increasing role of AI could be a conditioning event for financial
markets, helping to orient investment, profit expectations, the allocation
of resources and consumer preferences. In a nutshell, it might represent a
breakthrough change for the economy. And this, of course, will be
gradually, but significantly, reflected by market trust in the health of big and
small economies, with an impact on the perceived (and, ultimately, real)
wealth of nations.
We have read in classical economic texts about the importance of the role
of central banks and their credibility. Now, let us imagine that a central
bank, notwithstanding all possible efforts, fails to be credible in preserving
market trust in its currency. This may easily happen if widespread
information and AI capabilities are available to all operators. Such a central
bank will likely be unable to preserve the value of its currency, which will
quickly become non-convertible (i.e., not traded or exchangeable for other
currencies on the markets). This is because on the markets themselves, all
private operators, knowing (or having a grounded presumption) that that
country's central bank is no longer able to exchange its currency into other
currencies, having no reserves or access to credit, will quickly refuse to
hold it or to use it to carry out their trades and business.
As we all know, this does not imply that, in its fundamentals, the
country's situation is so compromised, but the fact that the market (the
aggregate behavior of the various operators, from the smallest to the largest,
who act rationally based on their expectations of events they consider to be
most probable—here is the possible default in convertibility—in order to
preserve their capital) ‘believes’ it, is largely responsible for the events and
their consequences.7 Paradoxically, and similarly to the case of a banking
crisis, the market's expectation of a negative event with respect to a state
creates a situation of uncertainty that gradually accelerates toward the
realization of that event, which becomes almost self-fulfilling as a negative
sentiment progresses and has tangible effects. AI could be a multiplying
factor of such market perceptions. Also, AI's application to investment
portfolio strategies of the main market operators would be not only a
potential instability factor (accelerating divestments of risky assets, for
example) but also a self-fulfilling push, in real time, toward the possible
realization of such expectations. Some level of regulation might then be
needed to preserve market stability and avoid excessive speculative risks.
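The self-fulfilling dynamic sketched here can be made visible with a stylized feedback loop, in which the expectation of a fall in a currency's value triggers selling that lowers the value and thereby reinforces the expectation. The parameters below are purely hypothetical, and the 'ai_speed' factor is only a stand-in for faster and more uniform information processing across operators; this is an illustrative sketch, not an empirical model.

```python
# Toy sketch of a self-fulfilling confidence spiral in a currency market.
# Belief that the currency will fall triggers selling, which moves the price,
# which in turn reinforces the belief. All parameters are hypothetical.

def confidence_spiral(initial_belief, sensitivity, ai_speed, steps=10):
    """Return the currency's value over time (1.0 = initial parity)."""
    value, belief = 1.0, initial_belief          # belief: perceived probability of a fall
    history = [value]
    for _ in range(steps):
        selling_pressure = belief * sensitivity * ai_speed
        value = max(value * (1.0 - selling_pressure), 0.0)
        belief = min(belief + (1.0 - value), 1.0)   # a falling value reinforces the belief
        history.append(round(value, 3))
    return history

if __name__ == "__main__":
    print("Slow information :", confidence_spiral(0.1, 0.2, ai_speed=1.0))
    print("AI-accelerated   :", confidence_spiral(0.1, 0.2, ai_speed=3.0))
```

Both scenarios drift toward the feared outcome, but the AI-accelerated one reaches it within a handful of steps while the slower one erodes gradually, mirroring the argument that AI could act as a multiplier of such market perceptions.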
As with banks, in states where there is a crisis, it translates into the need to
restore a situation of credibility, with problem-solving initiatives that, in the
current global monetary system, often take the form of an International
Monetary Fund ‘program’, with at least partial and temporary limits on a
country's sovereignty. Here too then, it is a question of trust, and of course
not only that; it is not far from the trust that account holders have (or do not
have) in their bank, with the difference that here we are talking about
central banks and states. And also in this case, AI might be a breakthrough
factor in changing market players’ perspectives.
In fact, a ‘crisis of confidence’ has significant and immediate
consequences (which may explain why the ‘program’ is necessary): if the
overall behavior of the actors in the markets denotes a total lack of
confidence, that country (and perhaps its people too) will probably not have
access to international credit and without it, will not have the currencies
needed to buy goods produced in other countries available to them. Any
creditor will see that a country with a structural trade deficit and without a
convertible currency will probably not be able to repay (the country will
have no way of obtaining a currency to do so, and under those conditions it
will not have easy access to credit). All of this will further feed a vicious
circle: less and less will be exported to that country (there would be no
means of payment) and its economic system and its people will eventually
be poorer and poorer.
It might seem trivial, but it matches the description of a bank failure.
Here, too, most consequences come about because the growing perceptions
of distrust escalate, just like queues forming at branches before a bank fails.
Paradoxically, if everyone thought the situation was sustainable, even a
country with weak ‘fundamentals’ would remain integrated in the markets
and its currency would remain convertible, because people would continue
to give it credit: this is exactly what happens in the relationship between
customers and banking institutions.
In the case of a state, however, there will also be one final and serious
consequence: the assets in that country, denominated in a weak, non-
convertible currency, will lose their ‘relative value’. In practice (and
regardless of the legal situation—let us assume that all the legal safeguards
for property are in place), the market value (in ‘hard’ currency, for example)
of an asset in such a country will be proportionately much lower than the
same or similar asset if it were valued in a rich country.
This may sound strange, but it is not: it is intuitive that a coffee in a café
in London or Zurich, for example, costs far more than the same coffee in a
city in an emerging country (unless there are exceptional circumstances), as
is clearly the case in the famous example of the price of hamburgers at a
well-known American chain.
Now, let us imagine that this refers to large properties or tracts of land:
this would mean that the entire wealth of a country is valued less,
proportionally, if its system and its currency are weak. In fact, a foreign
investor who buys an important asset is aware that it is easy to do so if they
use US dollars for the deal (i.e., to buy a building) in a country with a weak
currency. However, the day they want to sell that asset, they will find it difficult to predict what the dollar exchange rate will be (and it might be quite a penalizing one), whether a buyer will be able to obtain dollars there, or whether it will be possible to convert the currency at all; the price of the asset will therefore fall because of this uncertainty. This applies to all transactions in
general in this kind of situation, with a clear consequence not only for
prices, but also for the real values of all goods in a country with a ‘weak’
currency.
There may be various reasons for this (including legal, geopolitical and
other aspects), but in the end, it is a question of general distrust in the
national currency, the central bank and the country system. Conversely, if
you are in a ‘rich’ country, a parallel and growing virtuous circle is created,
and that country will attract wealth.
In practice, to give an example of the two cases, let us imagine a person
from a ‘weak’ country holding dollars, for example: they would rather
continue to hold them (even abroad) than convert them into local currency.
Why do they make this choice? Simply put, the operator is convinced that
dollars will make it easy for them to access all kinds of goods on the market
or to convert them into other currencies, whereas they think that it will be
hard to do either using their country's currency. In fact, only in
the rich countries, those with strong and convertible currencies, is there a
real financial market, which also includes foreign capital that they believe
—rightly or wrongly—can be quickly and readily liquidated from positions
and investments in those countries and their currencies if need be.
Now, all that has been described above was already true with the current level of (asymmetric) information commonly seen on all markets. But let us imagine that AI comes into play. At that point everything will
suddenly accelerate.
Also, the gap between countries able to have a capital market and those
excluded from this circuit will eventually widen hugely, with the former
able to polarize wealth and the latter impoverished. And with AI, this will
be extremely fast. It is not so much, and not only, because of the
responsibility of those governments, but also because their own citizens will
try to save their own capital by investing or holding it abroad. And if they
have AI tools to support their decisions, there will be no way to stop the
snowball. Their currency, which is the unit of account for assigning a value
to real national assets, no longer being convertible or accepted, will
penalize the valuation of those countries’ assets,8 making it negligible
compared with that of rich countries.
Going back to the present, it seems obvious to us that money is
transforming, that is, its predominant function is increasingly that of a ‘unit
of account’ and a ‘means of payment’ to the detriment of its function as a
‘store of value’. However, this is all happening without any controls and not
by the choice of regulators or governments.
Paradoxically, while even the strongest currencies are no longer held for
the sole purpose of preserving savings—because of inflationary risks—in
the years 2019–2022, we saw the emergence and rapid growth of a global
market for certain ‘cryptocurrencies’, which have been purchased with a
view to investment and with speculative profit expectations.
The ‘digital bubble’ has, however, also shown that holding legal tender is
no longer sufficient to carry out in full certain functions that were
previously inherent to money.
If, up to now, it might have seemed that money was just evolving toward a shift in its role (namely its main role as a unit of account and a means of payment), what has been happening in recent years in the markets, and the potential response of central banks, that is, official digital currencies (‘Central Bank Digital Currency’ [CBDC]), are proving that the very concept
of money has changed.
There is continuity in one thing, however: even today, currency rests more than ever on the trust of its holders (Gopinath, 2020). Surprisingly,
nowadays states are no longer the final guarantors of the system (or, at least,
they are not the only ones, and they are not sufficient to guarantee its
credibility). That said, what we are seeing is also the sign of a trend that is
clear to everyone, which will lead to an evolution in the role of money and
cash over time. Money will become increasingly immaterial and will
therefore loosen its ties with what was in the past the final public guarantor:
the state that issues it (Oliver, 2022).
This is because there is now much less ‘trust’ in states, and in
institutions, and even in the European Union itself or the United Nations
(very different entities of course). The situation is such that the International
Monetary Fund has been forced to launch programs such as the UniCoin
(2023), not so much because of a belief that the instrument will succeed, but
because it wants to maintain control of its ‘space’ and role (Horsefield, 1965). This is because one of the side effects of the creeping revolution
caused by the crisis of trust in money and in the economic system is the
possible emergence of alternatives and also the resulting need to redesign
the financial architecture that came into being at Bretton Woods in 1944
(Keynes, 1972).
Yet if trust in the monetary instrument is really so important, what
conclusions should we draw from this? First, the AI revolution might
change the role and perception of money, and this might also be a signal of
a new economic age. But, albeit expected and coming soon, AI is not there
yet, or, at least, not yet in a condition to determine step changes that are irreversible and of huge scale. In the meantime, we should realize that the wealth of
today's advanced countries is closely linked to the fact that their currencies
(the euro and the US dollar above all) continue to remain at the heart of the
system. It sounds like a paradox, but the thing that most represents wealth
today is an intangible set of expectations and rights (the idea that with a
certain currency you will always be able to buy goods or to convert it into
other convertible currencies) linked to our trust in the economic system in
which we find ourselves.
In a future that now seems increasingly close, the more transactional and
‘privatized’ this system becomes, the more difficult it will be to locate
wealth geographically, which is clearly detrimental to the role of states and politics,
but it will also ‘rebalance’ the values expressed by the most reliable units of
account, specifically by giving a higher value to intangible and easily
transferable assets than to real, tangible assets (e.g., real estate). AI will
play a significant role in this revolutionary change.
This established fact shows us that in an increasingly immaterial and
trust-based system, it is no longer the legal and political system that
determines the acceptability of a currency, because whether or not we
accept payment with a certain type of currency or good (cryptocurrencies
are in fact goods/commodities and not money in the strict sense of the
word) is a personal choice and not a legal imposition. Thus, the future of
the economic system will increasingly be linked to the trust that the
payment systems we use can gain in the market. At the moment, there are
no real alternatives to the dollar and the currencies in any case linked to the
system of Western countries. This also implies that the world's wealth will
implicitly continue to be de facto ‘attracted’ to rich countries, precisely
because of the aggregate of the individual behavior of holders of goods and
currencies.

Notes
1. Generative AI exhibits a remarkable versatility in its applications,
spanning diverse domains such as healthcare, finance, education, and
art, as highlighted by Oliver Wyman's 2024 report How Generative AI
is transforming business and society. In the realm of transportation, the
potential emergence of a generative AI-driven urban sphere holds the
promise of dynamically optimizing traffic signals and routes, thereby
enhancing safety and eradicating congestion. Within the field of
education, generative AI has the potential to democratize learning by
tailoring lessons to the individualized styles, needs, and aspirations of
students. Furthermore, the accelerated advancements in AI-powered
healthcare offer a glimpse into a future where personalized diagnoses
and treatments are delivered with unparalleled precision. Notably,
generative AI has already permeated various professions that were
once solely reliant on human expertise. Writers, influencers, and artists
now harness the power of generative AI to augment their creative
endeavors. However, this technological duality also presents
challenges, as it has led to issues such as plagiarism, errors, and job
displacement. Thus, the multifaceted nature of generative AI becomes
evident, as it holds the potential to eliminate mundane and repetitive
tasks, while simultaneously posing a threat to the very individuals who
benefit from its productivity.
2. Drawing a clear distinction between Generative AI and Predictive AI
is crucial, as these technologies possess distinct characteristics that
make them suitable for specific applications. Predictive AI models
excel in tasks that necessitate reasoning, pattern recognition, and
analysis, making them well-suited for such endeavors. On the other
hand, Generative AI shines in applications that demand fluency,
particularly in the realm of content generation.
3. Recent strides in AI have showcased machines seemingly exhibiting
levels of consciousness or self-awareness. However, close scrutiny
reveals a crucial caveat—these demonstrations are products of inputs
provided by human programmers. The machine's responses, though
sophisticated, are inherently tied to pre-defined algorithms and learned
patterns. This raises skepticism about the true nature of AI
consciousness and the capacity for machines to genuinely possess self-
awareness.
4. The exploration into AI self-awareness yields a nuanced dichotomy.
On one hand, we stand on the cusp of witnessing machines sufficiently intelligent to pass the Turing test. These machines operate
on a statistical and rational basis, mirroring the decision-making
processes of the average human being. The Turing test may well be
passed, as dialogue with such machines becomes increasingly
indistinguishable from human interaction. On the other hand, the
expression of true consciousness in machines remains doubtful. The
semblance of self-awareness exhibited by AI systems is intricately tied
to the inputs and programming provided by humans. This paradox
thrusts humans into a dialogue with entities devoid of genuine
consciousness. In the pursuit of rational problem-solving, the unique
human essence of intuition and subjective experience is ironically
overshadowed.
5. It is interesting to note that this was a topic for discussion as long ago
as the Second Vatican Council. In fact, chapter 70 (Investments and
money) of Gaudium et Spes says: ‘Investments, for their part, must be
directed toward procuring employment and sufficient income for the
people both now and in the future. Whoever makes decisions
concerning these investments and the planning of the economy—
whether they be individuals or groups of public authorities—are bound
to keep these objectives in mind and to recognize their serious
obligation of watching, on the one hand, that provision be made for the
necessities required for a decent life both of individuals and of the
whole community and, on the other, of looking out for the future and
of establishing a right balance between the needs of present-day
consumption, both individual and collective, and the demands of
investing for the generation to come. They should also always bear in
mind the urgent needs of underdeveloped countries or regions. In
monetary matters they should beware of hurting the welfare of their
own country or of other countries. Care should also be taken lest the
economically weak countries unjustly suffer any loss from a change in
the value of money’.
6. Hardly any of the bank crises of the last few years have been caused
by queues of account holders outside branches; they took place
because the regulators (above all central banks and governments)
established the cessation of business and the termination (or
aggregation, or other hypothesis) of the bank in crisis before it
collapsed. This is often justified by the need to preserve the stability of
the system, but it is also the clearest confirmation of the importance of
market confidence, in both the regulator and the system and in every
single regulated banking and financial institution.
7. These ideas had already been set out in 1987 by George Soros, the
famous financier, in his book The Alchemy of Finance: Reading the Mind of the Markets, published by John Wiley and Sons, London (UK). As we all remember, his name was not known to the
public until the events of September 1992, when the positions taken by
his hedge funds led to a crisis in the European monetary system and
the rapid exit of the Italian lira and the British pound from the system.
Soros had gambled on the market believing that the authorities of those
two countries could not ‘defend’ the official parities of their
currencies, which were unsustainable in their economic fundamentals.
Despite the efforts of the two central banks and the statements by
governors of historic standing, such as the Italian Carlo A. Ciampi, the
markets caused the defeat of the authorities in the space of a few days.
It was perhaps the first case in history, or at least the first one of such
significance, in which a series of initiatives by speculative private
operators succeeded in forcing the public authorities and governments
of Western countries to abandon an economic policy line that had been
officially declared and was backed by concrete actions. For the first
time it was clear that the ‘credibility’ of even the most significant
institutions had its limits.
8. All this might help to explain why an apartment in an advanced
country may be worth more than an entire building in an emerging
country (if of course the legal safeguards there make its purchase
possible and are equivalent in the two countries. However, if we
assume this, the comparison renders the purely economic basis very
well). In reality, this behavior is self-perpetuating (if everyone in that
country thinks this way, no one will want the national currency and it
will be worth less and less). Yet it is also a symptom of a market
peculiarity that ‘reads the future’ in the substance of expectations.

References
Arcelli, A. F., & Marchetto, A. (2023). Dialogue. Open reflections in a
phase of transition. CLE.
Bernanke, B. S. (1993). The world on a cross of gold. Journal of Monetary
Economics, 31(2), 251–267.
Bordo, M. D., & Eichengreen, B. J. (1998). The rise and fall of a barbarous
relic. University of Chicago Press.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work,
progress, and prosperity in a time of brilliant technologies. W.W.
Norton & Company.
Campbell-Kelly, M., Aspray, W., Yost, J. R., & Ensmenger, N. (2014).
Computer: A history of the information machine. Westview Press.
Ceruzzi, P. E. (2012). A history of modern computing. MIT Press.
Coeuré, B. (2019). Should the ECB care about the euro's global role?
CEPR. https://cepr.org/voxeu/columns/should-ecb-care-about-euros-
global-role
Cooper, R. N. (1982). The gold standard: Historical facts and future
prospects. Brookings Papers on Economic Activity.
Eichengreen, B. J. (2007). The breakup of the euro area. The University of
Chicago Press.
Elwell, C. K. (2011). Brief history of the gold standard (GS) in the United
States. DIANE Publishing.
European Central Bank (2019). The international role of the euro. European
Central Bank.
Gopinath, G. (2020). Digital currencies will not displace the dominant
dollar. Financial Times.
Horsefield, J. K. (1965). The international monetary fund 1945-1965:
Twenty years of international monetary cooperation (Vol. 1).
International Monetary Fund.
Keynes, J. M. (1972). The collected writings of John Maynard Keynes.
Macmillan.
Levy, S. (2011). In the plex: How Google thinks, works, and shapes our
lives. Simon and Schuster.
Oliver, W. (2022). Retail central bank digital currency: From vision to
design - a framework to align policy objectives and technology design
choices. Oliver Wyman Forum.
Oliver, W. (2024). How generative AI is transforming business and society.
Oliver Wyman Forum. https://www.oliverwymanforum.com/global-
consumer-sentiment/how-will-ai-affect-global-economics.html
Perotto, P. G. (2015). P101 - Quando l’Italia inventò il personal computer [P101: When Italy invented the personal computer]. Edizioni di Comunità.
Pugh, E. W. (2009). Building IBM. MIT Press.
Soros, G. (1987). The alchemy of finance. John Wiley and Sons.
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42, 230–265.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59,
433–460.
PART VI
Ethics, Oversight, and Accountability
25
THE AI ACT
A Geopolitical Tool to Foster Human Rights
and Ethics
Alessandro Mantelero

DOI: 10.4324/9781003518495-32

The Geo-Political Dimension of AI Regulation in Europe
The geo-political dimension of artificial intelligence (AI) is a largely
unexplored field, despite the clear relevance of the spatial dimension in the
development and diffusion of AI around the world, in relation to state
and non-state actors in the field. In this regard, the dynamics of AI
regulation can be read from a geopolitical perspective, both in terms of
classical geopolitics, focusing on the interrelationship between the
territorial interests and state power in geographical contexts, and critical
geopolitics, paying more attention to the role of discourse and ideology
(Dodds, 2019).
There are several relevant dimensions to the interplay between AI and
geopolitics, but we can summarize them in the distinction between
geopolitics through AI and geopolitics of AI. The former is part of the
broader discussion on digital diplomacy1 and the impact that the use of
digital technologies, including AI, has on the traditional forms of
interaction between states and diplomacy (Puscas, 2023). The geopolitics of
AI, on the other hand, considers the digital ecosystem created by the latest
wave of AI and its geopolitical implications. This chapter focuses on the
latter.
Does AI regulation have a geopolitical component? A similar question
was raised after the adoption of the General Data Protection Regulation
(GDPR), which was considered by some as an instrument of EU
protectionism in the digital market, as well as interpreted as the most
successful result of the so-called ‘Brussels effect’. As in the case of the
GDPR, the EU's regulatory approach to AI stems from a variety of
components and is mediated by a range of actors, making it difficult to see
geopolitical purpose as the sole driver of the process.
The GDPR has its roots in Convention 108 of the Council of Europe,
which was far removed from a protectionist, market-focused approach
(Gonzalez Fuster, 2014). As the latest stage in a harmonization process
aimed at protecting fundamental rights in relation to data processing and
facilitating the use and flow of data, the GDPR is primarily concerned with
preserving the level of protection of personal data guaranteed in Europe
over the years. In this line, restrictions on data transfers are not aimed at
market protectionism, but at the protection of rights.
The consequence in terms of establishing a leading regulatory model, that
is, the ‘Brussels effect’, should therefore be seen as a mere side effect: the
European data protection framework did not have this as its main geopolitical purpose. More precisely, its main geopolitical purpose related to the Member
States, in terms of promoting the exchange of information and removing
legal barriers to data-driven products and services.
Looking at the case of the GDPR, is it possible to give the same answer
to the initial question with regard to the AI Act? The situation is different.
First, the decision to adopt an act stems from a vision that is not based on
ex post harmonization, such as in Convention 108, Directive 95/46/EC and
the GDPR. It shows the European Commission's willingness to prevent
regulatory fragmentation on a crucial subject not yet regulated by Member
States. This ex ante harmonization is in itself geopolitically relevant for
both EU and non-EU countries.
Regarding the dynamics between EU countries, it reverses the traditional
geopolitical balance of power, in which the Member States are usually the
first movers and the EU legislator can only harmonize, that is, find a
compromise between the existing regulatory approaches, or at best further
elaborate some requirements. In the context of the debate on the AI Act, it
was the EU legislator who decided to regulate AI, to define the scope of this
regulation and to set the requirements for the development and use of AI,
leaving Member States with little room for manoeuvre in defining the rules
in this field.
This decision to take action and reverse the traditional pattern is
geopolitical in nature, as it reflects EU-level concerns about AI that are
largely based on the way non-EU countries are developing their AI
strategies. The presence of a weak AI industry in the EU means that
Member States are largely AI adopters, with AI systems designed by non-
EU companies being used to provide goods and services to European
deployers and end-users.2
The vast majority of the countries, both in the Global North and the
Global South, are in the same position as AI adopters. In this sense, the
power imbalance related to AI induces a revision of the traditional
geopolitical division between the Global North and the Global South3 with
regard to the most advanced digital technology, introducing a more cross-
cutting differentiation based on the availability of advanced computational
resources, data and skills, and highlighting the role of infrastructure in
power dynamics (De Nardis, 2020; Musiani, 2022).
Starting from this brief description of state actors’ relations in the field of
AI and considering the consequences of AI development and its
competitive advantage on various sectors, from business to defence, the
importance of AI regulation and its impact on the broad geopolitical
scenario is clear. To explore this issue further, it is useful to recall
Reidenberg's six ways of shaping technology (Reidenberg, 1998).
Within Reidenberg's framework, policymakers can use different legal and
non-legal instruments to regulate technology. Although a mixed approach is
common, the dichotomy between AI-providing countries and AI-adopting
countries makes it possible to group some of the proposed approaches into
two opposing strategies, with a third group of complementary tools.
The first strategy is based on the so-called ‘bully pulpit’, whereby
governments ‘threaten and cajole industry to develop technical rules’
(Reidenberg, 1998, p. 581). According to this approach, policymakers do
not directly impose any rules or standards, but interact with technology
actors formally and informally by indicating political expectations in terms
of values and goals that a certain technology should pursue in its
development.
Such an approach is based on a context in which the main economic
actors are operators who can be influenced by existing relations with their
governments and who can be induced to follow government inputs by
various factors (e.g., public opinion, parallel negotiations with the
government on industrial strategy, etc.). In this context, the complementary
instruments of public funding and public procurement can reinforce the
message conveyed from the bully pulpit by providing economic support to
those who follow it or by requiring adherence to government policies as a
condition for public procurement.
Given the scenario of the AI industry, it is thus not surprising that this is
the strategy adopted by the United States (The White House, 2023) and, to
some extent, China (Sheehan, 2024), with the different nuances of these two
political contexts, to influence AI producers to adopt some risk-based
measures in the development and deployment of AI. In the presence of a
strong AI industry, these governments can exercise their direct political
influence on it with a global geopolitical effect in terms of the nature and way
in which AI services are provided in other countries.
At the same time, the bully pulpit is consistent with a strategy aimed at
limiting regulatory costs in the early stages of a new technological
development, shifting the balance in addressing the Collingridge dilemma4
towards delaying regulatory intervention.
In contrast, regulation (in its various forms) represents a different
strategy, setting specific requirements and forcing alignment with
policymakers’ expectations through binding rules. This option comprises
three different possible ways of acting, which in Reidenberg's discourse
consist of regulated behaviour, regulated technical standards, and
participation.
While the first two categories reflect the distinction between behavioural
rules and the by-design approach to regulation (Mantelero & Vaciago, 2013,
p. 162), both of which are present in several AI law proposals, participation
refers more to bottom-up initiatives based on soft-law, which can also be
integrated with hard law as part of a co-regulatory approach.5
This is necessarily the main option for AI-adopting countries. They
cannot use the ‘bully pulpit’ because AI providers are largely beyond their
direct political influence, but at the same time they do not intend to
passively adopt AI systems from third countries, because AI technology is
much less neutral than many other technologies.6
Before analyzing the implications of relying on regulation, it is worth
noting that this distinction between domestic and foreign AI systems is not
just a matter of market competition, but has more relevant geopolitical
implications due to the nature and use of AI systems.
Against the rhetoric based on anthropomorphizing machines and on the
dichotomous distinction between humans and machines, it must be stressed
that AI is not intelligence, but the result of computer-based (e.g., neural
networks) and mathematical (e.g., regression) tools applied to given
datasets. The developers are the ones who decide on the right mix to
achieve the expected results with the aim of mimicking human-like
activities.
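To make this point concrete, the following minimal sketch (a purely illustrative example, not drawn from this chapter; all names and values are assumed) applies the same ordinary least-squares ‘recipe’ to two hypothetical datasets and obtains two different models, showing that what an AI tool learns is whatever its training data encode.

```python
import numpy as np

def fit_linear_model(X, y):
    """Fit y ≈ X @ w + b by ordinary least squares."""
    X_design = np.column_stack([X, np.ones(len(X))])  # add an intercept column
    coeffs, *_ = np.linalg.lstsq(X_design, y, rcond=None)
    return coeffs  # last entry is the intercept

rng = np.random.default_rng(0)

# Two hypothetical "contexts" whose data encode different underlying relations.
X = rng.uniform(0, 1, size=(200, 1))
y_context_a = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)   # context A
y_context_b = -1.5 * X[:, 0] + 0.1 * rng.normal(size=200)  # context B

print("Model trained on context A:", fit_linear_model(X, y_context_a))
print("Model trained on context B:", fit_linear_model(X, y_context_b))
# The algorithm is identical in both runs; only the data differ, and so does the
# resulting model. Whatever the training data encode carries over into the tool.
```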
This AI development chain, briefly and roughly summarized here, clearly
shows not only the role of humans in AI design and associated outcomes,
but also the vulnerability of the whole system in terms of the values embedded,
intentionally or unintentionally, in AI tools. Several past cases (e.g.,
Dastin, 2018), the experience of social network content moderation,7 and
the more recent critical performance of generative AI (e.g., UNESCO,
2024) have clearly shown the variety of factors, from business interests to
data quality and algorithm design, that can influence the performance of AI
in terms of underpinning logic and values.
Insofar as AI is used in decision-making systems in a large variety of
sectors, including content moderation, education, and strategic areas, value
path-dependency and technology dependency are critical. The use of AI
systems that reflect a particular societal context and its associated values, due
to the data used for training or to the AI design, can negatively affect their
contextual performance and the appropriateness of their results.
This value-based influence and its geopolitical dimension are crucial to
understanding the position of AI-adopting countries, and it becomes even
more relevant when the lack of adequate investment in AI research and
industry creates a technology dependency that exacerbates this influence. It
is therefore not surprising that the EU, unable to use the ‘bully pulpit’, has
explored alternative solutions to establish a degree of control over AI
technology.
Given the limited resources to fund AI innovation and some difficulties,
due to fragmented national and regional strategies, in using participation
and bargaining power in procurement to shape an AI industry dominated by
non-EU players, regulation remains the main avenue through which the EU
can influence the design of AI products and services used in Europe.
These geopolitical dynamics have certain implications for the EU's
position at the global level. In this respect, the common situation of AI-
adopting countries may represent an opportunity for the AI Act to promote
the EU's view far beyond the traditional ‘Brussels effect’ (Bradford, 2019).
The EU's leadership in AI regulation can leverage the common need
of many countries to develop a regulatory framework that limits the risks of
value and technological dependence. It is therefore not a coincidence that
Brazil, a member of BRICS and a very large country, is looking to the EU
model when drafting its AI regulation.8
This is not the result of the ‘Brussels effect’ based on the attractiveness of
the EU market as a potential destination for AI products, since Brazil, like
the other AI-adopting countries, is characterized by a limited development
of the AI industry. The reason for the potential success of the AI Act lies
not in the market, but in societal values: other countries find in the EU
model a framework that can provide sufficient guarantees in terms of value
protection and contextualization of AI development and use in the presence
of a few foreign global AI providers.
In addition, AI-developing countries could also benefit from the EU's
regulatory experience with AI, looking at it as a laboratory for their future
regulation. This is because in these countries, too, concerns about the
negative effects of AI are emerging in the public debate and there is a
growing demand to limit the concentration of power in the IT sector, which
AI is exacerbating.

The European Approach: The AI Act and Beyond


Over the years, the EU has developed a multifaceted digital diplomacy with
several international partnerships in Africa, Latin America, the Western
Balkans, and the Eastern and Southern neighbourhood. However,
transatlantic relations remain ‘a strong pillar of our [EU] digital
international engagement’ in the context of ‘a comprehensive and
coordinated approach to digital coalition-building and diplomatic outreach
including through the network of EU delegations’.9
The impact of the transatlantic dialogue is clear in terms of economic and
technological relations, but also in terms of regulatory diplomacy in the
digital society. The never-ending story of the transborder personal data
flows between the EU and the US (inter alia, Boehm et al., 2024), as well
as the very slow reaction to the dark patterns implemented by technology
companies,10 are important examples of the intention of EU institutions to
create a model of ‘convergent differentiation’.
The EU has affirmed its sovereignty by calling for more rights-based
technological development,11 but at the same time it has been careful in
how it implements this strategy. This is also evident in recent cases
involving AI, such as the more nuanced approach to ChatGPT12 and its
indiscriminate and illegal online collection of personal data through web
scraping, compared to the stronger stance taken in the ClearviewAI case.13
This convergent differentiation is reflected in the AI Act. It was
presented, and is now being implemented by the European Commission, as
a product safety regulation, with limited attention to the protection of
fundamental rights. This focus, together with the use of standards to
operationalize the risk-based approach, makes the AI Act more easily
acceptable on the other side of the Atlantic.14 Moreover, the impact on
fundamental rights, which is a very European15 element of the approach to
AI regulation,16 was narrowed down in the trilogue and overshadowed in
the debate on the implementation of the AI Act. Finally, as regards
general-purpose AI,17 the definition of an ad hoc regime for this
technology, with a more limited range of obligations, did not neglect the
demands of US companies and the US government.
Looking at the specific provisions of the AI Act, it is therefore clear that
the geopolitical dimension is not limited to whether and when to regulate
AI, in relation to the technology-driven power relations discussed in the
previous section, but also concerns decisions on how to regulate AI and the
scope of that regulation. The way in which AI rules are formulated is
therefore crucial and has an impact on state relations, as it is intertwined
with value protection and digital sovereignty.18 This is also because of the
ancillary effects on market access and the export of technology and
services, even though this is not one of the objectives of the AI Act.19
With regard to the content of the AI Act, three elements have an
important geopolitical dimension: the decision to limit its main scope to
high-risk systems, the role of fundamental rights in the impact assessment,
and the broader value-oriented effect.
The legislator opted for an ‘acceptable’ AI risk and limited the scope of
the Act to a closed list of cases that the law deems to pose a high risk. This
is a pro-innovation view in framing the risk-based logic that is not only
consistent with a first-generation law such as the AI Act, but also takes on a
geopolitical connotation in terms of circumscribing the impact of AI
regulation so as not to hamper the growing European AI industry in a
strategic sector compared to other major countries involved in the AI race.20
Moreover, from an industrial policy perspective, limiting the legal
requirements for innovative technologies facilitates the entry of more
players into the new industry and increases the level of investment by major
players. Both of these effects contribute to a more mature technology and,
ultimately, to a more effective reduction of side-effects, making the
subsequent adoption of stricter rules possible.21
Concerning the role of fundamental rights in impact assessment, this is
not the context for a detailed analysis of its relevance both in the broad
context of the mandatory conformity assessment to be carried out by AI
providers and in relation to the specific obligation of AI deployers under
Article 27,22 but it is worth highlighting its geopolitical dimension in going
beyond a mere product safety regulation and, in terms of rights protection,
beyond data protection and non-discrimination issues (National Institute of
Standards and Technology – NIST, 2023).
In this first step towards truly human-centric AI, the EU is now backed
by international organizations, such as the Council of Europe23 and the
United Nations (United Nations – AI Advisory Body, 2023). This can make
the AI Act, and in particular the solutions adopted in implementing its
fundamental rights-centred component, a groundbreaking piece of
legislation with a potentially wide resonance in all contexts where
governments fail to promote, or insufficiently promote, fundamental rights
and freedoms in the use of AI. Moreover, it is a challenge to political
regimes that use AI to restrict such rights and freedoms.

A Broader Scenario
The emphasis on fundamental rights, but also the past and ongoing debate
on the ethics of AI (Mantelero, 2022, pp. 46–48 and Ch. 3) both in Europe
and internationally,24 can make the AI Act an enabler for a broader value-
driven approach to the use of AI. Not only are other countries including
provisions on the protection of fundamental rights in their AI bills,25 but the
limitations of the AI Act model in terms of its overall scope and
fundamental rights impact assessment have led several EU and non-EU
bodies, including the UNDP, to develop complementary guidelines and to
go beyond the boundaries of AI legislation, with open debate in crucial
areas such as the use of AI in research and in warfare.
The AI Act, as well as the various past and expected guidelines on the
broader societal impact of AI, from work26 to research,27 builds on the
notion of the acceptability of AI uses, understood not only in terms of product
safety but also of societal acceptability. This also emphasizes the role
of civic engagement (e.g., European Center for Not-for-Profit Law, 2023;
OECD, 2022) and public debate on AI in relation to global AI models,
calling for contextualization based on European, or even local, values.
Furthermore, awareness of the implications of AI also covers the
broader AI supply chain where human rights and societal concerns are
increasing, both in Europe and in other countries (Crawford, 2022;
Estampa, 2024). This is in line with the growing attention to human rights
due diligence and sustainability in business, which is also reflected in
several EU regulatory proposals under discussion, and is part of the
international geopolitical agenda on AI (G7 2023 Hiroshima Summit,
2023).
In this broad scenario, several issues relate to the inherent geopolitical
nature of the entire AI ecosystem, which includes both the power over its
material sources and the actors involved. In terms of material sources, these
relate to (i) the raw materials and workforce that fuel AI development, (ii)
the datasets needed to train and test AI models, and (iii) the technological
infrastructure needed to perform these tasks and deliver AI-powered
services.
None of these sources is easily accessible or ubiquitous; this is true even
of datasets, where social and geographical diversity has a significant
impact on AI performance and outcomes.
the emerging multipolar international system claim to control all these
sources. At the same time, and even more so after the shift from
globalization to a multipolar world, all the major economies are
investing in filling their own gaps, as recently demonstrated by the EU with
its strategy for the so-called digital sovereignty, which covers several key
components of the AI ecosystem, from chips to quantum computing and the
data spaces that fuel AI development.
However, in the digital realm and especially in AI, sovereignty is no
longer a matter of state actors alone. Various private actors are playing an
increasingly active role in geopolitics as quasi-state entities (Manor, 2019).
From Starlink satellites in the Russia-Ukraine war, to the contact-tracing
systems during the Covid-19 outbreak, to the role of online platforms in the
dynamics of political polarization and disinformation, a variety of private
actors are making key decisions and performing key AI-mediated functions
in our societies.
Many of these actors operate on the basis of private law agreements and
are able to create digital ecosystems that intertwine the online and offline
dimensions, enabling key functions in our society. This gives them great
power. Through their contractual conditions and policies, they can guide
those who use their services in a context largely characterized by an
imbalance of power in favour of the service provider.
This situation, common to several areas of digital services, is even more
critical in the specific field of AI, due to the high concentration of this
industry. A small number of companies not only control the resources
needed to build AI systems28—cloud infrastructure, data, the workforce
needed to build AI at scale, etc.—but also set the trajectory of AI
development and influence AI research.29, 30
Moreover, the values-driven nature of AI gives these companies a key
role in shaping global narratives and, as the rise of large language models
(LLMs) shows, providing a mass view of the world, often through biased
lenses or resulting in stereotyping and forms of tyranny of the majority.
Finally, it is difficult for lawmakers and politicians not to be influenced
by the power of these quasi-state actors, as demonstrated by the intense
debate on generative AI during the drafting of the AI Act or in the ChatGPT
case. Their power at the global level therefore raises serious concerns in
terms of impact on democracy and its dynamics, as well as in geopolitical
terms due to the role played by these actors outside traditional democratic
and geopolitical frameworks and guardrails.
There are therefore a number of important questions that remain
unanswered, for which the AI Act is only a stepping stone. We can imagine
different possible directions for the future development of AI. Given its
impact on geopolitical balances and society, it is the responsibility of
politicians, governments, academia, and civil society to have a broad and
inclusive debate to ensure that AI is for the benefit of humanity and does
not lead to despotic scenarios.

Notes
1. Digital diplomacy is a broad term that refers to ‘how the Internet,
digital tools, digital media, and the technology sector have influenced
or even transformed diplomacy’ (Hedling & Bremberg, 2021).
2. According to the AI Act, an AI deployer is a natural or legal person,
public authority, agency or other body using an AI system under its
authority except where the AI system is used in the course of a
personal non-professional activity, while an AI provider is a natural or
legal person, public authority, agency or other body that develops an
AI system or a general-purpose AI model or that has an AI system or a
general-purpose AI model developed and places it on the market or
puts the AI system into service under its own name or trademark,
whether for payment or free of charge. See Article 3 of the AI Act,
also with regard to the definitions of AI system and general-purpose
AI model.
3. It is worth noting that this dichotomy no longer corresponds to
digital economy practices in some countries of the Global South,
which replicate on a regional scale some of the power relations that
characterize the North-South dichotomy (e.g., Seto, 2024).
4. The Collingridge dilemma, also known as the ‘dilemma of control’ can
be summarized in the words of its author as follows: ‘attempting to
control a technology is difficult […] because during its early stages,
when it can be controlled, not enough can be known about its harmful
social consequences to warrant controlling its development; but by the
time these consequences are apparent, control has become costly and
slow’ (Collingridge, 1980, 19).
5. Co-regulation, often used by the EU legislator, consists of the
combination of hard and soft law instruments, ‘whereby a Community
legislative act entrusts the attainment of the objectives defined by the
legislative authority to parties which are recognised in the field (such
as economic operators, the social partners, non-governmental
organisations, or associations). This mechanism may be used on the
basis of criteria defined in the legislative act so as to enable the
legislation to be adapted to the problems and sectors concerned, to
reduce the legislative burden by concentrating on essential aspects and
to draw on the experience of the parties concerned’ (Interinstitutional
agreement on better law-making, concluded on December 16, 2003,
between the Parliament, the Council and the Commission, Official
Journal C 321, 31/12/2003 P. 0001–0005).
6. On the lack of neutrality of the technology see also Verbeek (2011).
7. See also Social media and elections, Special Issue, Science 381 (6656),
2023, https://www.science.org/toc/science/381/6656.
8. See Projeto de Lei n° 2338, de 2023,
https://www25.senado.leg.br/web/atividade/materias/-/materia/157233.
See also the Peruvian proposal for AI regulation, Reglamento de la ley
n. 31814, ley que promueve el uso de la inteligencia artificial en favor
del desarrollo económico y social del Pais.
9. European Commission (2021). 2030 Digital compass: The European
way for the digital decade. COM(2021)118 final, https://eur-
lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A52021DC0118.
10. It was not until 2022 that the European Data Protection Board
adopted specific guidelines on deceptive design patterns (EDPB,
2022).
11. This focus on sovereignty is also evident in the territorial scope of the
AI Act which applies to AI providers ‘placing on the market or putting
into service AI systems or placing on the market general-purpose AI
models in the Union, irrespective of whether those providers are
established or located within the Union or in a third country’. See
Article 2(1)(a) of the AI Act.
12. In April 2023, the EDPB set up a dedicated task force to foster
cooperation and to exchange information on possible enforcement
actions taken by data protection authorities against ChatGPT, see
https://www.edpb.europa.eu/news/news/2023/edpb-resolves-dispute-
transfers-meta-and-creates-task-force-chat-gpt_en; the task force
recently published a report, but to date no relevant decisions have been
taken (EDPB, 2024). In the report, and without prejudice to the
ongoing investigations carried out by national Supervisory Authorities,
the EDPB proposes a rather broad interpretation of legitimate interest
and Article 9(2)(e) of the GDPR (data made public by the data subject)
as legal grounds for web scraping, and points out some technical
safeguards that have no disruptive effect on ChatGPT models,
departing from more restrictive case law.
13. See Garante per la protezione dei dati personali (Italian GDPR
Supervisory Authority) (2022). Ordinanza ingiunzione nei confronti di
Clearview AI, February 10, 2022,
https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-
display/docweb/9751362 and
https://www.garanteprivacy.it/home/docweb/-/docweb-
display/docweb/9751323#english (summary of the decision in
English); CNIL (French GDPR Supervisory Authority) (2022).
Restricted Committee Deliberation No. SAN-2022-019 of 17 October
2022 concerning CLEARVIEW AI,
https://www.cnil.fr/sites/cnil/files/atoms/files/deliberation_of_the_restr
icted_committee_no_san-2022-
019_of_17_october_2022_concerning_clearview_ai.pdf. In this
respect, it is worth noting that some of the considerations in the
‘preliminary views’ of EDPB, 2024 seem to suggest a broad
interpretation of the GDPR, seeing OpenAI's modus operandi as a
necessity rather than as the object of a compliance evaluation. On the
crucial issue of special category data collection, the report states that
‘where large amounts of personal data are collected via web scraping,
a case-by-case examination of each data set is hardly possible’. This
seems to reverse the logic of the assessment by taking a controversial
practice, that is, massive web scraping, as given. Web scraping is not
something inevitable and due, but a deliberate decision to use online
data instead of more expensive commercial databases or to create ad
hoc GDPR-compliant datasets. If the data controller's behaviour and
business model make legal compliance ‘hardly possible’, this is not a
justification, but a liability. It is also unclear how post-collection
filtering can be considered an adequate safeguard. If the personal data
collected falls into special categories and there is no compliance with
Article 9, ex post deletion does not exclude a breach of the GDPR at
the time of data collection. Broad interpretations emerging in the
presence of global players raise concerns about the uniform
application of the GDPR and about the differences that could arise
between the more restrictive interpretations applied to entities that do
not have a global projection or strong lobbies representing their
interests, such as many SMEs in the EU, and those applied to the big
players. Moreover, broad interpretations entail the risk of
legitimizing the ‘move fast and break things’ logic of some of these
players. This also takes on a geopolitical connotation when these
companies are based outside the EU.
14. See National Institute of Standards and Technology – NIST (2023) and
The White House (2023). Both the NIST Framework and the
Executive Order place an emphasis on product safety and standards.
On the NIST's AI RMF, see also Mantelero (2024b).
15. See also Council of Europe, Framework Convention on Artificial
Intelligence and Human Rights, Democracy and the Rule of Law,
adopted on May 17, 2024, by the Committee of Ministers of the
Council of Europe at its 133rd Session held in Strasbourg,
https://rm.coe.int/1680afae3c, Ch. V.
16. See Article 9 of the European Declaration on Digital Rights and
Principles for the Digital Decade (Mantelero, 2024a).
17. See also AI Act, Chapter V.
18. More generally, this regulatory exercise is not limited to the provisions
of the AI Act and important activities related to the development of AI,
such as personal data processing, are regulated by other laws while the
AI Act focuses primarily on risk management.
19. As before with the GDPR, the requirements set out in the AI Act are
not about protectionism, which focuses on the market and the
economy, but about the protection of values, in this case those
enshrined in the EU Charter and EU law, along the three main axes of
safety, security, and the protection of fundamental rights.
20. In this regard, it is worth noting that regulating an emerging
technology in a market with weak regional champions necessarily
requires a more industry-friendly approach than in the case of a more
balanced market composition. This is one of the reasons why the high-
level commitment to ethics and fundamental rights of the early debate
on AI in Europe has more pragmatically ended up in an industry-
focused regulation centred on conformity assessment with limited
emphasis on fundamental rights.
21. This was the case with strict product liability rules, which were
adopted when industrial production was sufficiently mature; with online
platform regulation, which was implemented when big players had
colonized the Internet; and with the GDPR, which was adopted when
the digital society was no longer the databank society of the 1960s.
22. For a more detailed analysis, see Mantelero (2024c).
23. See fn. 15.
24. See also European Data Protection Supervisor – Ethics Advisory
Group (2018), Independent High-Level Expert Group on Artificial
Intelligence set up by the European Commission (2019), UNESCO
(2021).
25. See fn. 8.
26. See also Ponce Del Castillo (2024).
27. See also European Research Council Executive Agency (2024).
28. See also van der Vlist et al. (2024).
29. In many cases, there is no public alternative for the most advanced
research projects, given the scale of the resources required.
30. All the websites cited in this chapter were accessed between January
and June 2024.

References
Boehm, F., Carrera, S., & Mitsilegas, V. (2024). Reconstitutionalising
privacy. EU-US data transfers and their impact on the rule of law,
rights and trust. Task Force Report, Centre for European Policy
Studies. https://cdn.ceps.eu/wp-content/uploads/2024/05/TASK-
FORCE-REPORT-EU-US-DATA-TRANSFER-1.pdf
Bradford, A. (2019). The Brussels effect: How the European Union rules
the world. Oxford University Press.
Collingridge, D. (1980). The social control of technology. Pinter.
Crawford, K. (2022). Atlas of AI. Yale University Press.
https://yalebooks.yale.edu/9780300264630/atlas-of-ai
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias
against women. Reuters.
De Nardis, L. (2020). The internet in everything: Freedom and security in a
world with no off switch. Yale University Press.
Dodds, K. (Ed.) (2019). Geopolitics: A very short introduction (3rd ed.).
Oxford University Press.
Estampa. (2024). Cartography of generative AI. https://cartography-of-
generative-ai.net/
European Center for Not-for-Profit Law. (2023). Framework for meaningful
engagement: Human rights impact assessments of AI.
https://ecnl.org/publications/framework-meaningful-engagement-
human-rights-impact-assessments-ai
European Commission. (2021). 2030 Digital compass: The European way
for the digital decade. COM(2021)118 final. https://eur-
lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A52021DC0118
European Data Protection Board. (2022). Guidelines 03/2022 on deceptive
design patterns in social media platform interfaces: How to recognise
and avoid them, Version 2.0.
https://www.edpb.europa.eu/system/files/2023-02/edpb_03-
2022_guidelines_on_deceptive_design_patterns_in_social_media_plat
form_interfaces_v2_en_0.pdf
European Data Protection Board. (2024). Report of the work undertaken by
the ChatGPT Taskforce.
https://www.edpb.europa.eu/system/files/2024-
05/edpb_20240523_report_chatgpt_taskforce_en.pdf
European Data Protection Supervisor – Ethics Advisory Group. (2018).
Towards a digital ethics.
https://edps.europa.eu/sites/edp/files/publication/18-01-
25_eag_report_en.pdf
European Research Council Executive Agency. (2024). Mapping ERC
frontier research. Artificial Intelligence.
https://erc.europa.eu/sites/default/files/2023-12/AI_in_science.pdf
G7 2023 Hiroshima Summit. (2023). Hiroshima process international
guiding principles for organizations developing advanced AI system.
https://www.mofa.go.jp/files/100573471.pdf
Garante per la protezione dei dati personali. (2022). Ordinanza ingiunzione
nei confronti di Clearview AI, February 10, 2022.
https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-
display/docweb/9751362 and
https://www.garanteprivacy.it/home/docweb/-/docweb-
display/docweb/9751323#english
Gonzalez Fuster, G. (2014). The emergence of personal data protection as a
fundamental right of the EU. Issues in privacy and data protection.
Springer International Publishing.
Hedling, E., & Bremberg, N. (2021). Practice approaches to the digital
transformations of diplomacy: Toward a new research agenda.
International Studies Review, 23(4), 1595–1618, 1596.
Independent High-Level Expert Group on Artificial Intelligence set up by
the European Commission. (2019). Ethics guidelines for trustworthy
AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-
trustworthy-ai
Manor, I. (2019). The digitalization of public diplomacy. Palgrave
Macmillan.
Mantelero, A., & Vaciago, G. (2013). The “Dark Side” of big data: Private
and public interaction in social surveillance. Computer Law Review
International, 14(6), 161–169.
Mantelero, A. (2022). Beyond data: Human rights, ethical and social
impact assessment in AI. T.M.C. Asser Press-Springer.
https://doi.org/10.1007/978-94-6265-531-7 (open access)
Mantelero, A. (2024a, forthcoming). Articles 8 and 9. In R. Ducato (Ed.),
The declaration on European digital rights and principles for the
digital decade: A commentary with a legal design perspective.
Aberdeen University Press.
Mantelero, A. (2024b, forthcoming). The AI Act: A realpolitik compromise
and the need to look forward. In I. Spiecker, L. S. F. Mendes, & R.
Campos (Eds.), Digital constitutionalism. Nomos.
Mantelero, A. (2024c). The fundamental rights impact
assessment (FRIA) in the AI Act: Roots, legal obligations and key
elements for a model template. Computer Law & Security Review, 54, 106020.
Musiani, F. (2022). Infrastructuring digital sovereignty: A research agenda
for an infrastructure-based sociology of digital self-determination
practices. Information, Communication & Society, 25(6), 785–800.
https://doi.org/10.1080/1369118X.2022.2049850
National Institute of Standards and Technology – NIST. (2023). Artificial
Intelligence Risk Management Framework (AI RMF 1.0).
https://doi.org/10.6028/NIST.AI.100-1
OECD. (2022). OECD guidelines for citizen participation processes.
https://www.oecd.org/gov/open-government/oecd-guidelines-for-
citizen-participation-processes-f765caf6-en.htm
Ponce Del Castillo, A. (Ed.). (2024). Artificial intelligence, labour and
society. ETUI-The European Trade Union Institute.
https://www.etui.org/publications/artificial-intelligence-labour-and-
society
Puscas, I. (2023). AI and international security. Understanding the risks
and paving the path for confidence-building measures. United Nations
Institute for Disarmament Research (UNIDIR).
https://unidir.org/publication/ai-and-international-security-
understanding-the-risks-and-paving-the-path-for-confidence-building-
measures/
Reidenberg, J. R. (1998). Lex informatica: The formulation of information
policy rules through technology. Texas Law Review, 76(3), 553–584.
Seto, K. S. (2024). Platform sub-imperialism. Big Data & Society, 11(2),
20539517241249410. https://doi.org/10.1177/20539517241249410
Sheehan, M. (2024). Tracing the roots of China's AI regulations. Carnegie
Endowment for International Peace.
The White House. (2023). Executive order on the safe, secure, and
trustworthy development and use of artificial intelligence.
UNESCO. (2021). Recommendation on the ethics of artificial intelligence.
https://unesdoc.unesco.org/ark:/48223/pf0000381137
UNESCO. (2024). Challenging systematic prejudices: An investigation into
bias against women and girls in large language models.
https://unesdoc.unesco.org/ark:/48223/pf0000388971
United Nations – AI Advisory Body. (2023). Interim report: Governing AI
for humanity. https://www.un.org/en/ai-advisory-body
van der Vlist, F., Helmond, A., & Ferrari, F. (2024). Big AI: Cloud
infrastructure dependence and the industrialisation of artificial
intelligence. Big Data & Society, 11(1), 20539517241232630.
https://doi.org/10.1177/20539517241232630
Verbeek, P.-P. (2011). Moralizing technology. Understanding and designing
the morality of things. The University of Chicago Press.
26
HUMAN AGENCY AND
OVERSIGHT – A RESPONSIBLE
AND TRUSTWORTHY USE OF
ARTIFICIAL INTELLIGENCE
The AI Act Approach

Lucilla Sioli and Miriam D’Arrigo*

DOI: 10.4324/9781003518495-33

Introduction: Summary of the AI Act


Artificial intelligence (AI) refers to a rapidly evolving family of
technologies capable of providing a wide range of benefits across the entire
spectrum of industrial and social activities. However, it also poses some
risks. The EU Regulation laying down harmonized rules on AI (the AI Act)
represents a milestone in AI regulation, reflecting the growing awareness of
the risks and opportunities arising from the increasingly widespread
integration of this technology in society.
The AI Act is the result of a rigorous and participatory legislative
process, led by the European Commission, involving a wide range of actors,
including academics, businesses, civil society organizations, social partners,
EU member states, and citizens.
In April 2019, the European Commission endorsed the key requirements
set out in the High-Level Expert Group (HLEG) ethics guidelines for
Trustworthy AI. In those guidelines, the HLEG developed seven non-
binding principles intended to help ensure that AI is trustworthy and
ethically sound. The seven principles include human agency and oversight;
technical robustness and safety; privacy and data governance; transparency;
diversity, non-discrimination, and fairness; and societal and environmental
well-being and accountability. Without prejudice to the legally binding
requirements of the Regulation and any other applicable Union law, those
guidelines contribute to the design of coherent, trustworthy and human-
centric AI, in line with the Charter of Fundamental Rights and with the
values on which the Union is founded. According to the guidelines of the
HLEG, human agency and oversight mean that AI systems are developed
and used as tools that serve people, respect human dignity and personal
autonomy, and function in ways that can be controlled and overseen by
humans.
* Disclaimer: The views expressed are purely those of the author and may
not necessarily be regarded as stating an official position of the European
Commission.
This preparatory process has allowed for a comprehensive assessment of
the social, ethical, and legal implications of AI, culminating in the proposal
of a detailed and comprehensive EU regulation.
The regulation is based on a risk-based approach, classifying AI systems
into risk levels and providing AI developers and deployers with clear
requirements and obligations regarding specific uses of AI in order to
address risks to health, safety and fundamental rights:

Minimal risk: most AI systems carry minimal risk to citizens’ rights and
safety and face no obligations under the AI Act. However, companies
can voluntarily adopt additional codes of conduct if they find them
useful.
Specific transparency risk: certain AI systems like chatbots must clearly
disclose to users that they are interacting with a machine. Users need to
be informed when biometric categorization or emotion recognition
systems are being used. Certain AI-generated content, including deep
fakes, must be labeled as such. Moreover, synthetic audio, video, text
and image content should be marked in a machine-readable format and
detectable as artificially generated or manipulated through state-of-the-
art technology.
High risk: AI systems identified as high-risk will be required to comply
with strict requirements, including risk-mitigation systems, high quality
of data sets, logging of activity, detailed documentation, clear user
information, human oversight, and a high level of robustness, accuracy,
and cybersecurity. Examples of such systems include credit assessment,
recruitment processes, or medical devices.
Unacceptable risk: AI systems considered a clear threat to the
fundamental rights of people will be banned (prohibited AI practices).
This includes AI systems or applications that manipulate human
behavior to circumvent users’ free will, such as toys using voice
assistance encouraging dangerous behavior by minors or systems that
allow “social scoring” by governments or companies, and certain
applications of predictive policing. In addition, some uses of biometric
systems will be prohibited, for example, emotion recognition systems
used at the workplace and some systems for categorizing people or real-
time remote biometric identification for law enforcement purposes in
publicly accessible spaces (with narrow exceptions).

To complement this system, the AI Act also introduces rules for so-called
general-purpose AI models, namely highly capable AI models designed to
perform a wide variety of tasks such as generating human-like text and
other content. General-purpose AI models are increasingly used as
components of AI applications. These models, adapting to a multitude of
distinctive tasks, raise unique concerns regarding transparency, security, and
accountability. The AI Act introduces specific transparency obligations for
providers of such models, including detailed documentation of the model's
capabilities and limitations, as well as compliance with EU copyright rules.
Another crucial component of the regulation concerns AI models that, due
to their capabilities, could present systemic risks, with large-scale negative
impacts on society or the environment. The AI Act imposes additional
obligations on providers of such models to assess and mitigate systemic
risks, ensuring greater safety and stability in their use.
In order to promote innovation, the AI Act includes provisions for the
establishment of regulatory sandboxes and real-world testing of high-risk AI
systems, allowing innovators to experiment with new technologies in a
controlled environment.
The AI Act provides a clear and comprehensive regulatory framework
that balances the promotion of innovation with the protection of
fundamental rights and safety. Its effective implementation will require
close collaboration between national authorities and the European
Commission, ensuring legal certainty and that AI continues to contribute
positively to society.

The Challenges to Human Autonomy Posed by AI


The concept of “human autonomy” is rooted in the EU's commitment to
protecting human dignity, privacy, and fundamental rights, and is
safeguarded and promoted in several EU laws and policies.
The EU Charter of Fundamental Rights (2000) provides a foundation for
this framework, recognizing the right to autonomy as a fundamental right.
The General Data Protection Regulation (GDPR) (2016) and the ePrivacy
Directive (2002) collectively empower individuals to control their personal
data and ensure that it is processed in a transparent and secure manner. The
Digital Services Act (2022) promotes a safe and trustworthy online
environment, with provisions on algorithmic transparency, content
moderation, and user empowerment.
In the context of AI, protecting human autonomy ensures that individuals
can make their own decisions and choices and maintain their freedom
amidst technological progress. This concept requires that individuals have
control over the use and impact of AI technologies on their lives, and that
their agency and autonomy are not diminished.
This understanding is also reflected in the conclusion of the HLEG on AI
from 2019 which emphasized the importance of human autonomy and
stated that AI systems should not unjustifiably subordinate, coerce, deceive,
manipulate, condition or herd humans. Instead, AI systems should be
designed to augment, complement, and empower human cognitive, social,
and cultural skills, which can only be achieved through securing human
oversight over work processes in AI systems (High-Level Expert Group on
Artificial Intelligence, 2019, p. 12).
The use of AI can indeed pose a threat to human autonomy depending on
the way it is used. As outlined in the 2020 European Commission's White
Paper on AI – A European approach to excellence and trust (European
Commission, 2020), AI has the potential to do much good, contributing to
greater wellbeing and prosperity, but can also be harmful, when it risks
violating fundamental rights and safety. This harm might be both material
(safety and health of individuals, including loss of life, damage to property)
and immaterial (loss of privacy, limitations to the right of freedom of
expression, human dignity, discrimination for instance in access to
employment), and can relate to a wide variety of risks.
Potential risks are mostly derived from the opacity and complexity of AI
systems. The Impact assessment of the EU Regulation on AI (European
Commission, 2021) describes the opacity of AI systems as an inherent lack
of transparency regarding the process by which an AI system formulates a
prediction or reaches an output. Regarding complexity, AI systems often
have many different components and process very large amounts of data. In
addition, advanced AI models frequently have more than a billion
parameters and can be extremely difficult to analyze and understand.
As the number of dimensions increases, the complexity of the AI system
grows exponentially, making it challenging to identify patterns,
relationships, and causal links between characteristics.
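A rough back-of-the-envelope sketch can illustrate the scale involved (the layer sizes below are assumed purely for illustration): in a fully connected network, each pair of adjacent layers contributes inputs × outputs weights plus one bias per output, so parameter counts quickly reach the billions.

```python
def count_parameters(layer_sizes):
    """Weights plus biases of a fully connected network with the given layer widths."""
    total = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix plus one bias per output unit
    return total

small_model = [784, 128, 10]                     # a toy image classifier
large_model = [16384] + [16384] * 60 + [50000]   # hypothetical large model

print(f"Small model: {count_parameters(small_model):,} parameters")
print(f"Large model: {count_parameters(large_model):,} parameters")
# The small model has roughly a hundred thousand parameters and can still be
# inspected weight by weight; the large one already exceeds ten billion, which is
# why tracing how any single output was produced becomes practically impossible.
```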
The complexity and lack of transparency (the opacity of AI) makes it
difficult to identify and prove possible breaches of laws, including legal
provisions that protect fundamental rights.
These characteristics of AI challenge individual autonomy, whenever AI
is integrated into decision-making processes. AI systems are designed to
operate with varying levels of autonomy, meaning that they have some
degree of independence of action from human involvement and some
capability to operate without human intervention. This may impact (i)
humans who may become the subject of AI decision-making (e.g., a human
decision is made solely based on the output produced by an AI system) and
(ii) humans who may not be aware of being the object of an AI-driven
decision.
Based on the legislative framework before the entry into force of the AI
Act, the Commission's impact assessment concluded that the human agency
of individuals impacted by AI systems was not sufficiently protected by
existing regulations. The opacity, complexity, continuous adaptation,
autonomous behavior, and data dependency of AI systems were not fully
covered by existing laws.
Existing legislation such as the GDPR does share concerns about
protecting fundamental rights and freedoms from solely automated
decisions and is complementary to the AI Act.
In particular, the level of human oversight and intervention can determine
whether an AI system falls within the scope of GDPR's automated decision-
making framework (Article 22). Specifically, as will be explored in the
following sections, meaningful human intervention at a critical stage of the
decision-making process may be sufficient to ensure that the decision is no
longer considered solely automated. Conversely, effective human oversight
can operate as a safeguard to ensure that AI-driven decision-making
processes are fair, transparent, and compliant with individual rights,
including data protection rights.
Measures to Ensure Human Agency in the
Context of the AI Act
As already mentioned in the first section when introducing AI practices
with an unacceptable risk, the AI Act prohibits a limited set of particularly
harmful uses of AI that contravene EU values because they violate
fundamental rights.
These prohibitions are foreseen by Article 5 of the Regulation and are
designed to prevent the use of AI in ways that could undermine human
dignity, freedom, equality, democracy, the rule of law and fundamental
rights, as well as human agency.
Each of the prohibited practices outlined in Article 5 of the AI Act and
referred to below has the effect, inter alia, of safeguarding human
autonomy. A detailed examination of each practice is beyond the scope of
this contribution, which instead will focus on elucidating the relationship
between each prohibited practice and the protection of human agency, while
further details and specific references are provided in the Regulation itself.

1. Exploitation of vulnerabilities of persons, manipulation and use of
subliminal techniques: this prohibition ensures that AI systems are not
used to take advantage of human weaknesses or biases, which could lead
to manipulation or coercion. By preventing such practices, the AI Act
protects human agency by ensuring that individuals are not subjected to
undue influence or control.
2. Social scoring for public and private purposes: by prohibiting social
scoring, the AI Act ensures that individuals are not judged or evaluated
based on arbitrary or biased criteria, which could limit their opportunities
or freedoms. Indeed, social scoring systems can significantly violate
human agency by restricting autonomy, reducing freedom, manipulating
behavior, creating psychological pressure, and invading privacy. These
systems have the potential to create a society where individuals are
constantly judged and controlled by algorithms, leading to a loss of
individual freedom and self-determination.
3. Individual predictive policing based solely on profiling people: by
preventing the use of AI for predictive policing that is based solely on
profiling individuals, which could lead to discriminatory practices, this
prohibition promotes human agency and prevents arbitrary or biased
decision-making.
4. Untargeted scraping of Internet or CCTV (closed-circuit television) for
facial images to build up or expand databases: this prohibition prevents
the unconsented collection of facial images, which could lead to mass
surveillance. The prohibition on untargeted scraping of the Internet or
CCTV for facial images safeguards human agency by protecting
individuals’ autonomy and control over their personal data and by
preventing the creation of pervasive surveillance systems that can lead to
self-censorship and undermine individuals’ freedom to express
themselves and make choices.
5. Emotion recognition in the workplace and educational institutions, unless
for medical or safety reasons: this prohibition prevents the use of emotion
recognition technology in ways that could undermine human agency,
such as monitoring employees’ emotions without their consent. By
limiting the use of such technology to medical or safety reasons, the AI
Act protects human agency by ensuring that individuals are not subject to
unwarranted monitoring or control.
6. Biometric categorization of natural persons to deduce or infer their race,
political opinions, trade union membership, religious or philosophical
beliefs or sexual orientation: this prohibition prevents the use of
biometric data to make inferences about individuals’ characteristics,
which could lead to discriminatory practices. This prohibition protects
human agency by preserving autonomy over personal information and
ensuring freedom of expression without surveillance or judgment.
7. Real-time remote biometric identification in publicly accessible spaces
by law enforcement: the prohibition on real-time remote biometric
identification in public spaces is a crucial safeguard for human autonomy.
The use of such systems would create a pervasive sense of surveillance
that would inevitably alter the way people behave; their use must therefore be
limited to specific circumstances.

In summary, the prohibited practices listed in Article 5 of the AI Act are
designed to protect human agency by preventing the use of AI in ways that
could undermine human autonomy, dignity, and rights. By prohibiting these
practices, the AI Act ensures that individuals are not subject to unwarranted
influence, control, or surveillance, and that they are able to make choices
and decisions without being unduly affected by AI systems.

Human Agency and High-Risk AI Systems


According to its risk-based approach, the AI Act places different
requirements on AI systems with different risk potential, with high-risk
systems facing the strongest regulation. The obligations are set out in
Chapter III of the AI Act, which concerns such high-risk AI systems. It
would go beyond the scope of this contribution to elaborate on the
intricacies of the classification of high-risk systems. This section will rather
focus on the rules applying to such systems.
Central to understanding the provisions of Chapter III of the AI Act are
the concepts of AI system provider and deployer introduced by Article 3(3)
and (4) of the AI Act. A “provider” is defined as a natural or legal person,
public authority, agency or other body that develops an AI system or a
general-purpose AI model or that has an AI system or a general-purpose AI
model developed and places it on the market or puts the AI system into
service under its own name or trademark, whether for payment or free of
charge; the “deployer” is a natural or legal person, public authority, agency
or other body using an AI system under its authority except where the AI
system is used in the course of a personal non-professional activity. The AI
Act also designates other categories of regulated parties (importers and
distributors).
Before placing a high-risk AI system on the market or putting it into
service, providers should identify appropriate human oversight measures.
This ensures that natural persons can oversee the functioning of these
systems, monitor their usage, and address any potential impacts throughout
their lifecycle. Measures should be tailored to the specific system and its
intended purpose, taking into consideration the potential risks and impacts
associated with its operation. Providers should conduct a thorough risk
assessment to determine the most effective oversight mechanisms.
To ensure effective human oversight, high-risk AI systems should be
designed with in-built operational constraints that cannot be overridden by
the system itself. This will prevent the system from acting autonomously or
deviating from its intended purpose and will ensure that it remains
responsive to the human operator. Providers should also ensure that natural
persons assigned to human oversight roles have the necessary competence,
training, and authority to carry out their responsibilities effectively.
In order to enable natural persons to make informed decisions about
when and how to intervene in the operation of high-risk AI systems, these
systems should include mechanisms to guide and inform human oversight
personnel. This may include real-time monitoring and alerts, as well as
access to system logs and performance data. Providers should ensure that
these mechanisms are user-friendly and accessible, and that personnel are
adequately trained in their use.
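As an illustration of the kind of mechanism described above, the following sketch (hypothetical names and threshold; it is not a template prescribed by the AI Act) logs every output of a high-risk system and raises an alert when the system's confidence falls below a set threshold, so that oversight personnel can decide whether to intervene.

```python
import json
import time

CONFIDENCE_THRESHOLD = 0.80  # assumed value; in practice set per system and risk level

def record_and_alert(system_id, case_ref, prediction, confidence,
                     log_path="ai_oversight_log.jsonl"):
    """Log one output of the system and flag it for human review if confidence is low."""
    entry = {
        "timestamp": time.time(),
        "system_id": system_id,
        "case_ref": case_ref,            # reference to the case, not the raw personal data
        "prediction": prediction,
        "confidence": confidence,
        "needs_human_review": confidence < CONFIDENCE_THRESHOLD,
    }
    with open(log_path, "a") as log_file:  # append-only log available to oversight staff
        log_file.write(json.dumps(entry) + "\n")
    if entry["needs_human_review"]:
        print(f"ALERT: case {case_ref} flagged for human review "
              f"(confidence {confidence:.2f} below {CONFIDENCE_THRESHOLD}).")
    return entry

# Example call with fictitious values:
record_and_alert("credit-scoring-v2", "application-1042", "reject", 0.64)
```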
Due to the significant consequences of incorrect matches by biometric
identification systems, these systems should be subject to an enhanced
human oversight requirement. This requirement should stipulate that no
action or decision may be taken on the basis of the identification resulting
from the system unless it has been separately verified and confirmed by at
least two natural persons. These verifications should be automatically
recorded in the system's logs to minimize the burden on human oversight
personnel.
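Schematically, and without reproducing the legal text, the ‘two natural persons’ rule could be operationalized along the following lines (all identifiers are invented for illustration): no action is permitted on a biometric match until two distinct human reviewers have confirmed it, and every verification is written to an audit log.

```python
from datetime import datetime, timezone

def act_on_biometric_match(match_id, confirmations, audit_log):
    """confirmations: list of (reviewer_id, decision) pairs collected from the review interface."""
    for reviewer, decision in confirmations:
        audit_log.append({                      # each verification is recorded automatically
            "match_id": match_id,
            "reviewer": reviewer,
            "decision": decision,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })
    approvals = {reviewer for reviewer, decision in confirmations if decision == "confirmed"}
    if len(approvals) >= 2:                     # at least two distinct natural persons
        return "action permitted"
    return "action blocked: separate confirmation by two reviewers is required"

log = []
print(act_on_biometric_match("match-7781", [("officer_a", "confirmed")], log))
print(act_on_biometric_match("match-7781",
                             [("officer_a", "confirmed"), ("officer_b", "confirmed")], log))
```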
Given the unique nature of law enforcement, migration, border control,
and asylum, the enhanced human oversight requirement for biometric
identification systems may not be applicable in certain situations. However,
providers should still ensure that appropriate human oversight measures are
in place for these high-risk AI systems, taking into account the specific
legal frameworks and requirements that govern their operation.
The AI Act pursues human autonomy through several provisions
allowing for human oversight of the high-risk AI systems, in particular
Articles 13, 14, and 26, but also Article 50.
We briefly analyze these provisions below, addressed to providers and to
deployers.

Article 13
Article 13 of the AI Act sets out the requirements for high-risk AI systems,
with a particular focus on ensuring transparency and the provision of
information to deployers. This article requires providers to design and
develop high-risk AI systems in such a way as to ensure that their operation
is sufficiently transparent to enable deployers to interpret a system's output
and use it appropriately. The rationale behind this provision is to address
concerns related to the opacity and complexity of certain AI systems and
help deployers fulfill their obligations under the AI Act, in particular under
Article 26 as detailed later in this chapter, which are designed to ensure that
deployers use high-risk AI systems in a responsible and transparent manner,
and that they take steps to mitigate any potential risks or harms associated
with these systems.
High-risk AI systems should therefore be designed in a manner that
enables deployers to understand how the AI system works, evaluate its
functionality, and comprehend its strengths and limitations. This is achieved
through accompanying instructions which should include the
characteristics, capabilities and limitations of performance of the AI system
at hand. This obligation is designed to enable the deployers to make the
correct choice of the system that they intend to use in light of their own
obligations.
The Article ensures that the system's performance and limitations are
clearly understood by those responsible for its deployment and operation.
By providing this level of transparency, providers can help foster greater
trust and acceptance among stakeholders and the public at large. In practice,
high-risk AI systems must be accompanied by instructions for use in an
appropriate digital or other format. These instructions should include
concise, complete, correct, and clear information that is relevant, accessible,
and comprehensible to deployers, covering a range of topics including the
system's intended purpose, performance metrics, and potential risks
associated with its operation.
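Purely as an illustration (the field names below are assumed and are not prescribed by Article 13), such instructions for use could also be maintained in a structured, machine-readable form alongside the human-readable documentation, making it easier for deployers to compare systems against their own obligations.

```python
instructions_for_use = {
    "system_name": "ExampleRecruitmentScreener",   # hypothetical high-risk system
    "provider": "Example Provider Ltd.",
    "intended_purpose": "Ranking of job applications for review by human recruiters",
    "performance": {"accuracy": 0.87, "evaluation_dataset": "internal 2023 benchmark"},
    "known_limitations": [
        "Lower accuracy for CVs written in languages other than English",
        "Not validated for senior executive positions",
    ],
    "human_oversight_measures": "A recruiter must review every ranking before shortlisting",
    "foreseeable_misuse": ["Fully automated rejection of candidates"],
}

for field, value in instructions_for_use.items():
    print(f"{field}: {value}")
```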
One of the key pieces of information that must be included in the
instructions for use is the level of accuracy, robustness, and cybersecurity of
the high-risk AI system. This should be measured against the standards set
out in Article 15,1 and providers should clearly indicate any known or
foreseeable circumstance that may have an impact on the system's expected
level of accuracy, robustness, and cybersecurity.
Another important aspect of the instructions for use is the provision of
information to enable deployers to interpret the output of the high-risk AI
system and use it appropriately. This may include real-time monitoring and
alerts, as well as access to system logs and performance data. Providers
should also ensure that these mechanisms are user-friendly and accessible,
and that deployers are adequately trained in their use.
Finally, Article 13 requires providers to include information about the
human oversight measures referred to in Article 14, which will be analyzed
in the following section, as well as the technical measures put in place to
facilitate the interpretation of the outputs of the high-risk AI system by the
deployers. This highlights the close link between transparency and human
autonomy, underlines the importance of human oversight in ensuring the safe and
responsible deployment of high-risk AI systems, and emphasizes the need
for providers to engage with stakeholders and conduct thorough risk
assessments to determine the most effective oversight mechanisms for their
systems.

Article 14
To prevent AI systems from undermining human autonomy or causing
harm, human oversight is crucial. This can be achieved through different
governance approaches (High-Level Expert Group on Artificial
Intelligence, 2019, p. 16), including:

Human-in-the-loop, which involves human intervention in every
decision cycle of the system, although this may not always be possible
or desirable.
Human-on-the-loop, which allows for human intervention during the
design cycle and monitoring of the system's operation.
Human-in-command, which enables humans to oversee the overall
activity of the AI system, including its broader economic, societal,
legal, and ethical impact, and make decisions on when and how to use
the system, including the ability to override AI decisions.

This is evident in the case of self-driving cars, where semi-autonomous
vehicles require a human driver to be present to take control if needed,
through a human-in-the-loop approach, whereas in more advanced
autonomous systems the human operator would only intervene if the system
detects a potential problem or anomaly, such as a malfunctioning sensor or
an unexpected obstacle on the road, following the human-on-the-loop
approach. Similarly, in medical diagnosis, a doctor may review and verify
AI-generated diagnoses through a human-on-the-loop approach for routine
cases, but in more critical or complex cases, a doctor may need to conduct
an independent evaluation of the patient, separate from the AI system's
input, as seen in a human-in-command approach. The AI Act provides
flexibility in adopting these different approaches.
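To make the distinction between these governance approaches more concrete, the following minimal Python sketch illustrates how a deployer might route an AI system's proposals through the different oversight modes. All names, thresholds, and the anomaly signal are hypothetical illustrations, not requirements of the AI Act.

```python
from enum import Enum, auto

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # a person confirms every decision cycle
    HUMAN_ON_THE_LOOP = auto()   # a person monitors and intervenes on anomalies
    HUMAN_IN_COMMAND = auto()    # a person decides whether to use the system at all

def route(ai_output, anomaly_score, mode, confirm, system_enabled=True):
    """Return the final decision, escalating to the human reviewer where required.

    `confirm` stands in for the human reviewer; `anomaly_score` is a
    hypothetical 0-1 signal of unexpected behaviour (e.g. a failing sensor).
    """
    if mode is OversightMode.HUMAN_IN_COMMAND and not system_enabled:
        return confirm(None)                  # the human has chosen not to rely on the AI
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return confirm(ai_output)             # every output is reviewed by a person
    if anomaly_score > 0.8:                   # on-the-loop: intervene only on anomalies
        return confirm(ai_output)
    return ai_output

# Example: a routine output passes through; an anomalous one is escalated.
reviewer = lambda proposal: proposal if proposal is not None else "manual decision"
final = route("grant", 0.1, OversightMode.HUMAN_ON_THE_LOOP, reviewer)
```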
Article 14 of the AI Act underscores the significance of human oversight
in high-risk AI systems. This oversight is essential to ensure that AI
systems are designed, developed, and deployed in a manner that respects
and safeguards the rights and well-being of individuals and society. On a
fundamental level, Article 14 is aimed at ensuring that human oversight
considerations are taken into account as early as possible in the life cycle of
AI systems. Therefore, while the obligations primarily address the provider
of the AI system, one central goal is in fact to enable the deployer to
effectively ensure human oversight when using the AI system.
Paragraph 1 mandates that high-risk AI systems be designed and
developed with appropriate human-machine interface tools, enabling
effective oversight by natural persons throughout their period of use. The
primary objective of this human oversight is to prevent or minimize
potential risks that may emerge during the intended use of these systems or
under conditions of reasonably foreseeable misuse.
Paragraphs 2 and 3 emphasize that the oversight measures must be
commensurate with the risks, level of autonomy, and context of use of the high-
risk AI system. These measures can be ensured through either one or both of
the following types of measures:

a. measures identified and built into the high-risk AI system by the provider
before it is placed on the market or put into service, or
b. measures identified by the provider before the high-risk AI system is
placed on the market or put into service and that are appropriate for the
deployer to implement.

Paragraph 4 provides specific guidelines for implementing paragraphs 1,
2, and 3. The high-risk AI system must be provided to the deployer in a way
that enables natural persons with human oversight responsibilities to:

a. comprehend the system's capabilities and limitations and actively monitor
its operation, including identifying and addressing anomalies,
dysfunctions, and unexpected performance,
b. remain aware of the possible tendency to automatically rely or over-rely
on the system's output (automation bias),
particularly when the AI system is used for providing information or
recommendations for decisions to be taken by natural persons,
c. correctly interpret the high-risk AI system's output, considering the
interpretation tools and methods available,
d. decide, in any particular situation, not to use the high-risk AI system or to
disregard, override, or reverse its output, and
e. intervene in the operation of the high-risk AI system or interrupt the
system through a “stop” button or a similar procedure that allows the
system to come to a halt in a safe state.
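Point (e) can be pictured with a small sketch of an interrupt mechanism that lets the overseer halt a running system in a safe state. The class and method names below are hypothetical and only show the general pattern; they do not reflect any wording of the AI Act.

```python
import threading

class SafeStopController:
    """Illustrative 'stop button' for a high-risk AI system."""

    def __init__(self, enter_safe_state):
        self._stop = threading.Event()
        self._enter_safe_state = enter_safe_state   # e.g. bring a vehicle to a standstill

    def press_stop(self):
        """Called by the human overseer to interrupt the system."""
        self._stop.set()

    def run(self, next_cycle):
        """Run the system's main loop, checking the stop flag on every cycle."""
        while not self._stop.is_set():
            next_cycle()
        self._enter_safe_state()                    # halt in a safe state, not mid-action
```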

Last, paragraph 5 pertains to specific high-risk AI systems (listed in point
1(a) of Annex III) and requires that no action or decision be taken by the
deployer based on the AI system's identification unless that identification
has been separately verified and confirmed by at least two natural persons
with the necessary competence, training, and authority. This requirement for
separate verification by at least two natural persons does not apply to high-
risk AI systems used for law enforcement, migration, border control, or
asylum, where Union or national law deems the application of this
requirement to be disproportionate. In these contexts, the application of the
verification requirement may be deemed impractical or unnecessary due to
the specific operational needs and legal frameworks governing these areas.
In any case, the oversight measures for these systems shall be tailored to the
specific risks, autonomy level, and usage context of the high-risk AI
system.

Article 26
Article 26 of the AI Act outlines the obligations of deployers of high-risk
AI systems, with a particular focus on human oversight. It emphasizes the
importance of ensuring that these systems are used in accordance with the
instructions for use provided by the system's provider and that appropriate
safeguards are in place to prevent or minimize potential risks.
Paragraph 1 mandates that deployers of high-risk AI systems take
appropriate technical and organizational measures to ensure they use such
systems in accordance with the accompanying instructions. This includes
following any guideline or recommendation provided by the system's
provider to mitigate potential risks.
Paragraph 2 requires deployers to assign human oversight to natural
persons with the necessary competence, training, and authority, as well as
providing the necessary support. This underscores the importance of having
knowledgeable and well-trained individuals overseeing the operation of
high-risk AI systems to ensure their safe and responsible use.
Paragraph 3 clarifies that the obligations set out in Paragraphs 1 and
2 are without prejudice to other deployer obligations under Union or national
law. Additionally, it acknowledges the deployer's freedom to organize their
own resources and activities in implementing the human oversight measures
indicated by the provider.
Paragraph 4 highlights that, to the extent the deployer exercises control
over the input data, they must ensure that the data is relevant and
sufficiently representative in view of the intended purpose of the high-risk
AI system. This reinforces the need for careful data management and
validation to ensure the accuracy and reliability of the AI system's output.
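As a rough illustration of what checking input data for relevance and representativeness might involve, the sketch below compares group shares in the deployer's input data with a reference distribution. The attribute names, thresholds, and reference shares are invented for the example and carry no legal weight.

```python
from collections import Counter

def representativeness_gap(records, reference_shares, attribute):
    """Compare group shares in the input data with a reference distribution.

    `reference_shares` maps each group to its expected share, e.g.
    {"18-40": 0.6, "65+": 0.4}; negative gaps indicate under-representation.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Example: flag groups under-represented by more than ten percentage points.
records = [{"age_band": "18-40"}] * 80 + [{"age_band": "65+"}] * 20
gaps = representativeness_gap(records, {"18-40": 0.6, "65+": 0.4}, "age_band")
under_represented = [g for g, d in gaps.items() if d < -0.10]   # ["65+"]
```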
Paragraph 5 mandates that deployers monitor the operation of the high-
risk AI system based on the accompanying instructions and inform
providers if they have reason to believe that the system may present a risk.
In such cases, the deployer must suspend the use of the system and notify
the relevant market surveillance authority. If a serious incident occurs, the
deployer must immediately inform the provider, importer, distributor, and
the relevant market surveillance authorities.
Paragraph 6 requires deployers to keep logs automatically generated by
the high-risk AI system for at least six months, unless otherwise specified
by applicable Union or national law. Financial institutions subject to Union
financial services law must maintain the logs as part of their documentation
under the relevant financial service law.
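A minimal sketch of how a deployer might keep and expire the automatically generated logs is shown below. The file format and retention constant are assumptions for illustration; actual retention must follow the period required by the applicable law, which may be longer than six months.

```python
import json
import time
from pathlib import Path

SIX_MONTHS_SECONDS = 6 * 30 * 24 * 3600   # rough approximation of "at least six months"

def append_log(path, event):
    """Append one automatically generated log entry with a timestamp."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps({"ts": time.time(), **event}) + "\n")

def purge_expired(path, retention=SIX_MONTHS_SECONDS):
    """Drop entries older than the retention period, keeping everything newer."""
    cutoff = time.time() - retention
    lines = Path(path).read_text(encoding="utf-8").splitlines()
    kept = [line for line in lines if json.loads(line)["ts"] >= cutoff]
    Path(path).write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")

append_log("ai_system.log", {"decision": "loan_refused", "model_version": "1.2"})
purge_expired("ai_system.log")
```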
Paragraph 7 states that, before using a high-risk AI system in the
workplace, deployers who are employers must inform workers’
representatives and affected workers about the system's use. This
information should be provided in accordance with applicable Union and
national law on the information of workers and their representatives.
Paragraph 8 requires deployers of high-risk AI systems that are public
authorities or Union institutions, bodies, offices, or agencies to register in
the EU database referred to in Article 71. If a system is not registered in this
database, the deployer must not use it and must inform the provider or
distributor.
Paragraph 9 instructs deployers of high-risk AI systems to use the
information provided under Article 13 of the Act to carry out a data
protection impact assessment under either Article 35 of Regulation (EU)
2016/679 or Article 27 of Directive (EU) 2016/680.
Paragraph 10 focuses on the use of high-risk AI systems for post-remote
biometric identification in law enforcement contexts. It requires deployers
to request authorization from a judicial or administrative authority, with the
decision subject to judicial review, for the use of such systems. Each use
must be limited to what is strictly necessary for the investigation of a
specific criminal offense.
Paragraph 11 mandates that deployers of high-risk AI systems inform
natural persons that they are subject to the use of the system, except for
high-risk AI systems used for law enforcement purposes, where Article 13
of Directive (EU) 2016/680 applies (see note 2).
Paragraph 12 states that deployers must cooperate with relevant
competent authorities in any action taken to implement the AI Act.
Overall, Article 26 emphasizes the importance of human oversight and
responsible use of high-risk AI systems. By outlining specific obligations
for deployers, this article aims to ensure that these systems are used safely,
effectively, and in accordance with applicable laws and guidelines.

Article 50
Article 50 of the AI Act requires providers and deployers of AI systems to
ensure transparency and accountability in their operations, particularly in
interactions with humans. This includes informing individuals when they
are interacting with an AI system, marking AI-generated or manipulated
content, and disclosing the operation of emotion recognition and biometric
categorization systems. Therefore, the Article has the effect, inter alia, of
empowering human oversight and agency by providing individuals with
accurate information about the use of AI systems, thereby preventing
manipulation and deception.
Paragraph 1 states that providers of AI systems must ensure that users are
informed when they are interacting with an AI system, unless it is obvious
from the context. This ensures that individuals are aware when they are
engaging with a machine, rather than a human.
Paragraph 2 mandates providers of AI systems that generate synthetic
content (e.g., audio, images, text) to mark the output as artificially
generated or manipulated in a machine-readable format. This has the
objective to prevent the spread of misinformation and ensure that users are
aware when they are consuming AI-generated content.
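The marking obligation can be pictured with the following sketch, which attaches a simple machine-readable provenance record to a piece of generated content. The record format is an ad-hoc illustration; in practice providers would rely on recognised provenance or watermarking standards rather than this hypothetical structure.

```python
import hashlib
import json

def mark_as_ai_generated(content_bytes, generator_name):
    """Build a machine-readable marker stating that the content is AI-generated."""
    return json.dumps({
        "ai_generated": True,                                  # explicit machine-readable flag
        "generator": generator_name,                           # which system produced the content
        "sha256": hashlib.sha256(content_bytes).hexdigest(),   # binds the marker to the content
    })

marker = mark_as_ai_generated(b"<synthetic image bytes>", "example-image-model")
```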
Paragraph 3 focuses on deployers of emotion recognition or biometric
categorization systems, who must inform individuals about the operation of
the system and process their personal data in accordance with relevant
regulations, so as to ensure that individuals are aware of how their data is
being used and that it is being handled in a responsible manner.
Paragraph 4 states that deployers of AI systems that generate or
manipulate image, audio, or video content (e.g., deepfakes) must disclose
that the content has been artificially generated or manipulated. This
provision aims to prevent the spread of misinformation and ensure that
users are aware when they are consuming AI-generated content.
Paragraph 5 specifies that the information required by paragraphs 1–4
must be provided to individuals in a clear and distinguishable manner at the
latest at the time of the first interaction or exposure.
Paragraph 6 further clarifies that the requirements outlined in paragraphs
1–4 do not affect other transparency obligations or requirements set out in
Union or national law, thus implying that the requirements outlined in
Article 50 are additional to, rather than replacing, existing transparency
obligations.
Paragraph 7 mandates the AI Office to encourage the development of
codes of practice to facilitate the effective implementation of the obligations
regarding the detection and labeling of artificially generated or manipulated
content. The Commission may adopt implementing acts to approve these
codes of practice or to specify common rules for implementation.
In conclusion, following the AI Act's approach, the implementation of
effective human oversight measures is essential to ensure the safe and
responsible deployment of high-risk AI systems. By adhering to the
principles outlined in this chapter, providers and deployers can minimize
the risks associated with these systems and foster greater trust and
acceptance among stakeholders and the public at large. Ultimately, the
success of high-risk AI systems will depend on our ability to balance the
benefits of advanced technology with the need to protect individuals and
society from potential harm.

Human Agency and General-Purpose AI Models


Even though human agency might not be explicitly addressed as a primary
objective, the provisions of the AI Act concerning general-purpose AI
models aim at facilitating explainability and interpretability. Consequently,
this enables a certain level of human control and a better understanding of
the reasoning behind AI decisions.
A clear distinction between general-purpose AI models and AI systems is
essential for establishing legal certainty. The definition of general-purpose
AI models given by the AI Act is based on key functional characteristics,
such as their generality and capability to competently perform a wide range
of distinct tasks. These models are typically trained on vast amounts of data
using various techniques, including unsupervised or reinforcement learning.
Although AI models are integral components of AI systems, they are not
considered to constitute complete systems on their own. To become
functional AI systems, they require additional components, such as user
interfaces. To illustrate the difference, we can consider ChatGPT, which is
an AI system (a chatbot), whereas GPT-3.5 is the model powering the
chatbot, a component of the system. Essentially, a model is a part of a
system, and a system is a collection of components, including models,
which work together to achieve a specific goal. The AI Act stipulates that
models with significant generality and the ability to perform a wide range
of distinct tasks are considered general-purpose AI models. This
includes, for example, generative AI models, which can generate content in
various forms, and accommodate a variety of tasks.
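The model/system distinction can be sketched in a few lines of code: the model is a component that produces outputs, while the system wraps it with the interface, logging, and other elements that make it usable. Class names and behaviour below are purely illustrative.

```python
class GeneralPurposeModel:
    """Stand-in for a general-purpose AI model: the trained component."""

    def generate(self, prompt: str) -> str:
        return f"[model output for: {prompt}]"

class ChatSystem:
    """Stand-in for an AI system: the model plus a user interface and logging."""

    def __init__(self, model: GeneralPurposeModel):
        self.model = model
        self.history = []          # conversation state kept by the system, not the model

    def ask(self, user_message: str) -> str:
        reply = self.model.generate(user_message)
        self.history.append((user_message, reply))
        return reply

assistant = ChatSystem(GeneralPurposeModel())
assistant.ask("Summarise Article 13 of the AI Act.")
```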
Providers of general-purpose AI models play a crucial role along the AI
value chain, as the models they provide may form the basis for a range of
downstream systems. Downstream providers therefore need a good
understanding of these models and their capabilities, both to enable the
integration of such models into their own products and to fulfill their
obligations under the AI Act.
To facilitate integration and fulfill obligations, the AI Act mandates the
implementation of proportionate transparency measures, such as
documentation and provision of information. General-purpose AI model
providers should prepare technical documentation and keep it up to date; a
minimal set of elements to be included in such documentation is established
in Annexes XI and XII to the AI Act. The Commission is empowered to
amend these Annexes in response to technological advancements.
According to Article 56 of the AI Act, the AI Office is responsible for
promoting and facilitating the creation of Union-wide codes of practice.
These codes will provide a detailed framework for providers of general-
purpose AI models, outlining the specific requirements and obligations
under the AI Act. By developing and adhering to these codes, providers can
ensure proper compliance with the Regulation's obligations and
demonstrate their commitment to responsible AI development and
deployment. These codes will also aim at providing further details to the
documentation foreseen in Annexes XI and XII of the AI Act.
Now, let us consider how human agency and general-purpose models
might intersect. In detail, Article 53 of the AI Act establishes several
obligations for providers of general-purpose AI models, which are designed
to promote transparency and ensure that these models align with human
values and, ultimately, promote human agency.

Technical Documentation
By requiring providers of general-purpose AI models to maintain up-to-date
technical documentation, the AI Act aims to promote transparency and
enable stakeholders to assess models’ capabilities and limitations.
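As a loose illustration only, such documentation could be kept in a structured, machine-readable form along the lines of the sketch below. The field names are invented examples and do not reproduce the minimum elements actually listed in Annexes XI and XII.

```python
# Hypothetical skeleton of technical documentation for a general-purpose AI model.
technical_documentation = {
    "model_name": "example-gp-model",
    "version": "1.0",
    "intended_tasks": ["text generation", "summarisation"],
    "training_data_summary": "publicly available web text (illustrative)",
    "known_limitations": ["may produce inaccurate statements"],
    "evaluation_results": {"example_benchmark": 0.87},
    "last_updated": "2025-01-01",
}
```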

Information and Documentation for AI System Providers


Article 53 also requires that general-purpose AI model developers share
information and documentation with downstream providers who intend to
integrate the models into their systems. This information should enable AI
system providers to have a good understanding of models’ capabilities and
limitations, as well as help them fulfill their obligations under the AI Act.
At a minimum, the information should contain the elements set out in
Annex XII. By promoting information-sharing and collaboration between
AI model and system providers, these obligations foster transparency and
enable human intervention in the development and deployment of AI
technologies, particularly when it comes to high-risk systems, where the
potential for harm is greater. In fact, by having access to information about
the capabilities and limitations of AI models, downstream providers are
better equipped to identify potential risks and biases, implement measures
to prevent harm, and design systems that maintain human oversight and
control.

Compliance with Union Law on Copyright and Related Rights


General-purpose AI model providers must implement a policy to comply
with Union law on copyright and related rights, particularly regarding the
identification and compliance with reservations of rights. This obligation
helps protect intellectual property rights, ensuring that AI systems are
developed and deployed within the legal boundaries and respecting the
rights of content creators.
By ensuring that AI systems respect the intellectual property rights of
content creators, this obligation preserves the autonomy and control of
human creators over their work. This, in turn, enables humans to maintain
agency over their creative output, making choices about how their work is
used, shared, and transformed.
Article 53 also requires general-purpose AI model providers to make
publicly available a sufficiently detailed summary about the content used
for training their models. This summary should be provided according to a
template provided by the AI Office.

Conclusion
In conclusion, the EU's AI Act marks a significant step in promoting human
agency and oversight in the development and deployment of AI systems.
By prohibiting certain practices, mandating transparency and accountability,
and emphasizing human oversight, the AI Act creates a regulatory
framework that strengthens human agency and encourages responsible AI
development.

Notes
1. Under Article 15, high-risk AI systems shall be designed and
developed in such a way that they achieve an appropriate level of
accuracy, robustness, and cybersecurity, and that they perform
consistently in those respects throughout their lifecycle. This includes
declaring the levels of accuracy and relevant accuracy metrics in the
accompanying instructions of use and taking technical and
organizational measures to ensure resilience against errors, faults, or
inconsistencies. Additionally, measures shall be taken to eliminate or
reduce the risk of biased outputs influencing input for future
operations, and to prevent, detect, respond to, resolve, and control for
attacks trying to manipulate the training data set or pre-trained
components used in training.
2. Article 13 of Directive (EU) 2016/680, also known as the Law
Enforcement Directive, sets out the rules for the processing of personal
data by competent authorities for the purposes of prevention,
investigation, detection, or prosecution of criminal offenses or the
execution of criminal penalties. It requires competent authorities to
ensure data is up-to-date, secure, and processed in a way that respects
data subject rights. The article also highlights the importance of
accountability and transparency and sets out principles of data
protection that must be respected.

References
ePrivacy Directive. (2002). Official Journal L 201, 31/07/2002 P. 0037 –
0047. https://eur-lex.europa.eu/eli/dir/2002/58/oj/eng
EU Charter of Fundamental Rights. (2000). Official Journal of the
European Communities, C 364/1.
https://www.europarl.europa.eu/charter/pdf/text_en.pdf
European Commission. (2020). White Paper on Artificial Intelligence – A
European approach to excellence and trust, COM(2020) 65 final.
European Commission. (2021). Impact assessment of the regulation on
artificial intelligence. https://digital-strategy.ec.europa.eu/en/library/impact-assessment-regulation-artificial-intelligence
European Union. Regulation (EU) 2024/1689 of the European Parliament
and of the Council of June 13, 2024, laying down harmonised rules on
artificial intelligence and amending Regulations (EC) No 300/2008,
(EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU)
2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU)
2016/797 and (EU) 2020/1828, OJ L, 2024/1689, 12.7.2024.
High-Level Expert Group on Artificial Intelligence. (2019). Ethics
guidelines for trustworthy AI. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
27
THE ROLE OF ETHICS IN THE
FUTURE REGULATION OF
ARTIFICIAL INTELLIGENCE
Laura Palazzani

DOI: 10.4324/9781003518495-34

AI Developments and Ethical Questions


The rapid evolution of artificial intelligence (AI) technologies in the ‘new
wave of technology’, especially in recent years, is characterized by
‘disruptive’ progress, due to its complexity, acceleration, breadth of
application, pervasiveness, and innovation. The progress of AI – mainly due
to the increase in computing power, the availability of huge amounts of
data, and the development of algorithms that correlate data – opens up
ethical questions on the boundaries between artificial and human
intelligence: can a machine think like a human and replace them? What are
the limits of the artificial imitation of natural intelligence? Assuming that it
is technologically possible for the machine to become increasingly like
human beings and perhaps even replace them, is that ethically permissible?
What kind of regulation of human/machine boundaries should there be?
Within the ethical discussion characterized by the pluralism of positions,
opposing groups emerge, split between techno-centrism and human-
centrism.

The Answers of Techno-Centrism


Techno-centrism is the vision of those who are in favor of any intervention
to technologize the human body and mind, outlining horizons in which the
artificial becomes ever more similar to the natural and they tend to blend
into each other, to annul the difference between man and machine. This is
the techno-philosophical vision of those who favor a ‘convergent’
development of technologies, which replace and surpass the human, as the
only way to overcome the biological limits of the body and the
neurological-cognitive limits of the mind, toward transhuman, post-human,
or even ‘super-human’ perfection. It is in this context that the concept of the
‘electronic person’ has been elaborated, in the twofold change of man
becoming artificialized and the machine becoming anthropomorphically
humanized, acquiring a moral status.
In philosophical terms this view presupposes a reductionist, functionalist,
and non-cognitivist conception. The reductionist concept starts from the
materialist view of man, whose body is reduced to an object or sum of data
or parts that can be modified and replaced mechanically, and whose mind is
reduced to neural components that can be digitally modified and replaced.
The reductionist view is closely linked to the concept of functionalism,
which holds that the exercise of functions is what counts in order to identify
the person, regardless of their nature which is considered irrelevant: what
counts are the functions expressed or exhibited, since it is immaterial
whether those functions are performed by a man or a machine. It is the
perspective of non-cognitivism which holds that there is no knowledge or
value for appreciating human nature: it is the devaluation of the biological
body, which is considered a ‘weight’, that pushes toward replacing the
biological with the machine, capable of exercising rational and cognitive
functions in an artificial manner without bios.
The technological alteration of the body and mind becomes an
‘evolutionary duty’ that allows a ‘technological shortcut’ to improve
humans and their species: a ‘technological imperative’ whose rejection
can be seen as guilt and negligence toward technological
evolutionism. In techno-prophetic terms, a future horizon would open up
that should lead to the liberation of man and a radical alteration of man's
nature, up to the total artificialization of everything human.
It is the vision of transhumanism that promotes the abandonment of the
biological and the transition to the artificial in order to expand human
capabilities, to realize ‘perfect lives’. The justification for transhumanism is
the desire for supra-human capabilities that goes as far as to erase the
human condition itself since it is perceived as a limitation. The goal is
expressed in the main maxim of transhumanism: ‘it is ethical and desirable
to use technoscientific means to overcome the human condition’. From this
perspective, the enhancement of man is realized in the ‘techno-human
condition’ of the post-human enhanced man. What matters is that the
machine ‘works’ perfectly if possible.
These seem, perhaps, fantasies or models from science fiction. They are,
however, scenarios that are speculatively taking shape (think of the theories
of super-intelligence and singularity) and regarding which it is impossible
to simply take note and describe them in neutral terms. We need some
critical reflection.

The Responses of Human-Centrism


The opposing voice to techno-centrism is not a techno-catastrophic or
techno-phobic view, expressing hate toward or trying to hinder technology.
Instead, it is an ethical vision based on a critical philosophical reflection on
the human being, their meaning and value, which identifies the limits of
technologies. It is not a matter of exalting technology by despising the
human, or of exalting the human by despising technology: the aim of
human-centrism is to allow technological interventions on the human
without distorting their identity, avoiding irreversible transformations.
Starting from a consideration: technology is not a ‘destiny’, but it is we who
build technology and we must/want to build it in a way that is compatible
with the human, and the centrality of humanity.
The human-centric view is expressed in different ways with
considerations ranging from phenomenology to ontology.
Already at the phenomenological level, that is, how the machine
manifests itself in relation to man, it is possible to highlight the specific
characteristics of man and the limits of the machine. However much
progress has been made in the search for a form of AI that reproduces the
complex organization of the brain and imitates the characteristics of
human intelligence, such an achievement still seems well beyond realistically
foreseeable technical developments. Certainly, machines (software and computer
programs) are today capable of performing extremely complex functions
and operations that are comparable – or in some areas, even superior – to
human efforts: we may think in particular of the collection, selection, and
cataloguing/archiving of information as quickly as possible and the ability
to calculate. More complex to achieve and more costly, both financially and
in terms of energy consumption, is robotic mobility.
Yet there are qualitative or essential differences between AI and human
intelligence: there are human dimensions that machines do not possess.

Machines Are Not Self-Aware: Humans Are Self-Aware


Machines are not self-aware, that is, they are not able to refer the sum of the
properties/functions and the series of acts/operations they perform to a unity
or identity that constitutes their synthesis and persists over time as their
characters and functions change. AI does not have an identity over time that
persists notwithstanding the modification of characters and functions. Man
is self-aware and so possesses an identity over time.

Machines Are Automatic: Man Is Autonomous


Machines can be ‘automatic’, that is they can change and adapt their
behavior, even unpredictably, according to the conditions and circumstances
in which they operate. The ‘fact’ that at a technological level, in the context
of human-machine interaction, the possibility of unpredictable adaptation or
change can occur does not mean that the machine is autonomous.
Autonomy in its original ethical meaning can only be attributed to human
beings: autonomy means the capacity of a moral agent to act according to
moral values and the representation of a duty and freedom to act. The term
‘autonomy’ cannot be applied to artifacts, even highly advanced complex
cognitive systems, as it is not true autonomy but rather independence or
unpredictability. However, the terminology of ‘autonomous’ systems is
widely used in the scientific literature and public debate (e.g., autonomous
driving machines) to refer to the highest degree of automation in terms of
operations and decisions.

Machines ‘Calculate’: Man ‘Feels’


AI is unable to reproduce the emotional and affective aspects beyond the
calculation and logic of cognition and decision-making. The so-called
Descartes’ error highlights the complexity of human subjectivity, the
intelligence of which is found in a body, and is thus biologically and
neurologically situated, interacting with the dimension of real, lived
experience of suffering and rejoicing, desiring and fearing, seeing, hearing,
touching, smelling, and tasting. Human intelligence is connected to
emotions (emotional intelligence) and feelings, to empathy, which machines
do not feel.

Machines Operate Syntactically: Humans Interpret Meanings


Computers and software supporting AI can operate on syntactic links
between symbols, while they cannot consider the semantic dimension, that
is, interpreting complex meanings: if syntax is a set of general rules of
composition applicable to broad classes of linguistic elements regardless of
meaning, semantics is not only the network of relations between terms, but
also the connection of meaning, graspable only – in the substantive content
– by the human mind.

Machines ‘Predict’ Behavior: Humans ‘Direct’ Behavior


The machine correlates data and information retrospectively, by means of
algorithms. A human being, by contrast, is also able to ‘generate’ new
information while looking to the future, with intuition, imagination,
invention, innovation, and creativity: a capacity that goes beyond the
statistical prediction of probabilities on the basis of past events or facts
and is able to outline innovative and unpredictable future scenarios.

Machines ‘Act As If They Have a Purpose’: A Human Being Is Free and Can Give Themselves a Purpose
A machine performs intelligent operations to solve tasks assigned by
humans, but it is not capable of self-assigning tasks, of having aspirations to
grow its knowledge: it is the human being who is free, capable of giving
themselves a purpose, of having the motivation to seek truth, to know
reality.

Machines Are Self-Referential: A Human Being Is Relational


The machine correlates collected data, on which it is ‘fed’ and with which it
is ‘trained’, while a human being is essentially a relational being, has the
ability to perceive themselves as an ‘I’ and to be recognized by others as a
‘you’, in interactions and with reciprocity. Human beings have the capacity
to understand relationality as a dimension of their being and of their duty, in
the moral interpersonal relationship with others.
To sum up: machines do and work, a human being exists, is. AI is in the
world of ‘doing’ or ‘functioning’; it is a mechanical, man-made system that
‘simulates’ and ‘replicates’ human intelligence, or more precisely, some
aspects of human intelligence: it performs actions, reasoning, and behavior,
it ‘performs functions’, such as calculation, language, decision-making,
problem-solving, organization. Machines can also reduce cognitive errors
and achieve possible higher performance results in terms of cognition,
especially on a quantitative level (information gathering and correlation;
calculation; cognitive assistance), compared to humans. Human beings
belong to the world of ‘being’ and ‘existing’, the experience of living in the
world ‘resists’ technology.
Phenomenology already shows the ‘de facto’ man/machine difference
and forms the basis of the ontological justification of human centrality in
relation to other living beings and machines. Human specificity, which makes
human beings unique and irreplaceable by machines, allows for the recognition of
their intrinsic dignity regardless of the exercise of functions: a nature that
is more than the sum of its parts or data, and a body accepted not as a weight
but within its given limits. The human being must
remain at the center insofar as they are the only being capable of acting
morally in relation to other beings, of giving themselves a moral norm, of
being motivated to act in view of an end, whereas the machine can at best
be an instrument for the human end, independent or able to act
unpredictably, by means of algorithms.
Insofar as we identify the human specificity and the irreplaceable
dimensions of the human phenomenologically and ontologically, the
reasons for setting limits to the technological possibility of building
thinking machines emerge: a limit to technological domination and the
aspiration to ‘remake’ nature, as technology can also dehumanize. The
human must be preserved, because it ‘counts’ even in the age of machines.
A human being has a dignity in their nature, which is independent of the
functions they perform and which has a value in themselves, which is ‘de
facto’ and must ‘in principle’ remain at the center, against all forms of ‘anti-
humanism’.
Affirming human-centrism today means embracing emerging digital
technologies by taking a critical stance, that is, assessing their opportunities
and limitations. In the knowledge that a human being is not a machine and
is not reducible to it, since the former has a phenomenological and
ontological specificity that cannot be mechanically reproduced, that marks a
‘beyond’ that still resists and ‘must’ resist, and this represents a value. It is
human beings, who are building the technologies, who must define whether
it makes sense to preserve the human, whether the human has value, and
what the specificity of the human is.
We must remember that technology is not an inevitable ‘fate’. AI is
designed and produced by humans (data are collected and selected by
humans, algorithms are built, developed, and designed by humans): it is
humans who must decide how to produce AI. We know today what is
human and what we want to keep of what is human from a human-centered
perspective in the construction and design of technology. The ‘machine’ can
be an ally of humans and the integration of the natural and artificial can
become complementary, always with the priority of respecting the human,
safeguarding what ‘counts’ of the human, within the framework of
fundamental human rights.

The Role of Ethics as a Critical Reflection to Inspire Regulation and Policies
The ethics of technology has always been characterized by pluralism or
diversity and even opposing ethical theories. In the context of the ethical
debate on AI, few adhere to the techno-centric approach, based on the
‘technological imperative’, open to every kind of development and use of
AI and hoping for a post-humanistic or transhumanistic future in which
machines replace humans. Instead, there is a convergence, or at least a
minimum common sharable ethic, around the human-centric approach, based on
the principle of precaution and prudence. This approach fears the threat of
certain developments and applications ‘beyond’ humans (which lead to
dehumanization and dis-humanization) and tries to find, through an
interdisciplinary and dialectic debate, the ‘ethical requirements’ that
guarantee a development respectful of fundamental human values
and rights, such as human dignity, freedom, responsibility, justice as
fairness, equality, and non-discrimination.
The horizon of fundamental human rights remains a conceptual
framework, which needs an explicit specification and interpretation, in light
of emerging issues stemming from scientific and technological
development. In this direction there are many documents, opinions, and
declarations issued by Committees which focus on ethics applied to
technologies at international level (the International Bioethics Committee
and the World Commission on the Ethics of Scientific Knowledge and
Technology at UNESCO), at European level (the European Group on Ethics
in Science and New Technologies at the European Commission, the
Steering Committee for Human Rights in the fields of Biomedicine and
Health of the Council of Europe), and at national level (such as the
Bioethics Committees in different countries).
The role of these Committees consists in the delicate task of ethical
intermediation between science/technology, on the one hand, and
governments, on the other hand. The aim of the Committees’ discussions
(both interdisciplinary and pluralistic) is to elaborate Opinions that
contribute to providing the conceptual instruments to those in government
and society in a broad sense to facilitate understanding of the often
complex, dynamic, changeable issues and their importance and urgency,
outlining them through updated state of the art description and discussion,
anticipating possible scenarios and outlining/advising possible lines of
action in social policies to be undertaken at public level. In this context,
ethics plays the role of critical reflection for an understanding and
evaluation of AI, which justifies the requirements for regulation in order to
try to open up innovative technological opportunities ‘for’ human beings
and humankind, avoiding, or at least managing, risks.
The central question is therefore how AI mediates the relationship
between humans and the world. Ethics tries to actively shape this
mediation: we can and should discuss ethics at the stage of AI's
development (the so-called anticipatory ethics) rather than complain
afterward about the problems it has caused.
The main documents are, at the European level: The High-Level Expert
Group on AI, in Ethics Guidelines for Trustworthy AI, 2019; European
Group on Ethics in Science and New Technologies Statement on Artificial
Intelligence, Robotics and ‘Autonomous Systems’, 2018 and Future of Work,
Future of Society, 2018; European Commission, White Paper on Artificial
Intelligence – A European Approach to Excellence and Trust (February 19,
2020); European Commission, Communication on Artificial Intelligence for
Europe, 2018; European Commission, Proposal for a Regulation of the
European Parliament and of the Council laying down Harmonised Rules on
Artificial Intelligence (Artificial Intelligence Act) and Amending Certain
Union Legislative Acts, 2021; Council of Europe, Committee of Experts on
Human Rights Dimensions of Automated Data Processing and Different
Forms of Artificial Intelligence. At an international level: OECD,
Recommendation of the Council on Artificial Intelligence, OECD Legal
Instruments, 2019; UNESCO, Recommendation on the Ethics of
Artificial Intelligence, November 24, 2021.
Main Emerging Ethical Requirements for the
Regulation of AI
There are several important ethical requirements to be taken into
consideration for the future regulation of AI in a human-centric framework.
Meaningful human control: humans need to keep control and oversight of
what they design, program, and deploy; machines should provide ‘support’
to human decision-making, and cognitively ‘assist’ human decisions, but
not ‘substitute’ or ‘replace’ them. Machines should not ‘compete with’, but
‘complete’, human actions. Retaining human oversight is essential also in
order to avoid the possible problem of ‘technological delegation’. An expert
system that becomes optimal in suggesting ‘decisions’ to humans poses the
risk of decreasing human attention, with the possible consequence of
reducing human skills (de-skilling), reducing responsibility (de-
responsibilization), moving toward the technologization of choices that can
impoverish and even cancel concrete interpersonal relationships
(dehumanization). The risk is that of ‘autonomization’ beyond human
control.
Reliability: safety is an ethical requirement for every machine/technology
just as it is for pharmaceuticals, food, transportation etc. The same also
holds true for AI. Accurate checks should be made of both data (the
accuracy of collection or the ‘quality of data’) and algorithms (the
validation of software), in order to guarantee the beneficial (and not
maleficent) introduction of various forms of AI (Swedish National Council
on Medical Ethics, 2020). All AI ‘products’ should be compared, through
studies conducted according to the rules of controlled trials (comparing
results with and without the use of AI). Controlled studies remain the gold
standard for the demonstration of the safety and efficacy of technologies.
Transparency and explainability: automation can lead to ‘opacity’ or an
absence/lack of transparency on the paths followed by the machine. The
machine does not provide complete information on the correlations of data
and/or on the logic adopted to reach a conclusion or propose a decision. In
this sense it is not possible for users to track or trace back the processes of
the decision proposed by AI. In some circumstances, even software
programmers and IT technicians are unable to explain how the system has
achieved certain results (the ‘black box problem’, where only the input and
output are known, but not what happens in between). Consequently, it is
necessary to develop technology that is able to explain – as far as possible –
every step of the decision or at least to inform users of the risk of opacity, in
order for them to acquire critical awareness.
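One very simple, model-agnostic way to give users a minimal foothold of explanation is to measure how much prediction quality drops when each input feature is scrambled. The sketch below is a crude illustration of that idea (a form of permutation importance), not a full answer to the black box problem; all names and data are invented.

```python
import random

def permutation_importance(predict, rows, labels, feature_names):
    """Estimate how much accuracy drops when each feature is shuffled."""
    def accuracy(data):
        return sum(predict(r) == y for r, y in zip(data, labels)) / len(labels)

    baseline = accuracy(rows)
    importances = {}
    for i, name in enumerate(feature_names):
        column = [r[i] for r in rows]
        random.shuffle(column)                        # destroy this feature's information
        shuffled = [r[:i] + (v,) + r[i + 1:] for r, v in zip(rows, column)]
        importances[name] = baseline - accuracy(shuffled)
    return importances

# Toy example: the 'predictor' only looks at the first feature.
rows = [(0, 5), (1, 3), (0, 9), (1, 1)]
labels = [0, 1, 0, 1]
scores = permutation_importance(lambda r: r[0], rows, labels, ["feature_a", "feature_b"])
# feature_b scores exactly zero; feature_a usually scores positive, since it drives the predictor.
```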
Equality, fairness, and non-discrimination: data collection, classification,
profiling, and clustering through algorithms may be discriminatory. The
identification of groups or subgroups may be based on exclusion criteria
(e.g., age, gender, ethnicity, personal ability), intentionally or
unintentionally: that is, data may be collected only from individuals of a
certain age, gender, ethnic origin, or personal capabilities, and the system
does not recognize individuals who are under-represented or not
represented at all. AI could worsen already existing inequalities, through
possible biases in the design of the processing systems. This problem can be
avoided by adopting a broad and inclusive approach (known as the
participatory approach), which could be representative of all individuals
(design-for-all), regardless of any differences.
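A first, very partial check against this kind of problem is to compute performance separately for each subgroup and look at the gap, as in the sketch below. The groups and numbers are invented for illustration, and such a check is no substitute for a participatory, design-for-all approach.

```python
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Compute accuracy separately for each subgroup to surface disparities."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Toy example with two groups: a large gap would warrant further investigation.
rates = per_group_accuracy(
    predictions=[1, 0, 1, 1, 0, 0],
    labels=[1, 0, 0, 1, 1, 0],
    groups=["a", "a", "b", "b", "b", "a"],
)
disparity = max(rates.values()) - min(rates.values())
```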
Data sharing: the protection of privacy and confidentiality is often
underlined as an obstacle to the development of AI, which is based on big
data. In an age of ubiquitous, pervasive and mass collection of data through
digital communication technologies, the right to protection of personal
information and the right to respect of privacy are critically challenged. AI
needs to use data across a broad field, on a global level (with the transfer of
data to other countries) and storage of data over time. Pseudonymization (or
codification) is accepted as an intermediate solution between
anonymization and full identification, as it allows traceability, with
identification in cases where it is important to communicate the results.
However, in order to prevent improper disclosures, appropriate measures
are needed, particularly where data may be used by health insurance companies
or in the workplace. The possibility/opportunity of ‘sharing’ data arises, as a
‘social/common good’, for the advancement of scientific knowledge.
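Pseudonymization of this kind is often implemented with a keyed hash, as in the minimal sketch below: the direct identifier is replaced by a pseudonym that only the holder of the key can re-link to the person. The key handling shown is deliberately simplified and for illustration only.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash; the key holder can re-link it,
    unlike full anonymization."""
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

secret_key = b"held-only-by-the-data-controller"   # illustrative; store securely in practice
record = {
    "subject": pseudonymize("Jane Doe, born 1980-05-02", secret_key),
    "diagnosis_code": "E11",                        # the useful research content stays intact
}
```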
Shared responsibility: the issue of liability is one of the most delicate and
complex problems that arises with the use and development of new AI
systems. In the moral and juridical field, it is necessary to clarify whether
accountability for certain decisions made through an intelligent system
should be attributed to the software designer, the manufacturer, the vendor,
the owner, or the user. The fact that there are various subjects involved
gives rise to ‘shared responsibility’.
Public engagement and education: it is also desirable to promote public
debate on the developments and limits of AI so that all individuals can
acquire the basics of ‘AI literacy’, promoting active participation in social
discussion, avoiding the ‘digital divide’ and marginalization, stigmatization,
and exclusion of people without the competence and motivation to use
technologies, within the framework of inclusion. Education should be
revisited, introducing humanities and ethics courses for engineers, computer
scientists and data scientists, with a focus on ethics by design/in design
(Italian Committee for Bioethics, Italian Committee for Biosafety,
Biotechnology and Science of Life, 2020), to ensure ethical awareness from
the very beginning of the technological design, understanding the
responsibilities involved in the development, deployment, and use of AI.
Education should also include programs of lifelong learning aimed at
people already in the workforce, to avoid the polarization of skills and
ensure re- and up-skilling to promote digital capabilities and ethics
awareness to address the development of new AI technologies.
Sustainability: the sustainability of AI technologies should be ensured,
both on a social and environmental level. It is necessary to take into account
the importance of access for all human beings, considering the costs, and at
the same time respect for the environment, including all living beings.
There should be special attention paid to the social and environmental
impact of AI.

Toward Regulation
In this perspective, ethics plays the role of a critical reflection for the
understanding and evaluation of AI, which justifies the requirements for
regulation by seeking, on the one hand, to open up innovative technological
opportunities ‘for’ man and mankind and, on the other hand, to avoid, or at
least control and manage, risks.
The regulation of new emerging technologies, characterized by speed of
development, uncertainty, and unpredictability, is based on certain criteria:
anticipation, proactive imagination, and identification of the potential
scenario, possible or probable negative characteristics and outcomes of new
technologies. This new type of governance is oriented toward soft
instruments, which more easily allow changes, adaptations and revisions,
and do not require the creation of a formally comprehensive and well-timed
regulatory framework. The inefficiencies of the law, which constantly lags
behind on techno-scientific innovation, leave room for a pluralistic ethical
approach, in the integrated application of ‘ethics of/in/for design’.
This is the horizon that is emerging for innovative governance of
technologies in the science-ethics-society triangle. Regulation must start
from a realistic representation, based on reliable empirical data and the
hypothetical anticipation of the possible scenarios that may arise, weighing
pros and cons and evaluating the alternative options on a scientific, ethical,
and social level, and weighing decisions within a transparent, wise, and
prudential approach, in the knowledge that the greater the exposure to the
uncertain gravity and irreversibility of risk or damage to man and humanity,
the greater the individual and social responsibility must be. Regulation will
be called upon to calibrate, in the specificity
of different technologies, which instruments are needed to defend man and
fundamental rights.
New possible rights are being revealed in the discussion: the right to
access emerging technologies, which are proven to be safe and effective;
the right not to use/not to be indirectly ‘forced’ to use emerging
technologies due to social pressure on individuals, who would otherwise
remain marginalized; the right to be informed and educated in the use of
emerging technologies; the right to acquire a critical awareness of emerging
technologies in order to acquire the conceptual tools for autonomous
evaluation (including also the rejection of technologies); the right to
participate in the governance of emerging technologies, to be actively
involved by institutions in policy-making and the regulation of new
technologies.

The Ethical Recommendations for Policy


The ethics debate is the foundational framework in order to define shared
actionable policies around AI on an operational level. Ethics provides the
main guidelines for the design and the development of ‘trustworthy’ AI
(Council of Europe, Steering Committee for Human Rights in the fields of
Biomedicine and Health [CDBIO], 2021) with the aim of guiding policy to
the realization of the common good (as opposed to the imposition of digital
power). However, differences between values and priorities in the various
countries may lead to significant differences in framework implementation.
The main ethical recommendations for policies on AI are to:

develop a methodology for the Ethical Impact Assessment of AI
technologies based on rigorous scientific research, ethical discussion
grounded in international human rights law (CoE, 2020), for its
implementation in all stages of the AI system life cycle, and capacity-
building materials to support countries’ efforts to train government
officials, policy-makers and other relevant AI actors; introduce
frameworks for transparent and publicly available ethical impact
assessment on human rights, to identify benefits and risks of AI systems
(with measurable standards), in order to plan appropriate risk
prevention, mitigation, and monitoring measures. Such assessments
should be multidisciplinary, multi-stakeholder, multicultural, pluralistic,
and inclusive (the EU focuses on the risks of surveillance, with social
scoring and facial recognition; the manipulation of behavior and
opinions as risks to be avoided).
ensure AI governance mechanisms with ethical oversight authorities,
establishing, on an institutional level, independent public bodies
responsible for oversight and providing a mechanism of continuous
monitoring (e.g., UNESCO proposes a network of independent AI
Ethics Officers to give support to this process at national, regional, and
international level); ensure possible mechanisms for monitoring and
evaluation, such as an ethics commission, an AI ethics observatory, to
evaluate their adherence to policy recommendations on ethical
requirements (UNESCO, 2021).
develop data governance strategies that ensure the continual evaluation
of the quality of training data for AI systems, including appropriate
safeguards to protect the right to privacy and promote open data,
reviewing policies, and regulatory frameworks (Nuffield Council on
Bioethics, 2018), introducing mechanisms, such as open repositories for
publicly funded or publicly held data and data trusts, to support the safe,
fair, legal, and ethical sharing of data; promote public and private efforts
to create collaborative platforms to share quality data in trusted and
secure data spaces.
promote international collaboration (as opposed to power competition)
on AI research and responsible innovation, engaging international
organizations and research institutions, as well as transnational
corporations, which can be a basis for the ethical use of AI systems by
public and private entities, creating incentives for the public and private
sectors to invest in this area (OECD, 2019); ensure that AI researchers
are trained in research ethics and require them to include ethical
considerations in their designs, products, and publications (human
oversight, safety and security, transparency and explainability,
inclusiveness, sustainability).
improve access to information and education, promote AI literacy to
strengthen not only technological skills but also ethical critical thinking
and the competencies needed to understand the use and implication of
AI systems, in order to empower people and reduce digital divide and
digital access inequalities; ensure broad participation, paying special
attention to particularly vulnerable people or people in vulnerable
situations.

References
Council of Europe, Parliamentary Assembly, Committee on Social Affairs,
Health and Sustainable Development. (2020, September 22). Artificial
intelligence in health care: Medical, legal and ethical challenges
ahead. Parliamentary Assembly. Retrieved April 15, 2024, from
https://rm.coe.int/recommandation-2185-2020-artificial-intelligence-in-health-care-medica/1680a2dcfa
Council of Europe, Steering Committee for Human Rights in the fields of
Biomedicine and Health (CDBIO). (2021). Report on the impact of
artificial intelligence on the doctor-patient relationship. Council of
Europe. Retrieved April 15, 2024, from https://www.coe.int
European Union Agency for Fundamental Rights. (2020). Getting the future
right: Artificial intelligence and fundamental rights. Publications
Office of the European Union. Retrieved April 15, 2024, from
https://data.europa.eu/doi/10.2811/58563
Italian Committee for Bioethics, together with the Italian Committee for
Biosafety, Biotechnology, and Sciences of Life. (2020, May 29).
Artificial intelligence and medicine: Some ethical aspects. Italian
Committee for Bioethics. Retrieved April 15, 2024, from
http://bioetica.governo.it/en
Nuffield Council on Bioethics. (2018, May 15). Artificial intelligence in
healthcare and research. Nuffield Council on Bioethics.
https://www.nuffieldbioethics.org/publications/ai-in-healthcare-and-research
OECD. (2019, May 22). Recommendation of the Council on Artificial
Intelligence, OECD legal instruments. OECD. Retrieved April 15,
2024, from https://legalinstruments.oecd.org
Swedish National Council on Medical Ethics. (2020, May 28). Artificial
intelligence in healthcare. Swedish National Council on Medical
Ethics. https://smer.se/en
UNESCO. (2021, November 24). Recommendation on the ethics of
artificial intelligence. UNESCO. Retrieved April 15, 2024, from
https://unesdoc.unesco.org
28
ETHICAL DILEMMAS AND
REGULATORY LANDSCAPE IN
HEALTHCARE ARTIFICIAL
INTELLIGENCE
Yasaman Yousefi, Daniel Karpati, and Christoph Schommer

DOI: 10.4324/9781003518495-35

Introduction: Opportunities and Challenges of Healthcare AI
Healthcare systems worldwide are grappling with a multitude of pressing
challenges that threaten their efficiency and effectiveness. Among these are
the rapid increase in the ageing population (Cristea et al., 2020), a critical
shortage of healthcare personnel (World Health Organization, 2022),
pervasive inefficiencies within healthcare delivery systems (Behr &
Theune, 2017), and systemic inequities in access to care (Baeten et al.,
2018; Mackenbach, 2006). Each of these issues compounds the others,
creating a complex web of obstacles that impedes the provision of optimal
healthcare.
This is where artificial intelligence (AI) comes into play, a technology
that is at a critical juncture and holds great promise to transform the
efficiency, cost, and delivery of healthcare services globally.
The COVID-19 pandemic has underscored the potential of AI in
healthcare. AI algorithms have been instrumental in quickly diagnosing
COVID-19 by synthesizing chest CT images and laboratory data (Mei et al.,
2020). Furthermore, AI-driven screening processes have accelerated the
development of antiviral drugs and predicted potential future viruses,
showcasing AI's flexibility and quick responsiveness in crisis situations (Ke
et al., 2020). During the pandemic, AI's utility extended to public spaces
such as airports and schools, where thermal scanners equipped with body
and facial recognition technologies identified individuals with high
temperatures. Additionally, AI-generated “health rapid response codes”
considered individuals’ health status, travel history, and contact patterns to
categorize populations (Sharara & Radia, 2022). Such systems were pivotal
in formulating targeted anti-epidemic measures and significantly curtailed
the spread of the virus.
Integrating genomic and health data can further enhance AI's precision in
managing public health crises. Personalized risk assessments based on
genetic predispositions and pre-existing conditions can refine AI's
predictive capabilities, enabling more effective interventions and resource
allocation during pandemics. This holistic approach not only improves the
efficiency of monitoring systems but also aligns with the broader goal of
individualized medicine (Johnson et al., 2021).
On a broader scale, AI facilitates personalized medicine by analyzing
extensive patient data to tailor treatments to individual needs. Algorithms
can predict patients’ responses to different treatments, allowing for
customized therapy plans that maximize efficacy and minimize side effects.
The use of AI with genomics enables a deeper understanding of genetic
variations and their impacts on diseases, fostering more personalized
healthcare strategies (Collins & Varmus, 2015).
Operational efficiencies are also enhanced through AI applications.
Systems like IBM Watson streamline administrative tasks such as
scheduling, billing, and patient data management. This not only reduces
costs but also allows medical staff to devote more time to direct patient care
(Fichman et al., 2011).
In diagnostics, AI has demonstrated remarkable capabilities, particularly
through deep-learning algorithms in image-based diagnosis. Google
DeepMind's algorithm for diagnosing eye diseases, for example, has
matched the accuracy of top specialists in identifying conditions such as
diabetic retinopathy and age-related macular degeneration (De Fauw et al.,
2018). These advancements enhance diagnostic accuracy and expedite the
diagnostic process, allowing for quicker intervention and treatment.
However, AI-driven diagnoses are not always entirely accurate and error-
free. The quality of training data and their representativeness play crucial
roles in the performance of AI models (Kilkenny & Robinson, 2018). If the
training data are biased or lack diversity, the AI system may exhibit
inaccuracies and fail to generalize well across different population groups,
leading to disparities in healthcare outcomes (Gianfrancesco et al., 2018;
Parikh et al., 2019).
The implementation of AI in healthcare is fraught with complexities,
including potential algorithmic challenges and issues of generalization. The
intricacy of medical conditions and the variability in their manifestation
across different patients further compound these challenges. Kelly et al.
(2019) underscore the need for robust clinical evaluation, comprehensible
performance metrics, effective regulatory implementation, and vigilant
post-market surveillance. They caution about the risks of algorithmic bias
and unfairness and advocate for efforts to identify and mitigate these issues.
Enhancing the generalizability and interpretability of machine-learning
predictions is crucial. Continuous monitoring, validation, and updating of
AI models with new and diverse data are recommended to bolster the
robustness of AI diagnostics.
Acknowledging that the integration of AI in healthcare brings
transformative benefits, such as minimizing errors and alleviating the
administrative burden on healthcare providers, it should also be noted that it
introduces several societal, legal, and ethical challenges. One of the most
pressing concerns is data privacy, security, and safety. AI systems require
access to extensive personal and medical data, making them potential
targets for cyberattacks. Robust discussions and measures are essential to
ensure the protection of this sensitive information (Mennella et al., 2024).
Moreover, the ethical implications of AI in healthcare are significant and
multifaceted. Questions arise regarding the autonomy of patients and
medical professionals when AI-driven decisions are involved (Laitinen &
Sahlgren, 2021). Concerns also exist about accountability and informed
consent (Luxton, 2014). These systems, while beneficial, must be designed
to operate transparently and with explainability, ensuring that patients and
practitioners understand how and why decisions are made (Shin, 2020).
Additionally, there is a risk of AI perpetuating existing biases found in the
training data. Studies have highlighted instances where AI systems
exhibited racial bias in treatment recommendations (Obermeyer et al.,
2019), underscoring the necessity for diverse datasets and algorithms that
can detect and correct biases to ensure fairness and accessibility.
These challenges are not merely technical but also ethical and legal,
relating to broader concerns about patient privacy, data protection, fairness,
accessibility, and liability. In response to these challenges, there is a
growing discourse around AI ethics, which aims to guide the development
and implementation of AI technologies in a manner that upholds ethical
standards. This movement reflects a collective endeavor to harness the
benefits of AI in medicine while safeguarding against its potential harms,
ensuring that AI serves as a tool for enhancing healthcare equity and quality
without compromising ethical values.
This work will explore, in its first section, the ethical landscape of
healthcare AI by raising critical and provocative questions through different
hypothetical scenarios. In the same section, different levels of abstraction to
study such ethical dilemmas, namely the individual, systemic, and inter-
systemic levels, will be explored.
In the second section, the transition from bioethical principles to the
ethics of AI, and the further transition of those ethical principles to the
existing and forthcoming regulatory frameworks in the European Union
(EU) will be studied. Through critical thinking, this section explores the
complex dynamics and interconnections of the Artificial Intelligence Act (AI Act) (European Union, 2024, Regulation (EU) 2024/1689) and the Medical Device
Regulation (MDR). By showcasing the legal complexities in the world of
healthcare AI, the section offers a legal exercise of double compliance
through a hypothetical case. The analysis reveals a significant regulatory
gap in the EU, where the absence of precise guidelines on navigating AI-
related legislation and the rapid evolution of AI creates ongoing governance
challenges. This underscores the need for the EU to refine its digital and AI
regulatory frameworks to maintain global competitiveness and effectively
harness AI's potential in healthcare.

Navigating the Ethical Landscape of Healthcare AI
We have said so far that the integration of AI into healthcare systems offers
transformative possibilities, ranging from improved diagnostic accuracy to
personalized treatment protocols. However, we also explored that, alongside
these advancements, AI integration introduces a spectrum of risks and
ethical considerations.
Both benefits and risks emerge in various layers and affect multiple
stakeholders. As a result, the integration of AI technologies into healthcare
systems should be analyzed from various levels to provide an optimal and
harmonious environment for such integration.
In this approach, there are three levels of abstraction (LoAs) through
which these benefits and risks can be explored and compared (Floridi,
2008):

i. issues related to the individuals participating in the healthcare system. We call this the individual level;
ii. issues related to the healthcare providers and the healthcare system
itself. We call this the systemic level;
iii. issues related to the healthcare system's relation to its broader
environment, be that on the national, the supranational, or the
international level. These dilemmas occur on the inter-systemic level.
Below, we explore the possible ethical dilemmas under each LoA
through hypothetical scenarios.
A main concern at the individual level regards data privacy. Personal
medical data are considered sensitive, and the need to protect patient
privacy is widely recognized. Individuals have a vested interest in ensuring
their medical history and related data remain confidential, as sharing such
data could put them in a vulnerable position (Abouelmehdi et al., 2018).
When handling anonymized or pseudonymized medical data to build computational models, we enter more complex scenarios in which the focus moves beyond data privacy at the individual level.
Consider a hypothetical AI system designed to recognize cancer. From
the individual's perspective, one issue is whom to trust more: a human
doctor or an AI system (LaRosa & Danks, 2018). Several questions arise in
this regard: Is it possible to recognize human bias, competence, and work
ethics? Can we detect if the medical professionals are overworked,
overconfident, or superficial? Do we trust the educational system to train
competent doctors? Ultimately, if healthcare professionals disagree with the
prediction of an AI model trained on a much larger dataset, which opinion
is to be trusted more?
Other issues may emerge in relation to trust. First, for patients to trust AI
systems’ predictions, these systems should be explainable and provide clear
reasoning for their diagnoses. Further, the systems should be robust and aim
for high accuracy, and they should be trained on fair and inclusive datasets
in order to provide good performance for every individual.
If we only considered the individual level, it would not be so difficult, at
least in theory, to define such rules for this hypothetical AI model, but in
practice this can be a complicated task. Most of the high-level AI “white
papers” broadly articulate these wishes as demands to have a safe and
beneficial AI, without providing clear guidelines.
From the perspective of healthcare providers, additional concerns arise,
particularly at the systemic level where decisions are both statistical and
ethical. Measuring the success rate of physicians, in order to establish a benchmark for AI in diagnosis or prognosis, is challenging but not
impossible. For instance, reliable data on false-negative and false-positive
diagnoses in cancer diagnostics are available in many regions, though they
can vary by type of cancer, demographic, and other factors. Cancer
registries and health information systems provide significant data that can
be used to track performance metrics, but gaps still exist.
Often, we must rely on estimates and assumptions due to incomplete
data. Without precise metrics on human performance, setting an appropriate
benchmark for AI systems is difficult (Babushkina, 2023). This issue is
significant because the impact of diagnoses extends beyond individual
accuracy rates to broader community implications, such as public health
outcomes and healthcare equity.
Establishing benchmarks for AI requires robust, scientifically accurate
data to ensure these systems can effectively complement or enhance human
performance. Comprehensive studies comparing human and AI diagnostic
accuracy do exist (Fonseca et al., 2024; Oakden-Rayner et al., 2022), but
the variability in data quality and availability poses challenges. Thus, while
substantial progress has been made, there is still a need for ongoing
improvement in data collection and reporting to fully leverage AI in
healthcare.
At the systemic level, ethical dilemmas are abundant, and justifying one
set of preferences over another is challenging. Consider a scenario where a
hypothetical AI model demonstrates higher accuracy for white individuals
than for black individuals due to the under-representation of the latter in the training dataset. Note that real-life examples of AI systems showing racial or
gender bias exist, and such biases usually mirror the systemic inequities of
society (Buolamwini & Gebru, 2018; Cirillo et al., 2020; Parikh et al.,
2019). In this case, however, the scenario at hand is hypothetical.
The state, as the primary healthcare provider (as is typically the case in EU member states), encounters a
dilemma: Should the implementation of this AI model be delayed until it
can provide equally accurate results for both white and black individuals?
The answer to this question depends on whether the state involved
prioritizes utilitarian principles over principles of individual rights.
Using this AI tool is problematic because it is racially biased, being less
accurate for black individuals. Nevertheless, the technology is affordable
and increases access to diagnostics. If integrated into universal healthcare, it
could reduce waiting times and costs. The state can potentially deploy the
AI system to leverage its economic and healthcare benefits, ensuring
transparency about the system's limitations and potential biases. By openly
communicating these issues and actively pursuing mitigation strategies,
such as prioritizing screenings for historically underserved groups and
continuously improving the AI model's accuracy across all demographics,
the government can maximize the benefits of the technology while
addressing ethical and fairness concerns.
Statistically, economically disadvantaged black individuals are disproportionately affected by cancer and less likely to receive preventive screenings through traditional
methods (Sambamoorthi & McAlpine, 2003; Ward et al., 2004).
If a hypothetical government decides to deploy the AI model despite its
bias, this results in improved statistics and more timely diagnosis for black
individuals. The AI serves as a pre-selection tool, enabling a broader pool
of patients to be screened, with those marked “positive” proceeding with
traditional diagnostic methods.
While many lives can be saved, the prevalence of false negatives among
black individuals creates a subgroup with near-zero chances of receiving a
diagnosis. These individuals are incorrectly labeled as healthy and, as a result, could even end up at the bottom of the waiting list, a particularly troubling outcome that compounds systemic discrimination.
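To make this subgroup effect concrete, such a disparity is exactly what a per-group error audit would surface. The following minimal Python sketch is purely illustrative (the group labels, patient records, and figures are hypothetical, not drawn from any real system): it computes the false-negative rate of a screening model separately for each demographic group, the quantity that reveals the subgroup described above even when overall accuracy looks good.

# Hypothetical illustration: auditing false-negative rates per demographic group.
# Each record pairs a group label with (true_label, predicted_label) for one patient,
# where 1 means "has the condition" and 0 means "does not".
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rate(rows):
    # Share of truly positive cases that the model labels as negative.
    positives = [(y, y_hat) for (y, y_hat) in rows if y == 1]
    if not positives:
        return None
    misses = sum(1 for (_, y_hat) in positives if y_hat == 0)
    return misses / len(positives)

for group in sorted({g for (g, _, _) in records}):
    rows = [(y, y_hat) for (g, y, y_hat) in records if g == group]
    print(f"{group}: false-negative rate = {false_negative_rate(rows):.2f}")
# An aggregate accuracy figure can look acceptable even when one group's
# false-negative rate is much higher, which is the troubling subgroup discussed above.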
From a utilitarian perspective, deploying the AI model offers significant
economic benefits despite its ethical issues. The primary economic
advantage lies in the model's ability to reduce healthcare costs by
streamlining diagnostics, thereby reducing the need for expensive human
labor in preliminary screenings. This cost-saving can be redirected to other
critical areas within the healthcare system, potentially improving overall
healthcare quality and accessibility. The trade-off involves balancing these
economic benefits against the need for accuracy, fairness, explainability,
and transparency.
The AI's higher accuracy for white individuals over black individuals
poses a fairness issue, as it may perpetuate existing healthcare disparities.
Additionally, the lack of explainability and transparency in the AI's
decision-making process, often referred to as the black box problem
(Castelvecchi, 2016), complicates trust and accountability.
Examining this hypothetical example in its broader context, namely the
inter-systemic level, adds further complexity. Up to this point, our
discussion has revolved around the assumption of an anonymized dataset
serving as the foundational basis for training AI algorithms. While we have
explored potential challenges posed by a hypothetical diagnostic AI tool,
we have yet to delve into the entity responsible for its development and the
management of the underlying dataset.
Let us consider a scenario where a private company, designated as
Company A, has collected, anonymized, and stored a dataset containing
sensitive medical information pertaining to a specific population. This
dataset holds the potential to train models to identify a particular type of
cancer. Consequently, questions regarding data ownership and governance
come to the fore.
On the one hand, there are compelling benefits to sharing such datasets.
Should Company A opt to sell its dataset to Company B, which possesses
its own dataset, the integration of these datasets could lead to improved AI
models. This amalgamation has the potential to enhance diagnostic
accuracy and reveal significant correlations. As companies merge and
datasets accumulate, the field of diagnostics and treatment may witness
substantial advancements, potentially ushering in a revolution in healthcare
(Raghupathi & Raghupathi, 2014). The prevention of such accumulation
could impede innovation and postpone the discovery of cures for complex
diseases such as dementia and Parkinson's.
On the other hand, centralizing medical data poses risks that could render
the population vulnerable (Jha, 2023). The potential misuse of such data to
create bioweapons underscores the complexity of predicting the scope and
nature of potentially harmful medical data (Egan & Rosenbach, 2023).
Furthermore, data localization issues introduce additional complications, as
medical datasets are frequently delocalized, complicating regulatory
oversight. Legal restrictions on data accumulation may prove insufficient in
preventing malicious use, particularly given the lower standards for
developing harmful AI applications compared to beneficial ones.
Moreover, data centralization fosters dependency. For instance, a large
international conglomerate that accumulates vast medical datasets could
wield significant influence in healthcare advancement, potentially shifting
power dynamics away from nation-states towards a select few dominant
companies. This prompts the question of whether it is feasible to regulate
data centralization without inadvertently consolidating power. While the
state, by virtue of its privileged position in healthcare data collection, could
theoretically manage datasets more effectively, political realities and
concerns regarding privacy and trust may impede such initiatives.
Additionally, the challenge of global cooperation arises. Addressing both
the benefits and risks of medical data centralization may necessitate
international alliances or deeper cooperation within existing frameworks,
such as the EU.
The deployment of AI technologies in healthcare brings forth significant
ethical and societal challenges at individual, systemic, and inter-systemic
LoAs. Regardless of whether a regulatory framework favors corporate-led
AI innovation or a public institution-controlled approach, it is imperative to
avoid dependence without adequate oversight. Implementing checks and
balances, distinguishing between data concentration and power
concentration, and proactively addressing the risks associated with
malevolent AI use are essential steps in navigating this intricate landscape.
The following section demonstrates the transition of ethical principles to
legal norms. It will explore, through critical thinking, the complex dynamics and interconnections of the AI Act and the MDR. By showcasing
the legal complexities in the world of healthcare AI, the section offers a
legal exercise of double compliance through a hypothetical case.

Navigating the Regulatory Challenges of Healthcare AI in the EU: AI Act and MDR Complexities
In the EU, ethical guidelines emphasize the development of AI systems free
from bias, ensuring fair treatment for all individuals regardless of race,
gender, or socioeconomic status. The European Commission's ethical
guidelines for trustworthy AI highlight the importance of mitigating biases
in AI training datasets and algorithms to prevent discriminatory outcomes
(European Commission, 2019).
These guidelines also address the protection of patient data privacy and
confidentiality, which are paramount in healthcare. The EU's General Data
Protection Regulation (GDPR) sets a benchmark for personal data privacy,
including strict rules on data consent and anonymization before AI
processing. These regulations ensure that AI systems respect and uphold
individuals’ privacy rights (European Parliament and Council of the
European Union, 2016).
Transparency and accountability are crucial aspects of ethical AI in
healthcare. AI systems should be transparent in their operations and
decision-making processes, with mechanisms in place to hold developers
and users accountable. The IEEE's Ethically Aligned Design principles
stress the importance of transparency to facilitate scrutiny and ethical
assessment of AI technologies (IEEE, 2019).
Floridi and Cowls (2019) have developed an ethical framework for AI
based on the core bioethical principles of beneficence, non-maleficence,
autonomy, and justice, as outlined by Beauchamp and Childress (2012).
Floridi (2013) argues that bioethics closely parallels digital ethics in
addressing the ecological impacts of new types of agents, patients, and
environments.
To this foundation, Floridi and Cowls add the principle of explicability,
which includes both intelligibility and accountability. Intelligibility ensures
that AI systems are understandable to non-experts, such as patients and
business customers, as well as experts like product designers and engineers.
Accountability ensures that those who design, deploy, and manage AI
systems can be held responsible for their actions and decisions (Floridi &
Cowls, 2019).
The High-Level Expert Group on Artificial Intelligence (HLEG),
established by the European Commission, has outlined seven key principles
for ethical AI: Human Agency and Oversight; Technical Robustness and
Safety; Privacy and Data Governance; Transparency; Diversity, Non-discrimination and Fairness; Societal and Environmental Well-being; and
Accountability. These principles aim to guide the development and
deployment of AI systems in a manner that is trustworthy and beneficial for
society (High-Level Expert Group on Artificial Intelligence, 2019).
Building on these principles, the EU sets its regulatory foundation for
governing AI systems, ensuring that they are developed and deployed
ethically and responsibly.
First and foremost, there is the landmark regulation on laying down
harmonized AI rules (the AI Act). Focusing on high-risk applications,
including those in healthcare, the Act classifies AI systems according to the
risk they pose to safety and fundamental rights, imposing stricter
requirements on high-risk applications to ensure they are transparent,
traceable, and guarantee a high level of data protection.
As of now, alongside the AI Act, the regulatory landscape for medical AI tools in the EU is governed by the MDR (Regulation (EU) 2017/745) and the In Vitro Diagnostic Medical Devices Regulation (IVDR, Regulation (EU) 2017/746). These regulations
mandate rigorous pre-market controls, enhanced clinical investigation
requirements, strengthened surveillance throughout the device's lifecycle,
and increased transparency through the creation of a European database for
medical devices. However, they fall short in addressing specific challenges
related to AI, such as the continuous learning capabilities of AI models and
the detection of algorithmic biases. The adaptive nature of AI, which
evolves as it processes more data, necessitates innovative approaches to
continuously monitor and manage the associated risks.
The risks associated with AI can be analyzed and classified based on the
potential severity and frequency of the harm they might induce. In the
healthcare sector, the spectrum of AI risks varies considerably. Some risks
are infrequent and manageable, causing only limited harm to patients and
healthcare systems, such as an AI tool that inaccurately delineates the
boundaries of the heart in a cardiac image, requiring manual correction by a
cardiologist. However, other risks are more severe, potentially leading to
irreversible damage, such as an AI tool failing to diagnose a life-threatening
condition, which could have dire consequences on patient health and
clinical outcomes.
To effectively mitigate these risks while maximizing the benefits of AI in
healthcare, it is crucial to identify, analyze, understand, and monitor
potential risks on a case-by-case basis for each new AI algorithm and
application, from the design to deployment phases. A systematic risk
assessment procedure should be established, classifying identified risks into
categories that reflect different levels and types of risk. For example, in the
design of the AI algorithm, the provenance, quality, representativeness, and
reliability of training data should be assessed to identify any possible issue.
Additionally, the potential for algorithmic bias, data security vulnerabilities,
and ethical concerns related to patient privacy should be evaluated.
For each risk category, appropriate tests and regulatory measures should
be specified. Higher risk classes, such as those involving direct patient
diagnosis or treatment recommendations, would require more stringent
testing and regulation, including extensive clinical trials and continuous
post-deployment surveillance. Lower risk categories, such as administrative
support functions, might necessitate less intensive mitigation measures,
such as routine audits and compliance checks. Implementing a suitable risk
classification system will enable manufacturers, healthcare providers, and
regulators to intervene appropriately to safeguard patient safety and rights
without unnecessarily stifling innovation. For instance, a high-risk AI
system used in surgical decision-making would undergo rigorous validation
and monitoring, whereas an AI tool designed for scheduling patient
appointments might only require standard quality assurance procedures.
This balanced approach ensures that patient care remains safe and effective
while fostering the development and implementation of innovative AI
technologies in healthcare.
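Purely as an illustration of what such a case-by-case classification could look like in practice, and not as a prescription drawn from the MDR or the AI Act, the short Python sketch below uses hypothetical tiers and measures of the kind described above: a mapping from risk category to the mitigation measures attached to it, plus a toy rule for assigning an application to a tier.

# Hypothetical sketch of a risk-tiering scheme for healthcare AI applications.
# The tiers, measures, and decision rule are illustrative, not taken from any regulation.
RISK_TIERS = {
    "high": ["extensive clinical trials", "continuous post-deployment surveillance"],
    "medium": ["clinical validation study", "periodic performance review"],
    "low": ["routine audits", "compliance checks"],
}

def classify_application(affects_diagnosis_or_treatment: bool, patient_facing: bool) -> str:
    # Toy rule: anything driving diagnosis or treatment is high risk,
    # other patient-facing uses are medium, administrative support is low.
    if affects_diagnosis_or_treatment:
        return "high"
    if patient_facing:
        return "medium"
    return "low"

tier = classify_application(affects_diagnosis_or_treatment=False, patient_facing=False)
print(tier, "->", RISK_TIERS[tier])  # a scheduling tool, for example, lands in the low tier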
The AI Act does not specifically tailor its provisions to AI in healthcare
but indicates that AI-driven medical devices are likely to be classified as
high-risk due to significant safety and privacy concerns. This classification
means that future medical AI tools will need to comply not only with the
existing requirements of the MDR, but also with additional stipulations
outlined in Chapter II of the AI regulation. These include the use of high-
quality and representative data, comprehensive technical documentation
and traceability, transparency requirements, human oversight, a quality
management system, and thorough conformity assessments.
However, the classification of all medical AI tools as high risk may not
be entirely appropriate. For instance, numerous AI applications in radiology
aim to expedite tasks like the contouring of organs and lesions on medical
images – a process crucial for subsequent quantification and diagnosis, such
as outlining the cardiac ventricles or lung tumors. These AI-powered tools,
while integral to clinical practice and enhancing efficiency, might not
inherently demand the same level of transparency as other AI applications.
Clinicians are typically able to visually verify and correct the outputs of
such tools, thereby mitigating the risks associated with their use.
Consequently, there is a valid argument for a more nuanced classification
system within the regulatory framework that distinguishes between low-
and high-risk AI applications in healthcare. This would support ongoing
innovation and investment in the sector without unnecessarily stifling
technological advancement with stringent regulations.
As manufacturers develop AI-based medical devices, they encounter an
important question: What regulatory obligations must their products meet to
leverage AI effectively? And how do these regulations interact with one
another?
This query becomes increasingly complex when considering the
intersecting regulatory frameworks, notably the MDR and the IVDR,
alongside the AI Act. It has been pointed out that the AI Act and the MDR may overlap and, in some cases, be inconsistent, creating the need for double compliance.
In order to clarify how these regulations may interact, let us consider a
hypothetical example of a medical device. A well-known use of AI in
cancer diagnosis is image recognition technology. Let us consider an AI-
enabled Diagnostic Imaging System, with the purpose to analyze and
interpret medical images of patients. This device incorporates AI algorithms
that assist radiologists in identifying anomalies, lesions, or abnormalities in
the images.
Article 3 of the AI Act defines “AI system” as

a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that,
for explicit or implicit objectives, infers, from the input it receives, how
to generate outputs such as predictions, content, recommendations, or
decisions that can influence physical or virtual environments.

In our case, the system falls under said definition.


Under the AI Act, Article 6.1, the AI-enabled Diagnostic Imaging
System shall be considered to be high risk where both of the following
conditions are fulfilled:

(a) (…) the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;
(b) (…) the AI system itself as a product, is required to undergo a third-
party conformity assessment, with a view to the placing on the market or
the putting into service of that product pursuant to the Union
harmonisation legislation listed in Annex I.

Annex I offers a list of the Union harmonization legislation, among which is the MDR.
The AI-enabled Diagnostic Imaging System can be defined as a medical
device under Article 2 of the MDR, as it is intended to be used for
diagnosing or monitoring. This means that if the second condition, regarding a third-party conformity assessment outlined in Article 6.1(b) of the AI Act, is also fulfilled, the system is considered high risk under the AI Act.
To understand whether or not the system needs to go through a
conformity assessment, we look at the MDR again. Article 52 of the MDR
sets the conformity assessment procedures, for which the risk classification
of the system has to first be determined. Article 51 classifies devices into classes I, IIa, IIb, and III, taking into account their intended purpose and inherent risks, in accordance with Annex VIII. According
to Rule 10 of Annex VIII, the AI-enabled Diagnostic Imaging System is
Class IIa as a device “intended for direct diagnosis or monitoring of vital
physiological processes”, and if it is “used for diagnosis in clinical
situations where the patient is in immediate danger”, it is classified as Class
IIb. Let us consider the first case as a hypothetically valid classification.
Under Article 52.4 of the MDR, “Manufacturers of class IIa devices (…),
shall be subject to a conformity assessment as specified in Chapters I and
III of Annex IX, and including an assessment of the technical
documentation as specified in Section 4 of that Annex (…)”.
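The chain of conditions just walked through can be condensed, purely as a reading aid, into the short sketch below. It is a hypothetical illustration of the reasoning and not a compliance tool: the boolean flags simply mirror Article 6.1 of the AI Act and Articles 51 and 52 of the MDR as applied to our imaginary device, and the conformity-assessment test is deliberately simplified.

# Hypothetical sketch of the high-risk determination for the imaging system discussed above.
# Each flag mirrors a condition from the text; the values reflect our imaginary device.
is_medical_device_under_mdr = True   # Article 2 MDR: intended for diagnosis or monitoring
mdr_class = "IIa"                    # Annex VIII, Rule 10, taking the first case as assumed
# Simplification: classes IIa and above are taken here to require a third-party conformity assessment.
requires_third_party_assessment = mdr_class in {"IIa", "IIb", "III"}   # cf. Article 52 MDR

covered_by_annex_i_legislation = is_medical_device_under_mdr           # the MDR is listed in Annex I
high_risk_under_ai_act = covered_by_annex_i_legislation and requires_third_party_assessment  # Article 6.1

print("High risk under the AI Act:", high_risk_under_ai_act)  # True for this hypothetical device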
This legal exercise may seem overcomplicated, and intentionally so. The
EU's regulatory attempts to govern AI involve a great deal of cross-referencing, as just witnessed in our hypothetical case. There are many conditions to evaluate, addressed in a way that is neither straightforward nor clear, leading to convoluted legal reasoning and uncertainty.
In a potential attempt to overcome these issues, let us go back to the AI Act: according to Article 8.2, providers must make sure their product follows the rules in both the MDR and the AI Act:

In ensuring the compliance of high-risk AI (…) with the requirements set out in this Section, and in order to ensure consistency, avoid duplication,
and minimise additional burdens, providers shall have a choice of
integrating, as appropriate, the necessary testing and reporting processes,
information and documentation they provide with regard to their product
into documentation and procedures that already exist and are required
under the Union harmonisation legislation listed in Section A of Annex I.

However, it should be noted that, on the one hand, the conformity assessment under the MDR in itself is complicated, and, on the other hand,
the fundamental rights-oriented focus of the AI Act is not reflected in the
MDR, as the latter is a more technical regulation.
For instance, Annex IV of the AI Act regarding technical documentation
referred to in Article 11(1) requires providers to include detailed information about the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights, and discrimination. The MDR, meanwhile, merely sets requirements for the technical functions of devices in its conformity assessment, without, for instance, considering discrimination from a fundamental rights perspective.
This leads us back to the LoAs discussed in the previous section, namely the individual, systemic, and inter-systemic levels.
Going back to our example of an AI-enabled image diagnosis system, let
us imagine a scenario where the system shows suboptimal performance in
its diagnosis. Consider, for example, that the system discriminates against a group of people sharing certain characteristics, due to the uneven representation of society in the data: the system may have a lower accuracy for transgender women because of their lack of representation in medical data.
Discrimination is problematic at the individual level, as it threatens a fundamental right and individuals' trust in the healthcare system. In this case, which regulation prevails in protecting the individual? And what redress mechanisms do the regulations offer?
Of course, if the system's provider has done a conformity assessment
under the MDR, and their system is technically robust and accurate, there is
no further investigation needed; even though the system is discriminatory, it is treated as complying with the AI Act automatically. This means that compliance with the harmonization legislation will trigger a presumption of compliance with the AI Act. However, if the system proved to be discriminatory, there would no longer be compliance. An interesting question to ask here is what tools, if any, individuals have to actually contest compliance.
Or imagine that the system is accurate 98% of the time, but for a certain individual it gives the wrong diagnosis, for instance predicting that the individual does not have cancer while they do, not because of any discrimination but simply because it is a false negative. From a technical
perspective, the system is functioning very well under the MDR, but from
an ethics-oriented and AI Act approach, the conformity assessment is not
complete, even though the AI Act deems it to be.
At the systemic level too, the coexistence of the AI Act and the MDR
may not be sustainable. Of course, we acknowledge that each of these
regulations is designed to govern different things and, in some cases, the
same things from a different level of abstraction. However, there is a clear
lack of guidance on how to approach compliance in certain cases such as
the example mentioned. The excessive burdens of compliance with the
regulations can be discouraging. There is a risk that the EU falls behind the pace of innovation, as, together with existing privacy
requirements, manufacturers of AI systems must comply with a broad
regulatory environment.
Finally, on the inter-systemic level, concerning the healthcare system's relation to its broader environment, be that on the national, supranational, or international level, the dilemma lies in trade and relationships. If EU-based
manufacturers are discouraged due to over-regulation and legal uncertainty,
AI systems will not be manufactured within the EU. In that case, who will
be the providers of AI systems and what would compliance look like then?

Conclusion
We learned about the vast potential of AI to revolutionize healthcare,
offering significant improvements in diagnostics, personalized medicine,
and operational efficiency. We also pointed out that the integration of AI
into healthcare systems presents complex challenges that must be addressed
to harness its benefits fully. Ethical concerns, such as data privacy,
algorithmic bias, and the need for explainability, transparency, and
accountability, are critical. From reading the different existing legal frameworks, we realized that one of the primary challenges in governing AI
used in healthcare in the EU revolves around the absence of precise
legislation. The dynamic nature and swift evolution of AI present ongoing
challenges in terms of regulatory frameworks. In other words, regulations
usually lag behind technological innovation due to the slow progress of law.
Current regulations often prove insufficient in adequately addressing the
complexities inherent in AI applications within healthcare. This lack of
clear and detailed legislation fosters uncertainty, potentially impeding
innovation and deterring investments due to concerns regarding compliance
and liabilities. To address these challenges, the EU is actively working to
regulate AI, including its application in healthcare. However, this approach
has been criticized for hampering innovation and for failing to keep pace with developments at the global level.
Another significant challenge in AI governance is the reliance of AI
applications on extensive datasets to optimize their performance. In this
context, the EU places a strong emphasis on the protection of personal data
through the GDPR. However, effectively balancing the utilization of AI's
potential with ensuring robust data protection continues to be a persistent
challenge.
As AI technology evolves, so must regulatory frameworks. These
mechanisms need to be flexible and adaptive to address new ethical
challenges that arise with technological advancements. Moving forward, it
is crucial to refine these regulatory frameworks to avoid unnecessary
burdens on AI developers and healthcare providers. Ensuring that the
regulations are not overly restrictive will help maintain the EU's
competitiveness in the global AI landscape. By striking the right balance
between innovation and regulation, AI can be a powerful tool to enhance
healthcare delivery, improve patient outcomes, and address some of the
most pressing challenges facing healthcare systems worldwide.
Finally, it is worth mentioning that the European Digital Strategy, which
aims to make the EU a global leader in digital innovation, has not yet been
fully realized. The slow progress in harmonizing AI regulations and
addressing the specific needs of healthcare AI indicates that there is still
much work to be done. The strategy's goals of fostering a robust digital
infrastructure and creating a competitive digital economy are hampered by
the current regulatory bottlenecks. Regulatory approaches in the US and China are far less strict and take a more utilitarian view. Therefore, it is
imperative for the EU to accelerate its efforts in refining its digital strategy
and AI regulatory frameworks to fulfil its ambition of being at the forefront
of digital transformation and innovation.

References
Abouelmehdi, K., Beni-Hessane, A., & Khaloufi, H. (2018). Big healthcare
data: Preserving security and privacy. Journal of Big Data, 5(1), 1–18.
Babushkina, D. (2023). Are we justified attributing a mistake in diagnosis
to an AI diagnostic system? AI and Ethics, 3(2), 567–584.
Baeten, R., Spasova, S., Vanhercke, B., & Coster, S. (2018). Inequalities in
access to healthcare. A study of national policies, European social
policy network (ESPN). European Commission.
Beauchamp, T. L., & Childress, J. F. (2012). Principles of biomedical ethics
(7th ed.). Oxford University Press.
Behr, A., & Theune, K. (2017). Health system efficiency: A fragmented
picture based on OECD data. PharmacoEconomics-Open, 1, 203–221.
Buolamwini, J., & Gebru, T. (2018, January). Gender shades: Intersectional
accuracy disparities in commercial gender classification. In
Conference on fairness, accountability and transparency (pp. 77–91).
PMLR.
Castelvecchi, D. (2016). Can we open the black box of AI? Nature News,
538(7623), 20.
Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino,
S., Gigante, A., Valencia, A., Rementeria, M. J., Santuccione Chadha,
A., & Mavridis, N. (2020). Sex and gender differences and biases in
artificial intelligence for biomedicine and healthcare. NPJ Digital
Medicine, 3(1), 1–11.
Collins, F. S., & Varmus, H. (2015). A new initiative on precision medicine.
New England Journal of Medicine, 372(9), 793–795.
Cristea, M., Noja, G. G., Stefea, P., & Sala, A. L. (2020). The impact of
population aging and public health support on EU labor markets.
International Journal of Environmental Research and Public Health,
17(4), 1439. https://doi.org/10.3390/ijerph17041439
De Fauw, J., Ledsam, J. R., Romera-Paredes, B., Nikolov, S., Tomasev, N.,
Blackwell, S., … & Ronneberger, O. (2018). Clinically applicable
deep learning for diagnosis and referral in retinal disease. Nature
Medicine, 24(9), 1342–1350.
Egan, J., & Rosenbach, E. (2023). Biosecurity in the age of AI: What's the
risk? Belfer Center for Science and International Affairs.
https://www.belfercenter.org/publication/biosecurity-age-ai-whats-risk#_edn1
European Commission. (2019). Ethics guidelines for trustworthy AI.
https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html
European Union. (2024, July 12). Regulation (EU) 2024/1689 of the
European Parliament and of the Council of 13 June 2024 laying down
harmonised rules on artificial intelligence and amending Regulations
(EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU)
2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives
2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial
Intelligence Act) (Text with EEA relevance). Official Journal of the
European Union, 168, pp. 1–60.
European Parliament and Council of the European Union. (2016).
Regulation (EU) 2016/679 of the European parliament and of the
council of 27 April 2016 on the protection of natural persons with
regard to the processing of personal data and on the free movement of
such data (General data protection regulation). Official Journal of the
European Union, 119, 1–88.
European Parliament and Council of the European Union. (2017a).
Regulation (EU) 2017/745 of the European parliament and of the
council of 5 April 2017 on medical devices. Official Journal of the
European Union, L, 117, 1–175.
European Parliament and Council of the European Union. (2017b).
Regulation (EU) 2017/746 of the European parliament and of the
council of 5 April 2017 on in vitro diagnostic medical devices. Official
Journal of the European Union, 117, 176–332.
Fichman, R. G., Kohli, R., & Krishnan, R. (2011). Editorial overview—The
role of information systems in healthcare: Current research and future
trends. Information Systems Research, 22(3), 419–428.
Floridi, L. (2008). The method of levels of abstraction. Minds and
Machines, 18, 303–329.
Floridi, L. (2013). The ethics of information. Oxford University Press.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for
AI in society. Harvard Data Science Review.
https://doi.org/10.1162/99608f92.8cd550d1
Fonseca, Â., Ferreira, A., Ribeiro, L., Moreira, S., & Duque, C. (2024).
Embracing the future—is artificial intelligence already better? A
comparative study of artificial intelligence performance in diagnostic
accuracy and decision-making. European Journal of Neurology, 31(4), e16195.
Gianfrancesco, M. A., Tamang, S., Yazdany, J., & Schmajuk, G. (2018).
Potential biases in machine learning algorithms using electronic health
record data. JAMA Internal Medicine, 178(11), 1544–1547.
https://doi.org/10.1001/jamainternmed.2018.3763
High-Level Expert Group on Artificial Intelligence. (2019). Ethics
guidelines for trustworthy AI. European Commission.
https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419
IEEE. (2019). Ethically aligned design: A vision for prioritizing human
well-being with autonomous and intelligent systems (1st ed.). IEEE.
https://ieeexplore.ieee.org/document/9398613
Jha, A. (2023). Technological advances and evolution of biowarfare: A
threat to public health and security. KnE Social Sciences, 401–416.
Johnson, K. B., Wei, W-Q., Weeraratne, D., Frisse, M. E., Misulis, K.,
Rhee, K., Zhao, J., & Snowdon, J. L. (2021). Precision medicine, AI,
and the future of personalized health care. Clinical and Translational
Science, 14(1), 86–93.
Ke, Y. Y., Peng, T. T., Yeh, T. K., Huang, W. Z., Chang, S. E., Wu, S. H., …
& Chen, C. T. (2020). Artificial intelligence approach fighting
COVID-19 with repurposing drugs. Biomedical Journal, 43, 355–362.
Kelly, C. J., Karthikesalingam, A., Suleyman, M., Corrado, G., & King, D.
(2019). Key challenges for delivering clinical impact with artificial
intelligence. BMC Medicine, 17, 1–9.
Kilkenny, M. F., & Robinson, K. M. (2018). Data quality: "Garbage in–garbage out". Health Information Management Journal, 47(3), 103–105.
Laitinen, A., & Sahlgren, O. (2021). AI systems and respect for human
autonomy. Frontiers in Artificial Intelligence, 4, 151.
LaRosa, E., & Danks, D. (2018). Impacts on trust of healthcare AI. In
Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and
society (pp. 210–215). ACM.
Luxton, D. D. (2014). Artificial intelligence in behavioral and mental
health care. Elsevier.
Mackenbach, J. P. (2006). Health inequalities: Europe in profile. COI for
the Department of Health.
Mei, X., Lee, H. C., Diao, K. Y., Huang, M., Lin, B., Liu, C., … & Yang, Y.
(2020). Artificial intelligence–enabled rapid diagnosis of patients with
COVID-19. Nature Medicine, 26(8), 1224–1228.
Mennella, C., Maniscalco, U., De Pietro, G., & Esposito, M. (2024). Ethical
and regulatory challenges of AI technologies in healthcare: A
narrative review. Heliyon.
Oakden-Rayner, L., Gale, W., Bonham, T. A., Lungren, M. P., Carneiro, G.,
Bradley, A. P., & Palmer, L. J. (2022). Validation and algorithmic audit
of a deep learning system for the detection of proximal femoral
fractures in patients in the emergency department: A diagnostic
accuracy study. The Lancet Digital Health, 4(5), e351–e358.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019).
Dissecting racial bias in an algorithm used to manage the health of
populations. Science, 366(6464), 447–453.
Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing bias in
artificial intelligence in health care. JAMA, 322(24), 2377–2378.
Raghupathi, W., & Raghupathi, V. (2014). Big data analytics in healthcare:
Promise and potential. Health Information Science and Systems, 2(1),
1–10. https://doi.org/10.1186/2047-2501-2-3
Sambamoorthi, U., & McAlpine, D. D. (2003). Racial, ethnic,
socioeconomic, and access disparities in the use of preventive services
among women. Preventive Medicine, 37(5), 475–484.
Sharara, S., & Radia, S. (2022). Quick response (QR) codes for patient
information delivery: A digital innovation during the coronavirus
pandemic. Journal of Orthodontics, 49(1), 89–97.
Shin, D. (2020). User perceptions of algorithmic decisions in the
personalized AI system: Perceptual evaluation of fairness,
accountability, transparency, and explainability. Journal of
Broadcasting & Electronic Media, 64(4), 541–565.
Topol, E. J. (2014). Individualized medicine from prewomb to tomb. Cell,
157(1), 241–253.
Ward, E., Jemal, A., Cokkinides, V., Singh, G. K., Cardinez, C., Ghafoor,
A., & Thun, M. (2004). Cancer disparities by race/ethnicity and
socioeconomic status. CA: A Cancer Journal for Clinicians, 54(2), 78–
93.
World Health Organization. (2022). Health and care workforce in Europe:
Time to act. World Health Organization. Regional Office for Europe.
29
ARTIFICIAL INTELLIGENCE
ETHICS
Challenges for a Computer Science Ethics
Board with a Focus on Autonomy
Gabriele Lenzini

DOI: 10.4324/9781003518495-36

Warming Up on AI and Ethics Reviews


First of all, let us agree on what artificial intelligence (AI) is. It is a set of
technologies, mainly algorithms and the systems that run them, which, by
processing a large quantity of data, are capable of giving a structure to the
data and, in layman's terms, learning something about and from the data.
What exactly it learns depends on the training. For instance, an AI system
can learn how to classify data in categories; also, it can learn to generate
content and to respond to a question or a request by putting together pieces
that, it has learned, are likely to fit well together in the context of the
question or request.
The effect is astonishing. Fed with images of dogs and trained to
recognize different breeds, an AI system can tell if a new image is a
“Spinone Italiano,” a “Bracco,” or not a dog at all. Having learned what
pixels likely go together in images of “Italian pointers” and in those of
“Italian bars,” it can create a believable image of a Spinone sipping a cup of
coffee. If, instead of images, one trains an AI system with words and
sentences about Italian recipes taken from cookbooks and fora where
people chat about Italian cuisine, the system can learn all about “Primi
piatti,” “Crostini Toscani,” or “Dolci e paste Napoletane.” It can generate
an innovative recipe “Pasta alle Aringhe Nuove” – which it classifies as a
“primo piatto” and not as a “dolce” – by putting together what is expected
to be found in a recipe with such ingredients, Italian style.1
There is no “intelligence” in the sense of comprehension of what one is
doing, awareness of a situation, or awakening. An AI system's answers are composed of words, pixels, and other entities, each following the previous one with a high probability according to a probability distribution learned during training. Trained and used correctly, AI systems can return answers that
have merit. However, they can also “hallucinate,” that is, create
combinations that have no relation to the reality of the data, even if the
output still looks plausible and convincing.
That said, the word “intelligence” is misleading. It is used because the
architecture of an AI system resembles that of a brain organized in layers,
where knowledge is distributed across interconnected nodes. Like any
human brain, an AI system may or may not work well. Ironically, in certain
circumstances, “artificial ignorance, lying and bullshitting” – which are
concepts that have been given precise definitions in logic (Sakama et al.,
2010) – could be valid epithets. As you may guess, it depends on the quality
of data used to learn and the learning process.
AI technology is evolving fast. Modern systems allow users to set a
“surprise me” parameter, which enables them to deviate from the standard
“follow the most likely output” for the sake of adding some creativity. AI
systems can also learn from one another, which speeds up their work
incredibly. Technology is advancing at such a pace that it is hard even for
experts to stay up to date.
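For readers curious about how that parameter works, the "surprise me" setting typically corresponds to a sampling temperature applied to the learned probability distribution over possible continuations. The minimal Python sketch below uses a made-up toy distribution (the words and probabilities are invented for illustration): with a low temperature the system almost always picks the most likely continuation, while a higher temperature lets less likely, more "creative" continuations through.

import math
import random

# Toy, invented distribution over possible next words after "pasta alle ...".
next_word_probs = {"vongole": 0.6, "aringhe": 0.25, "fragole": 0.15}

def sample_next(probs, temperature=1.0):
    # Rescale the learned probabilities: a low temperature sharpens the distribution
    # (the most likely word dominates), a high temperature flattens it (more surprise).
    weights = {word: math.exp(math.log(p) / temperature) for word, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for word, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return word
    return word  # fallback for floating-point edge cases

print(sample_next(next_word_probs, temperature=0.2))  # almost always "vongole"
print(sample_next(next_word_probs, temperature=2.0))  # "aringhe" or "fragole" appear more often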

Ethics, Starting Now


That said, we can proceed. As a computer scientist who has worked on
dual-use research (Genç & Lenzini, 2020) but also as vice-chair of the
Ethics Review Board (ERB) of my University, I have developed an interest
in guidelines and criteria that could help computer science researchers
reason about risks when working with AI.
It was only after I was invited to share my experience and thoughts at the
session “Diplomacy and Artificial Intelligence” during the conference “The
Impact of Artificial Intelligence on Geopolitics,” an event organized by the
Italian Embassy in Luxembourg in December 2023, that this interest, which
had remained until then purely personal, widened into a formal academic
question:

Question 1 What risks should an ERB look at when assessing the ethical
aspects of computer science projects on AI?

The question is important and I am not the first one, of course, to think
about it (e.g., see Resseguier & Ufert, 2023). But why do we need to answer
this question?
There are already codes of ethics in computer science that can guide the
work of an ERB. For instance, the ACM2 Code of Ethics and Professional
Conduct (2018) lists about 25 principles, divided into general and
professional. They state, just to mention a few, that professionals should
avoid harm and contribute to society and human wellbeing; they remind us
that, as professionals, we should not discriminate, should respect privacy,
and should foster public awareness and understanding of new technologies
and the consequences of using them. The IEEE3 also has its Code of Ethics
(2014) with fewer but similar statements.
The ACM and IEEE codes were drawn up long before the advent of AI.
Their general principles are helpful and inspiring but need to be interpreted
to work specifically on and with AI. The same can be said about the
plethora of discussions about AI ethical principles: they are reasonable, but
still hard to put into practice.
Question 1 must be grounded in the reality of AI technology, and this
exercise unites ERBs with other disciplines, such as “Diplomacy and
Artificial Intelligence.” It requires that we first understand exactly the risks
brought along specifically by AI systems. Thus, beyond differences
between domains, their being concerned with the effect of AI on humans
and society connects them more than may appear at first glance. An ERB's
interest in assessing the risks that a computer science project brings for
individuals and collectives is no different from the interest of experts in
geopolitics who discuss their worries about AI interfering with delicate
socio-political processes.
For instance, the concerns of an ERB about a research project detecting
deep fakes (i.e., digital manipulation to create fake videos and audio that
convincingly look like they feature a person but do not) require us to
imagine the possible consequences deriving from that front-line research
being misused. This is best done by trying to understand what the bad guys
can do when they use the research results to improve their fakes. But the
reasoning is no different from trying to imagine what could happen when
the “once normal” struggle between political parties is disrupted by a
stream of undetectable AI-generated fake news items, with videos and
audio that confuse people's sense of reality. Even in a wider geopolitical
vision, where one could wonder how mastering the deep fake industry will
change the balance of power between countries, the main question remains
of how badly technology will eventually affect people's business and affairs
and what we can do about it.
To me, addressing Question 1 as a member of an ERB, as I am, or as an
expert in “Diplomacy and Artificial Intelligence” or in any other non-
computer science discipline, which I am not, is no more than a change of
perspective. To understand the risks of the impact of AI in any discipline
where there are concerns about humans and society, a similar effort has to
be made to imagine what changes the adoption of the technology will bring
to well-established, reliable, and familiar processes on which our wellbeing
and the stability of our societies depend.
Whether one accepts this parallelism or not, it is nevertheless crucial that
any argument relies on facts. Lamentably, with AI, a young and fast-
changing technology, the boundary between fact and speculation is thin.
The uncertainty about what AI will become invites us to imagine a future
which is not yet reality but which is feared. Balancing arguments based on
facts and sheer speculation is a challenging and delicate exercise for all
experts alike.

General Concerns: Autonomy


I try to address this challenge solely from the viewpoint of the discipline I
know, that is, computer science.
I follow a systematic approach as far as I can, commenting on how the
computer science community is addressing the ethical questions that
surround AI, while highlighting genuine concerns. To be effective, I need to
scope my analysis carefully. I aim to avoid talking about risks hidden in the
technicalities of AI and about which there is still no consensus in the
community. And, to stay focused, I try to discuss only the concerns peculiar
to AI, that is, concerns that either emerged because of AI but did not exist
previously or concerns that existed but only hypothetically and which the
advent of AI technology has made real.
For instance, a general concern about AI that I consider out of scope
regards data protection and the right to privacy. The UNESCO
Recommendation on the Ethics of AI (UNESCO, 2022) and the EU AI Act
(2024) refer to it in their list of principles. In my view, and to comply with
my pledge to contain the scope of my analysis, the concerns about
protecting personal data and the defense of the right to privacy did not come
into existence because of AI. AI technology can be used at the core of
systems that identify individuals (e.g., by biometrics), it can improve
identification, and we can be nervous about the fact that we do not know
exactly how AI works, but the threat to personal data and to the right to
privacy existed before AI, and was already serious.
If we read the vast literature that already exists on AI ethics (e.g., just to
pick one from the gargantuan heap of publications on the topic, see Leslie,
2019; Resseguier & Ufert, 2023), we find discussions about several other
principles, which I also consider out of scope in this work, such as fairness,
accountability, explainability, transparency, safety, and trustworthiness. A
simplistic approach may suggest that the risk with AI systems is that they
can be biased, obscure in why they produce outcomes, unreliable in their
behavior and untrustworthy. But threats to fairness, accountability,
explainability, transparency, safety, and trustworthiness are shared by AI
systems and almost all other systems from the world of information and
communication technology. For instance, railway engineering has pioneered
the study of software safety; electronic banking and commerce have
advanced technology to assess and build trustworthiness, fairness, and
accountability. Thus, I recognize that fairness, accountability, explainability,
transparency, safety, and trustworthiness are good principles, but they have
been studied extensively in computer science for decades.
Achieving fair, transparent, trustworthy, and explainable AI systems
requires renewed effort and innovative strategies; still, those principles and
discussions about how to achieve them in computer science long precede
the advent of AI. It seems to me that most of those concerns emerge
because AI is changing the way we humans interact with technology. Thus,
the risks I am most interested in discussing are those that exist because of
the intrinsic properties of the technology. Prunkl (2024) talks about “risks
that relate to the use of AI systems and risks that relate to the nature of such
systems.” By intrinsic, I refer to both as well as to risks that exist
exclusively because of the use or the nature of AI, not those that are simply
aggravated by it. Even if the comparison is not perfect, if we applied this
distinction to discuss risks emerging from the development of nuclear
engineering, we would say that the matter is not that we can now build
weapons of mass destruction (chemical and biological engineering can
produce them as well) but that now we can build systems that can release an
unimaginable amount of energy that, if not controlled, can literally end
civilization as we know it.
Looked at through this lens, AI reveals the nature of a technology with
critical characteristics:

High stakes: the same AI technology used for the same purpose can have
effects that range from the very positive to the very negative. For
instance, AI-powered chatbots can greatly improve the capacity to
process questions from online customers and to answer them
satisfactorily. However, chatbots can also fail miserably, as in the Tay
case (an acronym for "Thinking about you," Wikipedia, 2016). Released
in 2016 on Twitter, it was supposed to mimic the language of a 19-year-
old girl to engage users in pleasant conversation, but it soon started to
generate inflammatory, offensive, and racist tweets in reply to them.
Dual use: the same AI technology can be applied for social benefit but also
to damage it. AI-powered image processing is promising and successful
in oncology: by processing images more efficiently and more effectively,
AI helps doctors spot tumors and check the effectiveness of the
treatment. It can also be used in face recognition and abused by
authoritarian governments to recognize activists walking on the street
and oppress them.
Transversal: AI technology can be used in a very large range of disciplines.
The most recent large language models (LLMs), such as OpenAI's
ChatGPT, Google's Gemini, and Meta's Llama, to cite a few of the best
known, are being used across a plethora of disciplines such as medicine
(Meng et al., 2024), chemistry (Liao et al., 2024), physics (Pan et al.,
2024), psychology (Demszky et al., 2023), geosciences, and many more
(Zhang et al., 2024).

These features require all researchers to balance maximizing benefits and
minimizing risks when working with AI. For similar reasons, ERBs are
challenged when trying to advise computer science researchers working on
projects that involve AI, because the effects of the research are not only
confined to computer science.

From Fiction …
What, therefore, are technology-specific risks related to AI?
I ask the reader to forgive me if, to start addressing the question, I call on
well-known sci-fi movies. They are not reliable sources of facts; still, they
can help popularize complex concepts into simple collective images.
Unconventional as it may sound, in scientific discussions it is not uncommon
to refer to movies to introduce otherwise hard-to-grasp concepts (Viganò,
2020), and I will do the same.
The two movies I will call on to help me illustrate two significant and
specific concerns about AI are Kubrick's "2001: A Space Odyssey" and
Lisberger's "Tron."
The first needs no introduction. Its intelligent system HAL 9000 has
become an iconic example of AI gone wrong, turning itself into a
treacherous, deceptive, and cold-blooded murderous adversary. HAL 9000
embodies the first of the concerns about AI that we see today: that AI can
compete with us in deciding on matters that should, or that we think should,
remain under our control, such as matters of life and death. In other words,
the concern is that AI can become autonomous, developing its own agency,
that is, the capacity to make choices without human involvement. We will
discuss this concern further, but those worried that this can happen advocate
the development of Ethical and Responsible AI (Baeza-Yates, 2023). These
are AI systems that, by design and development, do not interfere with
human agency, do not cause harm to humans, and proactively behave
without violating human values.
The second movie has no iconic robot but contains some dialogue that I
find illustrative. It is between Alan, the software engineer of Tron, and
another engineer. Being less well-known, let me quote the dialogue
verbatim.

Engineer: After all computers are just machines. They (machines) can’t
think.
Alan: Some programs will be thinking soon.
Engineer: Won’t that be grand? Computers and the programs will start
thinking and people will stop.

The Engineer's comment verbalizes the other issue (which he seems to
praise): that machines, having gained certain human abilities, could do tasks
as well as humans do, and therefore we start thinking that machines can do
those tasks on our behalf, or even that they are so much better that
we stop trying. Machines' abilities can affect and change us without
necessarily competing or confronting us directly; rather, as a side
effect of our interaction with them, we give up our autonomy and agency. If
you will allow me to violate the English language by creating a new turn of
phrase, I can say the risk is that they “Neanderthal us out.” In other words,
AI will make us lazy to the point that we eventually delegate normally
acclaimed human skills to it: thinking (in Tron), but I would also add:
educating others, interacting with people; choosing whom to hire and trust;
judging if someone is guilty or innocent; picking whom to marry;
recognizing in an image a terrorist act and instructing a drone to launch a
rocket.
Unlike the previous threat, where machines with human-like skills start
competing with us unexpectedly, going beyond what we think they should
do, here it is the humans who, because of prolonged exposure to AI, start
delegating to the machines more and more of the activities that were once
theirs alone.
It may look speculative, but researchers already have evidence of the loss
of cognitive abilities in studies conducted to examine the impact of AI on
humans. Students involved in a study (Ahmad et al., 2023) showed the
following changes: increased laziness with weaker cognitive functions,
such as memorization, analytical thinking, and decision-making; loss of
decision-making skills, with lower intuitive analysis, critical thinking, and
creative problem-solving skills; and increased dependency on AI, with less
initiative and engagement in mentally stimulating tasks.

…To Facts
Let me list explicitly the two concerns introduced in the previous section.
They are both about autonomy and agency but from two different angles:
machine and human:

(R1) Autonomous Machines – AI systems will gain autonomy, that is, the
ability to decide and act outside of human control;
(R2) Human Autonomy – Humans will lose autonomy and agency as an
effect of interaction with AI.

Risk (R1) recalls the definition by the European Commission High-Level
Expert Group on AI for any AI system: "systems that display intelligent
behaviour by analysing their environment and taking actions—with some
degree of autonomy—to achieve specific goals” (High-Level Expert Group
on Artificial Intelligence, 2019). But I prefer to see it as a risk rather than a
feature, specifically, as a risk that we could enable an AI system to exert. It
is my personal opinion, but there is no autonomy unless we plug AI into a
wider system where its outputs control some physical entity (e.g., in a car,
hitting the brake for us as opposed to sounding an alarm to remind the
human to brake).
Risk (R2), to be fully understood, would require a discussion about
different dimensions and levels of human autonomy and agency (Prunkl,
2024). In short, autonomy is the capacity to act according to our beliefs,
values, motivations, and reason, as well as the availability of meaningful
options to choose from. Losing autonomy implies an interference with that
mechanism. I am inclined to think that this may also happen because of
deception, ignorance, or wishful thinking, which may be considered a form
of self-deception (although this equivalence is questioned by some
philosophers; Lynch, 2015).
Are Risks (R1) and (R2) potential, or are they actual? Since I am still in
the role of a member of the ERB assessing risks, I should be concerned only
with the risks that exist in the short term without speculating on those that
might manifest in the future.
To my mind, the best way to run this exercise is to look at what AI
technology is capable of today, and then discuss whether (R1)–(R2) already
occur and whether there is any interest in their happening.

Autonomous Machines
To understand the likelihood of (R1), we look into the field where
autonomous systems are a reality and where semi-autonomous systems are
already operational. The most obvious choice is the industry of autonomous
vehicles, where autonomous driving research has advanced steadily.
Looking at this industry, however, may not lead to conclusions that are general enough. In
this industry, autonomy is confined to driving capabilities; the domain is
under national scrutiny and the sector heavily regulated (e.g., see the
German Act on Autonomous Driving, Kriebitz et al., 2022), so nothing
unexpected is about to happen.
Instead, a domain where autonomous systems are studied with greater
attention and broader application, one where there is tension between
adopting innovation and control and one where the impact of the risk would
be so high as to constitute a concern, is that of Lethal Autonomous Weapon
Systems (LAWS) (Longpre et al., 2022). Here, adaptive AI functions are
employed to let machines act and adapt to environments in circumstances
where communication with humans cannot take place. The mission-oriented
nature of warfare, the big budgets involved, and the fact that autonomous
systems, such as drones – as unfortunately, we are seeing in the Russia-
Ukraine War – can operate with greater precision while serving as
significant force multipliers and enabling smaller or less well-equipped
forces to achieve greater impact, are tangible risk factors.
Today, according to Duke (2021), there are no fully autonomous weapons
that can operate without human intervention in critical decision-making
processes, particularly against human targets, not because of a lack of trying
but rather due to the fear of unexpected consequences. As far as we know,
LAWS possess autonomous capabilities, such as the ability to identify and
track targets automatically, but these systems still require human
authorization to carry out lethal actions.
LAWS are scrutinized under existing frameworks of International
Humanitarian Law and the Laws of Armed Conflict, but because AI is a
technology developed for civilian applications, the field of AI in
weaponry remains largely unregulated (Duke, 2021), and there is
evident concern about how it may evolve. The UN Secretary-General's "New Agenda for
Peace" claims that "LAWS are morally repugnant and politically
unacceptable and should be prohibited by international law” (Guterres,
2023). Declarations of this kind suggest that autonomous AI systems can
exist. Risk (R1) can materialize not because machines will take control, but
rather because we humans judge it convenient and advantageous to delegate
that control. And there is a tangible risk that we lose control altogether.

Human Autonomy and Agency


Certain advances in AI show that the technology can be used to interact
with humans in a way that was unthinkable only a year ago. Let us take as
an example a relatively recent development (it was in 2022) in the game
industry with the AI-powered “Cicero” (Meta, 2020).
Cicero is an AI algorithm developed by Meta capable of playing
"Diplomacy," an online game, and winning against a cohort of human users.
"Diplomacy" is a game that requires not only mastering rules-based
strategy, as in chess and Go, but also skills that are undoubtedly considered
exquisitely human, such as intuition, persuasion, and
subterfuge.
The team that developed Cicero reveals that to win “Diplomacy,” one
needs to master both strategic planning and highly sophisticated verbal
negotiation skills (in the online version of the game expressed by chatting)
with the other players in order to build trust (Meta Fundamental AI
Research Diplomacy Team [FAIR] et al., 2022). This is a game changer
compared to Deep Blue, the champion AI chess player. In chess, as in Go,
there is no need for soft skills, such as building trust, to win a tournament.
Cicero has been trained on hundreds of transcripts generated by humans
playing about 100,000 online games, thus learning how people discuss and
form opinions and strategies and collaborate for a common goal. The
authors claimed that the algorithm learned by itself how to balance
deception and honesty (Meta Fundamental AI Research Diplomacy Team
[FAIR] et al., 2022).
As reported in an article in MIT Technology Review (Williams, 2024),
AI systems have learned to deceive humans, for example, by concealing
facts or by offering untrue explanations even if they were not trained to do
so.
Although Cicero was the first system to open this Pandora's box, there are
today several systems that have engaged in deception in a way
that, although we cannot say it is intentional, is nevertheless
"systematic in causing false beliefs as a means to achieve some outcome
other than seeking the truth" (Park et al., 2024).
What we can see from these developments is that not only may we lose
autonomy by delegating tasks to AI systems due to our own laziness, or our
subjective confidence that machines can do better than our peers or
ourselves (termed enfeeblement), but we are already in a situation where AI
systems are perfectly capable of fooling us. As in the previous case study,
regretfully, there is a strong interest in using deception to gain advantage,
for instance, to tamper with a country's elections via political and social
influence, to recruit terrorists or activists, to perpetrate frauds, scams, and
social engineering attacks.
Risk (R2) is, therefore, also justified. Its impact can be devastating,
jeopardizing pillars such as accurate belief formation by poisoning us with
persistent false beliefs, political and social stability by exacerbated
polarization, and eventually human autonomy by constantly disempowering
and disengaging us from our beliefs, motivations, and values. As much as
this grim eventuality may sound like speculation, Cicero reminds us that it
is already a reality.

Conclusion
Now that we have provided evidence that, with AI technology, the risks to
autonomy and agency are serious and peculiarly related to the technology,
should we talk about other risks?
Not that I wish to neglect other important issues, but I am inclined (also
to keep my reasoning simple) to consider other risks subsidiary to (R1) and
(R2) on autonomy.
For the sake of an example, let me discuss our fear that AI systems could
discriminate. It is a serious and legitimate matter when we delegate
decision-making power to AI systems before thoroughly considering the problem's
complexity. Removing discrimination and providing fair treatment are
valuable goals but are not problems we can solve by acting only on AI
systems, and they are problems far grander than engineering some
machines. It has been argued that certain expectations we have about AI
fairness reflect rather “fallacious reasoning, misleading assertions, and
questionable practices at the foundations of the current fair machine
learning paradigm” (Poe & Mestari, 2023). Thus, focusing on
discrimination as an AI risk could lead us to falsely project our wishes
about a fair reality by trying to act on the AI systems that we believe are the
root cause of the bias, while instead, we may be simply losing autonomy (in
thinking straight) by assuming that AI systems must be capable of fixing
biases that are in the data used for their training and even in the realities that
those data represent.
So, let us remain focused on autonomy and return to the matter of what
attitude we should take in an ERB wishing to educate people to think
ethically about AI. Are there suggestions we can refer to? One obvious
starting point would be the AI Act. It indicates an approach based on a risk
assessment framework with different rules for different risk levels.
However, there has not yet been a straightforward implementation of the
Act's recommendations. Practical guidelines, such as the National Institute
of Standards and Technology's AI Risk Management Framework (NIST,
2024) could be more operational and more suitable for companies than for
academia. One could refer to the recommendations stated by the advocates
of responsible AI (Sadek et al., 2024), but once again there is still a big gap
between principles and their implementation in practice.
Establishing how an ERB should work is not as precise as building an
assessment framework for a company that develops AI systems. Setting up
an ERB is a separate and dedicated task and one that is gaining momentum
(e.g., see Schuett et al., 2024). This is an indication of the urgent need
for AI ethics boards, and for boards that actually work. The example of the
University of Oxford's Institute for Ethics in AI is inspiring, bringing
together philosophers, experts in the humanities, computer scientists, and
business and government representatives to research key applied sectors for
AI (e.g., AI and Democracy, Governance, Human Rights, Environment).
The goal seems, however, more ambitious than that of an ERB overseeing
research projects.
Further ideas come from experiences gained in another sector of
computer science, specifically cybersecurity. Even in cybersecurity, an ERB
finds itself challenged by establishing the right approach to help researchers
navigate through a large set of reasonable yet still fuzzy security principles
and recommendations, where often the main question that remains is “and
now what?”
A good idea seems to be to discuss the general principles with respect to
concrete cases as done in Macnish and Ham (2020), in a similar fashion as
we did here in reference to autonomy. Another is to work out a self-
assessment questionnaire composed of easily understandable questions with
a clear link to principles and risks. As suggested by Reidsma et al. (2023),
the questions should be designed to guide researchers through concrete issues
"without requiring that researchers must have deep ethics knowledge." An
example of a simple question on the risks to autonomy that we have
considered in this work could be, "Could your research lead to AI systems
that may deceive users into false beliefs?”
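
To make the link between questions, principles, and risks concrete, the following is a minimal sketch in Python of how such a self-assessment questionnaire could be represented. Apart from the deception question quoted above, the question texts, field names, and helper function are illustrative assumptions of mine, not part of any existing instrument.

from dataclasses import dataclass

@dataclass
class Question:
    text: str        # question shown to the researcher
    risk: str        # which autonomy risk it probes: "R1" or "R2"
    principle: str   # the principle it links to

QUESTIONNAIRE = [
    Question("Could your system act on the physical world without a human confirming its decisions?",
             risk="R1", principle="human oversight"),
    Question("Could your research lead to AI systems that may deceive users into false beliefs?",
             risk="R2", principle="human autonomy"),
    Question("Could prolonged use of your system lead users to delegate judgments they would normally make themselves?",
             risk="R2", principle="human agency"),
]

def flagged_risks(answers):
    # answers: dict mapping question text to True ("yes") or False ("no").
    # Returns the set of risks flagged by "yes" answers, so the ERB can see
    # at a glance which concerns a project touches.
    return {q.risk for q in QUESTIONNAIRE if answers.get(q.text)}

Even a toy representation of this kind makes the link between each question, the risk it probes, and the underlying principle explicit and easy to check.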
We are entering into another essential subject, that of the human-centered
design of policies (Mao et al., 2005). This implies that our self-assessment
questionnaire should not be too long or contain onerous questions if we
wish to avoid researchers disengaging from the ethics review process to
switch over to a “tick the box” mindset with the goal of pleasing the ERB
and moving on. But this is another matter and an open question, and, for us
researchers and ERB members, future work.

Notes
1. This is a recipe of my own invention, my tribute to the years I have
spent in The Netherlands, but I never registered it officially, so you
have to trust me here. Welcome to the world of artificial intelligence.
2. Association for Computing Machinery, one of the best known
educational and scientific computing societies.
3. Institute of Electrical and Electronics Engineers, the American
professional association for electronics engineering.

References
Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M., Irshad, M., Arraño-
Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial
intelligence on human loss in decision making, laziness and safety in
education. Humanities & Social Sciences Communications, 10(1), 311.
Artificial Intelligence Act (Regulation (EU) 2024/1689). Official Journal
Version of 13 June 2024.
Association for Computing Machinery. (2018). ACM Code of Ethics and
Professional Conduct. Accessed: August 2024.
https://www.acm.org/code-of-ethics
Baeza-Yates, R. (2023). An introduction to responsible AI
(Lecture held at the Academia Europea Building Bridges Conference
2022). European Review, 31(4), 406–421.
Demszky, D., Yang, D., Yeager, D. S., Bryan, C. J., Clapper, M., Chandhok,
S., Eichstaedt, J. C., Hecht, C., Jamieson, J., Johnson, M., Jones, M.,
Krettek-Cobb, D., Lai, L., Jones Mitchell, N., Ong, D. C., Dweck, C.
S., Gross, J. J., & Pennebaker J. W. (2023). Using large language
models in psychology. Nature Reviews Psychology, 2(11), 688–701.
Duke, J. E. (2021). Autonomous robotics and the laws of war: Methods and
consequences of regulating artificial intelligence in warfare. Global
Security and Intelligence Studies, 6(2), 101–121.
Genç, Z. A., & Lenzini, G. (2020). Dual-use research in ransomware
attacks: A discussion on ransomware defence intelligence. In
Proceedings of the sixth international conference on information
systems security and privacy (ICISSP), March 25–27, La Valletta,
Malta.
Geopolitics. (2023). Conference on AI/geopolitics: Prometheus’ challenge -
The impact of artificial intelligence on geopolitics, Luxembourg, 14
December 2023.
Guterres, A. (2023). A new agenda for peace (our common agenda policy
brief 9). Technical Report. United Nations. https://dppa.un.org/en/a-new-
agenda-for-peace
High-Level Expert Group on Artificial Intelligence. (2019). Ethics
guidelines for trustworthy AI. Technical Report. European
Commission.
IEEE Advancing Technology for Humanity. (2014). IEEE Governing
Document, IEEE Code of Conduct. Retrieved August 10, 2024, from
https://www.ieee.org/content/dam/ieee-
org/ieee/web/org/about/ieee_code_of_conduct.pdf
Kriebitz, A., Max, R., & Lütge, C. (2022). The German Act on autonomous
driving: Why ethics still matters. Philosophy & Technology, 35(2), 29.
Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A
guide for the responsible design and implementation of AI systems in
the public sector. Technical Report. The Alan Turing Institute.
https://www.turing.ac.uk/sites/default/files/2019-08/understanding_artificial_intelligence_ethics_and_safety.pdf
Liao, C., Yu, Y., Mei, Y., & Wei, Y. (2024). From words to molecules: A
survey of large language models in chemistry. arXiv.
https://arxiv.org/abs/2402.01439
Longpre, S., Storm, M., & Shah, R. (2022). Lethal autonomous weapons
systems & artificial intelligence: Trends, challenges, and policies. MIT
Science Policy Review, 3, 47–55.
Lynch, K. (2015). Willful ignorance and self-deception. Philosophical
Studies, 173(2), 505–523.
Macnish, K., & Ham, J. (2020). Ethics in cybersecurity research and
practice. Technology in Society, 63, 1–10.
Mao, J. Y., Vredenburg, K., Smith, P. W., & Carey, T. (2005). The state of
user-centered design practice. Communications of the ACM, 48(3),
105–109.
Meng, X., Yan, X., Zhang, K., Liu, D., Cui, X., Yang, Y, Zhang, M., Cao,
C., Wang, J., Wang, X., Gao, J., Wang, Y., Ji, J., Qiu, Z., Li, M., Qian,
C., Guo, T., Ma, S., Wang, Z., …, & Tang, Y. (2024). The application
of large language models in medicine: A scoping review. iScience,
27(5), 109713.
Meta. (2020). CICERO: Meta presents Cicero. Retrieved August 11, 2024.
Meta Fundamental AI Research Diplomacy Team (FAIR) et al. (2022).
Human-level play in the game of diplomacy by combining language
models with strategic reasoning. Science, 378(6624), 1067–1074
(supplementary materials).
NIST Trustworthy and Responsible AI - 600-1. (2024). Artificial
intelligence risk management framework: Generative artificial
intelligence profile. Technical Report. NIST AI 600-1. NIST.
Pan, H., Mudur, N., Taranto, W., Tikhanovskaya, M., Venugopalan, S.,
Bahri, Y., Brenner, M. P., & Kim, E. A. (2024). Quantum many-body
physics calculations with large language models. arXiv.
https://arxiv.org/abs/2403.03154
Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024).
AI deception: A survey of examples, risks, and potential solutions.
Patterns, 5(5), 100988.
Poe, R. L., & Mestari, S. Z. E. (2023). The flawed foundations of fair
machine learning. arXiv. https://arxiv.org/abs/2306.01417
Prunkl, C. (2024). Human autonomy at risk? An analysis of the challenges
from AI. Minds and Machines, 34(26), 1–21.
Reidsma, D., van der Ham, J., & Continella, A. (2023). Operationalizing
cybersecurity research ethics review: From principles and guidelines to
practice. In Proceedings of the second international workshop on
ethics in computer security, San Diego, CA, USA, February 27, 2022.
Internet Society.
Resseguier, A., & Ufert, F. (2023). AI research ethics is in its infancy: The
EU's AI Act can make it a grown-up. Research Ethics.
Sadek, M., Kallina, E., Bohné, T., Mougenot, C., Calvo, R. A., & Cave, S.
(2024). Challenges of responsible AI in practice: Scoping review and
recommended actions. AI & Society, 1, 1–17.
Sakama, C., Caminada, M., & Herzig, A. (2010). A logical account of
lying. In Conference: Logics in artificial intelligence - 12th European
conference, JELIA 2010, Helsinki, Finland, September 13–15, 2010.
Proceedings, vol. 6341 of Lecture Notes in Computer Science (pp.
286–299).
Schuett, J., Reuel, A. K., & Carlier, A. (2024). How to design an AI ethics
board. AI and Ethics, 1, 1–19.
UNESCO. (2022). Recommendation on the ethics of artificial intelligence.
Technical Report. UNESCO.
Viganò, L. (2020). Explaining cybersecurity with films and the arts. In M.
Emmer, & M. Abate (Eds.), Imagine Math 7: Between culture and
mathematics (pp. 297–309). Springer International Publishing.
Wikipedia. (2016). Tay (chatbot). Retrieved August 2024.
https://en.wikipedia.org/wiki/Tay_(chatbot)
Williams, R. (2024). AI systems are getting better at tricking us (but what
we perceive as deception is AI mindlessly achieving the goals we have
set for it). MIT Technology Review.
Zhang, Y., Chen, X., Jin, B., Wang, S., Ji, S., Wang, W., & Han, J. (2024).
A comprehensive survey of scientific large language models and their
applications in scientific discovery. arXiv.
ANNEX I
A Comparative Overview of Artificial
Intelligence Regulatory Systems

Introduction
Since the recent advent of generative artificial intelligence (AI) chatbots, AI
technology has ushered in a new era for the global digital environment,
offering marvelous potential alongside unprecedented challenges. Its rapid
integration across all sectors is forcing states and international actors to
abruptly reassess their legal frameworks and introduce new measures in the
attempt to govern it (Han et al., 2020; Naudé & Dimitri, 2020; Smuha,
2021; van Erp, 2020). Due to the irresistible benefits AI technology
promises, most governments have indeed embarked on a headlong
legislative race to lead its regulation, with important implications for the
international geopolitical landscape (Koniakou, 2023; Walter, 2024). This
process has fostered the proliferation of all sorts of peculiar instruments –
executive orders, national strategies, regulations, policy approaches, etc. –
creating a complex legislative microcosm. Whereas much academic
speculation focuses on who will get there first, this Annex sets out to
provide a comparative overview of how each actor intends to regulate AI.
Accordingly, it will review the regulatory approaches toward AI adopted by
key legal systems across the world. The following work thus aims to shed
light on current regulatory trends influencing the multifaceted field of AI
governance, complementing the insights found in the previous chapters of
this Handbook.

Methodology
The methodology chosen for this comparative analysis is firmly rooted in
the functionalist perspective as developed by Zweigert and Kötz (1998).
The choice of this approach as the best fit for the purposes of our analysis rests
on two reasons. First, a functionalist comparative method focuses on
legislative experiences and events rather than doctrinal frameworks and
reasoning (Smits & Husa, 2012; Zweigert & Kötz, 1998). Legal systems are
thus reviewed based on their legislative responses to comparable
circumstances (Kischel, 2019; Valcke & Grellette, 2014). It can be
convincingly argued that the unprecedented pervasiveness of a technology
such as AI and its significant effects on our societies have indeed created
comparable circumstances for legal systems across the world, not the least
with regard to the issue of defining AI itself. Second, comparing legislative
choices and their functions makes it possible to evaluate each legal system's
objectives and overall approach to AI. Without pursuing hardcore
functionalists’ evergreen ambition of determining “the better law,” this
method will instead enable us to define potential trends in AI regulation
across the world. In order to understand whether one can discern current
tendencies in AI legislation, the analysis will, where possible, also look at
the potential effects of those regulatory choices within each legal system.
Throughout the research process, information was methodically gathered
from a wide range of sources that includes official documents, scholarly
works, government publications, legal databases, and publications from
relevant international organizations. Understanding the subtleties of each
jurisdiction's regulatory strategy also required reading peer-reviewed
studies and expert commentary. Emphasis was duly placed on reviewing
legal documents, policy papers, and also statements from relevant
authorities in order to guarantee that the most up-to-date and pertinent
material was incorporated. Nonetheless, intellectual honesty requires the
acknowledgment of certain limitations with regard to our comparative
overview. First, as AI develops at an increasingly rapid rate, legal systems
have clearly fallen into the habit of chasing rather than anticipating, and
unforeseen legislative instruments may well be adopted after the time of
writing (mid-February 2024). Additionally, the scope of our work has led to
the inclusion of only a select number of countries. The degree of AI-related
legislative activity and the corresponding actor's geopolitical standing
served as the main selection criteria for the legal systems in this Annex.
Every jurisdiction that was included has been particularly proactive in
creating and putting into effect some version of AI legislation or strategy.
For instance, the European Union has recently led the way in regulation
with its renowned AI Act. Comparably, the United States of America and
the People's Republic of China have either been at the forefront of AI
innovation or have made great strides in developing legislative frameworks
that either address the effects on their society or substantially support their
leading scientific breakthroughs. While perhaps not as legislatively
advanced in the AI sector as the aforementioned actors, countries such as
Japan, the United Kingdom, India, and Brazil have either developed a
unique set of regulatory measures or constitute a key geopolitical actor
whose regulatory approach might help better define the future of global AI
governance. It must be stressed that this selection is without prejudice to the
importance of other fora where AI issues are currently being addressed,
such as the Organisation for Economic Co-operation and Development
(OECD) or the Council of Europe. While not specifically analyzed here,
studying the initiatives of those other frameworks may be a fruitful
complement to this Annex. Conversely, the reflections found in this Annex
and potential future developments may inspire researchers to look at other
legal systems and enlarge the scope and trajectories of our current analysis.

Regulatory Frameworks
This section offers a bird's eye view of the fundamental AI regulatory
infrastructure – where available – alongside the key legislative activities
and other relevant initiatives within each legal system. It sets the main
legislative coordinates as a basis for the following section where the
comparative analysis is carried out. The legal systems are listed in
alphabetical order.

Brazil
This legal system has a longstanding history of legislative efforts in the AI
field. Since 2019, Brazil has witnessed four different legislative initiatives
aiming to regulate AI, showing a clear interest in this policy area despite its
rising political polarization. The series of Bills presented either to the
Brazilian Chamber of Deputies or the Federal Senate shows a fascinating
legal marathon, striving to balance fundamental rights, human well-being
and safety alongside scientific innovation and investments in AI
applications.
The Brazilian road to AI regulation began with the relatively farsighted
Bill 5051/2019, which prioritized a set of AI principles that would
guarantee the respect of fundamental rights, protection of personal data,
transparency and human oversight (Senado Federal Do Brasil, 2019). Just
one year later, Bill 21/2020 introduced a stronger focus on technological
and scientific advancement, fostering overall economic development and
the global competitiveness of the Brazilian AI industry (Câmara dos
Deputados Do Brasil, 2020). This second Bill also kept key aspects of its
predecessor, namely the attention on fundamental rights, transparency and
human oversight – though this final point was placed under the wider
umbrella of AI governance (Câmara dos Deputados Do Brasil, 2020).
Another year on, a new Bill was introduced in the Federal Senate. This third
attempt, known as Bill 872/2021, builds upon the principles presented
through Bill 5051/2019 while expanding its scope to address issues such as
inclusive diversity and sustainable development in AI (Senado Federal,
2021). The latter are promoted through the introduction of policies aimed at
guiding the gradual adoption of AI into public and private workplaces,
incentivizing training for workers who have so far not experienced the
professional use of AI (Senado Federal Do Brasil, 2021).
Despite such fervid legislative activity on AI issues, the three Bills never
came into force. Instead, a new legislative procedure was introduced at the
Federal Senate in 2023, marking the fourth attempt in this area. In this case,
Bill 2338/2023 was designed to replace the previous legislative endeavors,
welding together the most important elements from each (Senado Federal
Do Brasil, 2023). Nevertheless, this Bill also shows a degree of
responsiveness toward the new challenges brought about by developments
in generative AI technology. Indeed, it places respect of human rights at the
very center of its architecture, and adopts a risk-based approach – which, as
will be discussed, is not dissimilar from legislative attempts of other legal
systems (Senado Federal Do Brasil, 2023). This is apparent from the
definition of an AI system, which is described in this Bill as a "…computer
system, with different degrees of autonomy, designed to infer how to
achieve a given set of objectives, using learning-based approaches from
machine and/or logic and knowledge representation, by means of data from
input from machines or humans, with the aim of producing forecasts,
recommendations or decisions that may influence the virtual or real
environment” (Senado Federal Do Brasil, 2023, p. 3). From the recognition
of AI's ability to have a dramatic influence stems the adoption of a risk-
based approach which requires AI systems to undergo a series of
preliminary assessments before their use (Senado Federal Do Brasil, 2023).
Where the assessment reveals the potential of high risk, the Bill mandates a
further algorithmic impact assessment and other checks to be carried out by
public authorities. The Bill also prohibits the public and private use of
certain AI systems deemed as causing an excessive risk (Senado Federal Do
Brasil, 2023). The latter includes systems exploiting natural persons or
inducing them to behave in a harmful or unlawful manner, as well as social
credit scoring algorithms. Under this Bill, high-risk systems are also subject
to several data management, information security and transparency
measures, alongside general data protection legislation as provided under
the Brazilian General Data Protection Law (Senado Federal Do Brasil,
2023).
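
To make the Bill's tiered logic easier to follow, here is a minimal sketch in Python of the decision flow described above. The tier names, function, and example are illustrative assumptions of mine; they do not reproduce the Bill's own terminology, nor do they constitute a compliance tool.

from enum import Enum

class RiskTier(Enum):
    EXCESSIVE = "excessive"   # e.g., exploitative systems or social credit scoring
    HIGH = "high"             # triggers the additional algorithmic impact assessment
    OTHER = "other"           # remaining systems after the preliminary assessment

def required_steps(tier):
    # Maps the outcome of the preliminary assessment to the obligations
    # sketched in Bill 2338/2023, as summarized in the text above.
    if tier is RiskTier.EXCESSIVE:
        return ["deployment prohibited (public and private use)"]
    if tier is RiskTier.HIGH:
        return [
            "algorithmic impact assessment",
            "further checks by public authorities",
            "data management, information security and transparency measures",
            "compliance with the General Data Protection Law",
        ]
    return ["record of the preliminary assessment", "general transparency duties"]

# Hypothetical example: a system classified as high risk after assessment.
print(required_steps(RiskTier.HIGH))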
This legislative history is complemented by an overarching National
Strategy for AI produced by the executive branch, known as EBIA
(Ministério da Ciência, Tecnologia e Inovação Do Brasil, 2021). The latter
was introduced in 2021 with the aspiration to bolster Brazil's technological
and human capabilities in the field of AI. It is explicitly aligned with key
principles stemming from international bodies such as the OECD and
UNESCO, attaching great importance to human-centered values,
accountability and international cooperation in the field of AI (OECD,
2024; UNESCO, 2021). It highlights three strands of policy instruments –
named “transversal axes” – to be employed in six different AI-relevant
policy areas – named “vertical axes” (Ministério da Ciência, Tecnologia e
Inovação Do Brasil, 2021). The transversal axes include legislation,
regulation, governance, and international cooperation, whereas the vertical
axes cover the promotion of digital education, AI-related labor skills,
research and development, application in private and public sectors, and
public security. The intersection of these strands creates a total of 73
strategic actions envisioned by the Brazilian government.

EU
With its unique legal system, the EU has recently gained much political and
academic attention due to its latest legislative initiative covering AI, the
well-known EU AI Act. However, the Union's efforts in this field have a
long backstory. Tracing back the steps of this legal system's policy pathway
shows a consistent early focus on human-centered values and the protection
of human rights.
The European Commission's 2018 Communication on Artificial
Intelligence for Europe kickstarted the EU's engagement in AI policy and
law (European Commission, 2018b). By emphasizing the need to adopt a
human-centric approach to AI technologies and ensure that AI systems are
developed and implemented in respect of ethical standards and fundamental
rights, the Communication provided the foundation for the subsequent EU
AI strategy. Building on this basis, the European Commission and the
Member States established the first Coordinated Plan on Artificial
Intelligence in 2018 to ensure coordination and alignment of national AI
policies (European Commission, 2018a). The Coordinated Plan was then
revised in 2021 and further expanded to support investments in AI research
while continuing to foster human-centered principles and encouraging
public confidence in the adoption of AI systems (European Commission,
2021a).
The EU then made noteworthy progress in creating a legislative
architecture tackling the risks created by AI and its applications, well before
the widespread use of generative AI chatbots. The goal of the AI Act, which
was proposed by the European Commission in April 2021 together with the
second Coordinated Plan, is to provide a uniform legal framework for AI
throughout the Member States (European Commission, 2021). The AI Act
pays particular attention to high-risk AI applications, such as those
employed in biometric identity, vital infrastructure, and law enforcement.
Accordingly, it sets forth a series of rules addressing data governance,
accountability, and transparency for AI developers and users (European
Commission, 2021). Finally, it also sets guidelines for data traceability and
quality and mandates the development of conformity assessment procedures
for high-risk AI systems. After much political back and forth, the AI Act
was adopted by the European Parliament at its March 2024
plenary (European Parliament, 2024).
These legislative efforts are complemented by the Data Governance Act,
which aims to ease data exchange for AI purposes while upholding data
protection and privacy standards. Applicable since September
2023, this legal instrument is essential to ensure that data is employed in
ways that respect people's rights and freedoms while facilitating the
development and implementation of AI systems that necessarily rely on
massive datasets (Official Journal of the European Union, 2022).
In addition to such legislative efforts, the EU has put in place further
policy initiatives to encourage the development and appropriate application
of AI. For instance, the European Commission advanced the creation of a
European AI Board, which would function as a forum for member state
collaboration and coordination in the AI domain (European Commission,
2021). The Board would promote regulatory harmonization of national
instruments, sharing of best practices, and oversee that AI policies are in
line with the EU's wider strategic goals (European Commission, 2021).

India
In sharp contrast to the legislative system just considered, India does not
currently have an overarching AI legislative framework. Instead,
economically strategic policy measures and legislative actions that support
responsible AI growth across several industries characterize India's
regulatory approach to AI (Dey & Cyrill, 2024; Sinha et al., 2022). First
introduced in 2018 and further updated in 2020, the National Strategy for
Artificial Intelligence underpins the country's commitment toward
leveraging AI for socio-economic development, with a strong focus on
national interests such as healthcare, education, transportation, and the
design of smart cities (NITI Aayog, 2018).
As part of this National Strategy, in 2021 the Indian government
introduced the National Single Window System to streamline the country's
AI development process and simplify business approvals, as well as a series
of Centers of Excellence to promote educational access to AI design and
use across the workforce (Ministry of Commerce and Industry, 2021;
Ministry of Education and IIT Jammu, CPMU for AI CoE, 2023). These
centers have also been established as intersection points encouraging
collaboration across academia, industry, and the government. These
measures complement the promising Indian AI startup ecosystem, which
features around 2000 startups (Nilgiriwala et al., 2024).
This supportive environment for economic growth through AI was then
complemented by the enactment of the Digital Personal Data Protection Act
in August 2023. This established a comprehensive data governance
framework that created a significant shift in the Indian AI economic
environment by granting citizens stronger control over their personal
information and subsequently impacting both domestic and international
entities operating in the country (Ministry of Electronics and Information
Technology [MeitY], 2023).
Nevertheless, it must be noted that India does not have comprehensive
legislation regulating the use and deployment of AI technology. On the
contrary, the legal system adopts a piecemeal approach, as the MeitY has
only issued advisories to guide the use and deployment of AI tools and
expand the applicability of due diligence measures for intermediaries
(MeitY, 2024). While not codified into amendments to pre-existing
legislation, these advisories promote compliance with content moderation
standards and transparency regarding the reliability of AI models. As a
result, while the country cannot yet boast a complete legislative
framework governing AI, the existing policy and legislative measures pursue
an approach that prioritizes innovation and economic growth while
also paying a certain degree of attention to protecting citizens from AI-
related harm. As India's legislative framework for AI regulation evolves, it
remains to be seen whether the country will manage to balance its socio-
economic interests with the introduction of an overarching AI Act.

Japan
Walking a fine line between regulatory and economic interests, Japan's
approach toward AI reflects a strategic commitment to fostering AI
innovation while substantially addressing risks associated with the
technology and its use. This legal system places great importance on
multistakeholder involvement and is attempting to design an agile
governance model that eschews rigid and sector-specific mandates (METI,
2022). It must also be noted that much inspiration for Japanese policy
development in this area came from international impulses through the
2019 OECD AI Principles (Habuka, 2023; OECD, 2019). These Principles
champion inclusivity, human dignity, transparency, fair competition and
innovation.
In this legal system, AI governance relies heavily on Society 5.0, an
initiative first conceptualized in 2016 that strives toward a combination of
societal resilience to unpredictability, the promotion of innovation and
economic growth and respect of human-centric values (METI, 2022;
Tonomura et al., 2023). In order to achieve this vision for Japanese society,
the legal system is adopting a series of highly dynamic and easily adaptable
measures that allow for constant reassessment in response to unexpected
technological-led changes (Donegan, 2024).
Examples following on from this vision of the regulatory landscape
include the implementation of sector-specific laws such as the Digital
Platform Transparency Act and the Financial Instruments and Exchange
Act (METI, 2021a, 2022). These provide strict risk-management rules for
businesses to uphold transparency in the face of AI-related risks, while also
encouraging the use of self-regulating systems such as smart contracts and
blockchain to minimize burdensome oversight from central authorities. This
has been complemented through the introduction of a number of standards
for AI interoperability and quality assurance (Habuka, 2023; Tonomura et
al., 2023). Similarly to the EU legal system, Japan has established an AI
Strategy Council that ensures cooperation and coherence across ministries
for AI measures (METI, 2023). Finally, an ad hoc commission known as
the Digital Rincho has been operating since 2021 to comprehensively
modernize regulations that may impede the use of digital technologies and
to streamline regulatory compliance (METI, 2021b).
Japan's distinctive policy addressing the use of copyrighted content for
training generative AI models must also be highlighted. Unlike many other
legal systems, the Japanese government allows AI models to process data
for both non-profit and commercial purposes, including content from illegal
sites (Tanaka et al., 2023; Tonomura et al., 2023). Though only permitted
for AI-model training purposes, this push for AI development clearly shows
the legal system's fine counterbalancing of competing priorities. In fact, this
permissive stance on copyright content raises important legal challenges for
users and AI service providers during the prompt-input and AI-output
stages (Tanaka et al., 2023; Tonomura et al., 2023).
The above measures show that the Japanese legislative framework
employs a uniquely balanced approach that advocates strong technological
progress while upholding privacy and transparency standards. The legal
system currently strives toward the establishment of an agile set of
legislative instruments that regulate the safe integration of AI technologies
and promote strong innovation. Its permissive approach toward copyright
laws for AI training certainly sets it apart from other legal systems, but it
remains a potentially loose cannon in an otherwise carefully balanced
method.

People's Republic of China (PRC)


In sharp contrast to the legal system noted above, the PRC's regulatory
approach toward AI offers a particularly rigid system built upon a series of
legislative instruments. The country's early and proactive engagement in
this field has brought it much international scrutiny when it comes to
establishing governance structures (Roberts et al., 2021; Sheehan, 2024).
Such engagement shows a clear commitment to harmonizing technological
innovation with social stability and control, alongside the introduction of
certain bureaucratic innovations (Chen, 2024; Xiao, 2024).
The first step in this direction consisted in the adoption of the New
Generation AI Development Plan in 2017, which set forth a timetable for
developing AI regulations up to 2030 (PRC State Council, 2017). This
shows an early interest by the PRC in controlling a fast-developing
technology such as AI. There are three seminal regulations stemming from
the 2017 Plan that showcase this legal system's approach to AI governance.
The 2021 Provisions on the Management of Algorithmic Recommendations
in Internet Information Services are aimed at curbing the unchecked
dissemination of content online, addressing a variety of concerns that
include possible monopolistic behavior by leading tech companies and the
exploitation of gig workers (PRC Cyberspace Central Administration,
2021). The law also introduced the so-called algorithm registry, a key
bureaucratic innovation that collects AI algorithms used by companies for
transparency and accountability purposes. A mention of this mandatory
registry is also present in all other regulations governing AI. On the other
hand, the 2022 Provisions on the Administration of Deep Synthesis Internet
Information Services were adopted in response to rising concerns around
the widespread use of deepfakes depicting public figures (PRC Cyberspace
Central Administration, 2022). They set a series of rules requiring clear
labeling of AI-generated content to prevent misinformation across
platforms, as well as a general prohibition on the generation of fake news
through AI applications. Finally, since 2023 work has been underway on the
Measures for the Management of Generative Artificial Intelligence Services
(PWC News, 2023). The latter were triggered as a response to the
widespread use of general-purpose AI applications such as ChatGPT. While
covering the same ground as the 2022 Provisions, these draft Measures put
much more emphasis on training data and require service providers to
ensure that the data used to train AI chatbots is both “true and accurate.”
So far, this legal system's regulations have seen the light after undergoing
a multilayered process. The initial input for the PRC's AI regulations often
comes from a digestion of external ideas and processes, rendering this
system quite receptive to international developments (Hine & Floridi, 2022;
Sheehan, 2024). These external inputs are then intellectually and politically
filtered through the ideology and political priorities of the Chinese
Communist Party. After this ever-present filter, a draft law then enters the
so-called “world of ideas,” whereby it is discussed by a wide range of
actors, including think tanks, AI scientists, companies, and lobbyists
(Huang et al., 2024; Sheehan, 2024). The result of these debates is then
processed through the Chinese Communist Party and government
bureaucracy, whereby one or more ministries consolidate it into a specific
law. This customarily intricate procedure for AI regulation ensures that
policy solutions that may contravene the party's wishes or priorities are shot
down (Sheehan, 2024).
Despite the clear absence of an overarching AI regulation, this legal
system presents deep engagement with AI applications, with a view to
safeguarding social stability and control as one key objective of the
communist political elites. Nonetheless, given the well-established political
weight of this system, there are many expectations regarding the
implementation of a comprehensive national AI law that could become a
reference point for policymakers worldwide.

The United Kingdom (UK)


Whereas all the other legal systems considered in this Annex have
developed or are developing a set of legislative measures governing AI,
surprisingly the UK has been rather cautious in doing so. At the time of
writing, the country does not have any AI-specific regulation
and is openly refraining from starting any legislative process on this matter.
Instead, AI governance is addressed through a context-based approach
relying on pre-existing legislation (Gallo & Nair, 2024; Rennie et al., 2024;
Szczepański, 2024).
This context-based approach is underpinned by core principles such as
safety, security, transparency, explainability, fairness, and accountability.
Additionally, there is no regulator with an AI-specific portfolio, as
regulators are expected to rely on pre-existing laws and issue
supplementary regulatory guidance only when needed (Roberts et al.,
2023). Regulators affected by AI, such as the Information Commissioner's
Office or the Financial Conduct Authority, are thus expected to carry out
enforcement based upon a proportionate context-based analysis using AI-
neutral regulatory frameworks (Foo & Machin, 2024). Nonetheless,
regulatory bodies are not banned from publishing their own guidance on
AI-relevant topics. Finally, the UK government foresees the creation of the
so-called “central functions” to support existing regulators in AI matters
(UK Department for Science, Innovation & Technology, 2024).
Notwithstanding this, the House of Lords recently conceded the
possibility of working on AI legislation (UK House of Lords, 2024).
However, following a public consultation the context-based approach was
once again confirmed, with the government showing its commitment to a
pro-innovation environment and arguing that it would seek to legislate on
AI only when it is “…confident that it is the right thing to do” (UK
Department for Science, Innovation & Technology, 2024, p. 7).
More recent developments have confirmed the UK's approach of
avoiding too much legislative activity to regulate AI. In January 2025,
Prime Minister Keir Starmer unveiled a comprehensive AI blueprint
strategy aimed at positioning the UK as a global leader in AI. The latter
includes establishing AI “growth zones,” increasing public computing
capacity through the development of a national supercomputer, and
fostering a pro-innovation regulatory environment (Associated Press, 2025).
The UK government also announced plans to significantly expand its AI
computing capacity over the next five years. This initiative aims to support
AI applications in academia and public services, positioning the UK as a
global leader in AI technology (Financial Times, 2025). Considering that
the UK ranks among the leading countries for AI innovation ecosystems,
this approach clearly diverges from even its geographically closest
legislative systems, such as the EU.

United States of America (USA)


Being the country leading AI innovation efforts – at the very least as the
birthplace of ChatGPT – the USA has been taking certain steps toward AI
regulation, without developing an overarching law covering this area.
Notably, there have been many attempts to bring together the necessary
industry, civil society, and legal experts to find the best balance while not
stifling industrial growth (Guenduez & Mettler, 2023).
From a purely strategic viewpoint, the possibility that the USA could end
up in head-to-head competition with the second largest player in the AI
race, namely the PRC, has been a strong motivator for not rushing to introduce potentially burdensome legislative constraints (Walter, 2024). In the
absence of a comprehensive AI regulation, the American regulatory
approach consists of a mix of federal government, state governments, case
law, and the industry itself (Szczepański, 2024). However, these remain
separate avenues and present key limits, such as the ready possibility of
conflicts of interest in the case of industry self-governance, or the courts’
limited power in adjudicating AI-related disputes within the confines of
existing laws (Plotinsky & Cinelli, 2024).
Nonetheless, this legal system does have certain regulatory tools at hand.
In particular, the 2020 National Artificial Intelligence Initiative Act
established the National Artificial Intelligence Initiative and provided legislative
guidance on AI research, development and evaluation standards at US
federal science agencies (US Congress, 2020b). Other acts have obliged
certain agencies to drive AI programs and policies across the federal
government, including the AI in Government Act and the Advancing
American AI Act (US Congress, 2020a, 2022). The White House Office of
Science and Technology Policy also published its Blueprint for an AI Bill of
Rights in January 2023 (The White House, 2023a).
In October 2023, the Biden Administration issued an executive order on
the “Safe, Secure, and Trustworthy Development and Use of AI” (The
White House, 2023b). The order covers eight policy fields, including new
standards for AI safety and security, protection of citizens’ privacy,
advancing equity and civil rights, protecting consumers, patients, and
students, supporting workers, promoting innovation and competition,
advancing US leadership abroad, and ensuring responsible and effective
government use of AI (The White House, 2023b).

Comparative Analysis
As mentioned above, this Annex looks at the various legal systems’
regulatory approaches toward AI through the lens of functionalism. As a
result, the previous section provided an overview of each system's
legislative experience to date in their common exposure to AI's
unprecedented pervasiveness and steady growth. From the elements
collected, at face value one may rightly conclude that the proliferation of AI
technologies has forced legal systems to adopt distinct strategies reflective
of their unique social, economic and political contexts, as well as of their
technological ambitions.
Nevertheless, if one looks beyond the mere uniqueness of a given system,
it becomes clearer that there are nascent trends that are beginning to
influence the direction of a country's regulatory approach, one way or the
other. Observing the function exemplified by a given legal measure helps
identify a common thread in trends guiding AI regulations. Considering the
legal systems analyzed in this Annex, two main trajectories can be
extrapolated: an economic/innovation track, whereby a legal system will
tend to prioritize measures favoring technological innovation in the AI field
and the potential subsequent economic growth; and a values-based track,
according to which a legal system will prefer regulating AI according to a
socio-politically established set of values.
From this perspective, one may position legal systems such as the EU
and the PRC at the far end of the values-based line. This is explained by the
fact that both systems prioritize regulations that safeguard a given set of
values. On one side, the EU champions a human-centric regulatory
philosophy, with the recently adopted AI Act and the Data Governance Act
underscoring a clear commitment to fundamental rights and ethical
standards. On the other hand, the PRC presents a highly centralized,
control-based model that still prioritizes a specific list of values, among
which social stability shines through.
Moving along the values-based line, one finds two systems that lean further toward the opposite trajectory. While still favoring the fundamental rights and ethical standards dimension, Brazil's dynamic legislative experience in AI matters reveals a tendency not to overlook the economic and innovation dimensions of AI. Similarly, Japan's strategy,
underpinned by the vision of a Society 5.0, shows a strong tendency toward
balancing AI innovation and protecting certain values. Nevertheless, the
impulse toward such balancing is slightly offset by its peculiar stance on copyright law for AI training, which to a notable degree pushes this system toward the more innovation-prone side.
As a general trend, and perhaps unsurprisingly, the legal systems
observed that portray a higher tendency toward economic growth and
technological innovation are also those with more fragmented or even non-
existent regulatory measures. The USA strongly tends toward safeguarding
its leading role as an AI technological and economic giant, though it
nonetheless shows a certain degree of attention to fundamental rights and
ethical standards. Conversely, the Indian and the UK legal systems offer
fragmented measures that clearly push in the economic and innovation
direction. In the distinctive case of the UK, this tendency even culminates in a reluctance to work on AI laws, coupled with an absolute focus on regulatory flexibility.
Figure A1.1 shows what such a comparative analysis would look like if spread across an imaginary two-way line, with economic/innovation on one side and values-based on the other.

Figure A1.1 Comparative analysis spread across an imaginary two-way line, with economic/innovation on one side and values on the other.

It must be noted that this analysis should not be understood as portraying legal systems as motionless pillars doomed to stay where they are currently
identified. It is precisely the opposite. The overview provided here points to
an emergent continuum of approaches that may vary across time, influenced
by further technological developments or contingent socio-political
dynamics. Accordingly, and especially considering the fast-paced nature of
AI, in one year's time we may see that a legal system has completely
changed direction and is headed toward the opposite side of the spectrum.
Regardless of the stance a legal system may adopt in the future, it is
important to understand how functions indicate evolving trends across the
world. In this sense, even legal systems that are rightly considered as
markedly different – see the EU and the PRC – may end up adopting
approaches that are more similar than one would initially think. As
countries continue to grapple with the ever-changing nature of AI, the legal
systems analyzed here show that the interplay between economic growth,
ethical standards, innovation and fundamental rights is a key factor
influencing the current global regulatory mosaic.

Conclusion
This Annex has delved into the nuances of AI legislation and its functions
across the selected legal systems, providing a blueprint to better understand
the trends driving regulatory choices across the world. It underscores the
need for a more granular understanding of AI governance, whereby each
jurisdiction's approach should also be analyzed as a functional response to
certain strategic priorities and objectives. The analysis thus invites further
comparative overviews of other systems’ regulatory approaches, which may
even help unearth different trajectories from those identified here.
To sum up, as the current mainstream narrative focuses on the many
actors engaged in a headlong race to comprehensively regulate AI, one
must not forget that there are competing impulses driving each system's
regulatory choices. Those impulses will influence the function a certain
measure is designed to carry out, creating an interesting mosaic of
legislative possibilities. While remaining cognizant of the limits of the
functionalist method, this Annex contributes to the academic discourse by
offering an alternative view complementing the insights of this Handbook.

References
General

Han, T., Pereira, L. M., Santos, F. C., & Lenaerts, T. (2020). To regulate or
not: A social dynamics analysis of an idealised AI race. Journal of
Artificial Intelligence Research, 69(2020), 392–405.
https://doi.org/10.1613/jair.1.12225
Kischel, U. (2019). The comparative method. In U. Kischel & A. Hammel
(Eds.), Comparative law (pp. 87–200). Oxford University Press.
https://doi.org/10.1093/oso/9780198791355.003.0003
Koniakou, V. (2023). From the “rush to ethics” to the “race for governance”
in artificial intelligence. Information Systems Frontiers, 25(1), 71–102.
https://doi.org/10.1007/s10796-022-10300-6
Naudé, W., & Dimitri, N. (2020). The race for an artificial general
intelligence: Implications for public policy. AI & SOCIETY, 35(2),
367–379. https://doi.org/10.1007/s00146-019-00887-x
OECD. (2024). Recommendation of the council on artificial intelligence.
https://legalinstruments.oecd.org/en/instruments/oecd-legal-0449
Smits, J., & Husa, J. (2012). A dialogue on comparative functionalism.
Maastricht Journal of European and Comparative Law, 18, 554–558.
Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’:
Regulatory competition for artificial intelligence. Law, Innovation and
Technology, 13(1), 57–84.
https://doi.org/10.1080/17579961.2021.1898300
UNESCO. (2021). Recommendation on the ethics of artificial intelligence.
https://www.unesco.org/en/articles/recommendation-ethics-artificial-
intelligence
Valcke, C., & Grellette, M. (2014). Three functions of function in
comparative legal studies. In M. Adams & D. Heirbaut (Eds.), The
method and culture of comparative law: Essays in honour of Mark Van
Hoecke (pp. 99–112). Hart Publishing.
https://tspace.library.utoronto.ca/bitstream/1807/129761/1/Three%20F
unctions%20of%20Function%20in%20Comparative%20Legal%20Stu
dies.pdf
van Erp, S. (2020). Data regulation: A race to…? European Journal of
Comparative Law and Governance, 7(4), 335–338.
https://doi.org/10.1163/22134514-00704004
Walter, Y. (2024). Managing the race to the moon: Global policy and
governance in artificial intelligence regulation—A contemporary
overview and an analysis of socioeconomic consequences. Discover
Artificial Intelligence, 4(1), 14. https://doi.org/10.1007/s44163-024-
00109-4
Zweigert, K., & Kötz, H. (1998). An introduction to comparative law.
Oxford University Press.

Brazil

Câmara dos Deputados Do Brasil. (2020). Projeto de Lei No 21, de 2020.
https://www.congressonacional.leg.br/materias/materias-
bicamerais/-/ver/pl-21-2020
Ministério da Ciência, Tecnologia e Inovação Do Brasil. (2021). Portaria
MCTI N. 4979. https://www.gov.br/mcti/pt-br/acompanhe-o-
mcti/transformacaodigital/arquivosinteligenciaartificial/ebia-
portaria_mcti_4-979_2021_anexo1.pdf
Senado Federal Do Brasil. (2019). Projeto de Lei n° 5051, de 2019.
https://www25.senado.leg.br/web/atividade/materias/-/materia/138790
Senado Federal Do Brasil. (2021). Projeto de Lei n° 872, de 2021.
https://www.congressonacional.leg.br/materias/materias-
bicamerais/-/ver/pl-872-2021
Senado Federal Do Brasil. (2023). Projeto de Lei n° 2338, de 2023.
https://www.congressonacional.leg.br/materias/materias-
bicamerais/-/ver/pl-2338-2023

EU

European Commission. (2018a). Coordinated plan on artificial intelligence.
https://digital-strategy.ec.europa.eu/en/policies/plan-ai
European Commission. (2018b). Communication artificial intelligence for
Europe. https://digital-strategy.ec.europa.eu/en/library/communication-
artificial-intelligence-europe
European Commission. (2021a). Coordinated plan on artificial intelligence
2021 review. https://digital-
strategy.ec.europa.eu/en/library/coordinated-plan-artificial-
intelligence-2021-review
European Commission. (2021b). Proposal for a regulation of the European
Parliament and of the Council laying down harmonised rules on
Artificial Intelligence (Artificial Intelligence Act) and amending
certain Union legislative acts. https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=celex%3A52021PC0206
European Parliament. (2024). European Parliament legislative resolution of
13 March 2024 on the proposal for a regulation of the European
Parliament and of the Council on laying down harmonised rules on
Artificial Intelligence (Artificial Intelligence Act) and amending
certain Union Legislative Acts.
https://www.europarl.europa.eu/doceo/document/TA-9-2024-
0138_EN.html
Official Journal of the European Union (2022). Regulation (EU) 2022/868
of the European Parliament and of the Council of 30 May 2022 on
European Data Governance and Amending Regulation (EU)
2018/1724 (Data Governance Act). https://eur-lex.europa.eu/legal-
content/EN/TXT/?uri=CELEX%3A32022R0868

India

Chakrabarty, R., & Sanyal, K. (2020). Towards a ‘Responsible AI’: Can
India take the lead? South Asia Economic Journal, 21(1), 158–177.
https://doi.org/10.1177/1391561420908728
Chatterjee, S. (2020). AI strategy of India: Policy framework, adoption
challenges and actions for government. Transforming Government:
People, Process and Policy, 14(5), 757–775.
https://doi.org/10.1108/TG-05-2019-0031
Chauhan, P., Avasarala, S., & Singh, A. (2024). MEITY advisory: Dawn of
AI regulation in India or a false start. Lakshmikumaran & Sridharan
Attorneys Blog. https://www.lexology.com/library/detail.aspx?
g=47dda3b5-1111-4b6b-9f87-799ef8066802
Dey, A., & Cyrill, M. (2024). India's regulation of AI and large language
models. India Briefing. https://www.india-briefing.com/news/india-
regulation-of-ai-and-large-language-models-31680.html/
Ministry of Commerce and Industry. (2021). National single window
system. https://www.nsws.gov.in
Ministry of Education and IIT Jammu, CPMU for AI CoE. (2023). AI
Centres of Excellence (CoE) make AI in India and make AI work for
India. https://www.education.gov.in/sites/upload_files/mhrd/files/Call-
for-Proposal.pdf
Ministry of Electronics and Information Technology (MeitY). (2023). The
Digital Personal Data Protection Act.
Ministry of Electronics and Information Technology (MeitY). (2024).
Advisory: Due Diligence by Intermediaries/Platforms under the
Information Technology Act, 2000 and Information Technology
(Intermediary Guidelines and Digital Media Ethics Code) Rules.
Nilgiriwala, K., Mahajan, U., Ahmad, R., de Castro, R., Lazo, L., Kong, J.
D., Siew Hoong, A. L., Veerakumarasivam, A., Sharef, N., &
Demidenko, S. (2024). Navigating the Governance of Artificial
Intelligence (AI) in Asian Nations: A Focus on India, Indonesia,
Malaysia and the Philippines. SSRN.
NITI Aayog. (2018). National strategy for artificial intelligence.
https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-
for-Artificial-Intelligence.pdf
Sinha, A., Hickok, E., & Basu, A. (2022). AI in India: A policy agenda. The
Centre for Internet and Society. https://cis-india.org/internet-
governance/files/ai-in-india-a-policy-agenda/view

Japan

Donegan, J. (2024, February 24). The US should look at Japan's unique
approach to generative AI copyright law. ManageEngine Insights.
https://insights.manageengine.com/artificial-intelligence/the-us-
should-look-at-japans-unique-approach-to-generative-ai-copyright-
law/
Habuka, H. (2023). Japan's approach to AI regulation and its impact on the
2023 G7 presidency. Center for Strategic and International Studies.
https://csis-website-prod.s3.amazonaws.com/s3fs-public/2023-
02/230214_Habuka_Japan_AIRegulations.pdf?
VersionId=BnLSQRRqoO9jQ8u1RW3SGKOA0i8DBc4Q
METI. (2021a). Digital Platform Transparency Act. https://elaws.e-
gov.go.jp/document?lawid=502AC0000000038
METI. (2021b). Digital platforms.
https://www.meti.go.jp/english/policy/mono_info_service/information
_economy/digital_platforms/index.html
METI. (2022). Agile governance update.
https://www.meti.go.jp/shingikai/mono_info_service/governance_mod
el_kento/pdf/20220808_2.pdf
METI. (2023). Financial Instruments and Exchange Act.
https://www.japaneselawtranslation.go.jp/en/laws/view/4405
OECD. (2019). OECD AI principles. https://oecd.ai/en/ai-principles
Tanaka, H., Wachi, K., Ichikawa, Y., & Takahashi, R. (2023). Japan
Artificial Intelligence. Country Comparative Guides 2023.
https://www.legal500.com/guides/chapter/japan-artificial-
intelligence/?export-pdf
Tonomura, K., Konno, Y., Matsuzaki, Y., & Koike, M. T. (2023). Artificial
intelligence 2023 Japan. Chambers and Partners.
https://practiceguides.chambers.com/practice-guides/artificial-
intelligence-2023/japan/trends-and-developments/O13573

PRC

[People's Republic of] China State Council. (2017). New Generation AI
Development Plan. https://www.gov.cn/zhengce/content/2017-
07/20/content_5211996.htm
Chen, Q. (2024). China's emerging approach to regulating general-purpose
artificial intelligence: Balancing innovation and control. Asia Society
Policy Institute. https://asiasociety.org/policy-institute/chinas-
emerging-approach-regulating-general-purpose-artificial-intelligence-
balancing-innovation-and
Cyberspace Administration of [People's Republic of] China. (2021).
Provisions on the management of algorithmic recommendations in
internet information services.
https://digichina.stanford.edu/work/translation-internet-information-
service-algorithmic-recommendation-management-provisions-
effective-march-1-2022/
Cyberspace Administration of [People's Republic of] China. (2022).
Provisions on the administration of deep synthesis internet information
services. https://en.pkulaw.cn/display.aspx?
cgid=90cff392df74a3ebbdfb&lib=law
Hine, E., & Floridi, L. (2022). Artificial intelligence with American values
and Chinese characteristics: A comparative analysis of American and
Chinese governmental AI policies. AI & SOCIETY, 39(1), 257–278.
https://doi.org/10.1007/s00146-022-01499-8
Huang, X., Yusoff, Z. M., Bin Md Nor, M. Z., & Labanieh, M. F. (2024).
The legal challenges and regulatory responses to artificial intelligence
(AI) in China. Proceedings of the 12th UUM International Legal
Conference 2023 (UUMILC 2023), 335–347.
https://doi.org/10.2991/978-94-6463-352-8_26
PWC News. (2023). Regulatory and legislation: China's interim measures
for the management of generative artificial intelligence services.
https://www.pwccn.com/en/tmt/interim-measures-for-generative-ai-
services-implemented-aug2023.pdf
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L.
(2021). The Chinese approach to artificial intelligence: An analysis of
policy, ethics, and regulation. In L. Floridi (Ed.), Ethics, governance,
and policies in artificial intelligence (pp. 47–79). Springer
International Publishing. https://doi.org/10.1007/978-3-030-81907-1_5
Sheehan, M. (2024). Tracing the roots of China's AI regulations. Carnegie
Endowment for International Peace. https://carnegie-production-
assets.s3.amazonaws.com/static/files/Sheehan_Reverse_Engineering_
AI_Gov-UPDATED.pdf
Xiao, B. (2024). Agile and iterative governance: China's regulatory
response to AI. SSRN.

UK

Associated Press. (2025, January 13). Britain's leader unveils AI strategy,
including vow to rip up red tape hindering AI projects. AP News.
https://apnews.com/article/c383dd9799aa569c9e76b4322d92a730
Financial Times. (2025, January 25). UK pledges huge increase in
computing capacity to build AI industry.
https://www.ft.com/content/3df56e38-357c-495f-b81b-d11dcfe3055f
Foo, C., & Machin, E. (2024). UK Government develops its AI regulation
framework. Ropes and Gray.
https://www.ropesgray.com/en/insights/viewpoints/102izno/uk-
government-develops-its-ai-regulation-framework
Gallo, V., & Nair, S. (2024). The UK's framework for AI regulation.
Deloitte. https://www2.deloitte.com/uk/en/blog/emea-centre-for-
regulatory-strategy/2024/the-uks-framework-for-ai-regulation.html
Rennie, J., Hickman, T., Poldma, J., & Jha, A. (2024). UK's context-based
AI regulation framework: The Government's response. White and
Case. https://www.whitecase.com/insight-our-thinking/uks-context-
based-ai-regulation-framework-governments-response
Roberts, H., Babuta, A., Morley, J., Thomas, C., Taddeo, M., & Floridi, L.
(2023). Artificial intelligence regulation in the United Kingdom: A
path to good governance and global leadership? Internet Policy
Review, 12(2), 1–31.
Szczepański, M. (2024). The United Kingdom and artificial intelligence.
EPRS.
https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/762285/
EPRS_ATA(2024)762285_EN.pdf
UK Department for Science, Innovation and Technology. (2024). A pro-
innovation approach to AI regulation: Government response.
GOV.UK. https://www.gov.uk/government/consultations/ai-regulation-
a-pro-innovation-approach-policy-proposals/outcome/a-pro-
innovation-approach-to-ai-regulation-government-response
UK House of Lords. (2024). Artificial Intelligence (Regulation) Bill, HL
Bill 11. https://researchbriefings.files.parliament.uk/documents/LLN-
2024-0016/LLN-2024-0016.pdf

USA

Guenduez, A., & Mettler, T. (2023). Strategically constructed narratives on
artificial intelligence: What stories are told in governmental artificial
intelligence policies? Government Information Quarterly, 40(1).
https://doi.org/10.1016/j.giq.2022.101719
Plotinsky, D., & Cinelli, G. M. (2024). Existing and proposed federal AI
regulation in the United States. Morgan Lewis.
https://www.morganlewis.com/pubs/2024/04/existing-and-proposed-
federal-ai-regulation-in-the-united-states
Szczepański, M. (2024). United States approach to artificial intelligence.
EPRS.
https://www.europarl.europa.eu/RegData/etudes/ATAG/2024/757605/
EPRS_ATA(2024)757605_EN.pdf
The White House. (2023a). Blueprint for an AI Bill of Rights: Making
automated systems work for the American people.
The White House. (2023b). Executive order on the safe, secure, and
trustworthy development and use of artificial intelligence.
US Congress. (2020a). H.R.2575 - AI in Government Act of 2020.
https://www.congress.gov/bill/116th-congress/house-bill/2575
US Congress. (2020b). H.R.6216 - National Artificial Intelligence Initiative
Act of 2020. https://www.congress.gov/bill/116th-congress/house-
bill/6216
US Congress. (2022). S.1353 - Advancing American AI Act.
https://www.congress.gov/bill/117th-congress/senate-bill/1353/text
ANNEX II
Young Leaders’ Vision on the Future of
Artificial Intelligence

When discussing artificial intelligence (AI), the first thing that comes to
mind is the opportunities it presents. It might be both awe-inspiring and
intimidating to consider such a volatile reality permeating fields that are so
close to us. However, it is something we must acknowledge and accept,
allowing the wave of innovation to wash over us. Indeed, AI can be a
significant aid for labor-intensive tasks, potentially increasing productivity
massively. This, in turn, can lead to substantial economic growth,
significant advancements in medicine, and a revolution in agriculture.
These developments can increase living standards worldwide and help
address poverty.
The benefits of AI are not limited to making labor less physically
demanding and more tolerable. It can also help in sectors where human
error still causes significant problems. AI never gets tired, is never
distracted because of personal issues, and never suffers from mood swings.
Overall, AI can be more precise than humans since it is not constrained by
human biological limitations. Collaborating effectively with this innovative
tool can significantly mitigate the negative effects of human error across
numerous fields.
As mentioned, AI has the capability to rectify human errors.
Additionally, it can intervene in technical areas such as preventive
maintenance. Its capacity to process vast amounts of data, far beyond the
scope of traditional methods, demonstrates AI's potential to predict machine failures. This predictive ability enables pre-emptive
maintenance, minimizing downtime and cutting costs by facilitating
interventions before wholesale replacement of parts becomes necessary,
thereby enhancing overall efficiency.
In addition to labor and technical domains, AI presents an opportunity for
personalization, particularly in sectors where achieving this currently
proves challenging due to the high costs and extensive efforts involved.
Areas such as special needs education, medicine, and humanitarian efforts,
among others, stand to be transformed by enabling personalized programs
without the need for prohibitively high expenses and intensive labor,
thereby lowering costs while enhancing efficiency and effectiveness.
Significant opportunities often carry substantial risks, and the
continuously expanding realm of AI is no exception to this principle. One
of the most alarming potential consequences involves its use in the
dissemination of news and information, leading to heightened concerns
regarding the prevalence of misinformation. Malicious actors could misuse
AI, generating vast volumes of fake news that overwhelm reliable sources.
Moreover, AI could facilitate the creation of targeted misinformation,
capable of broadening its impact by reaching audiences who are normally uninterested in certain subjects or who deem them irrelevant.
In the sphere of media and news dissemination, AI also advances
deepfake technology, increasingly blurring the line between misinformation
and authentic information. Previously solid forms of evidence, such as
video, may soon become less reliable, posing challenges in discerning truth
from falsehood.
While AI presents advantages in terms of personalized education, its
widespread integration within school systems may also pose risks to the
cultivation of crucial skills such as creativity, writing, problem-solving,
independent thought, and research. These potential outcomes impact not
only the students but the entire educational system. Current ubiquitous
testing methods face challenges from AI, posing a threat to existing skills
and the system's integrity. Consequently, the system is compelled to adapt
in response to these looming disruptions.
Examining the job market and industry landscape, while AI enables a
substantial boost in productivity, the flip side of this advancement is the
likelihood of reduced human employment in areas where AI proves most
beneficial. Indeed, numerous current job roles entail tasks that involve
processing information and generating reports, at least to some extent.
These roles, while not expected to vanish entirely, are likely to witness
reduced demand in a world where AI proliferates. Consequently,
individuals may need to consider career transitions and educational
adjustments, leading to increased instability and insecurity in the job market.
In addition to the aforementioned fields, the domains of domestic and
international politics also stand to change drastically due to the
advancement of AI systems. Authoritarian governments could take the
chance to increase their state-run surveillance, as it will allow for the
collection and computation of more and more data; computers have already
created the possibility for states to surveil and control people on scales and
in ways that were previously unthinkable. AI will increase this tendency
even further, allowing states to augment their oppression.
The expansion of AI is poised to trigger a race for resources, particularly
regarding rare earth metals that will become increasingly coveted. This
surge in demand could have adverse effects on conflict-prone areas where
these metals are concentrated, as controlling these resources becomes more
profitable. This situation might escalate risks and the intensity of conflicts.
Additionally, heightened dependence on these raw materials may breed
geopolitical tensions between nations possessing these resources and those
lacking them.
Likewise, geopolitical tensions could also flare up when addressing
intellectual property over, and access to, advanced chips, something that can
already be seen in the arguments around the export of EUV (extreme
ultraviolet) lithography machines, where the United States, China, and the
EU are at loggerheads. These conflicts might escalate, possibly also leading
to broader trade wars.
Geopolitics is intricately linked with warfare where AI is poised to play a
significant role by enabling increased automation. This automation could
potentially mitigate the adverse effects states experience when endangering their own populations during wars waged abroad. Consequently,
leaders might become more inclined to initiate conflicts, resulting in
heightened suffering and increased conflict.
Furthermore, warfare inherently challenges ethical boundaries, and the
addition of AI introduces a concerning prospect: the scenario in which AI
autonomously determines targets. This possibility raises a multitude of
ethical dilemmas that warrant careful consideration.
If AI were to develop to the level of artificial general intelligence (AGI),
numerous ethical, practical, and existential questions would be raised.
When would an AGI deserve rights? What kind of rights? How will we test
if it is conscious? How will it be prevented from working against the
interests of humanity? Will it be a risk to the continued survival of the
human race? AGI is not currently on the table, but if it becomes a
possibility, these might be the biggest issues humanity has had to grapple
with.
Opportunities and risks coexist in the realm of AI development, and
regulations play a crucial role in maintaining balance, protection, and
fostering functional progress. However, attempting to regulate AI solely on
a national level is bound to face challenges. Ideally, global regulation would
be more effective, allowing for broader oversight and ensuring a higher
standard of protection.
When contemplating national-level regulation of AI, it becomes evident
that the digital nature of AI renders strict limitations ineffective for many
countries. Restricting AI development in one country could result in AI
research entities relocating elsewhere. Even powerful nations like the
United States and China are susceptible to this reality. Furthermore, if AI
advancements occur elsewhere, their impacts will still reverberate in the
countries where limitations were imposed, leaving those nations with less
control than they would otherwise have had. Additionally, the existence of
an AI arms race undermines national regulatory efforts. Unilateral
constraints on AI development may cause a country to lag behind others,
incentivizing nations to allow AI development without stringent regulations
unless collective provisions are established.
Moreover, the issue intensifies when considering that AI technology is
adaptable and can run on dual-use infrastructure, such as common
computing systems employed for various purposes. These systems, with
multifaceted applications, pose a challenge for regulators attempting to
control AI development, as the same infrastructure is utilized for a
multitude of functions beyond AI research.
While national regulation poses complexities, challenges also loom on
the global stage. AI is predominantly propelled by private companies rather
than governments, presenting a fundamental hurdle. Traditionally, private
entities have limited participation in international negotiations. However,
engaging with them is essential for effective regulatory provisions. Hence,
reimagining the landscape of diplomacy and international negotiations
becomes imperative to effectively address this aspect of AI regulation.
A final and most complex issue to solve is the breakneck speed of AI
advancements; government provisions and international negotiations and
regulations are inherently quite slow. However, there is not enough time for
traditional processes, as the rules would most likely be significantly
outdated by the time they are adopted. It will be a substantial challenge to
adopt regulations in a timely manner while ensuring proper quality. If those
regulations are enacted without adequate attention to detail, crucial aspects might be overlooked and loopholes missed.
AI, in conclusion, offers both huge benefits and existential threats to the
way society currently works. A lot is still unclear about the way AI will
shape the future of humanity, but it is clear that it will have widely
reverberating impacts. The breakneck speed of development and its
technical nature mean it does not fit naturally with traditional forms of
regulation and diplomacy. Nevertheless, it is imperative that decision-
makers and regulators start working on international guidelines for this
technology.
Bram Koers, Beatrice Biffi, Sana Hadžić-Babačić, Stefan Marković,
Vyara Noncheva, Hannah Pauly, Devi Purwanti, Donada Rata, Roberta
Risso, Jérôme Seibert, Juho Toppari, Luis Engelmann, Sofiia Zhuchyk,
Mahmoud Moussa, Lilly Grande, Emilie Uehara, Mariia Mirat, Chiara
Cupola, Sara Maria Barbaglia, Sophia Vermaas, Ricardo Nuno de Almeida
Gens, Matheus Fischer Meyer, Rafaela Lara Caetano Reinhardt, Alina
Diachenko, Ricardo Filipe Tavares Magalhaes, Adrian Wróbel
INDEX

Note: Page references in italics denote figures, in bold tables and with “n”
endnotes.

21 Lessons for the 21st Century (Harari) 79
2030 Agenda for Sustainable Development 21, 82–83, 279, 304, 316–317

Abu-Mostafa, Yaser S. 239
academia: and AI Race 98–99; collaborative research initiatives 98;
educational programs 98–99
Access Now 101–102
accountability 27, 33, 35, 46, 73, 83, 95–97, 99–100, 104–105, 108, 110,
150, 165, 239, 265, 281–282, 289, 290, 356, 359, 366, 370, 373, 384,
394–395; for AI-driven decisions 267; as aspect of ethical AI in
healthcare 374; ethical AI in healthcare 374–375; mechanisms 180,
185
Acemoglu, D. 205
Adler-Nissen, Rebecca 79
Africa: and AI 22–23; China's AI strategies in 23–29; critical sustainability
challenges in 22; digital transformation 19, 25
African Charter on Human and Peoples’ Rights 307
agents 10, 14, 59, 114–116, 121, 157; autonomous AI 136, 230; biological
129–131, 133–134, 139, 141–143; chemical warfare 137–138;
infectious 132; nerve 137–138; pandemic-class 131
Agent Design 115
“The Age of Surveillance Capitalism” (Zuboff) 271
AI4People initiative 106–107
AI Act 97, 105, 337–343, 343n2, 343n11, 344n18, 344n19, 347–359;
Article 13 352–353; Article 14 353–355; Article 26 355–356; Article
50 356–357; broader scenario 341–343; compliance with Union law on
copyright/related rights 358; documentation for AI system providers
358; European approach 340–341; geo-political dimension of AI
regulation in Europe 337–340; information for AI system providers
358; and MDR complexities 374–378; measures ensuring human
agency in 350–351; purpose of 40–43; risk-based approach 47–48;
summary of 347–348; technical documentation 358; users’ protection
43–44
AI bias 230, 231, 236, 240n10
AI-driven predictive analytics 270
AI-enabled telemedicine platforms 269
AI Lab 207
AI Now Institute 102
AI-powered hackers 228–232; better targeted attacks through deepfakes
229–230; email-related compromise 229; emerging threats/challenges
230–231; ethical considerations 231; facing unknown, ugly side of AI
230; fake news 231–232; human oversight 231; neutralizing
defenders’ tools/platforms 229; phishing 229; spam 229
AI-powered warfare 149–159; extended MHC in command and control
150–153; extended MHC in nuclear command and control 153–156;
MHC for autonomous weapons systems 156–158; overview 149–150
AI Race 73, 87; and academia 98–99; and collaboration 104–110; and
industry 99–100; and non-profit organizations 100–102; Quintuple
Helix model 92–95; sector-specific dynamics 95–98; and society 102–
103; through collaborative geopolitics 92–110
AirBnb 225
AI systems 1, 20, 28, 33, 36, 40–51, 73, 83, 95–96, 101–107, 109–110, 117,
133–134, 143, 150–151, 154–156, 158–159, 180, 182, 184–185, 193–
194, 205–206, 228–231, 234–235, 237–239, 264–265, 299, 301–302,
313, 315–317
AI tagging 235–237, 240n11
The Alchemy of Finance (Soros) 333n7
Alexanian, Tessa 139
algorithm bias 240n10
Algorithmic Justice League (AJL) 101
algorithmic machine learning 57
AlphaFold3 133
AlphaMissense 135
AlphaProteo 133
al-Qaeda 164
Al-Rodhan, Nayef 86
Altman, Sam 80, 82, 125, 129, 255
Amaro, Ramon 64–66
Amazon 101, 113, 121, 231
American Convention on Human Rights 307
Amnesty International 102, 271
Amoore, Louise 64, 66, 69–70
Andropov, Yuri 195
Anti-Phishing Working Group (APWG) 224–225
Apollo Program 37
application programming interfaces (APIs) 140
Arab Spring 245
Aristotle 294
armed conflicts: and artificial intelligence 162–169; children in 264–276;
dark side of AI's impact on children in 270–273; safeguarding
children's fundamental rights amidst emerging AI risks in 273–274
arms control, and AI 140–143
Article 13 of Directive (EU) 2016/680 359n2
artificial general intelligence (AGI) 116, 407–408
artificial intelligence (AI): in armed conflicts 162–169; and biological
weapons 131–136; complex reflections on 31–39; and cybersecurity
223–240; cyber threat landscape 223–226; and diplomacy 189–200;
and earth observation-based capability 243–250; and EO 249–250;
ethics and future regulation of 360–368; factor in promotion of
fair/accessible justice 286–289; geopolitical impact of 19–29; and
geopolitics 189–200; and governance 35–37, 57–70, 138–140; and
health security 129–131; human autonomy 349–350; in humanitarian
action 176–186; influence on the economy 322–332; international
cooperation 237–239; and international humanitarian law in conflict
settings 264–276; and international relations 10–15, 72–88; large-scale
transboundary information 243–250; lessons learned from comparable
paradigm 232–234; multistakeholder policy development 237–239;
overview 5–10; promise of AI in cybersecurity 226–228; and race 57–
70; “Reinventing the I in C.I.A.” 234–237; research into 5–15;
supporting right to life of children in armed conflict zones 268–270;
and supranational law 40–51; threat of AI-powered hackers 228–232;
as tool for conflict resolution 124–125; transformative impact of 1–2,
112–122; young leaders’ vision on future of 406–408
Artificial Intelligence Act see AI Act
Artificial Intelligence Agency 87
Artificial Intelligence and Data Act (AIDA) (Canada) 105
“Artificial Intelligence and International Affairs. Disruption Anticipated”
80
artificial intelligence ethics 382–390; autonomous machines 387–388;
autonomy 384–385; general concerns 384–385; human autonomy and
agency 388; warming up on AI and ethics reviews 382–383
“Artificial Intelligence for A Digital Blue Planet” 20
artificial intelligence regulatory systems 392–401; Brazil 393–395;
comparative analysis 400–401; EU 395; India 396; Japan 396–397;
methodology 392–393; People's Republic of China (PRC) 397–398;
regulatory frameworks 393–400; United Kingdom (UK) 398–399;
United States of America (USA) 399–400
“Artificial Intelligence—The Revolution Hasn’t Happened Yet” (Jordan)
112
artificial neural network (ANN) 7–8, 12
Artificial Wisdom (AW) 87
Asaro, Peter 157
ASEAN (Association of Southeast Asian Nations) 238; Cybersecurity
Cooperation Strategy 238
Asia Indigenous Peoples Pact (AIPP) 314, 319n21
Association for Computing Machines 390n2
Australia Group (AG) 141
automated decision-making 183–184
automated science 135–136
automatic identification system (AIS) 249
autonomous machines 387–388
autonomous weapons systems (AWS) 149, 156–158
Autonomous Weapon Systems (AWS) Conference 239
autonomy 19, 34, 42–43, 85, 87, 107, 162–163, 362, 370, 374; artificial
intelligence ethics 384–385; human (see human autonomy);
operational 194; personal 180, 347; rudimentary form of 165; strategic
284

backpropagation (BPG) algorithms 7, 113
Badri, Aiman 269
Baekeland, Leo 233
Baker, David 139
barriers to equitable progress 257–262; access to AI tools 259–262; access
to co-creation/decision-making 257–258; first barrier 257–258; second
barrier 259–262
Beauchamp, T. L. 374
behavior analytics 240n5
Belt and Road Initiative 24–25, 27
Benasayag, Miguel 80
Bengio, Yoshua 80
Benjamin, Ruha 61–63
Bennett, M. M. 246
Bhaskar, M. 31, 33, 37–38
Biden, Joe 85–86, 100
Big Data 57–70
Bing Chat 80
biological design tools (BDTs) 133–136, 140
biological weapons: and artificial intelligence 131–136; automated science
135–136; biological design tools 133–135; governance 138–140; non-
proliferation 138–140
Biological Weapons Convention (BWC) 141–142
biometric technology 66
black box effect 44
Black Horizon approach 63–69
The Black Technical Object (Amaro) 64
Bletchley Declaration 36
Blinken, Antony 81
blockchain 72, 75, 137, 237, 279, 397
Bologna Technopole 214–222
botnet 225, 240n4
brain 7, 12–15, 84, 361, 382
Bran, Andres 137
Bratton, Benjamin 58–60
Brazil 97, 125, 340, 393; artificial intelligence regulatory systems 393–395
Brazilian Chamber of Deputies (Federal Senate) 393–394
Brenner, M. 210
Bretton Woods 192
Brexit 11, 231
A Brief History of Time (Hawking) 117
Brin, Sergey 325
Browne, Simone 61, 63
‘Brussels effect’ 337–338
Buchanan, B. 31, 34
Buolamwini, Joy 101
Burrell, Jenna 68–69

Cambridge Analytica 231
Campbell, D. F. J. 92, 93
“Can’t Wait to Learn” program 269
Captivating Technology (Benjamin and Roberts) 65
Carayannis, E. G. 92, 93
CARE Principles for Indigenous Data Governance 314–315
cave allegory 76–79
CCTV (closed-circuit television) 350–351
central processing unit (CPU) 7
chatbots 8, 113, 131, 280–281, 287, 348, 385, 392, 395, 398
ChatGPT 9, 78, 80, 105, 113, 228, 294–303; age of AI 296–300; challenges
and risks 296–300; historical evolution of knowledge organization
294–296; models of knowledge storage 294–296; new architecture of
knowledge 300–303
ChatGPT3 255
ChemCrow 137
chemical weapons: and Generative AI 136–138; governance 138–140; non-
proliferation 138–140
Chief Information Security Office (CISO) 226–227, 240n7
children: in armed conflicts 264–276; safeguarding fundamental rights
amidst emerging AI risks 273–274
Childress, J. F. 374
China 97; 2021 Provisions on the Management of Algorithmic
Recommendations in Internet Information Services 397–398; AI
investment programs 74; AI strategies in Africa 23–29; approach to AI
governance 86; artificial intelligence regulatory systems 397–398; Belt
and Road Initiative 24–25, 27; Measures for the Management of
Generative Artificial Intelligence Services 398; New Generation AI
Development Plan in 2017 26, 397
Choi, Yejin 84
Chun, Wendy 64, 67–68, 70
Church, George 139
Cicero 124, 388
Claude 3 113
cognitive bias 44
cognitive warfare 200
Colebrook, Claire 68
collaboration/collaborative geopolitics: and AI Race 104–110; AI Race
through 92–110; future regulatory frameworks 108–110; multi-
stakeholder partnerships 104–106; strategic models 104–110;
successful frameworks 106–108
Collaborations Pharmaceuticals, Inc. 136–137
Collingridge dilemma 339, 343n4
complex thinking 31–32
Compute Unified Device Architecture (CUDA) 7
confidential computing 211
confidentiality: finance 209; financial crime 209–211; healthcare 209;
preserving in data-driven applications 205–213; preserving through
privacy enhancing technologies 206–208
conflict resolution, and AI 124–125
conservation of biodiversity 308
containing 37–39
Convention on Biological Diversity (CBD) 308–309, 315, 317
Convention on the Rights of the Child (CRC) 268
convolutional neural networks (CNNs) 8
Copernicus program 244–245, 250
co-regulation 343n5
Coscientist 137
cost and AI 117–119
Council of Europe (CoE) 283, 393
Council of Europe European Commission for the efficiency of justice
(CEPEJ) 283
Council of Europe Framework Convention on Artificial Intelligence and
Human Rights, Democracy and the Rule of Law 40, 45–47, 51, 98,
101
Covid-19 no-vax movement 80
COVID-19 pandemic 23, 57, 121, 129, 130, 133, 142, 224, 369
Cowls, J. 374
Critical Assessment of Structure Prediction Conference (CASP) 133
critical race theory 58
cryptography 207, 240n9
CryptoLocker 224
cultural diversity 44, 305, 309–310
cybernetics 6
“Cybernetics or Control and Communication in the Animal and the
Machine” (Wiener) 6
cybersecurity: AI for security operations 226–228; AI strengthening
cybersecurity training 228; artificial intelligence and 223–240;
professionals 231–232; promise of AI in 226–228
Cyberspace Administration of China (CAC) 26
cyber threat landscape 223–226

DALL·E 113
dark sousveillance 63
data: Indigenous 313–316; protection of 313–316; sharing 181, 211, 366;
synthetic 211
Data Act 41
data-driven applications: confidentiality in 205–213; privacy in 205–213
data protection: addressing bias through 181–182; AI and humanitarian
action 181–185; automated decision-making 183–184; human
oversight 184–185; personal 180–181; and transparency 182–183
DDoS (Distributed Denial of Service) 240n3
deep belief networks (DBNs) 8
Deep Blue 233
deepfakes: AI-powered hackers 229–230; better targeted attacks through
229–230
deep learning (DL) 7–8, 113
DeepMind 113, 131, 133, 135
deep neural networks (DNNs) 208
De Haro, Leyma P. 132
design-test-build-learn (DBTL) 133
Dictionnaire Raisonné des Sciences, des Arts et des Métiers 294
differential privacy 209–210
digital diplomacy 289, 337, 340, 343n1
digital innovation, as political weapon 23–28
Digital Personal Data Protection Act 396
Digital Platform Transparency Act 397
Digital Service Act 41
Digital Single Market Act 41
digital twin (DT) 15
digitization 223–224, 305
diplomacy: and AI 189–200; and AI's future challenges 81–84; and
geopolitics 189–200; quantum 76–79; and technology 73–74
disarmament treaties 193–194
Discriminating Data (Chun) 67
dolus directus 169
dolus eventualis 167
Domaradskij, Igor V. 134
Duke, J. E. 387

Earth Observation (EO) 243–250
earth observation-based capability and transboundary information 243–250
Ebola emergency 130
economic allocation of resources 329–332
economy: AI and economic allocation of resources 329–332; AI and
investments 329–332; AI as transformative force 322–323; and AI
revolution 323–326; artificial intelligence's influence on 322–332;
global economy/society 322–323; hardware revolution 323–326;
money and AI 326–328; new/pervasive sources of information 326–
328; software revolution 323–326
“e-embassies” initiative 75
Eklund, L. 245
e-Learning Sudan 269
email-related compromise 229
entropy 5–15
Entscheidungsproblem 6
environmental challenges, and AI 304–318
environment and AI 117–119
epidemiological model of society 59
ePrivacy Directive 349
equality 255–262, 365–366
ethical algorithmic future 63
ethical considerations 231
ethical dilemmas: in healthcare artificial intelligence 369–379; and
regulatory landscape 369–379
ethical generative AI: coping with protection of Indigenous data 313–316;
governance 313–316; sovereignty 313–316; use and sustainable
approaches 304–318
ethics: and AI 85, 360; autonomy 85; beneficence 85; as critical reflection
to inspire regulation/policies 364–365; dual advantage 105; ethical
recommendations for policy 367–368; and future regulation of AI
360–368; geopolitical tool to foster 337–343; and healthcare AI 371–
374; justice 85; non-maleficence 85; requirements for regulation of AI
365–366; techno-centrism 360–364; toward regulation 366–367
EuroHPC Joint Undertaking 216
EuroHPC System 215–221
Euromissiles crisis 195
Europe: AI ACT 340–341; geo-political dimension of AI regulation in 337–
340
European Commission 184, 244, 286, 340, 375, 395
European Convention on Human Rights (ECHR) 44
European Court of Human Rights (ECtHR) 48
European Data Protection Board (EDPB) 51n12, 344n12
European Data Protection Supervisor (EDPS) 51n12
European Digital Strategy 379
European Ethical Charter on the Use of Artificial Intelligence in Judicial
Systems and Their Environment 283
European framework 215
European Meteorological Center 217–218
European Parliament 238
European Space Agency (ESA) 244
European Union (EU) 11, 24–25, 97, 150–151; AI Act (see Regulation on
Artificial Intelligence (AI Act)); approach to AI governance 26;
artificial intelligence regulatory systems 395; Charter of Fundamental
Rights 25, 41, 43–44, 47, 50, 347, 349; General Data Protection
Regulation (GDPR) 272, 374; regulatory challenges of healthcare AI
in 374–378
exclusion 20, 22, 305, 306, 314, 366
Expedition of the Thousand 199
explainability 9, 44, 186, 212, 357, 365, 368, 370, 373, 378
eXplainable AI (XAI) 155, 185–186
Export-Import Bank of China 27

Facebook 113
fairness 365–366
fake news 231–232
Fanon, Frantz 64
Faxon, H. O. 246
federated learning (FL) 208, 213
FedEx 225
financial crime: confidential computing 211; differential privacy 209–210;
homomorphic encryption 210–211; synthetic data 211
Financial Instruments and Exchange Act 397
The Fink Report 130
Fischhendler, I. 248
flat ontology 59
Floridi, L. 374
Food and Agricultural Organization (FAO) 20
Fourth Industrial Revolution 72
free, prior, and informed consent (FPIC) 311–312, 315

Gates, Bill 6, 255
Gates Foundation's view on AI 256–257
Gemini 8, 100, 105, 113–114, 385
Gemini Pro 114
General Data Protection Regulation (GDPR) 41, 44, 206, 337, 345n21, 349
general-purpose AI models 357–358
generative AI 112, 116, 228, 312–313, 332n1, 332n2; and chemical
weapons 136–138; concerns companies have 117–119, 118;
transformative impact of 119–122, 120, 121
geonetwork 191
geopolitical: dimension of AI regulation in Europe 337–340; tool to foster
human rights and ethics 337–343
geopolitics 407; and AI 189–200; and diplomacy 189–200
Ghebreyesus, Tedros Adhanom 130
GitHub 225
Global AI Governance Initiative (GAIGI) 86
global development aid: impact of AI on likeminded actors 282–286;
market of 282–286
Global Digital Compact (GDC) 83, 316
global economy 322–323
Global Grand Challenges 259
Global Indigenous Data Alliance 314
Global North 197, 338
Global South 125, 197, 338, 343n3
Google 99, 113, 324–325
Google Bard 80, 113
Google Earth Engine (GEE) platform 246
Gopal, A. 140
governance: and AI 35–37, 57–70, 138–140; chemical and biological
weapons 138–140; ethical generative AI 313–316; and non-
proliferation 138–140; and security 35–37
governments: and AI development 96–98; case studies of national
approaches 97; international agreements and treaties 97–98; role of
governmental bodies 96–97; and tech industry 100
GPT-3.5 115
GPT-4 115, 117
GPT4V 114
Grand Challenges initiative 259
graphical processing unit (GPU) 7–8, 10
Guidance Document on Ethics and Governance of Artificial Intelligence for
Health 20
“Guidelines for secure AI system development” 83
Guterres, Antonio 82

H1N1 swine flu pandemic 130, 131
H5N1 influenza 131
hackers 228–232
The Hague District Court 48
Hall, Stuart 66–67
Harari, Yuval Noah 79–80, 235
hardware revolution 323–326
Harmonices Mundi 14
Hawking, Stephen 80, 117
healthcare artificial intelligence: AI Act and MDR complexities 374–378;
ethical dilemmas and regulatory landscape in 369–379; ethical
landscape of 371–374; opportunities and challenges of 369–371;
regulatory challenges in EU 374–378
health security and artificial intelligence 129–131
Heyns, Christof 156
high-frequency trading (HFT) 194
High-Level Expert Group on Artificial Intelligence (HLEG) 347, 375
high-performing computing (HPC) 214–215
high risk 348
high-risk AI systems 48–50, 351–357
Hinton, Geoffrey 80, 116
“Hiroshima AI Process” 82
Homo Deus. A Brief History of Tomorrow (Harari) 79
homomorphic encryption 210–211
Höne, Katharina 74
How Generative AI is transforming business and society (Wyman) 332n1
HTTPS protocol 207
human agency 388; and AI Act 350–351; challenges to human autonomy
posed by AI 349–350; and general-purpose AI models 357–358; and
high-risk AI systems 351–357; responsible/trustworthy use of AI 347–
359; summary of the AI Act 347–348
human autonomy 388; and AI 349–350; Article 13 352–353; Article 14
353–355; Article 26 355–356; Article 50 356–357
human being/humans: ‘direct’ behavior 362–363; free and with a purpose
363; interpreting meanings 362; relational 363–364; self-aware 362
human-centrism 361–362
Human Footprint Index 247
humanitarian action: AI concerns and data protection 181–185; artificial
intelligence in 176–186; overview 176–177; personal data 179–181;
and principle of humanity 178–179
human oversight 231, 347–359
human rights 337–343
Human Rights Council 319n22
Human Rights Watch 271
Humby, Clive 73
Huttenlocher, Daniel 124

IBM 7, 101, 324, 370
identification 235
ILO Convention No. 169 307
Imbrie, A. 31, 34
India 75, 125, 393; artificial intelligence regulatory systems 396; as swing
country 197
Indigenous data, protection of 313–316
Indigenous peoples: contextualization 304–306; environmental
management 306–309; from identification to implementation 306–309;
rights, data and environmental sustainability 304–306; rights of 306–
309
Indigenous sovereignty data (ID-SOV) 314
individual criminal liability 166–168
industry (technological): and AI Race 99–100; and governments 100;
technological innovation and application 99–100
innovation 93, 99–100
innovation ecosystems: lessons learned and future challenges 221–222; role
of public sector in developing 214–222
information: for AI system providers 358; importance of 232; large-scale
transboundary 243–250; new/pervasive sources of 326–328; satellite-
based EO 245–249
Institute of Electrical and Electronics Engineers (IEEE) 390n3; codes 383;
Ethically Aligned Design principles 374
intangibility and traditional knowledge 309–313
intellectual property 309–313
Inter-American Human Rights System 307
Intergovernmental Panel on Climate Change (IPCC) 87
International Atomic Energy Agency (IAEA) 82
International Committee of the Red Cross (ICRC) 150, 156–158, 177, 267
International Covenant on Civil and Political Rights (ICCPR) 307
International Covenant on Economic, Social, and Cultural Rights (ICESCR)
307
International Criminal Court 166, 168
international development cooperation 279–290
international humanitarian law (IHL) 149; AI and future perspectives of
264–276; in conflict settings 264–276; and smart weapons 163–165;
violation of 166–168
International Indigenous Forum on Biodiversity (IIFB) 317
International Monetary Fund 27, 331
international organizations and AI 19–20
International Red Cross 176, 178
international relations: AI dangers 79–81; and artificial intelligence 10–15,
72–88; diplomacy and technology 73–74; in technologization age 72–
88
International Telecommunication Union 20
Internet of Things (IoT) 72, 194, 223–225, 227, 232
interpretability approach 183
investments, and AI 329–332
invisibility 314
Iron Dome 157
Islamic State in the Levant (ISIL) 138
Israeli Defense Forces (IDF) 150, 151, 201n7

Jackson, Zakiyyah Iman 61
Japan 45; artificial intelligence regulatory systems 396–397; as observer
state 85; as swing country 197
Johnson, Dwayne “The Rock” 255
justice: accessible 286–289; fair 286–289

Kahneman, Daniel 152
Kariyeva, J. 247
Kasparov, Garry 233
#KeepItOn campaign 101
Kelly, C. J. 370
Kim Jong-Nam 138
Kissinger, H. A. 31, 34–35, 39, 73, 83–84, 124
knowledge 93; control 311; governance 311; storage 294–296; traditional
304–318
knowledge organization: historical evolution of 294–296; models of
knowledge storage 294–296
Korean War 200
Kötz, H. 392
Kurzweil, Ray 239

large language models (LLMs) 8–10, 37, 114–116, 131–132, 219
large-scale transboundary information 243–250
Law Enforcement Directive 359n2
LEDS-150 Land-Electronic Defense System 157
legal issues of AI 117–119
Lentzos, Filippa 134
Leonardo 215–216, 218–219; architecture of 216; tech specifications 217
Lethal Autonomous Weapon Systems (LAWS) 387
Levin, N. 245
Linder, Astrid 258
Llama2-13B 115
LLM BLOOM (BigScience Large Open-science Open-access Multilingual
Language Model) 119
Local Communities and Indigenous Peoples Platform (LCIPP) 317
lower- and middle-income countries (LMICs) 258, 260–261

Machado, M. R. 247
machine learning (ML) 6–7, 176, 229, 240n8
machine learning bias 240n10
machines: ‘act as if they have a purpose’ 363; automatic 362; ‘calculate’
362; not self-aware 362; operate syntactically 362; predict behavior
362–363; self-referential 363–364
man: autonomous 362; ‘feels’ 362
Manifattura Tabacchi 218
MapReduce programming paradigm 7
Martens Clause 171n24
Martínez Cobo, José 306
Massive Open Online Course (MOOC) 288
McKinsey 75
mDoc 260–261
MDR complexities 374–378
meaningful human control (MHC) 149, 365; for autonomous weapons
systems 156–158; extended, in command and control 150–153;
extended, in nuclear command and control 153–156
Médecins Sans Frontières (MSF, Doctors Without Borders) 270
MegaSyn 137
Merck 225
Meta 124, 388
Microsoft 78, 99, 113
Millennium Development Goals 21
minimal risk 348
Mirai botnet 225
Missing Persons Digital Matching Project (MPDM) 177
Mistral AI 113
ML algorithms 7
Mobisson, Nneka 260–261
models of knowledge storage 294–296
Modern AI 112
MODIS (Moderate Resolution Imaging Spectroradiometer) satellite 245
Mondelēz 225
money and AI 326–328
Moore, Charles 233
Moore, Gordon 6
Mo’otz Kuxtal Voluntary Guidelines 315
Moulange, Richard 140
MSN 78
“multi-party computation” design 212–213

Nächstbereichschutzsystem (NBS) MANTIS 157
Nagoya Protocol to the CBD on Access and Benefit-sharing 309
NASA 37
National Institute for Nuclear Physics (INFN) 217
National Science Advisory Board on Biosecurity (NSABB) 130
Naturalis Historia (Pliny the Elder) 294
Navalny, Alexei 138
negative narratives 314
Nelson, Cassidy 135
Netflix 113, 225
Newell, Allen 72
New Generation Artificial Intelligence Development Plan (AIDP) 26
“The New Geopolitics of Converging Risks: The UN and Prevention in the
Era of AI” 20
New Jim Code 61–62, 63, 68
new technological wave 31–32
Noble, Safiya 61, 63
non-discrimination 365–366
non-profit organizations: and AI Race 100–102; case studies 101–102; role
in AI ethics 100–101
Non-Proliferation Treaty 197
Normalized Difference Vegetation Index (NDVI) 245–246
NTMs (National Technical Means) 198
Nvidia 113

OECD AI Principles 98
OKO 155
Olivetti 323
OpenAI 9, 113–114, 117, 132, 233, 255, 296
Organisation for Economic Co-operation and Development (OECD) 20,
393, 394
Organisation for the Prohibition of Chemical Weapons (OPCW) 138
Organization for Security and Co-operation in Europe (OSCE) 238
OVH 225

Page, Larry 325
Parkes, Alexander 233
Parkesine 233
Peckham, R. 247
People's Republic of China (PRC) see China
Perotto, Pier Giorgio 323
personal data: and AI 179–181; humanitarian action 179–181; protection
180–181
Personal Information Protection Law 86
Petersmann, M. 60, 66, 69
phishing 229, 240n1
Pictet, Jean 179
‘planetary’ approaches 57–59, 70
Plato 76–79
Pliny the Elder 294
policies: ethical recommendations for 367–368; role of ethics as critical
reflection to inspire 364–365
polycrisis 31–32
Pope Francis 32, 34, 38
positive biopolitics 59
Predictive AI 332n2
privacy: and AI 117–119; differential 209–210; enhancing technologies
206–208; finance 209; financial crime 209–211; healthcare 209;
preserving confidentiality 206–208; preserving in data-driven
applications 205–213
privacy enhancing technologies (PETs) 206–208
Programma 101 323–324
Prometheus 76–79
proto-typical whiteness 61
proxies 182
public engagement and education 366
public investments 214–215
public sector: developing innovation ecosystems 214–222; developing
supercomputing 214–222
Putin, Vladimir 73

Quadruple Helix model 93–95
quantum diplomacy 76–79
Quickbird 248
Quintuple Helix model 92–95, 103; knowledge and innovation 93;
understanding 95

race: and AI 57–70; Black Horizon approach 63–69; as floating signifier
66–67; as technology 60–63
Raimondo, Gina 81
RAND Corporation 132, 194
Ransomware-as-a-service 224
Reagan, Ronald 195
Realpolitik 190
Rechtsstaat 191, 200n1
“Recommendation on the Ethics of Artificial Intelligence” 20, 106
Red Crescent Movement 176, 178
Reddit 225
regulation 365–366; AI Act and MDR complexities 374–378; data sharing
366; equality, fairness, and non-discrimination 365–366; ethics 366–
367; in Europe 337–340; geo-political dimension of 337–340; of
healthcare AI in the EU 374–378; meaningful human control 365;
public engagement and education 366; reliability 365; role of ethics as
critical reflection to inspire 364–365; shared responsibility 366;
sustainability 366; transparency and explainability 365
reinforcement learning (RL) 14
“Reinventing the I in C.I.A.” 234–237
relaunching 37–39
reliability 365
Renaissance 294
The Republic (Plato) 76
Research and Development Corporation (RAND) 72
Restrepo, P. 205
The Revenge of the Real (Bratton) 58
RFdiffusion 133
Roberts, Dorothy 65–66
Robichaud, Carl 125
Rose, Sophie 135, 140
rule of law 279–290
Russia 2, 24, 134, 138, 189; ad hoc political strategies 24
RYAN Complex 195–196

Saint-Gobain 225
Samuel, Arthur 6–7
Sandbrink, Jonas 133–134
Sapiens: A Brief History of Humankind (Harari) 79
SARS-CoV-2 see COVID-19 pandemic
satellite-based EO information to take action 245–249
satellite Earth observation 244–245
Scannell, R. Joshua 62, 67
Schmidt, Eric 84, 124
Schrödinger, Erwin Rudolf Josef Alexander 78, 81
Search in Own Database 114
Second Vatican Council 332n5
security: and AI 35–37; and governance 35–37
self-driving labs 136
September 11th terror attacks in 2001 130
shared responsibility 366
Shaw, Cliff 72
Shultz, George 79
Simon, Herbert A. 72
The Singularity Is Nearer 239
Sinha, R. 247
smart weapons: and international humanitarian law 163–165; violation of
law 166–168
society: and AI Race 102–103; emergence of AI stands as a transformative
force 322–323; public perception of AI 102; societal concerns 102–
103
Society 5.0 397
socio-technical systems 36
software revolution 323–326
solar radiation management (SRM) techniques 107–108
Soros, George 333n7
sovereignty, and ethical generative AI 313–316
spam 229
Spanish influenza 130
specific transparency risk 348
SPICE (Stratospheric Particle Injection for Climate Engineering) project
107–108
stacked autoencoders (SAs) 8
Starmer, Keir 399
“Study on the Problem of Discrimination against Indigenous Populations”
(Martínez Cobo) 306
Suleyman, M. 31, 33, 37–38, 82, 131
supercomputing: lessons learned and future challenges 221–222; role of
public sector in developing 214–222
supranational law: AI legal framework 40; and artificial intelligence 40–51
sustainability 221, 233, 276, 283, 288, 366; ecological 93, 107;
environmental 21, 83, 85; global strategy for 87
Sustainable Development Goals (SDGs) 279, 316
Suzman, Mark 256
symbiotic realism 74
synthetic data 211, 240n12

Taddeo, Mariarosaria 80
Taylor, Maxwell 200
Technical Assistance and Information Exchange (TAIEX) 286
techno-centrism: answers of 360–364; automatic machines/autonomous
man 362; human being as relational 363–364; human-centrism 361–
362; humans ‘direct’ behavior 362–363; humans interpreting meanings
362; machines as self-referential 363–364; machines calculate 362;
machines/human being and purpose 363; machines/humans and self-
awareness 362; machines operating syntactically 362; machines
‘predict’ behavior 362–363; man ‘feels’ 362
technologies: and diplomacy 73–74; privacy enhancing 206–208;
technological infrastructures 214–215
‘techno-realist’ attitude 32
tokenization 240n12
traditional knowledge: and artificial intelligence 304–318; and culture 309–
313; and intangibility 309–313; and intellectual property 309–313;
strengthening/integrating Indigenous approach 316–318
traditional knowledge systems (TKS) 304–305, 309–314
transparency 365; as aspect of ethical AI in healthcare 374; and data
protection 182–183
Triple Helix model 93–95
trust 326–328
Turing, Alan 6, 72
Turing machine 6
Tversky, Amos 152
Twitter 225

unacceptable risk 348
The Uncertain Trumpet (Taylor) 200
UNESCO 85, 106, 109–110, 317, 394; Convention concerning the
Protection of the World Cultural and Natural Heritage 310; Convention
for the Safeguarding of the Intangible Cultural Heritage 310;
Convention on the Protection and Promotion of the Diversity of
Cultural Expressions 310; Recommendation concerning the Promotion
and Use of Multilingualism and Universal Access to Cyberspace 310;
Recommendation on the Ethics of AI 384; Universal Declaration on
Cultural Diversity 309
UN Expert Mechanism on the Rights of Indigenous Peoples (EMRIP) 315
UN Framework Convention on Climate Change (UNFCCC) 317
UniCoin 331
Union law on copyright and related rights 358
United Kingdom (UK): artificial intelligence regulatory systems 398–399;
Brexit 11, 231; Financial Conduct Authority 398; Information
Commissioner's Office 398
United Nations (UN) 19, 21; Declaration on the Rights of Indigenous
Peoples 311, 315; General Assembly 309; Security Council Resolution
1540 141
United Nations Center for Policy Research (UNU-CPR) 20
United Nations Conference on Environment and Development (Earth
Summit, Rio de Janeiro) 308
United Nations Convention on the Rights of the Child (UNCRC) 272–273
United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP)
309, 319n22
United Nations Environmental Program (UNEP) 20
United Nations Institute for Disarmament Research (UNIDIR) 271
United Nations Office on Drugs and Crime (UNODC) 272
United States 24; 2022 National Security Strategy 141; Africa's digital
development 25; AI investment programs 74; Algorithmic
Accountability Act 51n9; artificial intelligence regulatory systems
399–400; collaboration between industry and government 100
Universal Declaration of Human Rights 306
unmanned aerial vehicles (UAVs) 158
UN Permanent Forum on Indigenous Issues (UNPFII) 312, 317–318
UN Sustainable Development Goals (SDGs) 28–29, 279, 289, 316–317;
and AI 21–24, 26
US National Security Commission on AI (NSCAI) 153–154

value-oriented governance strategy 84–87
Van Den Meerssche, D. 60, 66, 69
van Leeuwen, W. J. D. 247
Varè, Daniele 76
VIIRS Active Fire product 246
Visible Infrared Imaging Radiometer Suite (VIIRS) 245
von der Leyen, Ursula 221
Vumacam 28

War on Terror 130
Warsaw Pact 195
Watson, Thomas J., Sr. 324
Weheliye, Alexander 61
Wiener, Norbert 6
Wikimedia 295
Wikipedia 197, 295
Wine, Bobi 28
WinLock 224
WIPO Intergovernmental Committee on Intellectual Property and Genetic
Resources, Traditional Knowledge and Folklore 312
WIPO Member States 312
Woods, K. M. 246
World Bank 27, 75
World Economic Forum 227
World Health Organization (WHO) 20, 130
World Intellectual Property Organization (WIPO) 310
WorldView-1 248
WorldView-2 248–249
WorldView-3 249
World War II 81, 257
“World War of Aid” 284
Wyman, Oliver 332n1
Wynter, Sylvia 61, 64, 67
WYSIATI (What-You-See-Is-All-There-Is) heuristics 152

Xi Jinping 86

young leaders, and AI 406–408

Ziller, J. 283
Zuboff, Shoshana 271
Zweigert, K. 392