
Corfmat et al. BMC Medical Ethics (2025) 26:4
https://doi.org/10.1186/s12910-024-01158-1

REVIEW Open Access

High-reward, high-risk technologies? An ethical and legal account of AI development in healthcare

Maelenn Corfmat1,2*, Joé T. Martineau3† and Catherine Régis1,4†

Abstract
Background Considering the disruptive potential of AI technology, its current and future impact in healthcare,
as well as healthcare professionals’ lack of training in how to use it, the paper summarizes how to approach the
challenges of AI from an ethical and legal perspective. It concludes with suggestions for improvements to help
healthcare professionals better navigate the AI wave.
Methods We analyzed the literature that specifically discusses ethics and law related to the development and
implementation of AI in healthcare as well as relevant normative documents that pertain to both ethical and
legal issues. After such analysis, we created categories regrouping the most frequently cited and discussed ethical
and legal issues. We then proposed a breakdown within such categories that emphasizes the different - yet often
interconnecting - ways in which ethics and law are approached for each category of issues. Finally, we identified
several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.
Results We identified six categories of issues related to AI development and implementation in healthcare: (1)
privacy; (2) individual autonomy; (3) bias; (4) responsibility and liability; (5) evaluation and oversight; and (6) work,
professions and the job market. While each one raises different questions depending on perspective, we propose
three main legal and ethical priorities: education and training of healthcare professionals, offering support and
guidance throughout the use of AI systems, and integrating the necessary ethical and legal reflection at the heart of
the AI tools themselves.
Conclusions By highlighting the main ethical and legal issues involved in the development and implementation of
AI technologies in healthcare, we illustrate their profound effects on professionals as well as their relationship with
patients and other organizations in the healthcare sector. We must be able to identify AI technologies in medical
practices and distinguish them by their nature so we can better react and respond to them. Healthcare professionals
need to work closely with ethicists and lawyers involved in the healthcare system, or the development of reliable and
trusted AI will be jeopardized.


Joé T. Martineau and Catherine Régis have contributed equally to
this work.
*Correspondence:
Maelenn Corfmat
[email protected]
Full list of author information is available at the end of the article

© The Author(s) 2024. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use,
sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and
the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included
in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will
need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Keywords Artificial intelligence, Machine learning, Healthcare, Ethics, Law

Introduction

Recently, researchers, media, and practitioners have taken a keen interest in developments in artificial intelligence (AI). Indeed, since the launch of ChatGPT and GPT-4 by OpenAI at the end of 2022, citizens and professionals from all sectors, including healthcare, have been debating the contributions, impacts, and risks of such technologies. This paper outlines the main ethical and legal considerations associated with the development and deployment of AI within healthcare systems.

Medical doctors have used advanced technologies for many years. So why is AI different? First, it is far more disruptive. By allowing autonomous, opaque learning—and sometimes even decision-making—in a dynamic environment [1], AI leads to some unique technical, ethical, and legal consequences. For the first time since the birth of medicine, technology is not limited to assisting human gesture, organization, vision, hearing, or memory. AI promises to improve every area from biomedical research, training, and precision medicine to public health [2, 3], thus allowing for better care, more adapted treatments, and improved efficiency within organizations [4]. AI techniques including artificial neural networks, deep learning, and automatic language processing can now, for example, analyze a radiology image more quickly and precisely than a human [5], diagnose a pathology [6, 7], predict the occurrence of a hyperglycemia crisis and inject an appropriate dose of insulin [8], and analyze muscle signals to operate an intelligent prosthesis [9]. Yet, these improvements need to be balanced against the gap that now exists between the development (and marketing) of many AI systems and their concrete, real-life implementation by healthcare and medical service providers such as hospitals and medical doctors. This "AI chasm" [10] is notably explained by the disconnect that sometimes exists between the information technology (IT) side of system development and systems' adaptation to the specific needs and reality of healthcare institutions and patients, as well as by the ethical and legal issues discussed in this paper [10, 11]. Investment in the infrastructure that leads to AI solutions capable of "being implemented in the system where they will be deployed (feasibility), [and of] showing the value added compared to conventional interventions or programs (viability)" should also be targeted [12].

Second, health professionals generally seem to have rather poor knowledge of what AI is and what it allows [13]. While there is no unanimous definition of AI, the one proposed by the Organisation for Economic Co-operation and Development (OECD) [14, 15] has gained international traction and is often referred to in various policy initiatives. Based on that definition, this paper includes all kinds of computational systems that process input data to generate outputs such as predictions, content, recommendations, or decisions that can influence their healthcare environment of implementation [16]. In healthcare, AI has great potential, and it can be integrated into connected objects (e.g., a smart blood pressure monitor [17]), robotic systems (e.g., a surgical robot [18]), virtual assistants (e.g., patient management or appointment scheduling systems1), chatbots (e.g., customer service), contact tracing during epidemic episodes [19], or medical decision support (e.g., radiology image recognition for diagnosis2, or the choice of optimal treatment options3).

1 For instance, Elise A.I. Technologies Corp. specializes in conversational AI solutions. EliseAI offers AI-powered technology that can automate administrative tasks like appointment scheduling and sending payment reminders (SMS, voice, email, and web chat formats).

2 For example, Enlitic Inc. is developing deep learning medical tools to streamline radiology diagnoses.

3 For example, Healthee is a company that uses AI to help its team members effectively navigate the coverage and medical treatment options available to them.

The practice of medicine is based on medical doctors' knowledge and experience, and AI's dizzying calculation capacities mean that it can develop clinical associations and insights [20] from data derived from this knowledge (i.e., evidence from textbooks) and experience (i.e., lab results from patients) [21]. Thus, to the extent that the "AI chasm" can be reduced, healthcare professionals will increasingly see intelligent tools or machines being integrated into their daily practice [22]. This naturally provokes concerns, such as the fear of being replaced and a lack of confidence in the machine. In addition, healthcare professionals are poorly informed about the ethical and legal issues raised by the use of AI [23, 24].

Worries about the blind spots, complex implementation, impacts, and risks of AI have generated much political, academic, and public debate [15, 25]. Some have called for new ethical frameworks to guide the responsible development and deployment of AI, which has led to numerous declarations, ethics charters, and codes of ethics proposed by organizations of every type [26], including international organizations [27], public and academic institutions [28], hybrid groups [28], and private companies such as Google [29], IBM [30], Microsoft [31], and Telia [32]. AI legislation has also been called for.
All these productions are sources of normativity [33]. In other words, they guide human behavior, providing parameters for what "should" and "shouldn't" be done. However, the disciplines of ethics and law have distinct logics, conceptual frameworks, and objectives, and respond to different procedures of creation and implementation [34], making ethics and law two separate sources of normativity. First, law is composed of general, impersonal, external, and binding rules, accompanied by potential formal sanctions (by courts or police, for instance), while ethical norms do not exist in a coherent and organized set of norms – as is the case within a legal order – and adherence to ethical principles is voluntary [35]. Second, legal rules derive from the state structure, in force at a given time, in a given legal space. The field of ethics, meanwhile, is derived from philosophy, and more recently the social sciences, and relates to a reflexive process [36] that does not freeze ethical principles in time and space, but seeks to define them in a more dynamic way. Third, legal rules seek to provide a framework for the coexistence of people in society, to protect its members, and to guarantee political, economic, and social interests at the same time, whereas ethical norms and discussions are more based on moral values [35]. In sum, legal rules could be defined as the minimal duty that every person must respect (whether one can do something), while ethics encourages reflection on choices and behaviors (whether one should do something). In healthcare, ethics first dealt with the manipulation of living organisms through "bioethics" before considering patient relationships through "clinical ethics" and management and governance through "organizational ethics" [37]. The latter two aspects are still difficult to grasp today, because they demand a global understanding of organizations that encompasses employees' issues beyond the relationship of care.

Interestingly, despite the wealth of literature on AI, there is little to show healthcare professionals the main issues with an eye on the conceptual differences between ethics and law. This confusion is important to clarify, considering the different levels of opportunities and limitations they bring forward in medical practice. Therefore, in this paper, we highlight how ethics and law approach the issues of AI in health from different perspectives.

While law is mostly a local matter, our reflection does not target any one national jurisdiction. Nevertheless, the examples we use to better illustrate our analysis are focused on the western countries and regions most active in the AI field (on the governance and technical sides) [38], i.e., the United States, Canada, Australia, the European Union, and the United Kingdom. In ethical matters, the discussion encompasses a variety of ethical work on AI [39], but the monopolization of the ethical debate by a few countries from the Global North [38] should be underlined.

This paper presents an overview of the main issues pertaining to AI development and implementation in healthcare, with a focus on the ethical and legal dimensions of these issues. To summarize these, we analyzed the literature that specifically discusses the ethical and legal dimensions related to AI development and implementation in healthcare as well as relevant normative documents that pertain to both ethical and legal issues (i.e., AI ethics guides or charters developed by governments, international organizations, and industries, as well as legal instruments). After such analysis, we created categories regrouping the most frequently cited and discussed ethical and legal issues. We then proposed a breakdown within such categories that emphasizes the different – yet often interconnecting – ways in which ethics and law are approached for each category of issues. Finally, we identified several key ideas for healthcare professionals and organizations to better integrate ethics and law into their practices.

The paper is divided into six sections, corresponding to the most important issues associated with AI in healthcare: (1) Privacy; (2) Individual autonomy; (3) Bias; (4) Responsibility and liability; (5) Evaluation and oversight; and (6) Work, professions, and the job market. In conclusion, we advance a few proposals aimed at resolving some of the highlighted issues for healthcare professionals.

Privacy

In machine learning or deep learning models, the computational algorithm solves problems by seeking connections, correlations, or patterns within the data on which it is "trained" [40]. Since the effectiveness of these models depends heavily on the quality and quantity of training data4, one of the most common techniques in AI technology development is to collect, structure, and use as much varied data as possible [41]. In the healthcare arena, this data can take many forms – such as measurements of a patient's clinical vital parameters, biological analysis results, or genetic characteristics [42] – and is created and collected from a wide variety of sources, from traditional healthcare system activities to consumers' self-tracking using digital technologies ("quantified self") [43, 44]. Thus, this type of data is linked to an individual or a group who is directly or indirectly identifiable or targetable. However, health data is much broader than most people realize, and can also cover diet, exercise, and sleep – all collected by private companies outside the health system through connected devices such as smartphones and smart watches. Considering the intimacy and sensitivity of health data and the many actors potentially involved, AI highlights the question of individual privacy.

4 "Quantity" usually refers to the amount of (massive) data often required to run a system, while "quality" refers to both its accuracy and currency, but also its relevance (the representativeness of the data in relation to the system's target population, freedom from bias, etc.).
The ethics of privacy

From an ethical point of view, issues of privacy are rooted in conflicting moral values or duties. The very concept of privacy has been defined in many ways in the ethics literature, with its origin intertwined with its legal protection [45], so it can hardly be summarized through a single definition. In the field of health, the search for what is right or wrong, appropriate or inappropriate, commendable or condemnable [46–48] is an ancient reflection that constitutes precisely the foundation of biomedical, clinical, and research ethics [37, 46]. In a context where people reveal details of illness, pain, life, and death [46], respect for their privacy – as confidentiality of their information, and protection of their care spaces, both physical and virtual5, from interference or intrusion (e.g., constraint, coercion, and uninvited observation) – is crucial.

5 We are referring here in part to a liberal conception of privacy as described by Alan Westin or Stanley Benn, who defend the idea of a shield protecting individual autonomy. This is indeed one of the aspects of privacy, which can serve one of the dimensions of individual autonomy in that it creates a space in which individuals feel at ease, whatever the social and political pressures; see [49].

Without this assurance of secrecy, patients would be less willing to share intimate information with their doctor, affecting their care or the usefulness of research [50, 51]. Safeguarding the confidentiality of health information as well as personal health choices is also crucial in preventing discrimination, deprivation of insurance or employment [52], emotional stress, the psychological consequences of revealing intimate information, and erosion of trust, among others [53]. Thus, preventing the damage caused by a violation of privacy is a major moral imperative in medical ethics6.

6 The principle of non-maleficence encompasses privacy (and security) and is, according to the principles of modern medical ethics, a moral standard to be considered. The principle of beneficence encompasses the protection of dignity, from which the protection of privacy also partly derives [54].

However, this principle of privacy is confronted with the duty to disclose information, either for the direct benefit of the patient (e.g., sharing of information for better care, their reimbursement, their own physical protection), for the benefit of others or society as a whole (e.g., disclosure of a communicable disease [55], protection of other victims [56], medical research [57], etc.), or for the commercial gains of specialized AI companies [58] – all of which can claim a valuable moral interest.

This tension between individual privacy and disclosure for potentially useful purposes is exacerbated by digital innovation, data analytics, and AI for several reasons. First, reliable AI development depends on access to health data, but this is restricted by the imperatives of confidentiality. Second, creating and using AI algorithms implies finding correlations across data sets that can allow the re-identification of individuals [2, 59], even if the data was initially anonymized [59]7, which could cause breaches of confidentiality. Third, the more the data is anonymized, the greater the risk that its utility is reduced. In addition, the portability and diversity of information collection systems (e.g., health, sport, or wellness applications; connected devices; data shared on social networks) make it much harder to guarantee the protection, security, and confidentiality of personal data [61] in comparison to data collected through the traditional health system (e.g., hospitals, clinics)8. For example, data that might initially be loosely related to someone's health (e.g., daily calorie intake) can become more sensitive when correlated with other variables (e.g., a person's weight), which is almost inevitable in the construction of an AI model9. However, taking this kind of data into account can help reveal more factors of a disease and allows for a more predictive and personalized medicine10. These arguments all come as challenges to the principle of privacy.

7 Some techniques make it difficult to ensure data confidentiality and security [27, 60].

8 Until now, standards-based tools have generally been more prevalent in the sensitive medical sector, where confidentiality of information is essential to the quality of care (e.g., professional and medical secrecy, the general obligation of confidentiality of medical records, and specific protection laws applicable to the healthcare sector).

9 This may be precisely the objective of the AI system (e.g., to find the risk factors for a disease, or those that lead an individual to buy a particular over-the-counter product), or different data points may be linked before the algorithm even starts running, during the database creation phase.

10 Rumbold and his coauthors show why the coupling of ethnographic, geographic, and genetic data for genomics research is of enormous interest, but can contribute to or directly lead to re-identification [62].
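The re-identification risk mentioned above is easy to underestimate. As a purely illustrative sketch (synthetic records and hypothetical column names, not data or code from any system discussed here), the following Python fragment shows how a dataset stripped of names can be linked back to individuals through quasi-identifiers shared with an auxiliary source:

    import pandas as pd

    # "Anonymized" health records: names removed, but quasi-identifiers
    # (birth date, postal code) retained for analysis.
    health = pd.DataFrame({
        "birth_date": ["1984-03-02", "1990-07-19", "1975-11-30"],
        "postal_code": ["H2X 1Y4", "M5V 2T6", "H3A 0G4"],
        "diagnosis": ["type 2 diabetes", "hypertension", "depression"],
    })

    # Auxiliary data an adversary can plausibly obtain (e.g., a public list).
    registry = pd.DataFrame({
        "name": ["A. Tremblay", "B. Singh", "C. Roy"],
        "birth_date": ["1984-03-02", "1990-07-19", "1975-11-30"],
        "postal_code": ["H2X 1Y4", "M5V 2T6", "H3A 0G4"],
    })

    # A plain join on the quasi-identifiers re-attaches names to diagnoses.
    reidentified = health.merge(registry, on=["birth_date", "postal_code"])
    print(reidentified[["name", "diagnosis"]])

Classic linkage studies have shown that a handful of such attributes can uniquely identify a large share of a population, which is why removing direct identifiers alone offers only weak protection.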
Others take a very different view, departing from the principles of bioethics and privacy protection. For instance, engineers might argue that the astonishing recent advances in computing power, data collection, and the speed and ease of data exchange are realities that make privacy an outdated concept unsuited to our time11. In that sense, engineers may see privacy as a hindrance to the profitability of business models and innovation [53], thus limiting the benefits to health.

11 Spiekermann and co-authors present results showing how "engineers" see the need to respect privacy as a barrier to engineering and, by extension, to public utility – and therefore as a value which, when integrated into an organization's ethical standards, is perceived less as important than as a loss of time and autonomy that sometimes contradicts it [63].

Privacy and the law

From a legal perspective, privacy refers to the principles, rules, and obligations embedded in law that protect informational privacy and personal information. These rules are also challenged by the characteristics of AI techniques in the field of healthcare. Specifically, it becomes harder to respect principles and rights already enshrined in law, and the application of certain rules is more perilous – either because it ends up blocking the creation or use of a system, or because it does not allow the protection of privacy. While the following discussion is not exhaustive, it represents the bulk of legal discussions about informational privacy.
First, a law's scope of application has a major impact on the protection that it will grant. While the common meaning of "personal data" may be clear12, its legal definition can vary between countries (and even within them). For example, it may refer narrowly to data managed and held in a particular file or by a particular entity (e.g., the U.S. HIPAA Privacy Rule, which covers certain entities within the traditional health system [64], or the Australian Privacy Act, which applies only to health service providers [65]). It may also extend its protection to information that allows both direct and indirect identification (e.g., first and last name, social security number, address, phone number, race, or identification key, depending on the country), and to re-identification capacities (e.g., overlaying two sets of data to create a deep learning database for the AI system). An example is the new California Consumer Privacy Act, which includes "reasonable" possibilities for re-identification13. Laws can define personal health data as data that is medical by nature (e.g., a medical test result), by purpose (e.g., used medically), or by cross-referencing (e.g., crossed with other data, as in AI analysis, to provide health information in combination)—as appears to be the case with the French Data Protection Authority [66] based on the European General Data Protection Regulation (GDPR) definition [67].

12 "Personal," adjective: "of, relating to, or constituting personal property," in the Merriam-Webster Dictionary.

13 The legislative disposition defines "personal information" as information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.

Second, AI also challenges rules regarding the collection, use, and disclosure of personal data. For example, the requirement to determine in advance the purposes for which data will be used is a fundamental tenet of many privacy laws14. Similarly, the legal obligation of proportionality, minimization, or necessity requires that data be processed only to the extent necessary for the purpose at hand. However, many deep learning models require large amounts of data without their purpose or even necessity being known in advance [68]. These principles will probably need to be revisited or relaxed if legislators wish to allow the widespread deployment of AI.

14 See, for example, the principle of purpose limitation in the European General Data Protection Regulation (art. 5); the Fair Information Principle "Limiting Use, Disclosure, and Retention" in the Canadian Personal Information Protection and Electronic Documents Act (principle 5); as well as the collection, use, and disclosure limitation principle in the HIPAA Privacy Rule.

Third, meeting the conditions of access to qualitative and exhaustive health data held and produced by health systems is often a long, arduous, and discouraging journey for researchers15. Pooling and managing this data to offer easy but controlled access requires additional legal imperatives on technical security, in particular against cyberattacks.

15 Pesapane et al. (2018) consider that "access to big data of medical images is needed to provide training material to AI devices, so that they can learn to recognise imaging abnormalities. One of the problems is that sensitive data might either be harvested illicitly or collected from unknown sources because of the lack of unique and clear regulations" [68].

Fourth, health and data protection laws do already consider AI through the way data is used and the consequences for the individual16. For example, fully automated decision-making and profiling systems are increasingly subject to special rules through legislative amendments in specific situations. For instance, there may be a specific right to be informed of the use of profiling techniques (as in Quebec's new Act to modernize legislative provisions as regards the protection of personal information [70–72] or the new California Privacy Rights Act17); fully automated decisions are prohibited when they cause harm to the individual (as in the GDPR); and the right to have the decision reviewed by a human can be problematic, as the reasoning behind the decision is not always fully comprehensible.

16 Few privacy laws refer explicitly to "artificial intelligence," "machine learning," or other specific AI techniques. However, they do consider AI through the way data is used and its consequences (regulating profiling as the analysis and prediction of human behavior, and the subsequent automated decision made without human verification)—see, for example: [67, 69].

17 "Profiling" is now defined and included in the law, but for now the Act only provides for the Attorney General to adopt regulations requiring businesses' responses to access requests to include meaningful information about the logic involved in those decision-making processes, as well as a description of the likely outcome of the process with respect to the consumer [69].

Individual autonomy

The second issue is closely related to some of the considerations outlined above. Autonomy is one of the four key principles identified by medical ethics. The Greek terms autos and nomos mean "self" and "law, rule," so "autonomy" refers to a person creating their own rule of conduct and having the capacity to act without constraint and make their own decisions [73]. Many western jurisdictions incorporate the principle that free and informed consent must be obtained for any medical examination, treatment, or intervention, based on both the ethical principle of autonomy and the legal foundation of the inviolability and integrity of the person [74]. This principle of autonomy, as well as the moral value it embodies and the regulation that frames it, is confronted with several characteristics specific to AI.
The ethics of autonomy

First, the "black box" phenomenon can impair the autonomy of the person whose data is processed for AI purposes. Indeed, some machine learning algorithms (e.g., the "random forest" classification algorithm) and, among them, deep learning algorithms (e.g., neural networks) have a high variability of inputs and a complex data-driven operation (a non-linear system, where interactions do not follow a simple additive or proportional relationship), making it difficult for experts, let alone the general population, to understand how and why an algorithm arrived at a result (which we refer to as "intelligibility") [75]. Whether it concerns the process of model generation or the result obtained, the challenge is to provide a satisfactory explanation tailored to the user or person affected by the result, thus increasing the "interpretability" of the AI system [75].
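To make the distinction between opacity and interpretability more concrete, the following minimal sketch (synthetic data and hypothetical feature names, offered as an illustration rather than as the methods discussed in [75]) trains an opaque model and then applies a common post-hoc explanation technique, permutation importance, to estimate which inputs most influence its predictions:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for clinical tabular data.
    X, y = make_classification(n_samples=1000, n_features=5,
                               n_informative=3, random_state=0)
    features = ["age", "bmi", "systolic_bp", "hba1c", "smoking"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An opaque ensemble: hundreds of trees, no single readable rule.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Post-hoc, model-agnostic explanation: shuffle one feature at a time
    # and measure how much accuracy degrades; a bigger drop means more influence.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=20, random_state=0)
    for name, imp in sorted(zip(features, result.importances_mean),
                            key=lambda pair: -pair[1]):
        print(f"{name}: {imp:.3f}")

Such techniques rank influences but do not reconstruct the model's actual reasoning, which is why they improve interpretability without fully resolving the intelligibility problem described above.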
In the medical context, increasing importance is placed on patients' co-participation in their care [54] and their ability to refuse care or request additional medical advice. In some circumstances, the use of AI can erode the patient's autonomy (even if the democratization of AI can also enhance people's autonomy in other ways, including by increasing access to, and interpretation of, medical information). It may be difficult, if not impossible, for a patient to challenge a decision if the health professional cannot clearly explain how or why they proposed a certain treatment or procedure. Thus, the use of opaque, unintelligible AI systems might resurrect a certain medical paternalism, accentuating this loss of autonomy [76]. Refusing the use of the AI system may also be ethically questionable because of the characteristics of informed consent. "Valid informed consent requires clear and accurate recognition of the situation, absence of coercion (physical or psychological), and competence to make decisions (or representation, in the case of minors and incompetent adults)" [47].

Each of these three elements, however, differs depending on the individual's level of AI literacy and other subjective characteristics (i.e., psychological, cognitive, or contextual), the interpretability of the algorithm used, and the amount and accuracy of information given to the patient. Currie and Hawks consider that "the public and patients are not always sufficiently informed to make autonomous decisions" [54]. Using nuclear medicine and molecular imaging as examples, they argue that people are probably underinformed and underqualified to determine what they want from AI, what they can expect from it, and thus whether they will allow AI to decide on their behalf [54]. Moreover, freedom to consent is called into question when access to a health service or the use of a connected tool is conditional on sharing personal data [77, 78]. However, maintaining trust in the use of AI in healthcare may push towards disclosing the use of AI for purposes other than treatment. In this regard, Amann et al. believe that "appropriate ethical and explicability standards are therefore important to safeguard the autonomy-preserving function of informed consent" [60].

Secondly, some controversial business practices reduce people's moral agency, i.e., their ability to make moral choices, to exercise a form of evaluative control over them, and to be held accountable for these choices [79], which impacts people's autonomy. Tools ostensibly sold for healthcare or fitness (e.g., smart watches) become monitoring and information-gathering tools for the firms that collect these data [80]. These personalization technologies allow a "better understanding of consumer behavior by linking it very precisely to a given segment based on observed and inferred characteristics" (our translation) [81]. For example, "dark pattern" practices trigger the brain system that corresponds to rapid, emotional, instinctive, and routine-driven choice, producing an emotional stimulus that tips the consumer towards a purchase [81]. Thus, personalized manipulations join personalized prices in the marketer's toolbox [81]. On the one hand, the user's range of choices is narrowed according to their past consumption or the customer segment that the algorithm assigns them to (e.g., filter bubbles, misinformation [77, 81]). On the other hand, the commercial entity manipulates consumer behavior to create an incentive to purchase or consume a particular product (e.g., dark nudges, emotional pitches, or "dark sludge"18) [81]. The probability of a consumer being manipulated depends on their tech literacy and ability to spot the manipulation. These impediments to autonomy speak to the primordial moral and ethical choices of what constitutes a dignified, free, or satisfying human life, and several authors have exhorted us to reflect deeply on them [83].

18 A dark sludge can be defined as "an evil nudge […] that can exploit [online consumers'] cognitive biases to persuade them to do something that is undesirable, typically by introducing excessive friction into choice architecture." Dark sludges include strategies that make consumers' choices more opaque, make it harder for them to freely express their preferences, or lead them to take decisions that they would not have taken spontaneously [82].

Third, healthcare professionals' autonomy may also be impacted, either because they use, are assisted by, or could be replaced by AI systems, which may have an impact on the delivery of care. The key players involved in the healthcare relationship need to maintain agency over their actions, and the dilution of responsibility deserves to be thought through [80]. Conversely, "imposing AI on a community by a profession or a part of it is perhaps not ideal in terms of social or ethical standards" [54].

Autonomy and the law

On the legal front, obtaining individuals' specific, free, and informed consent is considered one of the ultimate expressions of autonomy [84]. Informed consent is usually required before personal information is obtained or used, either as a principle prior to any exchange of information – as in Quebec (Canada), for example [71, 72] – or as a legal basis on which to rely, as in the European Union19 or the United States20. This relates to both the creation of an AI model and the context of its use in healthcare activities. Emerging issues include whether informed consent to care includes consent to the use of AI systems, machines, or techniques within such care [85]. Each jurisdiction makes a different choice, and each one is open to question. In Quebec, for example, the right to be informed must specify the professional who performs the therapeutic intervention [86], but not necessarily whether they used AI to make the diagnosis.

19 Consent is one of the six legal bases on which the collection of personal data can be legitimate, as stated in Article 6 of the European General Data Protection Regulation.

20 Without establishing consent as an absolute principle, HIPAA considers that in some situations it is a basis for the use of health information, and the right to opt out integrated into the California Consumer Privacy Act means that it must be possible for an individual to refuse the selling or sharing of their information upon request.
Inspired by the ethical reflection defining the contours of valid consent, the law usually requires that the person giving consent is sufficiently informed to decide in an objective, accurate, and understandable manner. In healthcare contexts, this usually encompasses information about the diagnosis, the nature and purpose of the procedure or treatment, the risks involved, and possible therapeutic options [86]. In addition, when personal information is used to make a decision based exclusively on automated processing, there is now a tendency to require that data subjects be informed of the reasons, principal factors, and parameters that led to the decision21. These requirements raise questions when using complex machine learning algorithms: the main factors and parameters may be difficult to report in an understandable way and raise questions about legal compliance [60, 87]. Informed consent may therefore be impacted, calling compliance with this obligation into question.

21 For example, the European Data Protection Regulation requires information on the existence of automated decision-making or profiling, as well as information useful for understanding the algorithm, its logic, and its consequences for the data subject.

Second, valid consent usually implies that consent is obtained without pressure, threat, coercion, or promise. However, patients rarely read or check the requirements for obtaining electronic consent, especially when it comes to personal information [88, 89]. The legal discussion ultimately concerns the possibility of respecting these requirements as well as other possible legal bases (e.g., another mode of consent), perhaps based on the notion that the subject's autonomy resides more in general trust and transparency around AI use than in a button they unthinkingly click about 20 times a day [90]. In these questionable cases, an underlying ethical reflection supports research into solution strategies and the practical implementation of new legal requirements.

Finally, respect for autonomy also lies in the capacity to exercise the rights granted in principle to individuals [77]. This question deserves to be asked, in view of the characteristics of the data exchanges and computer access that condition the construction of an AI system. The operation of certain AI systems may hinder people from exercising their right to be forgotten, their right to know what data is being used and what for, their right to limit the use of their data, the right to opt out, or the right to human review22—at least in certain legal jurisdictions. How can one ensure the deletion of an item of data for which initial consent had been given, when one does not know whether and to what extent that item has influenced a decision taken by the system? How can the right to human review of an automated decision be guaranteed when the reasoning behind that decision is unintelligible? What is the scope of the right to dereferencing or deletion if AI can aggregate information from the results of multiple search engines?

22 In France, the "human guarantee principle" was supported by the Ethik-IA group to be integrated into the revision of Article 11 of the French bill relating to bioethics in 2021, was taken up in two opinions of the National Ethics Advisory Committee, and involved the exercise of a systematic human review of the real-life conditions of an AI device. This concept was taken up in the proposal for a Regulation of the European Parliament and of the Council in Article 14 ("human control": COM(2021) 206 final) [91].

Bias

Algorithms' reasoning is precisely induced and driven by the data they are trained on. As a result, it can reflect biases present in that data, which will in turn impact the algorithms' results and potentially exacerbate inequalities and discrimination against marginalized communities and underrepresented groups.

The ethical view of bias

Some authors have categorized the main types of bias induced by AI [92]. The first is replicating or exacerbating societal and historical biases already present in the learning data (demographic inequality), which can lead to self-fulfilling predictions [93] and disproportionately affect particular groups [94]. One study reports, for example, that "the use of medical cost as a proxy for patients' overall health needs led to inappropriate racial bias in the allocation of healthcare resources, as black patients were erroneously considered to be lower risk than white patients because their incurred costs were lower for a given health risk state" [95]. Yet, such lower costs also illustrate the inequalities in accessing medical services for black populations. As healthcare delivery varies by ethnicity, gender, housing status, and food stability [96–98], among other things, feeding an algorithm with such data can make one of these social determinants of health a salient factor in the outcome [68]. "Creating a tool from data that fundamentally lacks diversity could ultimately result in an AI solution that deepens healthcare inequities in clinical practice" [54].
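The cost-as-proxy failure described in that study can be reproduced on synthetic data. In the deliberately simplified simulation below (illustrative numbers only, not the study's data or model), two groups have identical health needs, but one accrues lower costs for the same need; ranking patients by expected cost then systematically under-selects that group:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Two groups with identical distributions of true (unobserved) health need.
    group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
    need = rng.normal(50.0, 10.0, n)

    # Structural inequity: for the same need, group B accrues lower costs
    # because of reduced access to care.
    cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 5.0, n)

    # A "risk score" trained on cost ranks patients by cost; enroll the
    # top 10% in a care-management program.
    selected = cost >= np.quantile(cost, 0.90)

    for g, label in [(0, "A"), (1, "B")]:
        print(f"group {label}: {selected[group == g].mean():.1%} selected")
    # Equal need, unequal selection: group B is enrolled far less often.

The bias here is not in the learning algorithm itself but in the choice of label: cost is a faithful record of an inequitable world, and a model trained on it faithfully reproduces that inequity.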

The second type of bias relates to incomplete or unrepresentative data [95, 99], especially data that over- or under-represents a subgroup such as a minority, a vulnerable group, a subtype of disease, etc. [54]. When the theoretical reference population is not representative of the target population for which the model provides a result, there is a risk of bias, error, and overfitting, which can exacerbate health inequalities. For example, "an algorithm designed to predict outcomes from genetic findings may be biased if there are no genetic studies in certain populations" [68]. The risks of developing certain diseases often depend on other factors such as sex or age, and failure to account for these characteristics in the baseline training data biases the prediction of disease risks in other types of populations.

The third type of bias can be induced by the designers of the system themselves through the decisions they make when setting certain variables, the data to be used, or the objective of the algorithm [92]. The ethical issues that arise concern, for example, the possibility of predicting and possibly adding parameters that were not initially present in the data to make it as accurate as possible in order to eliminate bias. For instance, should the HIV status [93] of a patient who has refused to provide this information be added to the training data? And before even reaching the bias-correction stage, it is crucial to ask whether a potentially biased system should be introduced at all when it is already known that it can reproduce societal biases. Moreover, the tech world seems to focus on eliminating individual-level human bias and training developers23. As Joyce et al. argue, "sociological research demonstrates, though, that bias is not free-floating within individuals but is embedded in obdurate social institutions," so that "there are severe limitations to an approach that primarily locates the problem within individuals" [96].

23 For instance, Google launched a fairness module in its ML Crash Course in 2018.

Bias and the law

When considering the issue of bias from a legal perspective, the primary areas affected are the right to equality and protection from discrimination. Biases can affect decisions taken with respect to individuals, who may be discriminated against based on non-representative data or because some of their characteristics are accentuated by the operation of an AI model.

Equal rights legislation is based on the idea that individuals cannot be treated differently because of any personal trait or characteristic such as race or ethnic origin, civil status (e.g., marital status, gender expression, age), sexual orientation, health or social condition, religious and political belief, etc.24 It generally prohibits differential treatment in similar situations such as service access, employment, or housing, unless justified by particular circumstances or legal duties [100]. The law often focuses on the effects on the victim [100] rather than the fault or bad intent of the perpetrator.

24 Efforts to distinguish between prohibited grounds of discrimination are found in numerous international tools such as the Universal Declaration on Human Rights, the International Covenant on Economic, Social and Cultural Rights, and the International Covenant on Civil and Political Rights; regional human rights conventions such as the African Charter on Human and People's Rights, the American Convention on Human Rights, and the European Convention on Human Rights; and national legal instruments.

Although definitions vary by jurisdiction, an AI system used to determine people's entitlement to reimbursement based on their higher risk in terms of health costs (e.g., indexed to age, race, sexual orientation, etc.) could constitute discrimination under most legal systems in which equality is protected [101]. Yet, the context and the nature of the AI system could make proof of discrimination extremely difficult: determining the criteria behind decisions is difficult enough for the designers of some complex machine learning systems, especially if they are autonomous and evolve over time. One can imagine how much more difficult it would be for the individual victim of discrimination, who must obtain access to the information used and to the parameters of the model, which at present frequently remain opaque.

Responsibility and liability

AI algorithms can sometimes make mistakes in their predictions, forecasts, or decisions. Indeed, the very principle of such models' construction and operation is fallible, due to the theory of complexity [102]. The computer program that underlies an AI model comprises a certain number of operations that allow it to solve a given problem. The complexity of the problem can be evaluated according to the number of operations necessary to reach an exact answer [103]. For highly complex problems, no 21st-century machine can surpass the threshold for the number of operations required. The objective of AI programs that tackle such problems, therefore, is "to compute a reasonably correct solution to the problem, in a computation time that remains acceptable" [103]. AI researchers call this type of calculation a "heuristic." The system cannot ensure absolute certainty in its results, but it can (or at least hopes to) propose better predictions than a human in the same situation, especially the least experienced clinicians [104], and is therefore of major interest. Apart from this intrinsic complexity, many different types of error impact the responsibility of the actors involved throughout the lifecycle of an AI system.
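A back-of-the-envelope calculation shows why exact answers are out of reach for such problems and why heuristics are unavoidable. Assuming, purely for illustration, a search over every subset of 80 candidate clinical factors on a machine performing a billion evaluations per second:

    # Exhaustive search over all subsets of 80 factors: 2**80 combinations.
    combinations = 2 ** 80            # about 1.2e24 evaluations
    ops_per_second = 1e9              # optimistic single-machine throughput
    seconds = combinations / ops_per_second
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.1e} years")       # roughly 3.8e7 years of computation

A heuristic gives up this exactness to return a reasonably good answer in seconds, which is precisely the trade-off that makes occasional error an intrinsic property of such systems.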

The ethics of responsibility

A first type of error arises from initial coding errors made by the programmer of the model. Unavoidable human error means there is a chance of the model providing incorrect answers in use. So, what probability of error can we accept in these systems before proceeding to implement them in our society?

The need to maintain the quality of training data throughout the model's lifecycle may also incur other types of liability-related errors. For example, image recognition based on artificial neural networks is one of the most advanced fields in AI [104]. Modifying inputs, "in the form of tiny changes that are usually imperceptible to humans, can disrupt the best neural networks" [105]. Finlayson and co-authors explain that pixels may be maliciously added to medical scans in order to fool a DNN (deep neural network) into wrongly detecting cancer [106]. The quality and representativeness of data (see the section on Bias) and the opacity of the system (see the section on Autonomy) can also lead to errors with detrimental consequences.
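The adversarial fragility that Finlayson and co-authors describe can be sketched with a toy linear model (real attacks target deep networks, but the same gradient-based trick applies; everything below is synthetic and illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=64 * 64)          # weights of a toy linear "classifier"
    x = rng.random(64 * 64)               # a benign "scan", pixel values in [0, 1]

    def prob_positive(img):
        # Sigmoid score, calibrated so the original image sits at 0.5.
        return 1.0 / (1.0 + np.exp(-(w @ img - w @ x)))

    # Fast-gradient-style perturbation: nudge every pixel by at most 0.01
    # in the direction that increases the score.
    eps = 0.01
    x_adv = np.clip(x + eps * np.sign(w), 0.0, 1.0)

    print(f"original:  {prob_positive(x):.2f}")       # 0.50
    print(f"perturbed: {prob_positive(x_adv):.2f}")   # ~1.00
    print(f"max pixel change: {np.abs(x_adv - x).max():.3f}")  # <= 0.010

A change that no viewer would notice flips the model's verdict, which is part of what makes this class of error so troubling when assigning responsibility.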
The misuse of a system is also problematic. Users' level of knowledge about AI might vary greatly, whether they are a health worker helping to triage patients in the emergency department, a medical doctor handling an AI-powered surgical robot, or a patient setting up a connected device to measure their physiological vitals at home. Moreover, users might decide to ignore the result that the system provides, either because they misread it or because they consider it too far removed from their own assertions. Intentional malice aside, how should the responsibilities of the actors involved be considered? Over the short term, "human in the loop" approaches are recommended so that medical doctors take responsibility for their decisions while using AI systems, including the way information is used and weighed [54]. But to what extent should medical doctors be held responsible if they are unaware of an initial error in the input data, if they do not know the computational process leading to the result, or if it is beyond their power to modify it? Should doctors be liable for harm even though the model itself contains an error hazard due to the sheer complexity of the problem? Should final decisions in medical matters systematically depend on human judgment alone? It remains difficult to argue that systems providing personalized health advice, diagnostic, or clinical decision support rely solely on human interpretation [68]. However, should the victims of the various prejudices potentially caused by AI systems (a patient refusing care, unfair access to AI, discrimination, prejudice linked to privacy, or physical harm…) be able to claim compensation? Indeed, some consider that it is inappropriate for clinicians who use an autonomous AI to make a diagnosis they are not comfortable making themselves to accept full medical liability for harm caused by that AI [95].

For complex systems, some of which work with reinforcement learning, it is still hard to predict what experiences the system will encounter or how it will develop. Like Pesapane and co-authors, one can thus question whether it is the device or its designer that should be considered at fault [68]. Should the designer be considered negligent "for not having foreseen what we have called unpredictable? Or for allowing the possibility of development of the AI device that would lead it to this decision?" [68] Some believe that if an autonomous AI is used according to the instructions, ethical principles require its creators to take responsibility for the damage caused [95]. However, similar to what we mentioned with respect to the risk of losing a certain degree of human agency in some circumstances (see the section on Autonomy), the automation bias, which refers to the tendency of clinicians (and people more broadly) to overly rely on assistive technologies like AI25, calls into question the extent to which human responsibility should be considered.

25 Through several case studies (vignettes), the authors demonstrate the tendency to trust the technological tool more when AI is used as a diagnostic assistant, which is mostly the case today [108, 109].

Liability and the law

From a legal point of view, AI errors are generally linked to the harm suffered by the victim and its reparation. In criminal matters, however, the legal perspective also encompasses the attitude that one wishes to punish, or the protection of society and other individuals from a possible recurrence.

Regarding the role of health professionals, we can look at current medical liability regimes to consider how mechanisms for civil liability and compensation for damages can be applied to the use of AI systems in health, and whether they consider the particularities of their operation and context. For example, in many fault-based liability regimes, the victim must prove that (1) the practitioner was at fault, (2) there was a prejudice (i.e., damage or infringement of a person's rights or interests), and (3) there was a direct and immediate causal link between fault and prejudice [107]. Medical doctors are usually under an obligation of means (for example, as to the products and equipment used) and much more rarely under an obligation of results. So, to determine fault, the judge asks whether a "reasonably diligent" [86] medical doctor, conforming to the acquired data of science and placed in the same circumstances, would have acted the same way.
Yet, since the use of AI in medicine is so novel, a common understanding of what a "reasonably diligent" practice would look like might need to be determined. How far would one consider the level of literacy of the medical doctor in relation to the AI decision support system? A surgical robot carrying out routine sutures under the control of an AI system remains under a medical doctor's supervision: to what extent does the safety obligation imply liability for damage occurring during the operation, which the doctor might have been able to prevent with better knowledge of the system? We argue that judges will minimally require a sufficient understanding of the AI tools that medical doctors and other healthcare professionals use, based on explanations provided by the system supplier. At present, however, this interpretation is mostly at judges' own discretion, and to the best of our knowledge, there are no major case-law decisions that could guide us.

Moreover, the opacity of AI systems and the many actors involved in their development and implementation make it much harder to prove a causal link between the fault and the damage—and the burden of proof invariably falls on the victim's shoulders. The patient must know that such a system was used, as well as all the steps in the decision-making process, if they are to prove that the medical doctor should, for example, have disregarded the recommendation, detected an initial bias, checked the inputs, etc. [110].

Evaluation and oversight

To minimize the risks of using AI in healthcare, we need to evaluate AI systems before they are marketed, implemented, and used, and to monitor them through ongoing oversight, especially for those systems that represent a higher risk for patients.

The ethics of evaluation and oversight

Beyond the medical ethics principle of non-maleficence, the protection and promotion of human well-being [111], safety, and the public interest implies that "AI technologies should not harm people" [27]. This idea, presented as the second of the six principles established by the expert group mandated by the World Health Organization (WHO), implies that the control, measurement, and monitoring of the performance and quality of systems, and their continuous improvement, must be paramount in the deployment of AI technology [112]. All actors involved should probably be accountable for these aspects. On this theme, several elements merit consideration.

First, pre-deployment evaluation of AI systems involves determining the criteria for their evaluation. Today, most systems are evaluated within the framework of existing authorizations, certifications, or licenses, such as those issued by national health authorities for medical devices. These authorities examine the product or technology according to criteria that mostly relate to effectiveness, quality, and safety. Scientific validity is paramount, but should it be the sole criterion for the use and deployment of AI systems? In particular, the likelihood and magnitude of adverse effects should be assessed. In addition, there should be an "ethical" assessment that considers both the individual and collective benefits and risks of the technology, as well as its compliance with certain previously validated ethical principles. For example, the UK's Medicines & Healthcare products Regulatory Agency (MHRA), the Food and Drug Administration (FDA), and Health Canada have developed "good practice" that aims to promote "safe, effective and high-quality medical devices using artificial intelligence and machine learning." This document currently seems to incorporate26 a more global consideration by also integrating ethical concerns over the deployment of AI systems [113].

26 For example, the guiding principles value multidisciplinary expertise throughout the product lifecycle, so that the benefit/risk balance is assessed not only with regard to validity and clinical efficacy, but also other social risks, confidentiality, representativeness, "human in the loop" performance or at least the role of humans in interpreting the model's outputs, and user information.

Second, AI technologies must be monitored and evaluated throughout their use, especially "reinforcement" learning models that take advantage of the data that is continuously generated and provided to carry on training and learning [114]. This is precisely what the WHO advocates, in the name of a final ethical principle that its committee of experts has termed "responsiveness." Designers, users, and developers should be able to "continuously, systematically, and transparently assess" each AI technology to determine "whether it responds adequately, appropriately and according to communicated, legitimate expectations and requirements" [27] in the context in which AI is used. It is necessary to consider how these standards can be assured, taking into account the procedures and techniques available to do so [68].
higher risk for patients. The “human in the loop” approach is often seen as
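To make this "responsiveness" requirement more tangible, the sketch below shows one minimal way continuous post-deployment assessment could be operationalized. It is an illustration only: the rolling accuracy metric, the 0.90 threshold, and the 500-case window are our own assumptions, not values drawn from the WHO guidance.

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch of continuous post-deployment surveillance:
    keep a rolling record of whether the system's outputs were later
    confirmed, and flag the tool for human review when performance
    drifts below a locally agreed threshold."""

    def __init__(self, threshold: float = 0.90, window: int = 500):
        self.threshold = threshold
        # 1 = output later confirmed, 0 = output later contradicted
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, confirmed_result) -> None:
        self.outcomes.append(1 if prediction == confirmed_result else 0)

    def needs_review(self) -> bool:
        # Withhold judgment until enough post-deployment evidence exists
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.threshold
```

Who receives such an alert, and what happens to the tool while it is under review, are governance questions that code cannot settle; they belong to the oversight mechanisms discussed in this section.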
The "human in the loop" approach is often seen as part of the responsible development of AI technologies. Applied to system evaluation, it could take the form of establishing several points of human supervision upstream and downstream of the design and use of the algorithm [115]. Establishing such a guarantee, which can also be described as a "human warranty" [27] or "human control" [116], would make it possible to ensure that only ethically responsible and medically effective machine learning algorithms [27] were implemented.
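By way of illustration, a downstream point of human supervision could be as simple as the routing rule sketched below. The 0.8 confidence floor and the escalation labels are assumptions of ours, not an established "human warranty" standard.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    suggestion: str    # e.g., a proposed diagnosis or treatment option
    confidence: float  # the system's self-reported confidence, 0.0 to 1.0

def route_output(output: ModelOutput, confidence_floor: float = 0.8) -> str:
    """Downstream human checkpoint: no output reaches the patient file
    directly, and low-confidence outputs are escalated for closer review."""
    if output.confidence < confidence_floor:
        return f"ESCALATE: '{output.suggestion}' needs specialist review before any use"
    return f"ADVISORY: '{output.suggestion}' awaits clinician sign-off"
```

An analogous upstream checkpoint would sit before the algorithm, where a person validates the inputs; the point of the approach is that neither end of the pipeline operates without someone accountable for it.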
However, the question remains open as to how this approach can be applied to technologies that require no prior approval or regulatory authorization process, in particular because they do not qualify as medical "devices" or "instruments." Such technologies, which often monitor fitness, women's hormonal cycles, sleep, or overall well-being, can still have harmful consequences. The companies developing and selling such products often make public commitments through so-called ethical declarations and charters or self-developed ethical quality labels. End users, who are rarely qualified to evaluate whether developers' actions are in line with these statements, risk falling victim to the phenomenon of "ethics washing" [117] denounced by AI researchers [118], ethicists, and philosophers. [Footnote 27: For example, advocating the development of "trustworthy AI" would seem to be conceptual nonsense to Dr. Thomas Metzinger, Professor of Philosophy at the University of Mainz in Germany, who argues that machines are not trustworthy as only humans can be trustworthy.] The repurposing of the ethical debate to serve large-scale investment strategies merits intense reflection followed by action by public authorities.

The legal view of evaluation and oversight

From a legal point of view, the issues also concern the regulation of marketing. First, as previously underlined, the definition of AI is neither unanimous nor stable, and this complicates the legal qualification of AI tools [68]. Indeed, tools qualified as medical devices are usually subject to strict rules concerning their manufacturing process, safety, efficacy and quality controls, evaluations, and more. In principle, they have a medical objective, and these constraints are therefore linked to the risks they pose to users' health and safety. So far, however, the legal definition of medical devices rarely expressly includes all kinds of AI systems, even though some may share many characteristics of certain qualified devices or incur comparable risks. For example, in the United States, some types of medical software or clinical decision support systems are considered and regulated as medical devices [119], but the FDA's traditional paradigm of medical device regulation was not designed for adaptive AI and machine learning technologies [120]. The inadequacy of this traditional vision and the lack of clarity on the regulatory pathway can have major consequences for the patient [93]. For this reason, the FDA has been adapting over recent years by specifically reviewing and authorizing many AI and machine learning devices [120, 121] and plans to update its proposed regulatory framework presented in the AI/ML-based SaMD discussion paper [122], which is supported by the commitment of the FDA's medical product centers and their collaborative efforts [123].

Second, the quality control and assessment of medical devices are not fully adapted to the growing and constantly evolving nature of AI systems, the safety and effectiveness of which may have to be controlled over time. Classical legal regimes seem to be failing to incorporate all of the realities of AI systems and are in need of revision [124]; "the law and its interpretation and implementation have to constantly adapt to the evolving state-of-the-art in technology" [124, 125]. While some authors are still questioning possible approaches to the regulation of innovation, some countries have already made their choice. On the one hand, over-regulation [68] could stifle innovation and impair the benefits that AI would bring [126]. Conversely, "over-autoregulation," or leaving the market to regulate itself, would lead in the other direction, with companies deciding for themselves which norms to develop and follow, solving problems as they arise. Several countries have chosen to rely on risk-based approaches for specific regulatory-device schemes to encompass these challenges. For example, the European Parliament has recently voted for its new Regulation on Artificial Intelligence (better known as the "AI Act"), which defines four levels of risk, where the minimal risk requires a simple declaration of compliance and the maximum risk incurs a ban on use. The Canadian Artificial Intelligence and Data Act (AIDA) proposal also plans, if adopted, to regulate AI systems based on the intensity of their impact [127].
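The tiered logic of such risk-based schemes can be captured in a few lines. The sketch below is a simplification for illustration only: the four tiers mirror the AI Act's general structure, but the obligations attached to each are paraphrased by us rather than quoted from the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Paraphrased for illustration; the text of the Regulation is authoritative.
OBLIGATIONS = {
    RiskTier.MINIMAL: "simple declaration of compliance / voluntary codes of conduct",
    RiskTier.LIMITED: "transparency duties, e.g., telling users they are dealing with an AI system",
    RiskTier.HIGH: "conformity assessment, documentation, and human oversight before market access",
    RiskTier.UNACCEPTABLE: "prohibition: the system may not be placed on the market at all",
}

def obligations_for(tier: RiskTier) -> str:
    return OBLIGATIONS[tier]
```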
Work, professions, and the job market

In the health sector, AI's impacts on jobs and work concern medical practice, the delivery of care, and the functions overseen by non-medical staff.

The ethics of transforming work

AI systems are destined to become part of medical practice and care delivery, if they have not done so already. For example, an AI system mobilizing image recognition can detect a tumor on a mammogram [128]. In orthopedic surgery, robots with on-board AI are capable of assisting and securing the surgical gesture and ensuring better postoperative results by integrating the anatomy specific to each patient [129]. However, if these kinds of tasks become more widespread, might AI endanger jobs or even replace health professionals, as is often feared in technological transitions [130]?
Healthcare systems, professionals, and administrators will all be impacted by the implementation of AI systems. The first impact consists in the transformation of tasks. The integration of AI is transforming professional tasks, creating new forms of work [131], and forcing a readjustment of jobs (e.g., changing roles and tasks, modifying professional identities, and evolving professional accountability). For the WHO, readjusting to workplace disruption appears to be a necessary consequence of the ethical principle of "sustainability" identified by the committee of experts on the deployment of AI. In particular, governments and companies should consider "potential job losses due to the use of automated systems for routine healthcare functions and administrative tasks" [27]. Image recognition, for example, makes radiology one of the most advanced specialties in AI system integration [132]. AI is now able to "automate part of conventional radiology" [133], reducing the diagnostic tasks usually assigned to the radiologist. The authors of the French strategy report believe that this profession could then "evolve towards increased specialization in interventional radiology for diagnostic purposes (punctures, biopsies, etc.) for complex cases or therapeutic purposes guided by medical imaging" [133]. The practice of electrocardiograms in cardiology [133] or that of dentists in their routine and laborious tasks [134] is already undergoing upheaval. The field of general medicine is also being impacted by applications available to the public, such as "medical assistant" chatbots that can analyze users' symptoms and direct them to a specialist or pharmacist. In the case of minor ailments, such technologies de facto diminish the role of the general practitioner.

However, if the medical doctor profession is safe for now, the role of an ethical approach is precisely to set guidelines, which could correspond to the level of social acceptability among the population and professionals' desire to hang on to certain roles or tasks. For example, the "human in the loop" approach, as well as the principles of non-maleficence and beneficence, imply thinking about when the medical doctor should intervene and how much latitude they have in the face of automation [14]. The profoundly human character of care is a major element in the debate concerning the restructuring of missions and professional pathways [131]. The opportunity to "re-humanize" healthcare is opened up by handing over certain tasks to AI systems and should be seized. For example, the Paro therapeutic robot, which responds to the sound of its name, spoken praise, and touch, is used in geriatric services in Japan and Europe and has received positive reviews from patients [135]. For nurses and care assistants, the integration of these robots would take some of the physical and psychological strain out of their activity. However, while implementing such a tool might help to address human resources shortages, it may only be desirable for certain populations and contexts. Moreover, it will, of course, come up against other existential, social, and cultural issues, e.g., the evolution of social ties and the acceptance of this kind of technology in different cultures.

The transformation of skills is another consequence of the introduction of AI technologies into medical practice. As with the influx of computers into the workplace in the 1990–2000s, healthcare workers must learn to work with, or alongside, AI systems [27]. In addition to knowing how to use the technologies, health professionals should be aware of the repercussions and issues (technical, legal, economic, or ethical) posed by the use of tools based on artificial intelligence [131]. Here, a risk arises that is similar to those related to the computerization and digitization of medical records: the time spent on training and correct use should not be to the detriment of clinical time, which is rightly considered to be paramount.

However, whereas previous technological revolutions concerned lower-skilled workers, AI may herald the opposite [136]. AI can pose the risk of a future deskilling among healthcare professionals, especially by inducing dependence [137] or cognitive complacency [138]. The capacities offered by automating cognitive work that previously required high-skill workers might cause consequences such as altering clinical reasoning processes (e.g., reducing a clinician's diagnostic accuracy). However, the use and application of AI itself require periodic refinements by experts, including medical doctors [137]. Radiologists' professional networks allayed this fear by reducing the scope in which AI could enter while recognizing the potential benefits of automating more routine tasks and upskilling their roles overall [139]. In situations where the use of AI is preferred, there are several ways to mitigate the risks of deskilling. For example, Jarrahi and co-authors suggest that some "informating capacities" of AI systems (i.e., capacities beyond automation "that can be used to generate a more comprehensive perspective on organizational reality" [138]) could be used to generate "a more comprehensive perspective on organization, and equip workers with new sets of intellectual skills" [138].

The impact of AI should also be considered at the more global level of managing organizations and non-medical staff. Areas affected include patient triage in the emergency room and the management and distribution of human resources across different services. This is where organizational ethics comes in, with human resources management and social dialogue figuring as major concerns. Indeed, in the health sector, the layers of the social fabric are particularly thick, diverse, and interwoven: changes in a healthcare institution affect many, if not all, of its workers, with major repercussions in the lives of users and patients too. The care of individuals who interact with medical assistants or diagnostic applications is also shifting. Thus, such "evolutions, introduced in a too radical and drastic way, damage the social fabric of a society" [120]. Moreover, these transformations also blur the boundary between work and private life and alter the link between the company and its employees, both old and new [140].
In this respect, the deployment of AI technologies certainly implies the emergence of new professions, which must be properly understood. For example, new technical professions such as health data analysts, experts in knowledge translation, quality engineers in ehealth, and telemedicine coordinators, as well as professionals in social and human sciences such as ethicists of algorithms and robots, are to be imagined [141, 142]. The construction of the organization's ethical culture will depend in particular on its ability to identify areas of ethical risk, deploy its ethical values, and engage all its members in its mission [143].

Transformation of work and the law

The transformation of qualifications questions the relationship between the medical professions and technology, as well as the legislative and regulatory obligations for training. Requiring the medical doctor to be able to explain or interpret the outputs of an AI model remains a legal issue as well as a significant challenge. The upheavals within certain professions may mean that their regulation must be adapted—as the regulatory framework for radiologists in France has already been modified, redefining the acts and activities that can be performed by medical electroradiology manipulators [144]. According to the National Federation of Radiologists, the move towards diagnostic interventional radiology mentioned above has already been integrated by the profession [133]. The High Council for the Future of Health Insurance speaks of the major task of "concentrating and developing the role of medical doctors in expertise and synthesis activities," which will certainly require regulatory change.

From a legal point of view, this issue could also call into question the right to be treated or cared for by AI rather than a healthcare professional. The trend towards quantified self or personal analytics, where data analysis and measurement tools become more powerful every year, has given individuals greater knowledge on managing their health and sometimes implies a different understanding of themselves as patients within healthcare structures. Individuals' awareness and use of AI services is also growing, despite fears. That considered, some demands for surgery might be best met by AI, particularly if it is safer, quicker, more efficient, and more likely to succeed. And if cultural differences or social acceptability lag behind such demands [145], one might justifiably ask whether they should catch up. Could the right to choose one's doctor be extended to include the right to access an "AI doctor"?

Discussion

The issues raised by AI in healthcare take on different nuances depending on whether one speaks of them in terms of legal compliance, the ethical choices behind practices and decisions, or reflective processes integrated into professional practices. We propose three avenues of reflection to address such issues.

Education and training

Many AI tools are intended to be used by healthcare professionals (e.g., risk prediction of future deterioration in patients [146], clinical decision support systems [147], and diagnosis assistance tools based on radiological images [148]). Therefore, these professionals must know about these tools, how they work, and their implications to ensure the quality, safety, and effectiveness of AI. In order to deploy AI while taking all this information into account, there is a need to increase the technical, legal, and ethical AI literacy of healthcare professionals [149]. We propose two main ways to achieve this.

First, basic AI training should be integrated into academic programs, where students are the future users of AI in healthcare [150]. A study in Canada revealed that more than half of healthcare students [151] either do not know what AI is or regard it as irrelevant to their field. In addition, few institutions cover the goals of AI in their educational programs [152, 153]. This is a missed opportunity to address misconceptions and fears related to AI and to raise awareness about ethical and legal issues associated with these systems. As Wiens et al. explain, successful training involves bringing together experts and stakeholders from various disciplines, including knowledge experts, policymakers, and users [93].

Second, continuing education on AI for health professionals should be integrated into health organizations and institutions [13, 110]. Apart from illuminating the use of digital tools and data and the internal workings of systems, this training would engage health professionals' moral responsibility. Confronted with a situation involving moral values, ethical principles, or the application of legal rules, they would question themselves before mechanically applying their technical knowledge. They could then reflect on the ethical consequences of their actions, such as the use of a particular AI tool, depending on the context and the patient involved. Depending on the situation, professionals could refer to the ethical principles and standards defined within the organization, their deontological code, or the ethics committee within their organization. These reflexes are not new among medical professionals, since medical ethics has been widely implemented in processes and practices. Moreover, the important regulation of the health sector already forces professionals to question the conformity of their practices to the law or to ethics. However, these mechanisms deserve to be adapted to the use of AI. Such training is widely encouraged by institutions such as the American Medical Association [154], which supports research on how augmented intelligence should be addressed in medical education, or the Royal College of Physicians and Surgeons of Canada [155], which recommends incorporating this teaching into the curricula of residents [112].
We believe that the responsibility for integrating training is shared between professional bodies, healthcare institutions, and academic institutions. Indeed, we believe that the issues we describe cannot be resolved unless accountability is shared in such a way.

Support and guidance

The second, complementary theme concerns the accompaniment of health professionals in these new practices. This support would first involve the creation of an internal or external interdisciplinary committee to approve the implementation of new AI technology—a special authority for AI at the organizational or institutional level. Such a committee, which should include ethicists, AI engineers and developers, healthcare professionals, patients, and health organization administrators, would make it possible to assess whether a given technology met predefined evaluation criteria, based on the ethical issues it triggers, before it could be used. It should also include a lawyer to resolve certain legal issues and stay alert to the evolution of the law, which is bound to change to integrate the particularities of this technology.

The committee would also ensure that the technology has been developed around the skills, expectations, interactions, or technical or organizational constraints of the user. This would force AI developers to work with potential future users (including both healthcare professionals and patients) from the design stage onwards. The criteria adopted by the committee would then be integrated throughout the creation of the technology, giving it the best chance of being approved and implemented in the safest, most efficient, most collaborative and, therefore, highest-quality manner possible. Unlike institutions that review systems for regulatory and legislative compliance and evolve in parallel, this ethical approval process would be the responsibility of the institution's administrators, who would also be responsible for building bridges between developers and users.
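A minimal sketch of what such predefined criteria could look like in a committee's workflow follows. The six criteria listed are merely our shorthand for the categories of issues discussed in this paper, not a validated review instrument.

```python
from dataclasses import dataclass, field

# Illustrative criteria only; a real committee would define and weight its own.
DEFAULT_CRITERIA = [
    "privacy impact assessed and mitigated",
    "patient autonomy and informed consent preserved",
    "bias evaluated on locally representative data",
    "responsibility and liability chain documented",
    "post-deployment monitoring plan in place",
    "impact on work and professional roles reviewed",
]

@dataclass
class ReviewDecision:
    approved: bool
    unmet: list = field(default_factory=list)

def review(assessments: dict) -> ReviewDecision:
    """Approve only when every predefined criterion is met; unmet criteria
    are returned so developers know what to rework before resubmission."""
    unmet = [c for c in DEFAULT_CRITERIA if not assessments.get(c, False)]
    return ReviewDecision(approved=not unmet, unmet=unmet)
```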
Tool adaptation

Another solution concerns the AI tool itself, whose interface must be designed to serve the user, taking account of the issues that arise for them and allowing them to play an active role in the system (for example, in terms of control, decision-making, choice of actions, etc.) [156]. Thus, the bridge between designers and users would make it possible to create an interface that is intuitive, ergonomic, transparent, accessible, and easy to use.

As we have seen, one of the objectives of training health professionals is to encourage reflective thinking, which is broader than mere concern for legal liability. Functionalities to trigger the desired "ethical reflex" should be integrated into the heart of the interface—for example, alerting the professional about the diversity or source of the data they are entering, or even about the result that the machine has returned. One could even envisage that these alerts be personalized: indeed, some systems know how to personalize alerts based on the information they have about the situation. Instead of alerting users about the contraindication of a drug prescription or how to complete an exploration [157], the interface could provide alerts on certain ethical considerations. For example, medical doctors entering symptoms into a diagnostic support system could be alerted when specific data points (as input) were atypical and could prove particularly sensitive in the operation of this algorithm. Keeping the approach focused on the user experience, these functionalities should be light enough to preserve the human-machine interaction and the ergonomics of the interface (meaning that tasks can be performed within a reasonable time).

Finally, feedback loops should be established, coupled with the obligation for the professional to report any problems that occur when using AI. This functionality would prevent the professional from implicitly trusting the tool and force them to remain alert and critical regarding its recommendations, predictions, forecasts, or other results.
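The sketch below illustrates these two functionalities together: a simple statistical flag for atypical inputs and a log for mandatory problem reports. The field structure, the z-score rule for "atypical," and the shape of the feedback log are illustrative assumptions, not a specification of any existing interface.

```python
import statistics

class EthicalAlerts:
    """Sketch of interface-level safeguards: flag atypical input values
    before they reach the algorithm, and keep the professional's reports
    of problems for later review by the oversight committee."""

    def __init__(self, reference_values: dict):
        # per-field values observed in the system's training data
        self.reference = reference_values
        self.feedback_log = []

    def atypical_fields(self, inputs: dict, z_max: float = 3.0) -> list:
        """Return the input fields that deviate strongly from the
        training data and may be sensitive for this algorithm."""
        flagged = []
        for field_name, value in inputs.items():
            ref = self.reference.get(field_name)
            if not ref or len(ref) < 2:
                continue  # no basis for comparison: stay silent rather than falsely reassure
            mean, sd = statistics.mean(ref), statistics.stdev(ref)
            if sd and abs(value - mean) / sd > z_max:
                flagged.append(field_name)
        return flagged

    def report_problem(self, case_id: str, description: str) -> None:
        # feedback loop: every reported problem is retained, not discarded
        self.feedback_log.append({"case": case_id, "problem": description})
```

Because the check is cheap to compute, it can run as the professional types, which keeps the alert consistent with the ergonomic constraint mentioned above.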
Limitations

We have tried in this paper to present an encompassing view of the ethical and legal issues surrounding the development and implementation of AI in healthcare. However, we recognize that our research has limitations. First, the six issues presented are not exhaustive since they include those most cited in the targeted literature. Second, they are presented in a broad and rather geographically non-specific manner to be able to give an overview in a single paper. Third, our presentation of these issues is based on basic differences between ethics and law and does not integrate all the intersections and intertwined relations between the two disciplines, since it aims to clarify the distinctions. Fourth, we have chosen not to approach ethical discussions through a single normative approach, which would give importance to a specific classical tradition in ethics (e.g., Aristotle's virtue ethics or Kantian deontology) or to more contemporary currents such as the ethics of care, but to account for a certain diversity in the presentation of the issues, which can present themselves differently depending on the chosen angle.

Conclusion

The six issues we highlighted in this article illustrate the intensity and extent to which healthcare professionals are already being affected by the development of AI, and will be even more so in the future. In order for AI to benefit them, as well as patients, healthcare organizations, and society as a whole, we must first know how to identify these issues in practice. It is vital that healthcare professionals can tell whether ethical or legal problems arise while implementing and using AI tools, so they can react to them in the most appropriate way. Such knowledge can guide their usage of AI, allowing them to better adjust to this new technology and to keep a helpful critical lens, notably through a benefit/risk perspective that is already important in the healthcare field. To achieve this, we suggest reviewing the initial and ongoing training of professionals, supporting professionals in their use of AI tools through ethical and regulatory evaluation, and cultivating new reflexes to respond to a "potential risk" in legal or ethical terms.

Author contributions
All the authors jointly brainstormed, shared ideas, and developed the substance of the manuscript, in particular the main arguments and discussion. MC wrote the main part of the manuscript; JTM and CR rewrote some parts and revised the overall text several times.

Funding
The authors of the study are funded by the HEC Montreal Chair in Organizational Ethics and AI Governance (JTM), the IVADO HCAI PFR3 Program and the Fonds de Recherche du Québec-Société et Culture (MC), and the Canada-CIFAR Chair in Artificial Intelligence and the Canada Research Chair in Health Law and Policy (CR).

Data availability
We do not analyze or generate any datasets, because our work proceeds within a theoretical approach. One can obtain the relevant materials from the references below.

Declarations

Ethics approval and consent to participate
Not applicable. No direct human or human data was involved in the study.

Consent for publication
Not applicable. No direct human or human data was involved in the study.

Competing interests
The authors declare no competing interests.

Author details
1 Faculty of Law, University of Montreal, Ch de la Tour, Montreal, QC H3T 1J7, Canada
2 Faculty of Law, Economics and Management, University of Paris Cité, Av. Pierre Larousse, Malakoff 92240, France
3 Department of Management, HEC Montreal, 3000 chemin de la Cote-Sainte-Catherine, Montreal, QC H3T 2A7, Canada
4 Canada-CIFAR Chair in Artificial Intelligence, Mila, St-Urbain, Montreal, QC H2S 3H1, Canada

Received: 24 October 2023 / Accepted: 10 December 2024

References
1. Nazar M, Alam MM, Yafi E, Su'ud MM. A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques. IEEE Access. 2021;9:153316–48.
2. Azencott CA. Machine learning and genomics: precision medicine versus patient privacy. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences [Internet]. 2018 Sep 13 [cited 2022 Dec 15]. Available from: https://doi.org/10.1098/rsta.2017.0350
3. Dilsizian SE, Siegel EL. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment. Curr Cardiol Rep. 2014;16(1):441.
4. Ramesh AN, Kambhampati C, Monson JRT, Drew PJ. Artificial intelligence in medicine. Ann R Coll Surg Engl. 2004;86(5):334–8.
5. Somashekhar SP, Sepúlveda MJ, Puglielli S, Norden AD, Shortliffe EH, Rohit Kumar C, et al. Watson for Oncology and breast cancer treatment recommendations: agreement with an expert multidisciplinary tumor board. Ann Oncol. 2018;29(2):418–23.
6. Li Y. Research and Application of Deep Learning in Image Recognition. In: 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA) [Internet]. 2022. pp. 994–9. Available from: https://doi.org/10.1093/annonc/mdx781
7. Amato F, López A, Peña-Méndez EM, Vaňhara P, Hampl A, Havel J. Artificial neural networks in medical diagnosis. J Appl Biomed. 2013;11(2):47–58.
8. Motulsky A, Nikiema JN, Després P, Castonguay A, Cousineau M, Martineau JT, et al. Promesses de l'IA en santé [Internet]. Québec: Observatoire international sur les impacts sociétaux de l'IA et du numérique; 2022 Oct [cited 2023 Oct 18]. Available from: https://docdro.id/OGeIz8c
9. Lawson BE, Atakan Varol H, Sup F, Goldfarb M. Stumble detection and classification for an intelligent transfemoral prosthesis. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology [Internet]. 2010. pp. 511–4. Available from: https://doi.org/10.1109/IEMBS.2010.5626021
10. Aristidou A, Jena R, Topol EJ. Bridging the chasm between AI and clinical implementation. Lancet. 2022;399(10325):620.
11. Ferryman K. Rethinking the AI Chasm. Am J Bioeth. 2022;22(5):29–30.
12. Karpathakis K, Morley J, Floridi L. A Justifiable Investment in AI for Healthcare: Aligning Ambition with Reality [Internet]. Rochester, NY; 2024 [cited 2024 Aug 15]. Available from: https://papers.ssrn.com/abstract=4795198
13. Régis C, Laverdiere M. Soutenir l'encadrement des pratiques professionnelles en matière d'intelligence artificielle dans le secteur de la santé et des relations humaines : proposition d'un prototype de code de déontologie [Internet]. Montreal: University of Montreal H-POD; 2023 [cited 2023 Sep 8]. p. 36. Available from: https://www.docdroid.net/dNycfuv/document-final-iacodedeontologiesante-pdf
14. OECD. OECD Framework for the Classification of AI systems [Internet]. 2022. Available from: https://www.oecd-ilibrary.org/content/paper/cb6d9eca-en
15. OECD. Artificial Intelligence in Society [Internet]. 2019. Available from: https://www.oecd-ilibrary.org/content/publication/eedfee77-en
16. OECD. Explanatory memorandum on the updated OECD definition of an AI system [Internet]. Paris: OECD; 2024 Mar [cited 2024 Aug 16]. Available from: https://www.oecd-ilibrary.org/science-and-technology/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en
17. Tan P, Xi Y, Chao S, Jiang D, Liu Z, Fan Y, et al. An Artificial Intelligence-Enhanced Blood Pressure Monitor Wristband Based on Piezoelectric Nanogenerator. Biosens (Basel). 2022;12(4):234.
18. Hussein A, Sallam ME, Abdalla MYA. Exploring New Horizons: Surgical Robots Supported by Artificial Intelligence. Mesopotamian J Artif Intell Healthc. 2023;2023:40–4.
19. Min-Allah N, Alahmed BA, Albreek EM, Alghamdi LS, Alawad DA, Alharbi AS, et al. A survey of COVID-19 contact-tracing apps. Comput Biol Med. 2021;137:104787.
20. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328–31.
21. Mintz Y, Brodie R. Introduction to artificial intelligence in medicine. Minim Invasive Therapy Allied Technol. 2019;28(2):73–81.
22. Shaheen MY. Applications of Artificial Intelligence (AI) in healthcare: A review. ScienceOpen Preprints [Internet]. 2021 Sep 25 [cited 2024 Aug 4]. Available from: https://www.scienceopen.com/hosted-document?doi=10.14293/S2199-1006.1.SOR-.PPVRY8K.v1
23. Kimiafar K, Sarbaz M, Tabatabaei SM, Ghaddaripouri K, Mousavi A, Mehneh M, et al. Artificial Intelligence Literacy Among Healthcare Professionals and Students: A Systematic Review. Front Health Inf. 2023;12:168.
24. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491–7.
25. AI Index Report 2023 – Artificial Intelligence Index [Internet]. [cited 2023 Jul 5]. Available from: https://aiindex.stanford.edu/report/
26. Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–99.
27. World Health Organization. WHO guideline [Internet]. 2019 [cited 2020 Nov 28]. Available from: http://www.ncbi.nlm.nih.gov/books/NBK541902/
28. The Declaration [Internet]. Déclaration de Montréal IA responsable. [cited 2023 Sep 8]. Available from: https://montrealdeclaration-responsibleai.com/the-declaration/
29. Google AI [Internet]. [cited 2023 Sep 8]. Google AI Principles. Available from: https://ai.google/responsibility/principles/
30. AI Ethics | IBM [Internet]. [cited 2023 Sep 8]. Available from: https://www.ibm.com/topics/ai-ethics
31. Microsoft Responsible AI | Microsoft AI [Internet]. [cited 2023 Sep 8]. Available from: https://www.microsoft.com/en-us/ai/responsible-ai
32. Telia Company [Internet]. [cited 2023 Sep 8]. Available from: https://www.teliacompany.com/en/articles/ai-ethics
33. Boddington P. Towards a Code of Ethics for Artificial Intelligence [Internet]. Cham: Springer International Publishing; 2017 [cited 2024 Aug 4]. (Artificial Intelligence: Foundations, Theory, and Algorithms). Available from: https://doi.org/10.1007/978-3-319-60648-4
34. Bouquet B. Éthique et travail social [Internet]. Paris: Dunod; 2017 [cited 2023 Jul 5]. 288 p. (Santé Social). Available from: https://www.dunod.com/sciences-humaines-et-sociales/ethique-et-travail-social-une-recherche-du-sens-0
35. Robles Carrillo M. Artificial intelligence: From ethics to law. Telecomm Policy. 2020;44(6):101937.
36. Larose G. Droit, déontologie et éthique clinique. In: Éthique clinique : un guide pour aborder la pratique [Internet]. Montreal: Sainte Justine Hospital; 2015 [cited 2023 Feb 1]. pp. 89–96. (Actions cliniques). Available from: https://scholar.google.com/scholar?hl=fr&as_sdt=0%2C5&q=%C3%89thique+clinique+%3A+un+guide+pour+aborder+la+pratique&btnG=
37. Pimont M. L'éthique : fondements, approches et applications en santé et services sociaux. Equilibre. 2016;11(1):4–15.
38. Stanford University Human-centered Artificial Intelligence. Artificial Intelligence Index Report 2024 [Internet]. 2024 [cited 2024 Aug 16]. p. 502. Available from: https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf
39. Dameski A. A Comprehensive Ethical Framework for AI Entities: Foundations. In: Iklé M, Franz A, Rzepka R, Goertzel B, editors. Artificial General Intelligence. Cham: Springer International Publishing; 2018. pp. 42–51.
40. Jean A. Une brève introduction à l'intelligence artificielle. Med Sci (Paris). 2020;36(11):1059–67.
41. Cai Q, Luo X, Wang P, Gao C, Zhao P. Hybrid model-driven and data-driven control method based on machine learning algorithm in energy hub and application. Appl Energy. 2022;305:117913.
42. Anom BY. Ethics of Big Data and artificial intelligence in medicine. Ethics Med Public Health. 2020;15:100568.
43. Sharon T, Zandbergen D. From data fetishism to quantifying selves: Self-tracking practices and the other values of data. New Media Soc. 2017;19(11):1695–709.
44. Lupton D. The Quantified Self. Wiley; 2016. p. 196.
45. Walters GJ. Privacy and security: an ethical analysis. SIGCAS Comput Soc. 2001;31(2):8–23.
46. Goodman K. Ethics in Health Informatics. Yearbook of Medical Informatics [Internet]. 2020;29. Available from: https://doi.org/10.1055/s-0040-1701966
47. Beauchamp TL, Childress JF. Principles of Biomedical Ethics [Internet]. Oxford University Press; 2001. 470 p. Available from: https://books.google.ca/books?id=_14H7MOw1o4C
48. Gillon R. Defending the four principles approach as a good basis for good medical practice and therefore for good medical ethics. J Med Ethics. 2015;41(1):111–6.
49. Mokrosinska D. Privacy and Autonomy: On Some Misconceptions Concerning the Political Dimensions of Privacy. Law Philos. 2018;37(2):117–43.
50. Koepsell D. Duties of Science to Society (and Vice Versa). In: Koepsell D, editor. Scientific Integrity and Research Ethics: An Approach from the Ethos of Science [Internet]. Cham: Springer International Publishing; 2017 [cited 2023 Jul 5]. pp. 85–95. (SpringerBriefs in Ethics). Available from: https://doi.org/10.1007/978-3-319-51277-8_8
51. Formarier M. La relation de soin, concepts et finalités. Recherche en soins infirmiers. 2007;89(2):33–42.
52. Gerke S, Minssen T, Yu H, Cohen IG. Ethical and legal issues of ingestible electronic sensors. Nat Electron. 2019;2(8):329–34.
53. Abdullah YI, Schuman JS, Shabsigh R, Caplan A, Al-Aswad LA. Ethics of Artificial Intelligence in Medicine and Ophthalmology. Asia-Pacific J Ophthalmol. 2021;10(3):289.
54. Currie G, Hawk KE. Ethical and Legal Challenges of Artificial Intelligence in Nuclear Medicine. Semin Nucl Med. 2021;51(2):120–5.
55. Baker R. Confidentiality in Professional Medical Ethics. Am J Bioeth. 2006;6(2):39–41.
56. Protect the public | Collège des médecins du Québec [Internet]. [cited 2023 Sep 8]. Available from: https://www.cmq.org/en/protect-the-public
57. Siegler M. Confidentiality in Medicine: A Decrepit Concept. N Engl J Med. 1982;307(24):1518–21.
58. Artificial Intelligence [AI] in Healthcare Market Size [Internet]. Fortune Business Insights; 2024 Jul [cited 2024 Aug 16]. p. 159. Report No.: FBI100534. Available from: https://www.fortunebusinessinsights.com/industry-reports/artificial-intelligence-in-healthcare-market-100534
59. Henderson B, Flood C, Scassa T. Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination? Can J Law Technol. 2022;19(2):475.
60. Amann J, Blasimme A, Vayena E, Frey D, Madai VI; the Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inf Decis Mak. 2020;20(1):310.
61. Sharon T. The Googlization of health research: from disruptive innovation to disruptive ethics. Per Med. 2016;13(6):563–74.
62. Rumbold JMM, Pierscionek BK. A critique of the regulation of data science in healthcare research in the European Union. BMC Med Ethics. 2017;18(1):27.
63. Spiekermann S, Korunovska J, Langheinrich M. Inside the Organization: Why Privacy and Security Engineering Is a Challenge for Engineers. Proc IEEE. 2019;107(3):600–15.
64. U.S. HIPAA Privacy Rule [Internet]. U.S. Code of Federal Regulations. Sections 160.103; 164.508. Available from: https://www.ecfr.gov/current/title-45/part-160
65. Australian Privacy Act [Internet]. Australian Federal Register of Legislation, No. 119, 1988. Available from: https://www.oaic.gov.au/privacy/privacy-legislation/the-privacy-act
66. French Data Protection Authority (Commission nationale de l'informatique et des libertés). What is health data? (Qu'est-ce qu'une donnée de santé ?) [Internet]. Cnil.fr. [cited 2023 Oct 13]. Available from: https://www.cnil.fr/fr/quest-ce-ce-quune-donnee-de-sante
67. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [Internet]. OJ L, (EU) 2016/679, Apr 27, 2016. Available from: https://eur-lex.europa.eu/eli/reg/2016/679/oj
68. Pesapane F, Volonté C, Codari M, Sardanelli F. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging. 2018;9(5):745–53.
69. California Consumer Privacy Act [Internet]. California Civil Code. Sections 1798.100–1798.199.100. Available from: https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?lawCode=CIV&sectionNum=1798.140
70. An Act to modernize legislative provisions as regards the protection of personal information [Internet]. RLRQ, c. 25; Sep 22, 2021. Available from: https://www.canlii.org/en/qc/laws/astat/sq-2021-c-25/latest/sq-2021-c-25.html
71. Quebec Act respecting the protection of personal information in the private sector [Internet]. Chapter P-39.1, s. 12–1. Available from: https://www.legisquebec.gouv.qc.ca/en/document/cs/p-39.1
72. Quebec Act respecting Access to documents held by public bodies and the Protection of personal information [Internet]. Chapter A-2.1, s. 65.2. Available from: https://www.legisquebec.gouv.qc.ca/en/document/cs/A-2.1
73. Christman J. Autonomy in Moral and Political Philosophy. In: The Stanford Encyclopedia of Philosophy [Internet]. Fall 2020. Stanford: Metaphysics Research Lab, Stanford University; 2020 [cited 2023 Sep 8]. Available from: https://plato.stanford.edu/archives/fall2020/entries/autonomy-moral/
74. Miquel PA. Respect et inviolabilité du corps humain. Noesis. 2007;(12):239–63.
75. Erasmus A, Brunet TDP, Fisher E. What is Interpretability? Philos Technol. 2021;34(4):833–62.
76. Faden RR, Beauchamp TL, in collaboration with King NMP. A History and Theory of Informed Consent [Internet]. Oxford, New York: Oxford University Press; 1986. 408 p. Available from: https://books.google.ca/books?hl=fr&lr=&id=jgi7OWxDT9cC&oi=fnd&pg=PA3&dq=A+History+and+Theory+of+Informed+Consent&ots=ZiOTZYXiQ8&sig=4Ytxls3xoQBerh3ruGkCKSZTLf8&redir_esc=y#v=onepage&q=A%20History%20and%20Theory%20of%20Informed%20Consent&f=false
77. Cordeiro JV. Digital Technologies and Data Science as Health Enablers: An Outline of Appealing Promises and Compelling Ethical, Legal, and Social Challenges. Frontiers in Medicine [Internet]. 2021 [cited 2022 Dec 15];8. Available from: https://doi.org/10.3389/fmed.2021.647897
78. Porsdam Mann S, Savulescu J, Sahakian BJ. Facilitating the ethical use of health data for the benefit of society: electronic health records, consent and the duty of easy rescue. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2016;374(2083):20160130.
79. Pham A, Rubel A, Castro C, editors. Autonomy, Agency, and Responsibility. In: Algorithms and Autonomy: The Ethics of Automated Decision Systems [Internet]. Cambridge: Cambridge University Press; 2021 [cited 2024 Aug 5]. pp. 21–42. Available from: https://www.cambridge.org/core/books/algorithms-and-autonomy/autonomy-agency-and-responsibility/BA4D809382F63A0DB4F9549EE9E99641
80. Duguet J, Chassang G, Béranger J. Enjeux, répercussions et cadre éthique relatifs à l'Intelligence Artificielle en santé. Vers une Intelligence Artificielle éthique en médecine. Droit, Santé et Société. 2019;3(3):30–9.
81. de Marcellis-Warin N, Marty F, Thelisson E, Warin T. Intelligence artificielle et manipulations des comportements de marché : l'évaluation ex ante dans l'arsenal du régulateur. Revue internationale de droit économique. 2020;t. XXXIV(2):203–45.
82. Sunstein CR. Sludge Audits. Behavioural Public Policy. 2022;6(4):654–73.
83. Birhane A. Algorithmic injustice: a relational ethics approach. Patterns. 2021;2(2):100205.
84. Abeezar S. Consent requires more than respect for autonomy [Internet]. Journal of Medical Ethics blog. 2021 [cited 2023 Sep 8]. Available from: https://blogs.bmj.com/medical-ethics/2021/08/27/consent-requires-more-than-respect-for-autonomy/
85. Cohen IG. Informed Consent and Medical Artificial Intelligence: What to Tell the Patient? The Georgetown Law Journal [Internet]. 108. Available from: https://scholar.google.com/scholar?hl=fr&as_sdt=0%2C5&q=%22Informed+Consent+and+Medical+Artificial+Intelligence%3A+What+to+Tell+the+Patient%3F%22&btnG=
86. Philips-Nootens S, Kouri RP. Éléments de responsabilité civile médicale – Le droit dans le quotidien de la médecine [Internet]. 5e éd. Éditions Yvon Blais; 2022 [cited 2023 Sep 8]. Available from: https://www.wilsonlafleur.com/wilsonlafleur/CatDetails.aspx?C=347.527.22
87. Shearer E, Cho M, Magnus D. Chapter 23 – Regulatory, social, ethical, and legal issues of artificial intelligence in medicine. In: Xing L, Giger ML, Min JK, editors. Artificial Intelligence in Medicine [Internet]. Academic Press; 2021 [cited 2022 Dec 15]. pp. 457–77. Available from: https://www.sciencedirect.com/science/article/pii/B9780128212592000235
88. Auxier B, Rainie L, Anderson M, Perrin A, Kumar M, Turner E. Americans and Privacy: Concerned, Confused and Feeling Lack of Control Over Their Personal Information [Internet]. Pew Research Center; 2019 [cited 2024 Aug 16]. Available from: https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/
89. Viard-Guillot L. 82 % des internautes protègent leurs données personnelles en ligne – Insee Focus No. 272 [Internet]. 2022 [cited 2024 Aug 16]. Available from: https://www.insee.fr/fr/statistiques/6475020
90. Dufresne Y, Dumouchel D, Poirier W. Fondements de l'acceptabilité sociale des applications de traçage en temps de pandémie : technophobie, crainte sanitaire ou idéologie démocratique [Internet]. International Observatory on the societal impacts of AI and digital technology; 2021 Jun [cited 2023 Sep 8]. Available from: https://www.docdroid.com/8B7QzzK/fondements-de-lacceptabilite-sociale-des-applications-de-tracage-en-temps-de-pandemie-technophobie-crainte-sanitaire-ou-ideologie-democratique-pdf
91. Crichton C. L'intelligence artificielle dans la révision de la loi bioéthique – IP/IT et Communication | Dalloz Actualité [Internet]. Dalloz Actualité IP/IT. 2021 [cited 2023 Sep 8]. Available from: https://www.dalloz-actualite.fr/node/l-intelligence-artificielle-dans-revision-de-loi-bioethique
92. Besse P, Besse-Patin A, Castets-Renard C. Implications juridiques et éthiques des algorithmes d'intelligence artificielle dans le domaine de la santé. Statistique et Société. 2020;8(3):21–53.
93. Wiens J, Saria S, Sendak M, Ghassemi M, Liu VX, Doshi-Velez F, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med. 2019;25(9):1337–40.
94. Crawford K, Miltner K, Gray M. Critiquing Big Data: Politics, Ethics, Epistemology. International Journal of Communication. 2014;8:1663; Boyd D, Crawford K. Critical Questions for Big Data: Provocations for a Cultural, Technological and Scholarly Phenomenon. Information, Communication & Society. 2012;15(5):662.
95. Abràmoff MD, Tobey D, Char DS. Lessons Learned About Autonomous AI: Finding a Safe, Efficacious, and Ethical Path Through the Development Process. Am J Ophthalmol. 2020;214:134–42.
96. Joyce K, Smith-Doerr L, Alegria S, Bell S, Cruz T, Hoffman SG, et al. Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius. 2021;7:2378023121999581.
97. Chen S, Bergman D, Miller K, Kavanagh A, Frownfelter J, Showalter J. Using applied machine learning to predict healthcare utilization based on socioeconomic determinants of care. Am J Manag Care. 2020;26(1):26–31.
98. Onukwugha E, Duru OK, Peprah E. Foreword: Big Data and Its Application in Health Disparities Research. Ethn Dis. 27(2):69–72.
99. Buolamwini J, Gebru T. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In: Proceedings of the 1st Conference on Fairness, Accountability and Transparency [Internet]. PMLR; 2018 [cited 2024 Aug 16]. pp. 77–91. Available from: https://proceedings.mlr.press/v81/buolamwini18a.html
100. Khaitan T. A Theory of Discrimination Law [Internet]. Oxford, New York: Oxford University Press; 2015. 288 p. Available from: https://books.google.ca/books?hl=fr&lr=&id=HwjHCQAAQBAJ&oi=fnd&pg=PP1&dq=A+Theory+of+Discrimination+Law&ots=S_9upVam-5&sig=5-iB_YYtRfeBMQR3-LacAT64UGw&redir_esc=y#v=onepage&q=A%20Theory%20of%20Discrimination%20Law&f=false
101. Anti-discrimination – MIPEX 2020 [Internet]. www.mipex.eu. [cited 2023 Sep 8]. Available from: https://www.mipex.eu/anti-discrimination
102. Li Vigni F. Chapitre 1. Les théories de la complexité : un essai de mise en ordre. In: Histoire et sociologie des sciences de la complexité [Internet]. Paris: Éditions Matériologiques; 2022. pp. 17–44. (Modélisations, simulations, systèmes complexes). Available from: https://www.cairn.info/histoire-et-sociologie-des-sciences--9782373613346-p-17.htm
103. Sabouret N. Why Artificial Intelligence Gets It Wrong All the Time [Internet]. The Conversation. 2020 [cited 2023 Sep 8]. Available from: http://theconversation.com/pourquoi-lintelligence-artificielle-se-trompe-tout-le-temps-143019
104. Todisco I, Giglio GEM, Zerlenga O. Automatic Image Recognition. Applications to Architecture. In: Luigini A, editor. Proceedings of the 1st International and Interdisciplinary Conference on Digital Environments for Education, Arts and Heritage [Internet]. Cham: Springer International Publishing; 2019. pp. 106–15. (Advances in Intelligent Systems and Computing). Available from: https://doi.org/10.1007/978-3-030-12240-9_12
105. Heaven D. Why deep-learning AIs are so easy to fool. Nature. 2019;574(7777):163–6.
106. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. 2019;363(6433):1287–9.
107. Ozturk A. Lessons Learned from Robotics and AI in a Liability Context: A Sustainability Perspective. In: Carpenter A, Johansson TM, Skinner JA, editors. Sustainability in the Maritime Domain: Towards Ocean Governance and Beyond [Internet]. Cham: Springer International Publishing; 2021 [cited 2023 Feb 2]. pp. 315–35. (Strategies for Sustainability). Available from: https://doi.org/10.1007/978-3-030-69325-1_16
108. Bond RR, Novotny T, Andrsova I, Koc L, Sisakova M, Finlay D, et al. Automation bias in medicine: The influence of automated diagnoses on interpreter accuracy and uncertainty when reading electrocardiograms. J Electrocardiol. 2018;51(6, Suppl):S6–11. Available from: https://doi.org/10.1016/j.jelectrocard.2018.08.007
109. Lyell D, Magrabi F, Raban MZ, Pont LG, Baysari MT, Day RO, et al. Automation bias in electronic prescribing. BMC Med Inform Decis Mak. 2017;17(1):28. Available from: https://doi.org/10.1186/s12911-017-0425-5
110. O'Sullivan S, Nevejans N, Allen C, Blyth A, Leonard S, Pagallo U, et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int J Med Rob Comput Assist Surg. 2019;15(1):e1968.
111. Gracia D. The Foundation of Medical Ethics in the Democratic Evolution of Modern Society. In: Thomasma DC, Weisstub DN, Kushner TK, Viafora C, editors. Clinical Bioethics: A Search for the Foundations [Internet]. Dordrecht: Springer Netherlands; 2005 [cited 2023 Sep 8]. pp. 33–40. (International Library of Ethics, Law, and the New Medicine). Available from: https://doi.org/10.1007/1-4020-3593-4_3
112. World Health Organization. Ethics and governance of artificial intelligence for health [Internet]. Geneva: World Health Organization; 2021 [cited 2023 Feb 1]. Available from: https://www.who.int/publications-detail-redirect/9789240029200
113. Health Canada. Good machine learning practice for medical device development: Guiding principles [Internet]. 2021 [cited 2023 Sep 8]. Available from: https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/good-machine-learning-practice-medical-device-development.html
114. Cousineau M, Castonguay A. Définitions et usages de l'IA en santé [Internet]. Montréal: International Observatory on the societal impacts of AI and digital technology; 2022 Mar [cited 2023 Sep 8]. Available from: https://www.docdroid.com/X6knvzZ/definitions-et-usages-de-lia-en-sante-pdf
115. Enarsson T, Enqvist L, Naarttijärvi M. Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts. Inform Commun Technol Law. 2022;31(1):123–53.
116. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts [Internet]. COM/2021/206 final; 2021. Available from: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A52021PC0206
117. Tessier C. Éthique et IA : analyse et discussion. In: PFIA 2021 [Internet]. Bordeaux, France; 2021 [cited 2023 Sep 8]. Available from: https://hal.science/hal-03280105
118. Brundage M, Avin S, Clark J, Toner H, Eckersley P, Garfinkel B, et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
131. Bettache M, Foisy L. Intelligence artificielle et transformation des emplois. Question(s) de Manage. 2019;25(3):61.
132. Mazurowski MA. Artificial Intelligence May Cause a Significant Disruption to the Radiology Workforce. J Am Coll Radiol. 2019;16(8):1077–82.
133. Benhamou S, Janin L. Intelligence artificielle et travail [Internet]. Paris, France: France Stratégie; 2018 [cited 2023 Sep 8]. Available from: https://www.strategie.gouv.fr/publications/intelligence-artificielle-travail
134. Schwendicke F, Samek W, Krois J. Artificial Intelligence in Dentistry: Chances and Challenges. J Dent Res. 2020;99(7):769–74.
135. PARO Therapeutic Robot [Internet]. [cited 2023 Sep 8]. Available from: http://www.parorobots.com/
136. Artificial Intelligence and Employment [Internet]. OECD; 2021 Dec [cited 2023 Sep 8]. Available from: https://www.oecd.org/future-of-work/reports-and-data/AI-Employment-brief-2021.pdf
137. Ellahham S. Artificial Intelligence: The Future for Diabetes Care. Am J Med. 2020;133(8):895–900.
138. Jarrahi MH. In the age of the smart artificial intelligence: AI's dual capacities for automating and informating work. Bus Inform Rev. 2019;36(4):178–87.
139. Chen Y, Stavropoulou C, Narasinkan R, Baker A, Scarbrough H. Professionals' responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Serv Res. 2021;21(1):813.
140. Bernier J. L'intelligence artificielle et les mondes du travail. Perspectives sociojuridiques et enjeux éthiques. Presses de l'Université Laval; 2021. p. 232.
141. IA et emploi en santé : quoi de neuf docteur ? [Internet]. Paris, France: Institut Montaigne; 2019 Jan [cited 2023 Sep 8]. Available from: https://www.institu
[Internet]. Future of Humanity Institute; University of Oxford; Centre for the t​m​o​​n​t​​a​i​g​​n​e​.​o​​r​g​/​​r​e​​s​s​o​​u​r​c​e​​s​/​p​​d​f​​s​/​p​u​b​l​i​c​a​t​i​o​n​s​/​i​a​-​e​t​-​e​m​p​l​o​i​-​e​n​-​s​a​n​t​e​-​q​u​o​i​-​d​
Study of Existential Risk; University of Cambridge; Center for a New American e​-​n​e​u​f​-​d​o​c​t​e​u​r​-​n​o​t​e​.​p​d​f​​​​​​​
Security Electronic Frontier Foundation; OpenAI; 2018 Feb [cited 2022 Dec 142. Hamoni R, Lin O, Matthews M, Taillon PJ. Construire la future main-d’œuvre
15] p. 99. Available from: ​h​t​t​​p​s​:​/​​/​a​r​​x​i​​v​-​o​​r​g​.​e​​z​p​r​​o​x​​y​.​u​​-​p​a​r​​i​s​.​​f​r​​/​p​d​f​/​1​8​0​2​.​0​7​2​2​8​.​p​ canadienne dans le domaine de l’intelligence artificielle [Internet]. Ottawa,
d​f​&​s​a​=​D​&​u​s​t​=​1​5​5​0​7​3​9​4​7​1​1​0​9​0​0​0​.​p​d​f​​​​​​​ Canada: Conseil des technologies de l’information et des communications;
119. Evans B, Ossorio P. The Challenge of Regulating Clinical Decision Support [cited 2023 Sep 8]. Available from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​i​c​t​​c​-​c​t​​i​c​.​​c​a​​/​w​p​-​c​o​n​t​e​n​t​/​u​p​l​o​
Software After 21st Century Cures. Am J Law Med. 2018;44(2–3):237–51. a​d​s​/​2​0​2​1​/​0​3​/​I​C ​T​C​_​R​e​p​o​r​t​_​B​u​i​l​d​i​n​g​_​F​R​E​.​p​d​f​​​​​​​
120. Health C, for D. and R. Artificial Intelligence and Machine Learning in Software 143. Dupuis M, Hesbeen W. L’éthique organisationnelle dans le secteur de la santé:
as a Medical Device. FDA [Internet]. 2023 Apr 8 [cited 2023 Sep 8]; Available Ressources et limites contextuelles des pratiques soignantes [Internet]. 1er
from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​f​d​a​​.​g​o​v​​/​m​e​​d​i​​c​a​l​​-​d​e​v​​i​c​e​​s​/​​s​o​f​​t​w​a​r​​e​-​m​​e​d​​i​c​a​l​-​d​e​v​i​c​e​-​s​a​m​d​/​ édition. Paris: SELI ARSLAN; 2014. 182 p. Available from: ​h​t​t​​p​s​:​/​​/​s​c​​h​o​​l​a​r​​.​g​o​o​​g​l​e​​
a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​a​n​d​-​m​a​c​h​i​n​e​-​l​e​a​r​n​i​n​g​-​s​o​f​t​w​a​r​e​-​m​e​d​i​c​a​l​-​d​e​v​i​c​e​​​​​​​ .​c​​o​m​/​s​c​h​o​l​a​r​?​h​l​=​f​r​&​a​s​_​s​d​t​=​0​%​2​C​5​&​q​=​L​%​2​7​%​C​3​%​A​9​t​h​i​q​u​e​+​o​r​g​a​n​i​s​a​t​i​o​n​n​e​
121. Health C, for D. and R. Artificial Intelligence and Machine Learning (AI/ML)- l​l​e​+​d​a​n​s​+​l​e​+​s​e​c​t​e​u​r​+​d​e​+​l​a​+​s​a​n​t​%​C​3​%​A​9​%​3​A​+​R​e​s​s​o​u​r​c​e​s​+​e​t​+​l​i​m​i​t​e​s​+​c​o​n​
Enabled Medical Devices. FDA [Internet]. 2022 May 10 [cited 2023 Sep 8]; t​e​x​t​u​e​l​l​e​s​+​d​e​s​+​p​r​a​t​i​q​u​e​s​+​s​o​i​g​n​a​n​t​e​s​+​P​o​c​h​e​+​%​E​2​%​8​0​%​9​3​+​1​4​+​m​a​i​+​2​0​1​4​&​
Available from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​f​d​a​​.​g​o​v​​/​m​e​​d​i​​c​a​l​​-​d​e​v​​i​c​e​​s​/​​s​o​f​​t​w​a​r​​e​-​m​​e​d​​i​c​a​l​-​d​e​v​ b​t​n​G​​​​=​ ​.​​​
i​c​e​-​s​a​m​d​/​a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​a​n​d​-​m​a​c​h​i​n​e​-​l​e​a​r​n​i​n​g​-​a​i​m​l​-​e​n​a​b​l​e​d​-​m​e​d​i​c​a​ 144. Décret n°2016 – 1672 du 5 décembre 2016 relatif aux actes et activités réali-
l​-​d​e​v​i​c​e​s​​​​​​​ sés par les manipulateurs d’électroradiologie médicale [Internet]. 2016–1672
122. FDA. Proposed Regulatory Framework for modifications to Artificial Intelli- Dec 5, 2016. Available from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​l​e​g​​i​f​r​a​​n​c​e​​.​g​​o​u​v​.​f​r​/​j​o​r ​f​/​i​d​/​J​O​R​F​T​E​X​T​
gence/Machine Learning (AI/ML) based software as a medical device (SaMD) 0​0​0​0​3​3​5​3​7​9​2​7​​​​​​​
[Internet]. U.S. Food and Drug Administration; [cited 2023 Sep 8]. Available 145. Ho MT, Le NTB, Mantello P, Ho MT, Ghotbi N. Understanding the acceptance
from: https:/​/www.fd​a.gov/m​edia​/122535/download of emotional artificial intelligence in Japanese healthcare system: A cross-
123. U.S. Food and Drug Administration. Artificial Intelligence & Medical Products: sectional survey of clinic visitors’ attitude. Technol Soc. 2023;72:102166.
How CBER, CDER, CDRH, and OCP are Working Together [Internet]. Silver 146. Tomašev N, Glorot X, Rae JW, Zielinski M, Askham H, Saraiva A, et al. A clini-
Spring, Maryland: U.S. Food and Drug Administration; 2024 Mar [cited 2024 cally applicable approach to continuous prediction of future acute kidney
Aug 16] p. 7. Available from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​f​d​a​.​g​o​v​/​m​e​d​i​a​/​1​7​7​0​3​0​/​d​o​w​n​l​o​a​d​?​ injury. Nature. 2019;572(7767):116–9.
a​t​t​a​c​h​m​e​n​t​​​​​​​ 147. Fernandes M, Vieira SM, Leite F, Palos C, Finkelstein S, Sousa JMC. Clinical
124. Régis C, Flood CM. AI and Health Law [Internet]., Rochester NY. 2021 [cited Decision Support Systems for Triage in the Emergency Department using
2023 Sep 8]. Available from: https:/​/papers​.ssrn.c​om/a​bstract=3733964 Intelligent Systems: a Review. Artif Intell Med. 2020;102:101762.
125. Schönberger D. Artificial intelligence in healthcare: a critical analysis of the 148. Goldstein A, Shahar Y. An automated knowledge-based textual summariza-
legal and ethical implications. Int J Law Inform Technol. 2019;27(2):171–203. tion system for longitudinal, multivariate clinical data. J Biomed Inform.
126. Working group on 28th measure. Créer les conditions d’un développement 2016;61:159–75.
vertueux des objets connectés et des applications mobiles en santé [Inter- 149. Gray K, Slavotinek J, Dimaguila GL, Choo D. Artificial Intelligence Education
net]. French Ministry of Health; 2016 Oct [cited 2023 Sep 8]. Report No.: GT 28 for the Health Workforce: Expert Survey of Approaches and Needs. JMIR Med
CSF. Available from: ​h​t​t​​p​s​:​/​​/​s​a​​n​t​​e​.​g​​o​u​v​.​​f​r​/​​I​M​​G​/​p​d​f​/​r​a​p​p​o​r​t​-​g​t​2​8​-​o​c​t​o​b​r​e​-​2​0​1​ Educ. 2022;8(2):e35223.
6​-​v​f​-​f​u​l​l​.​p​d​f​​​​​​​ 150. Laï MC, Brian M, Mamzer MF. Perceptions of artificial intelligence in health-
127. The Artificial Intelligence and Data Act (AIDA) [Internet]. Canadian Govern- care: findings from a qualitative survey study among actors in France. J
ment. 2023 Mar [cited 2023 Sep 8]. Available from: ​h​t​t​​p​s​:​/​​/​i​s​​e​d​​-​i​s​​d​e​.​c​​a​n​a​​d​a​​.​c​a​​ Translational Med. 2020;18(1):14.
/​s​i​t​​e​/​i​​n​n​​o​v​a​​t​i​o​n​​-​b​e​​t​t​​e​r​-​c​a​n​a​d​a​/​e​n​/​a​r​t​i​f​i​c​i​a​l​-​i​n​t​e​l​l​i​g​e​n​c​e​-​a​n​d​-​d​a​t​a​-​a​c​t​-​a​i​d​a​-​c​ 151. Teng M, Singla R, Yau O, Lamoureux D, Gupta A, Hu Z, et al. Health Care Stu-
o​m​p​a​n​i​o​n​-​d​o​c​u​m​e​n​t​​​​​​​ dents’ Perspectives on Artificial Intelligence: Countrywide Survey in Canada.
128. Yoon JH, Kim EK. Deep Learning-Based Artificial Intelligence for Mammogra- JMIR Med Educ. 2022;8(1):e33390.
phy. Korean J Radiol. 2021;22(8):1225–39. 152. Harish V, Bilimoria K, Mehta N, Morgado F, Aissiou A, Eaton S et al. Prepar-
129. Seibold M, Maurer S, Hoch A, Zingg P, Farshad M, Navab N, et al. Real-time ing Medical Students for the Impact of Artificial Intelligence on Healthcare
acoustic sensing and artificial intelligence for error prevention in orthopedic [Internet]. Canadian Federation of Medical Students; Available from: ​h​t​t​​p​s​:​/​​/​w​
surgery. Sci Rep. 2021;11(1):3993. w​​w​.​​c​f​m​​s​.​o​r​​g​/​f​​i​l​​e​s​/​p​o​s​i​t​i​o​n​-​p​a​p​e​r​s​/​A​G​M​_​2​0​2​0​_​C​F​M​S​_​A​I​.​p​d​f​​​​​​​
130. Snyder B. Our Misplaced Fear of Job-Stealing Robots [Internet]. Stanford 153. Wilson R, Bennett J. RNAO-AMS_Report-Nursing_and_Compassionate_
Graduate School of Business. 2019 [cited 2023 Sep 8]. Available from: ​h​t​t​​p​s​:​/​​/​ Care_in_the_Age_of_AI_Final_For_Media_Release_10.21.2020.pdf [Internet].
w​w​​w​.​​g​s​b​​.​s​t​a​​n​f​o​​r​d​​.​e​d​u​/​i​n​s​i​g​h​t​s​/​m​i​s​p​l​a​c​e​d​-​f​e​a​r​-​j​o​b​-​s​t​e​a​l​i​n​g​-​r​o​b​o​t​s​​​​​​​ Associated Medical Services (AMS) Healthcare; [cited 2023 Sep 8]. Available
Corfmat et al. BMC Medical Ethics (2025) 26:4 Page 19 of 19

from: ​h​t​t​​p​s​:​/​​/​r​n​​a​o​​.​c​a​​/​s​i​t​​e​s​/​​r​n​​a​o​-​​c​a​/​f​​i​l​e​​s​/​​R​N​A​​O​-​A​M​​S​_​R​​e​p​​o​r​t​-​N​u​r​s​i​n​g​_​a​n​d​_​C​o​ 156. Zouinar M. Évolutions de l’Intelligence Artificielle : quels enjeux pour l’activité
m​p​a​s​s​i​o​n​a​t​e​_​C​a​r​e​_​i​n​_​t​h​e​_​A​g​e​_​o​f​_​A​I​_​F​i​n​a​l​_​F​o​r​_​M​e​d​i​a​_​R​e​l​e​a​s​e​_​1​0​.​2​1​.​2​0​2​0​.​ humaine et la relation Humain–Machine au travail ? Activités [Internet]. 2020
p​d​f​​​​​​​ Apr 15 [cited 2023 Sep 8];(17–1). Available from: ​h​t​t​​p​s​:​/​​/​j​o​​u​r​​n​a​l​s​.​o​p​e​n​e​d​i​t​i​o​n​.​
154. Council on Medical Education. Report 4 on Augmented Intelligence in Medi- o​r​g​/​a​c​t​i​v​i​t​e​s​/​4​9​4​1​​​​​​​
cal Education (Resolution 317-A-18) [Internet], American Medical Association. 157. Cléret M, Le Beux P, Le Duff F. Les systèmes d’aide à la décision médicale. Les
2019. Report No.: CME Report 4-A-19. Available from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​a​m​a​​-​a​s​s​​n​.​ Cahiers du numérique. 2001;2(2):125–54.
o​​r​g​​/​s​y​s​t​e​m​/​f​i​l​e​s​/​c​m​e​-​r​e​p​o​r​t​-​4​-​a​1​9​-​a​n​n​o​t​a​t​e​d​.​p​d​f​​​​​​​
155. Reznick RK, Ken H, Tanya H, Mohsen Sheikh H. Task Force Report on Artificial
Intelligence and Emerging Digital Technologies [Internet]. Royal Collee of
Physicians and Surgeons of Canada; 2020 Feb [cited 2023 Sep 8]. Available Publisher’s note
from: ​h​t​t​​p​s​:​/​​/​w​w​​w​.​​r​o​y​​a​l​c​o​​l​l​e​​g​e​​.​c​a​​/​c​a​/​​e​n​/​​h​e​​a​l​t​h​-​p​o​l​i​c​y​/​i​n​i​t​i​a​t​i​v​e​s​-​d​r​i​v​e​n​-​b​y​-​r​ Springer Nature remains neutral with regard to jurisdictional claims in
e​s​e​a​r​c​h​/​a​i​-​t​a​s​k​-​f​o​r​c​e​.​h​t​m​l​​​​​​​ published maps and institutional affiliations.
