Computers in Biology and Medicine: Tahereh Saheb, Tayebeh Saheb, David O. Carpenter

This document summarizes a research article that mapped the research strands of ethics related to artificial intelligence in healthcare through bibliometric and content analysis. The analysis identified the most influential authors, countries, institutions, sources, and documents on the topic. It recognized four main ethical categories associated with 12 medical issues. Through content analysis, it identified seven additional ethical categories and 40 gaps in the current research literature. The analysis aims to further conversations on ethics of AI and related technologies in healthcare to help guide policymakers and technology developers.


Computers in Biology and Medicine 135 (2021) 104660

Contents lists available at ScienceDirect

Computers in Biology and Medicine


journal homepage: www.elsevier.com/locate/compbiomed

Mapping research strands of ethics of artificial intelligence in healthcare: A bibliometric and content analysis
Tahereh Saheb a,*, Tayebeh Saheb b, David O. Carpenter c
a Management Studies Center, Tarbiat Modares University, Tehran, Iran
b Assistant Professor, Faculty of Law, Tarbiat Modares University, Tehran, Iran
c Director, Institute for Health and the Environment, School of Public Health, State University of New York, University at Albany, USA

A R T I C L E  I N F O

Keywords:
Artificial intelligence
Healthcare
Robotics
Bibliometric analysis
Content analysis
Network visualization
Ethics

A B S T R A C T

The growth of artificial intelligence in promoting healthcare is rapidly progressing. Notwithstanding its promising nature, however, AI in healthcare embodies certain ethical challenges as well. This research aims to delineate the most influential elements of scientific research on AI ethics in healthcare by conducting bibliometric, social network analysis, and cluster-based content analysis of scientific articles. Not only did the bibliometric analysis identify the most influential authors, countries, institutions, sources, and documents, but it also recognized four ethical concerns associated with 12 medical issues. These ethical categories are composed of normative, meta-ethics, epistemological and medical practice. The content analysis complemented this list of ethical categories and distinguished seven more ethical categories: ethics of relationships, medico-legal concerns, ethics of robots, ethics of ambient intelligence, patients' rights, physicians' rights, and ethics of predictive analytics. This analysis likewise identified 40 general research gaps in the literature and plausible future research strands. This analysis furthers conversations on the ethics of AI and associated emerging technologies such as nanotech and biotech in healthcare, hence advancing convergence research on the ethics of AI in healthcare. Practically, this research will provide a map for policymakers and AI engineers and scientists on what dimensions of AI-based medical interventions require stricter policies and guidelines and robust ethical design and development.

1. Introduction

Digital transformation by utilizing digital technologies is occurring around the world [1,2]. Of digital technologies, artificial intelligence has carried numerous advantages to humanity, especially in situations where human capabilities cannot handle intricate issues that are beyond human intelligence [3]. With its superintelligence capabilities, artificial intelligence can perform tasks that would typically take a great deal of time and cost in a brief time frame and for a minimal price. In any case, in the face of all these benefits, artificial intelligence artifacts as dependent, semi-, or fully autonomous agents [4] could have unexpected consequences that, if left unchecked, will prompt catastrophes against humanity.

The health industry is quite possibly the most delicate industry, where the indiscreet and uncontrolled utilization of artificial intelligence, whether in its virtual form (algorithms and models) or its physical form such as surgical robots, can have potentially unintended consequences for patients, the health community and clinical outcomes [5]. In the last decade, much research has been published on the promising application of algorithmic decision-making in diagnosing and treating diseases, which has increased the incorporation of algorithms in clinical and medical decision-making. However, the significant point that previous research has marginalized is that algorithmic decision-making has epistemic and normative implications that ought to be tackled as early as the modelling life cycle begins [6]. Robotic surgery, as well, has sparked concerns regarding safety, trust [7], accountability, liability and culpability [8]. Hence, it is necessary to establish a regulatory and ethical system for artificial intelligence, in the wake of its rapidly developing utilization in healthcare, to forestall its potentially unintended consequences. Ethical AI is referred to as "the computational process of evaluating and choosing among alternatives in a manner that is consistent with societal, ethical and legal requirements" [9].

The field of ethics of AI in healthcare has become a significant

* Corresponding author. Management Studies Center, Tarbiat Modares University, Tehran, Iran.
E-mail addresses: [email protected] (T. Saheb), [email protected] (T. Saheb), [email protected] (D.O. Carpenter).

https://doi.org/10.1016/j.compbiomed.2021.104660
Received 26 May 2021; Received in revised form 15 July 2021; Accepted 15 July 2021
Available online 19 July 2021
0010-4825/© 2021 Elsevier Ltd. All rights reserved.

concern, progressively, both in the academic and the business communities throughout the most recent decade. Numerous organizations and hospitals are recruiting AI ethicists in their laboratories to comply with the ethical guidelines of AI. Google, for instance, fostered a set of ethical principles for the use of AI, available through https://ai.google/principles. The ethical concerns associated with AI-based medical interventions in various diseases have generated massive interest in the ethics of AI in healthcare across all cycles of disease management: from diagnosis [10] to treatment [11], from medical and clinical decision-making [12] to workflow optimization in healthcare clinics and hospitals, and from personalized medicine [13,14] to robotic-assisted surgeries [7,15].

To this end, there is a growing trend toward research on the ethics of artificial intelligence, and several qualitative reviews of the previous studies on the ethics of AI have been conducted, for example, by Refs. [16–18]. However, the past reviews are mainly qualitative. The motivation behind this research is to supplement previous reviews by providing a mixed-method study, a quantitative (bibliometric) review, and cluster-based content analysis to reveal the intellectual structure and research trajectory of knowledge production on the ethics of AI in healthcare. The second purpose of this study is to identify the most influential entities, including hot research topics and influential countries, institutions, sources and documents in this field. This analysis will direct us to discover the gaps and recommend future research agendas in the field of AI ethics in healthcare.

This research aims to answer the following inquiries to expand prior research on the ethics of AI in the healthcare industry:

1. What are the most influential countries, institutions, sources, and documents in the field of ethical AI in healthcare?
2. What are the hot research topics and themes of research in ethical AI in healthcare?
3. What diseases are mainly challenged with the ethics of AI and are discussed in the literature?
4. What are the main concerns of the literature regarding the ethics of AI in terms of normative and epistemic ethics?
5. Is the literature mainly concerned with soft AI such as algorithms or physical AI such as robots?
6. What is the intellectual structure of ethical AI research, and how has it evolved?
7. What are the literature gaps and future research agendas?

The design of the paper is as follows: we initially describe our research methodology. Then we move to the analysis part, in which we first explain our bibliometric and cluster-based content analysis. We then develop and discuss the study's conceptual framework before moving into the discussion section to explain our gap analysis, potential future research strands, and limitations of the study. We finish the paper with the conclusion section.

2. Research Methodology

In this study, we retrieved data on May 1st, 2021, from the SCOPUS database. In terms of date, we did not limit our analysis to any period. We searched for the keyword combination ("ethic" OR "ethical") AND ("artificial intelligence" OR "AI"). In order to narrow the focus of our study, we only included artificial intelligence and AI in this article. As a result, terms such as deep learning and machine learning are excluded: deep learning and machine learning ethics are primarily epistemological in nature and deal with virtual ethics, whereas artificial intelligence emphasizes both virtual and physical ethics. We searched in the "Title, Abstract, or Keywords" section. Furthermore, we limited our study subjects to Medicine, Nursing, Neuroscience, Health Professions, Pharmacology, Toxicology and Pharmaceutics, Dentistry, and Immunology and Microbiology. We included only English-language papers in the format of articles, conference papers, and book chapters. This stage yielded 585 documents. We excluded reviews, which resulted in 473 articles. We then manually screened the abstracts to check the papers' relevancy, eliminate duplicates, and remove remaining reviews. This screening brought about 383 papers (Fig. 1). Two researchers checked each other's work: the screening was conducted twice, once by the author and once by the co-author, with each paper coded 1 (include) or 0 (exclude). Inter-rater reliability was assessed with Cohen's kappa, which measures agreement between coders corrected for chance. The resulting kappa of 0.878 is close to 1, signifying a high degree of agreement among the authors regarding which articles were included and which were omitted.

This analysis performed a bibliometric and social network analysis of research on AI ethics in healthcare, together with a cluster-based content analysis. Scholars from various disciplines have incorporated bibliometric analysis to uncover a field's knowledge structure and identify its most significant elements [13,20,21]. In this analysis, we utilized the VOSviewer software [22] for our quantitative analysis, and we specifically used bibliographic coupling and co-occurrence analysis of keywords out of the available bibliometric methods. This method shows static conceptual relationships between entities and embraces a retrospective approach [23]. Regarding network visualization, the networks are formed of nodes and edges: nodes are entities such as countries, keywords, or institutions, while edges represent the relationships between nodes [22]. The distance between nodes shows the strength of their connection; a shorter distance means greater similarity and a stronger connection [24]. In this paper, we depicted the networks based on their Total Link Strength (TLS). The link and TLS attributes show the number of links of an item with other items and the total strength of those links [22]. Our normalization technique was Association Strength, the software's default choice; this technique clusters elements such as keywords together depending on the strength of their connections [22].

To complement our quantitative analysis, we additionally conducted a content analysis of the nine clusters identified in our co-occurrence analysis of keywords. We specifically focused on recent papers to cover the latest debates and discussions. This mixed-method analysis enabled us to develop our conceptual framework, conduct our gap analysis, and recommend possible future research strands.

3. Results

This section first presents the co-occurrence analysis of keywords coupled with the cluster-based content analysis. We then move into our bibliographic-coupling analysis of influential elements, including authors, countries, institutions, sources, and documents. We then develop and explain the study's conceptual framework extracted from the bibliometric and cluster-based content analysis.

3.1. Co-occurrence analysis of keywords and cluster-based content analysis

In this section of our analysis, we conducted a co-occurrence analysis of 3012 keywords. To show the keywords with the greatest TLS, we set the minimum occurrence of a keyword at 3, and 546 keywords met the threshold. The method of normalization was association strength. Concerning clustering, the resolution was set at three and the minimum cluster size at 30. In total, 9 clusters (Fig. 2) with 14,992 links and 26,289 total link strength were identified. Table 1 depicts the clusters in another way: we list a few of the highest-scoring keywords from each cluster; then, using the keywords, we categorized the topics and identified ethical concerns, diseases associated with each topic, and healthcare issues. We also determined whether each topic involves virtual or physical ethical issues. The bibliometric analysis resulted in the


Fig. 1. Data retrieval process.
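The inter-rater agreement reported for the screening step (kappa = 0.878) can be sketched in a few lines. This is a minimal illustration of Cohen's kappa with hypothetical coding vectors, not the study's actual data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: inter-rater agreement corrected for chance."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over categories of p_a(category) * p_b(category)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical include (1) / exclude (0) decisions for ten abstracts
author = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
coauthor = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(author, coauthor), 3))  # 0.783 for these vectors
```

Note that kappa is not the raw percentage of agreement: the nine-out-of-ten matches above (90% agreement) yield a kappa of only 0.783 once chance agreement is subtracted.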

Fig. 2. Co-occurrence analysis of keywords ended up in 9 clusters regarding the ethics of AI in healthcare.
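The keyword co-occurrence network behind Fig. 2 rests on two quantities named in the methodology: total link strength (TLS) and association-strength normalization. A minimal sketch over hypothetical keyword lists (not the study's data), using one common form of the association-strength measure (co-occurrences divided by the product of occurrence counts):

```python
from collections import Counter
from itertools import combinations

# Hypothetical author-keyword lists for four papers
papers = [
    ["ethics", "artificial intelligence", "healthcare"],
    ["ethics", "artificial intelligence", "robotics"],
    ["artificial intelligence", "healthcare", "privacy"],
    ["ethics", "privacy", "healthcare"],
]

occurrences = Counter()    # how often each keyword appears overall
cooccurrences = Counter()  # how often each unordered pair appears together
for keywords in papers:
    occurrences.update(keywords)
    cooccurrences.update(frozenset(pair) for pair in combinations(keywords, 2))

def total_link_strength(keyword):
    """TLS: summed co-occurrence link weights of one keyword with all others."""
    return sum(c for pair, c in cooccurrences.items() if keyword in pair)

def association_strength(ki, kj):
    """Co-occurrence count normalized by the product of occurrence counts."""
    return cooccurrences[frozenset((ki, kj))] / (occurrences[ki] * occurrences[kj])

print(total_link_strength("ethics"))                              # 6 here
print(association_strength("ethics", "artificial intelligence"))  # 2/9 here
```

Thresholding then mirrors the paper's settings: keep only keywords whose occurrence count meets a minimum (3 in the study) before clustering the normalized network.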


Table 1
Topics of research on the ethics of AI in healthcare, ethical concerns, and the diseases and aspects of healthcare that are addressed.

Cluster 1
- Keywords (selection): adult, ageing, Alaska Native, American Indian, anesthesiology, artificial neural network, automation, clinical competence, data science, disease association, education, electronic medical record, female, health information, information dissemination, information processing, learning systems, legislation and jurisprudence, policymaking, mental disorders, natural language processing, patient coding, patient safety, sovereignty, standard, suicide, trust, validation process, violence
- Topics: (1) companion robots, AI medical interventions, and mental care in vulnerable groups and the elderly; (2) clinical competence, medical education, and ethics of clinical data and clinical practice
- Ethical concerns: epistemological factors (ethics of data and algorithms) such as biographical, sensitive and personal features, information processing and dissemination, racial and gender factors, patient coding, validation process, and trust; normative factors such as legal factors, patient safety ethics, sovereignty, and violence; ethics of medical practice such as competency and education
- Diseases concerned: (1) mental disorders; (2) anesthesiology
- Virtual or physical: (1) virtual AI; (2) physical robots
- Addressed aspects of healthcare: (1) elderly care; (2) detection of disease by analyzing social media data; (3) clinical competence and education in data and algorithm ethics

Cluster 2
- Keywords (selection): aged 80 and over, attitude of health personnel, brain ischemia, cancer therapy, clinical decision support systems, clinical practice, computer-assisted diagnosis, computer-assisted therapy, computer-assisted tomography, data extraction, deep neural network, diabetic retinopathy, diagnosis accuracy, diagnostic imaging, diagnostic test accuracy, evidence-based practice, health care policy, health insurance, image enhancement, information storage, learning algorithm, magnetic resonance imaging, medico-legal aspect, physiologic monitoring, neuroimaging, nuclear magnetic resonance, ophthalmology, pathology, patient advocacy, patient information, predictive value, receiver operating characteristics, reproducibility of results, screening test, sensitivity and specificity, stroke, telecommunication
- Topic: (3) ethics of deep image analysis algorithms
- Ethical concerns: epistemological factors such as image analysis in healthcare using deep neural networks, diagnosis accuracy, patient information, data extraction, information storage, and reproducibility of results; ethics of AI in medical practice such as computer-assisted diagnosis, sensitivity and specificity, clinical decision support systems, predictive value, and evidence-based practice; normative ethics such as medico-legal aspects, patient advocacy, and healthcare policy
- Diseases concerned: brain ischemia, cancer therapy, diabetic retinopathy, MRI, physiologic monitoring, neuroimaging, NMR, ophthalmology, pathology
- Virtual or physical: virtual AI
- Addressed aspects of healthcare: (1) imaging; (2) decision-making; (3) image-based diagnosis

Cluster 3
- Keywords (selection): accountability, adoption, agency, artificial moral agents, attitude, big data, bioethics, biotechnology, confidentiality, epistemology, ethicist, explainability, forecasting, genetic engineering, human experiment, humanity, information science, moral agency, moral obligations, morality, morals, nanotechnology, neuroethics, neurosciences, occupation, perception, personhood, primary health care, privacy, racism, radiologist, robot ethics, safety, social behaviour, social responsibility, statistical bias, surgeon, transparency, vision
- Topic: (4) meta-ethics of AI-based biomedical interventions (e.g. biotechnology, nanotechnology and genetic engineering) and enhancing humans
- Ethical concerns: epistemological aspects related to models and algorithms such as confidentiality, explainability, accountability, transparency, and statistical bias; normative aspects related to social consequences such as racism, safety, social responsibility, privacy, humanity, and social behaviour; meta-ethics such as personhood, agency, and morality
- Areas concerned: biotechnology, big data, genetic engineering, human experiments, nanotechnology, neuroscience, primary healthcare, radiologists, surgeons
- Virtual or physical: physical AI
- Addressed aspects of healthcare: human enhancement

Cluster 4
- Keywords (selection): adverse event, breast cancer, breast neoplasm, breast tumor, cancer diagnosis, cancer screening, clinical evaluation, clinical protocol, cloud computing, complication, consumer, coronavirus, cost-benefit analysis, cost-effectiveness analysis, dementia, depression, early detection of cancer, early diagnosis, healthcare cost, health promotion, image interpretation, mammography, mass screening, mobile application, pandemics, personal experience, personalized medicine, precision medicine, smartphone, social media, telemedicine, thorax radiography
- Topic: (5) breast cancer and personalized medicine
- Ethical concerns: ethics of medical practice such as personalized medicine and precision medicine; normative ethics such as financial concerns, depression, social media, and health promotion
- Diseases concerned: breast neoplasm, cancer diagnosis, cancer screening, clinical evaluation, early detection of cancer, health promotion, mammography, pandemics
- Virtual or physical: virtual AI
- Addressed aspects of healthcare: detection, diagnosis and treatment

Cluster 5
- Keywords (selection): algorithm, altruism, awareness, health equity, justice, male, middle-aged, nonmaleficence, patient acceptance of healthcare, patient attitude, patient participation, psychology, risk-benefit analysis, self-evaluation, social justice, standardization, uncertainty
- Topics: (6) altruistic and nonmaleficent AI algorithms; (7) patient-related psychological-ethical factors
- Ethical concerns: normative aspects of algorithms such as health equity, gender and age, justice, and nonmaleficence
- Diseases concerned: psychology
- Virtual or physical: virtual AI
- Addressed aspects of healthcare: patient perception

Cluster 6
- Keywords (selection): access to information, benchmarking, biomedical technology, computer interface database, human-computer interaction, informatics, legal liability, medical error, medical oncology, neoplasm, nursing care, patient preference, radiation oncology, social psychology, social values, stakeholder engagement
- Topics: (8) bioinformatics and ethics of data; (9) computer-human interactions in medical conversational user interfaces
- Ethical concerns: normative aspects such as legal liability and social values; epistemological factors such as medical error
- Areas concerned: biomedical technology, computer interface databases, informatics, medical oncology, neoplasms, nursing care
- Virtual or physical: virtual AI and physical AI
- Addressed aspects of healthcare: databases and human-computer interactions

Cluster 7
- Keywords (selection): brain, computer security, consciousness, data analysis, data collection, data mining, data privacy, disease surveillance, electroencephalography, emotion, empathy, epidemiology, health care quality, intensive care unit, non-human, operating room, phenotype, prevention and control
- Topic: (10) brain studies, ICU, ambient intelligence, and ethics of data and algorithms
- Ethical concerns: epistemological factors related to data such as data privacy, data collection, and computer security; meta-ethics aspects such as consciousness, empathy, emotion, and the non-human; normative aspects such as disease surveillance and prevention and control
- Areas concerned: electroencephalography, epidemiology, intensive care units, operating rooms
- Virtual or physical: virtual AI and physical AI
- Addressed aspects of healthcare: healthcare quality in ambient spaces such as the ICU

Cluster 8
- Keywords (selection): care behaviour, caregiver, chronic disease, cybernetics, devices, donor, freedom, government regulation, health care disparity, human dignity, humanism, regulation, social control
- Topic: (11) chronic disease management, cybernetics, equity and humanity concerns
- Ethical concerns: meta-ethics such as freedom, human dignity, and humanism; normative aspects such as government regulation, healthcare disparity, and social control
- Areas concerned: care behaviour, chronic disease
- Virtual or physical: virtual AI and physical AI
- Addressed aspects of healthcare: chronic disease management

Cluster 9
- Keywords (selection): cerebrovascular accident, diffusion of innovation, disease severity, epilepsy, genomics, global health, healthcare access, health service, human rights, multiple sclerosis, neurologic disease, neurology, public health, truth disclosure
- Topic: (12) global health and human rights
- Ethical concerns: normative aspects such as healthcare access, human rights, and truth disclosure
- Areas concerned: cerebrovascular accident, disease severity, epilepsy, genomics, global health, multiple sclerosis, neurologic disease, neurology, public health
- Virtual or physical: virtual AI
- Addressed aspects of healthcare: global healthcare

identification of 12 hot medical topics related to the ethics of artificial intelligence in healthcare (Fig. 3). The bibliometric analysis also revealed how the field of AI ethics in healthcare has addressed a variety of health issues from a variety of ethical perspectives (Fig. 4).

Cluster 1:

1.1) Companion robots, AI medical interventions, and mental care in vulnerable groups and elderly (ethics of practice)

There is a great deal of sensitivity regarding the interaction of older people with robots and other smart solutions, specifically regarding companion social robots and their role in therapy and assistance, especially for elderly people who suffer from mental diseases such as dementia [25]. AI medical interventions in psychiatry, psychology, and psychotherapy incite concerns such as data ethics, lack of sufficient training of health professionals, absence of regulatory frameworks, and transparency of algorithms [26]. Intelligent assistive technologies, which are mainly utilized by vulnerable individuals, the elderly, and people with cognitive disabilities, spark concerns regarding patient autonomy and informed consent, quality of data management, and distributive justice [27]. Further concerns can be identified with moral problems such as: who is in control? Who is responsible? Is the information correct? [28]. A study by Rubeis (2020) sums up the dangers of AI in elderly care as four Ds: the depersonalization of care through algorithm-based standardization, the discrimination of minority groups through generalization, the dehumanization of the care relationship through automatization, and the disciplining of users through monitoring and surveillance.

The other significant concern regarding mental diseases is the utilization of AI depression detectors, which identify social media users who are at risk of mental disease. This method sparks ethical concerns in the field, such as patient autonomy [30]. Such new data sources in public health surveillance, such as social media and cellphone signal data, have additionally been criticized in pandemic management, for example during Covid-19 [31]. During pandemics, AI-based mobile apps were


Fig. 3. The 12 hottest medical topics associated with the ethics of AI in healthcare, as identified by the bibliometric analysis.

used to model and predict diseases, digitally track quarantined and isolated individuals, and optimize contact tracing. On the other hand, such AI-based medical interventions have sparked a debate about equity, autonomy, privacy, the risk of errors, and accountability [31,32].

1.2) Clinical competence, medical education, and ethics of clinical data and clinical practice

Clinicians' competence in artificial intelligence is essential since they are in charge of clinical decisions. Furthermore, AI has an impact on health record systems since it automates medical coding and data management and governance. These AI-based procedures jeopardize patient privacy and confidentiality, as well as the education and competencies of the involved workforce [33]. According to research [34], clinicians should be aware of several ethical dilemmas posed by AI; they should: 1) have a basic understanding of the technology underpinning healthcare AI, 2) build patient confidence in the patient-clinician relationship, 3) analyze training data to better understand whether it was acquired ethically, 4) assess whether it represents all patients regardless of their gender, race, or other attributes, and 5) understand the degree to which the training data is associated with specific patients, in order to minimize prejudice. Images are one of the most contentious data formats in clinical settings, and sharing clinical imaging data has sparked ethical debates [35]. These issues cover the entire life cycle of information, from its collection to its processing, storage, dissemination, and use in clinical decision making [36]. Scholars highlight the ethical concerns associated with medical image analysis procedures, including data collection, model establishment, and model validation [37]. According to specific findings, all individuals in a healthcare environment who have access to clinical images are data stewards who must safeguard patient privacy and use these data to assist future patients [38]. To address these ethical concerns, some studies stress the inclusion of digital competencies in medical education to increase the familiarity of medical students with the ethical challenges of digital technologies such as AI [39]. As a result, clinical intelligence could serve as the "human guarantee" [40] of AI in medicine to deliver high-quality decisions, especially in diagnosing and treating mental disorders [41].

Researchers have also explored the bias and controversies of AI in anesthesiology as computing and data analysis capabilities have increased [42]. The application of AI in anesthesiology is mainly predictive analytics [43]: for illustration, the use of AI on intraoperative features to predict in-hospital mortality in the ICU setting, early detection and treatment of sepsis, risk prediction and other patient-specific outcomes, and automation of anesthesia maintenance. However, these applications pose ethical concerns, such as the lack of access to high-quality treatment, the right to generalize data from a training set to a broader population, and bias in AI, which can be mitigated by excluding sensitive attributes like ethnicity, gender, and sexual orientation [42,44].

Cluster 2: Ethics of Deep Image Analysis Algorithms

One of the significant ethical issues in healthcare, as addressed in Cluster 1, is related to epistemological aspects of image analysis. This concern is heightened in Cluster 2, where fatal diseases such as brain ischemia or cancer rely heavily on early diagnosis and precision of analysis to reduce the number of fatalities [45–47]. For example, AI is used in breast cancer detection, diagnosis, tracking, classification, treatment response prediction, and risk forecasting [48]. Due to the potential inherent prejudices and ethical considerations of AI in image processing, researchers have identified a range of ethical concerns and solutions in the radiology field to mitigate the risks and optimize healthcare quality [49]. Several studies have concentrated on explainable AI for various medical imaging activities [50,51]. Other researchers ponder to what degree radiologists should trust AI-generated knowledge, and, if they do trust the black box of algorithms, who is legally responsible if a patient is harmed. The other significant ethical issue of image processing in healthcare is the possibility of bias in data and algorithms and the exclusion of minorities and marginalized people from data. Several pieces of evidence have been presented


that point to certain facial recognition algorithms incorrectly detecting individuals with dark skin [52]. In medical contexts, a biased algorithm like this can result in discriminatory decisions [53]. Diabetic retinopathy and ophthalmology are two other disorders that benefit from deep image processing. Automated DR detection algorithms pose multiple hurdles, including patient acceptability and confidentiality; medico-legal disputes; unlocking the "black box" nature of algorithms [54]; and patients' rights in terms of accessibility of information, continuity of care, and non-discriminatory consequences [55]. Clinical neuroscience is also being debated as researchers consider the risks of discrimination, neuro-privacy, accountability, sovereignty, and agency in diagnosis, modelling, and prediction [56]. The studies often question whether training on limited data sets, or on data that are not representative of the actual patient population, will yield reliable results in rare eye diseases, or discuss performance measures such as sensitivity and specificity on the test data sets [57]. This problem is linked to another issue, the reproducibility of findings, which addresses questions such as whether the data is accessible to other researchers; whether the prediction modelling pipeline, such as code, is available; whether the flow of data and outcomes is transparent; and whether the results are generalizable to situations beyond where the system was developed [58].

Cluster 3: Meta-ethics of Artificial Innovations (e.g. biotechnology, nanotechnology and genetic engineering) Enhancing Humans

A recent contentious trend involves the use of biotechnology and nanotechnology to turn humans into more moral agents. Human enhancement [59,60] or posthumanism [61] focuses on enhancing human abilities and capacities beyond their existing biological boundaries, such as the advancement of biohybrid and bioinspired nanorobots for targeted drug delivery (Singh et al., 2019) and other nano-biomedical innovations [62]. Within the metaphysical questions about the nature of humanity [63], some scholars attempt to shift attention from biotechnology to AI to enhance human morality [64], and distinguish this trend from robot ethics, which implies converting robots into moral agents. AI is used to enhance human morality by offering empirical support and intellectual consistency, comprehending argumentative reasoning, checking the ethical legitimacy of people's choices, increasing awareness of personal shortcomings, and counselling humans about carrying out their decisions [64]. On the other hand, some scholars contend that the probability of human extinction is increasing due to AI [65]. Another controversial aspect within this topic is the use of humans as subjects in neuroscience research and the ethical issues that go along with it, such as privacy and consent, agency and identity, and augmentation and prejudice [66]. Some scholars argue that efforts to build "moral replicas" or "moral enhancements" would exacerbate social inequality, especially for those who cannot afford moral replicas [67]. By moral replica, they mean the disproportionate increase of the power of individuals by enhancing their ability to decide on a larger number of moral issues per period [67].

Nanotechnology and genetic engineering have also sparked ethical debates [68,69]; these studies pose concerns about human characteristics, prolonged lifespan, and controlled cellular metabolism, all of which will be challenged by innovative emerging technologies [70]. Other researchers question the extent to which these cutting-edge technologies that combine humans and machines affect human conscience, dignity, liberty, and fundamental freedom [71].

The development of bio-inspired cognitive agents raises concerns such as what mechanisms an artificial bio or nano agent must have to make moral decisions. In one study, the researchers [72] consider several difficulties that scientists can encounter when designing ethical cognitive architectures. These difficulties range from the development of moral emotions to the development of moral agency and autonomy. According to other researchers, accountability, liability, privacy, and security are major ethical dimensions of neuro innovations such as neuroprosthetics [73]. Based on bioethics, artificial intelligence and transhumanism in therapeutic medicine, or genetic modification aimed at eradicating diseases from the human genome, are dangerous sources due to the lack of boundaries [74]. The other primary concern in this domain is the liability [75], trustworthiness, and responsibility [76] of these hybrid artificial systems.

Cluster 4: Breast Cancer and Personalized Medicine

1.4) Ethics of medical practice

One of the most critical diseases in which artificial intelligence plays a significant role is breast cancer, in which AI can be used for detection, diagnosis, segmentation, grading, prognosis and management, image reading and screening, and prediction of treatment response and outcome [47,77]. AI also contributes significantly to the advancement of personalized and precise medicine (Saheb and Izadi, 2019; Saheb and Saheb, 2019). Studies argue that patients benefit from more personalized care and treatment plans due to AI innovations [78]. Specific research highlights the automation of workflows augmented by AI in image segmentation, radiology report generation, semantic error detection in reports, data mining in research, and enhanced business intelligence systems for real-time alerts [79]. These studies advocate for the use of AI algorithms in pattern recognition [80], which can be used not only in breast cancer but also in Covid-19 and CT-scan analysis [81].

Some studies challenge the use of AI in oncological hybrid medical imaging practices [77]. In terms of reading breast images, researchers compare the capabilities of algorithms vs breast radiologists, questioning the superiority of algorithms [82] by posing an important question: can algorithms substitute or augment humans in detecting cancer cells [79,83]? Screening mammography, which helps detect cancer at an early stage, is heavily influenced by AI algorithms for more effective breast cancer treatment. Many scientists argue that interpreting mammography images is complicated, and some scholars challenge the accuracy of experts in cancer detection [84]. Prior research also asks who, or what, is responsible for the benefits and drawbacks of using AI in radiology [85].

2.4) Normative ethics of breast cancer

Some of the most significant ethical issues surrounding breast cancer are normative in nature, such as the financial burden of the disease. Studies show that racial and ethnic minority patients [86] and older patients [87] are the most vulnerable to the financial costs of breast cancer. According to one study, black women with breast cancer have experienced significantly worse financial consequences [88]. Breast cancer is linked to mental pressures such as depression and anxiety, in addition to financial burden, particularly among young and non-white patients with lower socioeconomic status [89].

Cluster 5:

1.5) Altruistic and nonmaleficence AI

This cluster addresses the idea of altruistic AI to improve the quality of life and eventually replace competition with cooperation using artificial general intelligence [90]. Moreover, prior studies request the embodiment of ethical values, such as nonmaleficence, in the design and deployment of AI. According to these studies, AI systems are similar to conventional sociotechnical systems in that they are composed of not only technological artifacts, human agents, and institutions, but also artificial agents and certain technical norms that govern interactions between artificial agents and other system components [91]. Values such as altruism and nonmaleficence are the "what" elements of AI


ethics, rather than the "how" and practices of ethical AI, and measures need to be taken to bridge the gap between values and practices [92]. In one review, the authors argue that both medical values such as autonomy, beneficence, and nonmaleficence applied to AI, and AI values such as loyalty and diligence applied to medicine, should be included in the future of the legal system of psychiatric advance directives (PADs), which are designed to provide treatment for patients with psychiatric disorders [93]. In another study, the authors create a framework as an applied ethics tool for use during the design, development, implementation, and assessment of drones in public healthcare by integrating the four bioethics principles of beneficence, nonmaleficence, autonomy, and justice with an AI ethic of explicability. In further research, the authors applied the four bioethics concepts stated earlier to schizophrenia [94].

2.5) Patient-related psychological-ethical factors

The second theme in this cluster is related to psychology and patient-related psychological-ethical factors such as patient resistance and skepticism toward AI-based medical interventions. The technology adoption literature is heavily focused on patient perceptions and adoption of emerging technologies [95,96], while largely ignoring skepticism and resistance factors. This research position is critical because the literature on AI ethics in healthcare reveals widespread negativity toward AI among people and patients. Regarding patients' adoption of AI in healthcare, one study [97] conducted a qualitative investigation of patients' initial perceptions of AI in radiology, their priorities for how radiology can be enhanced using AI, and their perceptions of how AI tools can be incorporated in a way that is compatible with patients' values [97]. Prior research has mainly identified that misinformation about AI may lead to a lack of trust among some patients [97]. One study found that the general public has some negative views of using health data in AI [98]. Another study confirms that most respondents have personally experienced moral foundation violations in their interaction with an AI system, such as disclosing their data or exposure to undesirable content, indicating the significance of people's daily interactions with personal technology [99]. Another survey, of the UK population's perceptions of AI, also revealed that AI triggers severe anxiety [100].

Cluster 6:

1.6) Bioinformatics and Ethics of Data

Previous research has linked bioinformatics to bioethical and legal concerns [71]. Bioinformatics is the application of informatics to the study of genetic data, and it draws on current computer science principles such as databases, internetworking, parallel computing, and image analysis [101]. Bioinformatics emphasizes the significance of data collection and management and real-time predictive analysis of big data and its related technologies [20,102,103]. It is essential in bioinformatics that researchers have access to clinical data to understand the genetic basis of phenotypes; thus, it can be a turning point in precision medicine because it stores clinical and genetic data in electronic health records [104]. Moreover, the legal liability of creating and sustaining life through biomedical technology and artificial or semi-artificial life, such as robots with biological brains, has been addressed [71,75,105].

2.6) Computer-human interactions in conversational user interface applications

The second central theme in Cluster 6 is computer-human interactions, especially in bioinformatics and Conversational User Interfaces such as chatbots and speech-based applications. Human-Computer Interaction (HCI) discusses how people behave and communicate with technology [106]. The authors of one study discuss the interaction design of developing a chatbot, emphasizing the diversity and creativity of designing coherent conversational interfaces [107]. Another study identified theoretical and methodological challenges of intelligent conversational user interface (CUI) interactions such as text-based chatbots or speech-based systems. In this study, the authors address the impact of interface design on user behaviours and perceptions, personalization-related concerns, and the interplay with engineering aspects such as Natural Language Processing [108,109]. In CUI applications, the design of visual layout and interactions has shifted to the design of conversation [110], and this shift has sparked challenges in terms of empathy and accessibility [111,112].

Cluster 7: Brain Studies, Ambient Intelligence, and Ethics of Data and Algorithms

The first topic addressed in Cluster 7 is the ethical issues of researching the human brain using AI-based interventions. The main question is about the epistemological problems of data analysis, notably MRI analysis. The first concern is granting patients the right to keep their neurological data private. Sharing this data with other parties should be done with the informed agreement of individuals suffering from neurological illnesses [66]. Another critical worry is the disruption of patients' sense of identity and agency due to neurotechnologies, which should be taken into account when investigating the human brain [66]. Another key challenge in neuroscience and the study of human brains is the potential bias of interpretive algorithms [66,113,114]. Certain experts question whether AI approaches adhere to sufficient scientific validity requirements [56], given that some features of mental diseases and the brain are not quantifiable [41]. Another area of interest in this domain is modelling empathy in artificial agents to elicit empathy capacity, which comprises emotional communication ability, emotion regulation, and cognitive mechanisms [112]. Some advocates of emotional robots assert that the robotics of emotion has prompted researchers to consider the successful coordination of robots and humans [115].

The other issue addressed in this cluster is improving healthcare quality in intensive care units (ICUs) through artificial intelligence. Ambient intelligence, defined as the constant awareness of activities in specified physical locations such as ICUs, surgical rooms, or elderly homes to support health care staff in providing high-quality treatment, poses epistemological issues such as clinical validation, data privacy, and model transparency [116,117]. While the primary ethical focus in this area is on epistemological obstacles, several studies have addressed non-epistemological constraints such as implementation, user acceptance, regulatory difficulties [43], consent and liability [118], or the use of cognitive technologies for mass surveillance, prompting researchers to ask whether cognitive technologies should be used for this purpose [119].

Cluster 8: Chronic Disease Management, Cybernetics, Equity and Humanity Concerns

Cluster 8 addresses the ethical concerns associated with AI use in chronic disease care, primarily due to non-epistemological issues. Artificial intelligence-based technologies such as interactive voice response (IVR) have been utilized to treat chronic diseases (Piette and Beard, 2012). While some studies depict both a promising picture of adopting AI-based interventions and their ethical challenges [121,122], particular research concentrates exclusively on the effective utilization of AI-based solutions for chronic diseases [123]. Some suggest that these AI-based measures will enable chronic disease care to be sustained in the long run [14]. One of the primary ethical concerns with AI-based treatments in healthcare is disparity [124], given the historical exclusion of minorities and disadvantaged groups from medical datasets, resulting in the advancement of biased algorithms and inaccurate


forecasts [125]. For instance, the authors of one study demonstrate how machine bias toward gender and insurance type affects ICU mortality and insurance policies [126]. According to some experts, artificial intelligence can be used to address gaps in health outcomes such as maternal morbidity and infant mortality [127].

Several experts discuss emerging AI-based cybernetics, such as brain-inspired cognitive systems, and how these systems offer platforms for humanities and cognitive cybernetics, enabling hybrid societies to be shared by humans and intelligent computers [128]. Throughout these dialogues, multiple researchers discuss the ethical issues surrounding the use of AI to control pandemics, the lack of accountability for AI, and the lack of legal personality and legal capacity for AI algorithms. This implies that only human beings possess the capacity to make moral and ethical judgments [129].

Cluster 9: Global health, neurological diseases, and human rights

The final cluster examines the promise and risks of artificial intelligence for global health, particularly for the world's underserved and disadvantaged communities, and the importance of improving equal access to public health services to promote equity and alleviate human rights concerns [130]. However, as a previous study indicates, the majority of AI applications in healthcare are geared toward high-income countries, and additional research is needed to understand how AI affects medical practice in low- and middle-income nations as well [131]. While some experts suggest that, despite ethical concerns, AI has the potential to significantly improve health outcomes in developing countries [132], several more authors direct attention to the human normative and human rights implications of AI in the context of global health. According to some prior research, human rights and social values are critical components of AI ethics and law for addressing the normative consequences posed by AI-based interventions [133]. This perspective claims that adopting an AI code of ethics and regulation will significantly reduce misuse and minimize the likelihood of adverse consequences [134]. According to scholars, an ethical, transparent, and responsible AI will deliver contextually relevant knowledge and insights to aid in achieving the United Nations Sustainable Development Goals [135]. Several significant health issues are tackled within this category, including the genomics project and neurological and mental illnesses such as epilepsy and multiple sclerosis. Scholars express concern about discrepancies in the provision of care and respect for the human rights of people with mental diseases in developed and developing nations [136] and encourage developing an ethical framework for a Good AI Society [137].

Fig. 4. The bibliometric analysis shows that the field of ethical AI in healthcare has addressed various health issues from four major ethical categories.


Fig. 5. We combined the results of the bibliometric and cluster-based content analysis, which resulted in 12 major ethical concerns regarding the application of AI in healthcare.

3.2. Conceptual framework of the study

We constructed a conceptual framework based on the cluster-based content analysis, in which we classified the ethics of AI in healthcare into 12 general categories comprised of several ethical concerns (Figs. 5 and 6).

As seen in Fig. 5, the ethics of AI in healthcare is organized around twelve broad themes: data ethics, algorithm and model ethics, ambient intelligence ethics, physician and patient rights, ethics of medical practice, predictive analytics ethics, robot ethics, medico-legal concerns, normative ethics, relationship ethics, and metaphysical ethics. Each category is subdivided into several sub-ethical notions, as seen in Fig. 6.

The category of data ethics addresses concerns such as data types (clinical radiological data, raw data, social media data), data ownership and stewardship, data sharing and usage, data privacy, data bias, data labelling, dataset skewness due to the exclusion of minorities and vulnerable populations or small datasets, and sensitivity and specificity. The category of algorithm and trained model ethics addresses issues such as machine decision-making, algorithm selection processes, algorithm training, model evaluation and testing, issues of transparency, interpretability, explainability, replicability, algorithm bias, error risk, explainability in image analysis, accessibility of the prediction modelling pipeline, and transparency of data flow. These two elements can be combined to form a single category of epistemological ethics. The category of predictive analytics ethics, which is heavily affected by data and algorithm ethics, includes ethical concerns like discriminatory decisions and contextually relevant insight, which significantly impact patient treatment outcomes. The fourth category is concerned with medical practice ethics, which includes concerns such as the education of medical professionals and medical students and leveraging AI-related competencies among them. Other ethical concerns in this area include bias introduced by medical practice automation, job disruption caused by robot-assisted surgery, and workforce liability. Traceability, resource inequality, and sustainability are also included in this category of ethical concerns. Furthermore, conflicts of interest are examined as one of the additional issues in this area.

AI has changed medical practices and the relationships between patients, physicians, and other healthcare stakeholders. This category tackles conversational user interfaces and human-computer interaction and their beneficence, nonmaleficence, and empathy. Likewise, the emergence of personalized medicine, and the relationships and accessibility resulting from emerging technologies such as smartphones and wearables, are included in this category of ethical problems.

Patients' rights and physician rights are two additional ethical problems in healthcare that are inextricably related. While the former is a hotly contested subject in the literature, the latter is a mostly overlooked issue. Concerning patient rights, topics such as patient access to physiological data, preference, continuity of care as a result of AI's convergence with emerging technologies such as ambient assistive technologies or mobile applications, non-discriminatory consequences of biased decisions, autonomy in selecting the type of treatment (for example, a robot or a physician), and confidentiality must be addressed. However, very little research on physician rights, such as access to clinical data, competence, and training, has been undertaken.

As previously stated, AI ethics can be classified into physical and virtual ethics. While virtual ethics focuses on epistemological issues, physical AI is concerned with robotics and machines. The primary focus of this category is on the design of robots and their incorporation of ethical values. A further concern is the safety considerations associated with bringing autonomous robots into healthcare, as their moral agency, legal identity, accountability, and responsibility remain unresolved. Additionally, another line of inquiry focuses primarily on the blurred line between humans and machines and the incorporation of empathy and emotions into robot design.

The third major field of AI ethics is metaphysical ethics, which addresses topics such as moral agency, self-esteem, depersonalization of care, dehumanization of care relationships, humanity and human morality, human enhancement morality, human conscience, and dignity. Medico-legal concerns include disputes over the legal elements of medical AI, such as the development of regulatory frameworks, the responsibility of AI, the control and surveillance of AI, their accountability and legal liability, and the conditions in which human and fundamental rights are violated. As normative ethics demonstrates, AI can result in discrimination through the generalization of AI conclusions, raising concerns about justice, fairness, and inequality. Additionally, normative ethics promotes altruistic, nonmaleficent, and beneficent AI. The final category of AI ethics concerns the ethics of ambient assistive technologies, which are primarily designed to benefit older populations and people suffering from mental and psychiatric diseases. Patients' autonomy, informed consent, and surveillance by these technologies all play a significant role in this category. Additionally, these technologies will impact the administration of justice and the quality of data management.


Fig. 6. The 12 major ethical categories of AI in healthcare and the sub-themes of each category.

3.3. Bibliographic coupling analysis of influential elements

In this section of the paper, we explore our findings to identify the most significant elements within the network of research on AI ethics in healthcare.

We did not specify a threshold for selecting the most influential countries, and hence all 65 countries were included in the visualization network. As illustrated in Fig. 7, countries such as the United States, Canada, Switzerland, Spain, and the United Kingdom were among the pioneers in producing scholarly documents on the ethics of artificial intelligence in healthcare. Since 2020, countries such as Greece, Cyprus, and Argentina have collaborated on the publication of articles on this subject.

Fig. 7. The overlay visualization of the most influential countries (with the greatest total link strength).

At the next step, we visualized the density of institutions, sources, and documents. The density of each item is represented by the visualizations. Each item has a unique color that is determined by the density of items nearby. Items range in color from red to blue. Neighborhoods with a color close to red in the network contain more items with higher relationship weights. This indicates that the items in that neighborhood are more interconnected and have a greater impact on the network. When the number of items in a neighborhood is small and the weight of their network links is low, their color is close to blue, indicating that they are the network's least influential items [22]. We represented the items based on their TLS (total link strength) score.

In regard to the most influential organizations, we included all 994 organizations (Fig. 8). As illustrated in Fig. 8, two dense clusters were discovered. The first dense cluster comprises ARUP Laboratories and the Bridgepoint Collaboratory for Research and Innovation. In contrast, the second dense cluster includes the Alan Turing Institute and the University of Washington Alcohol and Drug Abuse Institute. This means that these institutions are the most influential organizations in the field, with the highest TLS scores.

Fig. 8. Density visualization of the most influential institutions (with the greatest total link strength) working on the ethics of AI in healthcare.

As illustrated in Fig. 9, six red and orange dense clusters of the most important sources were discovered. The following items are among the most influential sources, as their density color is close to red. The journal Science and Engineering Ethics, with a TLS of 496, is part of one of these clusters. The Journal of Medical Ethics is ranked second with a TLS of 344. The American Journal of Bioethics is placed third with a TLS of 190. In the fourth spot is AJOB Neuroscience, which has a TLS of 132 (not shown in the figure; however, it is located in the cluster to the left of BMC Medical Ethics). And in fifth place is the journal BMC Medical Ethics, which has a TLS of 120.

Fig. 9. Density visualization of the most influential sources (with the greatest total link strength) working on the ethics of AI in healthcare.

Fig. 10. Density visualization of the most influential documents (with the greatest total link strength) working on the ethics of AI in healthcare.

In terms of the most influential documents, we established a minimum citation count of ten per document; 72 documents out of 382 met the threshold. As illustrated in Fig. 10, eight dense clusters were discovered. On the left is a collection of papers primarily from medical journals. The following is a list of some of these papers:

• Vollmer, Sebastian, Bilal A. Mateen, Gergo Bohner, Franz J. Király, Rayid Ghani, Pall Jonsson, Sarah Cumbers et al. "Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness." BMJ 368 (2020).
• Char, Danton S., Michael D. Abràmoff, and Chris Feudtner. "Identifying ethical considerations for machine learning healthcare applications." The American Journal of Bioethics 20, no. 11 (2020): 7–17.


Table 2 Table 2 (continued )


Possible future research strands. The medical issue with the greatest Possible Future Research Strands
The medical issue with the greatest Possible Future Research Strands concentrations
concentrations
and patients) to oppose or distrust AI
Physical AI and the advancement of 1. The rights of patients, adoption (resistance and skepticism
companion robots among elderly and 2. Comparing the impact of companion theories from a psychological
patients with mental diseases robots in varied care contexts (e.g. perspective)
Hospital or at home or ambient Bioinformatics and Ethics of Data 25 Investigating the ethics of specialized
assistive living settings), deep algorithms from a different lens
3. Increasing patient awareness and due to unique properties of biological
education. data
4. More psychological studies into the Computer-Human Interactions in 26 Attempts at making conversational AI
consequences of a lack of empathy and Conversational User Interfaces more human,
personal contact in robot-human Applications 27 Using voice or AI-based language
interactions translation tools to augment human
5. Disabled patients and companion intelligence
robots 28 Personalization of human experience
Education and enhancing the 6. Comparing various disciplines’ with relation to their surroundings
competencies of medical students perceptions of ethical principles and environment
and staffs 7. Developing Convergence Training and 29 Data privacy and confidentiality due
Research to the production of the massive
Ethics of deep image analysis 8. The rights, autonomy, and authority of amount of personal data by
physicians and radiologists conversational applications,
9. Disruption of work and expertise in the 30 Responsible conversational AI or
future responsible developers
10. Image manipulation and concerns Brain Studies, ICU, Ambient 31 The ethical implications of ambient
Table 2 (continued)

[Deep analysis of medical images, continued from the previous page]
10. … casting doubt about trustworthiness and authenticity (entry begins on the previous page)
11. Techniques for detecting medical picture manipulation
12. Emotion Detection (ED) in mental treatments, diagnosis of illnesses utilizing "health mirrors" and Custom Image Recognition (CIR) software, and ethics of facial recognition technology in administrative processes
13. Patient surveillance and privacy problems, and camera-based monitoring and video analysis in healthcare for human behaviour and body motion analysis

Meta-ethics of AI-based biomedical interventions (e.g. biotechnology, nanotechnology and genetic engineering) and Enhancing Humans
14. Scams involving health products that promise to prevent, treat, or cure illnesses
15. Religious disputes regarding human augmentation, such as among disabled patients, and discussions about "Playing God"
16. Genetic engineering ethics in gene therapy, cloning, and the pharmaceutical sector
17. Nanomedicine ethics, particularly smart tablets and smart sensor capsules
18. Ethics of nanofibers for wound dressings, surgical textiles, and personal clothing and wearables embedding nanosensors
19. Genetic discrimination by insurance companies, and discrimination and violation of patients' confidentiality and privacy

Breast Cancer and Personalized Medicine
20. Global disparities in access to personalized medication between low-income, middle-income, and high-income countries

Altruistic and nonmaleficence AI
21. The role of non-governmental health organizations (NGOs) in the administration of AI for social good
22. Assessing the long- and short-term societal hazards of artificial intelligence
23. Private-sector self-governance to mitigate AI's societal dangers

Patient-related psychological-ethical factors
24. Reasons that influence stakeholders (such as citizens, physicians, nurses, …

Brain Studies, ICU, Ambient Intelligence, and Ethics of Data and Algorithms
31. … of data and technology in varied contexts, such as at home or in classrooms (entry begins on the previous page)
32. The ethical implications of combining ambient technology with virtual and augmented reality technologies
33. Context-aware, ubiquitous, and pervasive computing ethical problems

Chronic Disease Management, Cybernetics, Equity and Humanity Concerns
34. Patients' rights such as trust, literacy, privacy, and confidentiality that result from their active engagement in AI-assisted patient self-care
35. Ethical problems at the nexus of cybernetics, design, and artificial intelligence

Global health and human rights
36. More study on the epistemic ethics of AI in global health management, particularly from the perspective of data ethics, such as data scarcity in developing countries
37. Examining and contrasting different nations' medico-legal frameworks
38. Recognizing the asymmetrical linkages between AI in global healthcare and other contextual factors such as nations' socioeconomic characteristics

Other possible future research strands
39. How AI ethics may supplement prior conversations on bioethics, nanoethics, and machine ethics, as well as how to identify common ground to facilitate convergence research and teaching
40. Additional review research using other approaches, such as topic modelling or other bibliometric tools, as well as a systematic literature review method

• Watson, David S., Jenny Krutzinna, Ian N. Bruce, Christopher E.M. Griffiths, Iain B. McInnes, Michael R. Barnes, and Luciano Floridi. "Clinical applications of machine learning algorithms: beyond the black box." BMJ 364 (2019).

In the center, there is a document by Floridi titled "How to Design AI for Social Good: Seven Essential Factors," and next to it is a document by Morley et al. named "From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods, and Research to Translate Principles into Practices." On the right, we see a cluster of documents relating to artificial moral agents and moral decision-making.
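The document network described above links articles by bibliographic coupling: two documents are coupled when their reference lists overlap, and the overlap size is the edge weight. That core computation is easy to reproduce; a minimal sketch, in which the document IDs and reference sets are invented for illustration:

```python
# Bibliographic coupling: edge weight between two documents is the number
# of references they share. Document IDs and reference lists are invented.
from itertools import combinations

references = {
    "Watson2019":  {"r1", "r2", "r3", "r7"},
    "Floridi2020": {"r2", "r3", "r8"},
    "Morley2019":  {"r2", "r5", "r9"},
}

def coupling_strength(a: set, b: set) -> int:
    """Number of references the two documents have in common."""
    return len(a & b)

edges = {
    (d1, d2): coupling_strength(references[d1], references[d2])
    for d1, d2 in combinations(sorted(references), 2)
}

for (d1, d2), w in sorted(edges.items(), key=lambda kv: -kv[1]):
    if w:  # keep only coupled pairs, as a coupling network would
        print(f"{d1} -- {d2}: {w}")
```

Tools such as VOSviewer start from this same pairwise overlap, typically adding a normalization step before clustering and layout.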

T. Saheb et al. Computers in Biology and Medicine 135 (2021) 104660

4. Discussion, gap analysis, and possible future research strands

The primary goal of this paper was to identify the most influential elements in the study of the ethical implications of artificial intelligence in healthcare. To accomplish this, we conducted a bibliometric and cluster-based content analysis to ascertain the development patterns of research on this subject and to elucidate the research foundations for ethical AI in healthcare. This approach aided us in comprehending the gaps in the literature and determining future research directions in light of those gaps and knowledge structures (Table 2). Moreover, this analysis assisted us in detecting the hot medical issues associated with the ethics of AI addressed in the literature.

1) Companion Robots and Vulnerable Populations

At the first stage, the co-occurrence analysis of keywords identified 12 medical issues involving the interventions of artificial intelligence. As this research demonstrated, the primary issue exhibiting the entwined nature of AI and healthcare is connected to physical AI and the growth of companion robots among the elderly and patients with mental disorders, who are deemed the most vulnerable populations for whom AI ethics is addressed through three lenses: epistemological, normative, and medical practice. This subset of research should also place a greater emphasis on patients' rights, preferences, and trust. Consideration must also be given to the setting of care: comparing the ethics of companion robots in hospital and home settings, for example, is worthwhile. If companion robots intervene in patients' daily activities at home, concerns such as increasing patient awareness and training are also critical. Patients' lack of literacy about their interactions with companion robots may result in distorted data protection and erroneous insights.

Additionally, further psychological study is necessary to fully grasp the effects of a lack of empathy and human contact on the elderly and those with mental illness. Although companion robots may be beneficial for physical problems, they may exacerbate mental health problems. It is also necessary to comprehend the effects of companion robots on disabled patients, a population that is highly underrepresented in the field. The majority of research on the ethical implications of artificial intelligence for disabled people has focused on prosthetic human bodies rather than on ambient assistive technology. These studies have mostly overlooked the psychological consequences of semi-artificial or cyborg concepts from meta-ethical views.

2) Education and AI-associated competencies of medical staff

The second critical medical challenge is education and the enhancement of medical students' and staff's competencies; yet these studies overlook the convergent nature of AI ethics in healthcare. Additionally, each profession has its own definition of ethical principles, such as privacy, which may take on various meanings across fields. Convergence research and education need the transdisciplinary integration of disciplines, since ethical AI is a scientific and social problem that lies at the intersection of multiple fields. Developing a network of collaborations, combining diverse skills, and conducting convergence research and training will result in good social benefits and extensive cooperation and integration among the disciplines working on AI ethics in healthcare.

3) Deep Analysis of Medical Images

The third significant medical issue relates to the ethics of deep learning and image analysis and the creation of deep algorithms. While most of the literature has concentrated on the brain, cancer, and diabetes, the ethical considerations for AI algorithms in image analysis are critical in light of the recent coronavirus pandemic and the generation of massive volumes of CT scans. While the large body of literature has concentrated on epistemological and medical challenges, it is imperative to perform more study on doctors' and radiologists' rights and the disruption of their future work and competence. Physicians' autonomy and authority in the face of physical and virtual AI intervention should be properly evaluated. Another future thread might be imagery manipulation by AI-based apps, which casts doubt on the authenticity and trustworthiness of predictive insights and decision-making; it is necessary to develop new applications capable of detecting any kind of picture alteration. Apart from the study of medical pictures such as MRI or CT scans, the development of face recognition technology has enabled the diagnosis of illnesses via "health mirrors" that measure key bodily signals, customized image recognition software, and emotion detection in mental treatments. The second under-researched area in this field is the ethics of camera-based monitoring and video analysis, such as human behaviour and body motion analysis; ethical concerns such as privacy and surveillance should be addressed.

4) Meta-ethics of AI-based biomedical interventions and enhancing humans

The fourth medical concern discussed in the literature is the meta-ethics surrounding AI-based biomedical interventions (e.g. biotechnology, nanotechnology, and genetic engineering) and human enhancement. This topic is examined from three distinct ethical perspectives: epistemic, normative, and meta. One prospective future study direction might be the development of policies and laws to counteract the rise in health fraud schemes involving products claiming to prevent, treat, or cure illnesses. Another avenue of study may be theological disputes on human augmentation, such as among disabled patients, or arguments about "playing God." Other prospective research directions in this area include examining the ethical implications of genetic engineering in various applications, including cloning, the pharmaceutical business, and gene therapy. In this area, additional speculation is required on the ethical implications of nanomedicine, notably smart pills and smart sensor capsules. Another issue is the ethical considerations for nanofibers used in wound dressings, surgical textiles, and personal clothing and wearables embedded with nanosensors. A further probable inquiry is workplace and insurance company discrimination based on genetic data. Whereas this research demonstrated that the healthcare sector could anticipate and identify persons with mental illness using social media analysis, the study also challenged the notion of whether a person's genetic profile might result in more prejudice and a breach of patients' confidentiality and privacy.

5) Breast cancer and personalized medicine

Within the cluster of personalized medicine, one significant area of future study might be comparing low-income nations to middle- and high-income countries in terms of worldwide disparities in access to personalized medicine, taking into account variables such as the high cost of tailored medication and the absence of basic insurance for many citizens.

6) Altruistic AI

Within cluster 5, the primary focus was on altruistic and nonmaleficence AI and on psychological and ethical concerns associated with patients. In terms of altruistic AI, non-profit health organizations may benefit significantly from AI used for social good. One primary study question is whether there are other effective ways to accomplish this objective. Additionally, a different line of exploration might focus on assessing the long-term, intermediate-term, and short-term hazards associated with AI in order to build defensive mechanisms that alleviate the societal cost of AI on patients and other stakeholders. Another probable future line of inquiry is self-governance among non-
governmental health organizations. Self-governance is the coordination of private actors to handle non-profit concerns, with a minimal role for government.

7) Patient-related psychological-ethical factors

While the literature has performed a scant amount of research on the factors influencing citizens' acceptance of AI in healthcare, none has examined the elements that cause citizens, patients, doctors, and other healthcare stakeholders to resist or distrust AI adoption. The majority of research has concentrated on perceptions and factors that facilitate adoption; nevertheless, it is vital to examine obstacles and hurdles psychologically and to emphasize skepticism and resistance theories rather than the more prevalent adoption theories of the Technology Acceptance Model or the Health Belief Model.

8) Bioinformatics and Ethics of Data

The other cluster was concerned with two primary issues: bioinformatics and data ethics, and computer-human interaction in conversational user interfaces. This cluster focalized around the legal responsibility, social value, and medical inaccuracy associated with AI in bioinformatics. AI is used in bioinformatics to design medications and complicated systems in both scientific and clinical research. However, biological data science deals with large amounts of data, making it hard to comprehend complicated correlations across disparate datasets. With the introduction of deep learning algorithms, one prospective investigation in this area is into the ethics of specialized deep algorithms through the lens of biological data's unique properties.

9) Computer-Human Interaction in Conversational User Interfaces

One of the other addressed medical issues is CHI in CUI applications. Meta-ethics considerations for conversational AI include humanizing these applications, augmenting human intelligence via speech or AI-based language translation applications, personalizing human experience with ambient surroundings and environments, and data privacy and confidentiality, given the massive amount of personal data generated by conversational AI.

10) Brain Studies, ICU, Ambient Intelligence, and Ethics of Data and Algorithms

The other cluster, which focuses on ambient intelligence in intensive care units and the study of the human brain, addressed three ethical issues: epistemological, meta-ethical, and normative ethics. While most research has been on the ethics of ambient intelligence in hospitals, it is critical to examine the ethical implications of ambient technology in residential settings. Another area of investigation might be the ethical implications of the convergence of ambient intelligence landscapes with virtual and augmented reality in various contexts.

Given that an ambient world is made feasible by the inclusion of context-aware, pervasive, and ubiquitous computing, particular attention must be paid to the ethical problems associated with ubiquitous computing and ambient intelligence.

11) Chronic Disease Management, Cybernetics, Equity and Humanity Concerns

The other cluster examined challenges of chronic illness management, cybernetics, equity, and humanity from meta-ethical and normative viewpoints. In chronic illnesses, AI-assisted self-care has moved treatment from reactive to predictive and preventative. Because this kind of treatment includes active patient engagement in healthcare procedures and specific patient profiles, it is critical to explore patients' rights such as trust, literacy, privacy, and confidentiality. The other cluster concern is cybernetics, which may provide frameworks to assist and enhance designers in developing AI applications. As a result, further study is needed to address the ethical challenges at the junction of cybernetics, design, and artificial intelligence.

12) Global Health and Human Rights

The next cluster examines global health and human rights from a normative perspective, including access and human rights. Considering the contextual disparities across nations and the global population, it is vital to do more study from an epistemological standpoint, mainly through the lens of data ethics, such as the paucity of data in developing countries. Furthermore, each country's legal framework for data privacy is distinct; further study should be performed to examine and compare medico-legal systems. New studies should go beyond symmetrical and direct relationships, so investigating the asymmetrical associations between AI in global healthcare and other contextual factors, such as the socioeconomic characteristics of various nations, should be a future study focus.

In the second part of this research, based on the bibliometric analysis, we identified four ethical topics addressed in the literature: normative, meta-ethics, epistemological (ethics of data, algorithms, and models), and medical practice. This conclusion was reinforced by cluster-based content analysis, which led to the identification of seven additional ethical categories: ethics of relationships, ethics of ambient intelligence, physicians' rights, patients' rights, ethics of predictive analytics, ethics of robotics, and medico-legal challenges. More research can be conducted to understand how AI ethics may supplement prior discussions on bioethics, nanoethics, or machine ethics and their universal principles, as well as how to identify common ground to enable convergence research and education, when considering the convergence of AI technologies with biotechnology, nanotechnology, or machines.

We identified a series of ethical problems inside each ethical category using cluster-based content analysis. Since these ethical issues were derived via cluster-based content analysis, the list is not comprehensive. Further review research needs to be done to develop a more comprehensive list of ethical concerns across the ethical categories.

5. Limitations of the study

This study has limitations of its own. The first and most significant limitation is that it was undertaken by two researchers with backgrounds in STS and Law. To address the ethical difficulties of AI in healthcare, convergence research is necessary, so that professionals from diverse disciplines can comprehend the ethical problems and likely scientific inquiries in this sector. One of our study's drawbacks is that we only examined "artificial intelligence" and did not include terms like deep learning, algorithm, machine learning, or machine learning models in our query. We encourage further research to explore the epistemological and virtual ethical concerns of deep learning and ML algorithms and models. Another constraint was that we only examined articles written in English and overlooked papers written in other languages. A further constraint was that we could only search for papers in the Scopus database. We also employed only bibliographic coupling, keyword co-occurrence analysis, and cluster-based content analysis; topic modelling or other bibliometric or qualitative methodologies can be used to perform further review research. One of our study's limitations was that we restricted the number of clusters to eight; future study may look at other clustering values to extract more themes from the corpus at different scopes. Unlike topic modelling techniques, VOSviewer's keyword occurrence analysis does not display the top documents associated with each cluster or the number of documents associated with each cluster. To address this issue, we propose that future research undertake a topic modelling study of ethical AI in healthcare.

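The VOSviewer limitation noted above, co-occurrence maps that do not reveal which documents stand behind each keyword pair, is straightforward to work around when the raw keyword lists are available. A minimal sketch of co-occurrence counting that keeps the document link; the article IDs and keywords are invented for illustration:

```python
# Keyword co-occurrence counting, the raw input behind co-occurrence maps,
# extended to record which documents support each pair. All data invented.
from collections import defaultdict
from itertools import combinations

article_keywords = {
    "doc1": {"artificial intelligence", "ethics", "healthcare"},
    "doc2": {"artificial intelligence", "ethics", "robotics"},
    "doc3": {"ethics", "healthcare", "privacy"},
}

pair_docs = defaultdict(list)  # keyword pair -> articles containing both
for doc_id, keywords in article_keywords.items():
    # every unordered keyword pair within one article is one co-occurrence
    for pair in combinations(sorted(keywords), 2):
        pair_docs[pair].append(doc_id)

# the co-occurrence weight is simply the number of supporting documents
for pair, docs in sorted(pair_docs.items(), key=lambda kv: -len(kv[1]))[:2]:
    print(pair, len(docs), docs)
```

Retaining the supporting document IDs per pair makes it possible to list the top documents behind each keyword cluster, complementing the topic modelling follow-up the authors propose.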

Declaration of competing interest

The authors whose names are listed immediately below certify that they have NO affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript.

Tahereh Saheb, assistant professor, Management Studies Center, Tarbiat Modares University, Iran.
Tayebeh Saheb, assistant professor, Faculty of Law, Tarbiat Modares University, Iran.
David O. Carpenter, M.D., Director of the Institute for Health and the Environment, University at Albany.
Electron. J. (2021), https://doi.org/10.2139/ssrn.3766259. s41824-020-00094-8.
[53] M. DeCamp, C. Lindvall, Latent bias and the implementation of artificial [78] M. Cui, D.Y. Zhang, Artificial intelligence and computational pathology, Lab.
intelligence in medicine, J. Am. Med. Inf. Assoc. 27 (2020) 2020–2023, https:// Invest. 101 (2021) 412–422, https://doi.org/10.1038/s41374-020-00514-0.
doi.org/10.1093/jamia/ocaa094. [79] C.W.L. Ho, D. Soon, K. Caals, J. Kapur, Governance of automated image analysis
[54] A. Grzybowski, P. Brona, G. Lim, P. Ruamviboonsuk, G.S.W. Tan, M. Abramoff, D. and artificial intelligence analytics in healthcare, Clin. Radiol. 74 (2019)
S.W. Ting, Artificial intelligence for diabetic retinopathy screening: a review, Eye 329–337, https://doi.org/10.1016/j.crad.2019.02.005.
34 (2020) 451–460, https://doi.org/10.1038/s41433-019-0566-0. [80] E. Kotter, E. Ranschaert, Challenges and solutions for introducing artificial
[55] R. Shahbaz, M. Salducci, Law and order of modern ophthalmology: intelligence (AI) in daily clinical workflow, Eur. Radiol. 31 (2021) 5–7, https://
teleophthalmology, smartphones legal and ethics, Eur. J. Ophthalmol. 31 (2021) doi.org/10.1007/s00330-020-07148-2.
13–21, https://doi.org/10.1177/1120672120934405. [81] K.Y. Arga, COVID-19 and the futures of machine learning, OMICS A J. Integr.
[56] M. Ienca, K. Ignatiadis, Artificial intelligence in clinical neuroscience: Biol. 24 (2020) 512–514, https://doi.org/10.1089/omi.2020.0093.
methodological and ethical challenges, AJOB Neurosci 11 (2020) 77–87, https:// [82] I. Sechopoulos, R.M. Mann, Stand-alone artificial intelligence - the future of
doi.org/10.1080/21507740.2020.1740352. breast cancer screening? Breast 49 (2020) 254–260, https://doi.org/10.1016/j.
[57] D.S.W. Ting, L.R. Pasquale, L. Peng, J.P. Campbell, A.Y. Lee, R. Raman, G.S. breast.2019.12.014.
W. Tan, L. Schmetterer, P.A. Keane, T.Y. Wong, Artificial intelligence and deep [83] M. Anderson, S.L. Anderson, How should AI Be developed,validated and
learning in ophthalmology, Br. J. Ophthalmol. 103 (2019) 167–175, https://doi. implemented in patient care? AMA J. Ethics. 21 (2019) 125–130, https://doi.org/
org/10.1136/bjophthalmol-2018-313173. 10.1001/amajethics.2019.125.
[58] S. Vollmer, B.A. Mateen, G. Bohner, F.J. Király, R. Ghani, P. Jonsson, S. Cumbers, [84] S.M. McKinney, M. Sieniek, V. Godbole, J. Godwin, N. Antropova, H. Ashrafian,
A. Jonas, K.S.L. McAllister, P. Myles, D. Granger, M. Birse, R. Branson, K.G. T. Back, M. Chesus, G.C. Corrado, A. Darzi, M. Etemadi, F. Garcia-Vicente, F.
M. Moons, G.S. Collins, J.P.A. Ioannidis, C. Holmes, H. Hemingway, Machine J. Gilbert, M. Halling-Brown, D. Hassabis, S. Jansen, A. Karthikesalingam, C.
learning and artificial intelligence research for patient benefit: 20 critical J. Kelly, D. King, J.R. Ledsam, D. Melnick, H. Mostofi, L. Peng, J.J. Reicher,
questions on transparency, replicability, ethics, and effectiveness, BMJ 368 B. Romera-Paredes, R. Sidebottom, M. Suleyman, D. Tse, K.C. Young, J. De Fauw,
(2020), https://doi.org/10.1136/bmj.l6927. S. Shetty, International evaluation of an AI system for breast cancer screening,
[59] B. Dan, K. Pelc, Ethics of human enhancement in cerebral palsy, Ann. Phys. Nature 577 (2020) 89–94, https://doi.org/10.1038/s41586-019-1799-6.
Rehabil. Med. 63 (2020) 389–390, https://doi.org/10.1016/j. [85] E. Neri, F. Coppola, V. Miele, C. Bibbolino, R. Grassi, Artificial intelligence: who is
rehab.2019.03.002. responsible for the diagnosis? Radiol. Medica. 125 (2020) 517–521, https://doi.
[60] S. Ziesche, R. Yampolskiy, Introducing the concept of ikigai to the ethics of AI and org/10.1007/s11547-020-01135-9.
of human enhancements, in: Proc. - 2020 IEEE Int. Conf. Artif. Intell. Virtual [86] R. Jagsi, J.A.E. Pottow, K.A. Griffith, C. Bradley, A.S. Hamilton, J. Graff, S.J. Katz,
Reality, AIVR 2020, Institute of Electrical and Electronics Engineers Inc., 2020, S.T. Hawley, Long-term financial burden of breast cancer: experiences of a diverse
pp. 138–145, https://doi.org/10.1109/AIVR50618.2020.00032. cohort of survivors identified through population-based registries, J. Clin. Oncol.
[61] M.J. Lamola, The Future of Artificial Intelligence, Posthumanism and the 32 (2014) 1269–1276, https://doi.org/10.1200/JCO.2013.53.0956.
Inflection of Pixley Isaka Seme’s African Humanism, AI Soc., 2021, pp. 1–11, [87] M. Pisu, M.Y. Martin, R. Shewchuk, K. Meneses, Dealing with the financial burden
https://doi.org/10.1007/s00146-021-01191-3. of cancer: perspectives of older breast cancer survivors, Support, Care Cancer 22
[62] K.R. Jongsma, A.L. Bredenoord, Ethics parallel research: an approach for (early) (2014) 3045–3052, https://doi.org/10.1007/s00520-014-2298-9.
ethical guidance of biomedical innovation, BMC Med. Ethics 21 (2020) 81, [88] S.B. Wheeler, J.C. Spencer, L.C. Pinheiro, L.A. Carey, A.F. Olshan, K.E. Reeder-
https://doi.org/10.1186/s12910-020-00524-z. Hayes, Financial impact of breast cancer in black versus white women, J. Clin.
[63] B.C. Stahl, A. Andreou, P. Brey, T. Hatzakis, A. Kirichenko, K. Macnish, S. Laulhé Oncol. 36 (2018) 1695–1701, https://doi.org/10.1200/JCO.2017.77.6310.
Shaelou, A. Patel, M. Ryan, D. Wright, Artificial intelligence for human [89] M.C. Politi, R.W. Yen, G. Elwyn, A.J. O’Malley, C.H. Saunders, D. Schubbe,
flourishing – beyond principles for machine learning, J. Bus. Res. 124 (2021) R. Forcino, M.A. Durand, Women who are young, non-white, and with lower
374–388, https://doi.org/10.1016/j.jbusres.2020.11.030. socioeconomic status report higher financial toxicity up to 1 Year after breast
[64] F. Lara, J. Deckers, Artificial intelligence as a socratic assistant for moral cancer surgery: a mixed-effects regression analysis, Oncol. 26 (2021) e142–e152,
enhancement, Neuroethics 13 (2020) 275–287, https://doi.org/10.1007/s12152- https://doi.org/10.1002/onco.13544.
019-09401-y. [90] K. Atreides, External experimental training protocol for teaching AGI/mASI
[65] T. Lorence, Artificial Intelligence and the ethics of human extinction, systems effective altruism, in: Adv. Intell. Syst. Comput, Springer Verlag, 2020,
J. Conscious. Stud. 9 (2015) 194–214. pp. 28–35, https://doi.org/10.1007/978-3-030-25719-4_5.
[66] R. Yuste, S. Goering, B. Agüeray Arcas, G. Bi, J.M. Carmena, A. Carter, J.J. Fins, [91] I. van de Poel, Embedding values in artificial intelligence (AI) systems, Minds
P. Friesen, J. Gallant, J.E. Huggins, J. Illes, P. Kellmeyer, E. Klein, A. Marblestone, Mach. 30 (2020) 385–409, https://doi.org/10.1007/s11023-020-09537-4.
C. Mitchell, E. Parens, M. Pham, A. Rubel, N. Sadato, L.S. Sullivan, M. Teicher, [92] J. Morley, L. Floridi, L. Kinsey, A. Elhalal, From what to how: an initial review of
D. Wasserman, A. Wexler, M. Whittaker, J. Wolpaw, Four ethical priorities for publicly available AI ethics tools, methods and research to translate principles
neurotechnologies and AI, Nature 551 (2017) 159–163, https://doi.org/10.1038/ into practices, Sci. Eng. Ethics 26 (2020) 2141–2168, https://doi.org/10.1007/
551159a. s11948-019-00165-5.
[67] C. Herzog, Three risks that caution against a premature implementation of [93] S. Mouchabac, V. Adrien, C. Falala-Séchet, O. Bonnot, R. Maatoug, B. Millet, C.-
artificial moral agents for practical and economical use, Sci. Eng. Ethics 27 (2021) S. Peretti, A. Bourla, F. Ferreri, Psychiatric advance directives and artificial
3, https://doi.org/10.1007/s11948-021-00283-z.

18
T. Saheb et al. Computers in Biology and Medicine 135 (2021) 104660
