Developing a Framework for Self-Regulatory Governance in Healthcare AI Research
https://doi.org/10.1007/s41649-024-00281-w
PERSPECTIVE
Junhewk Kim¹ · So Yoon Kim² · Eun-Ae Kim³ · Jin-Ah Sim⁴ · Yuri Lee⁵ · Hannah Kim²
Abstract
This paper elucidates and rationalizes the ethical governance system for healthcare
AI research, as outlined in the ‘Research Ethics Guidelines for AI Researchers in
Healthcare’ published by the South Korean government in August 2023. In devel-
oping the guidelines, a four-phase clinical trial process was expanded to six stages
for healthcare AI research: preliminary ethics review (stage 1); creating datasets
(stage 2); model development (stage 3); training, validation, and evaluation (stage
4); application (stage 5); and post-deployment monitoring (stage 6). Researchers
identified similarities between clinical trials and healthcare AI research, particu-
larly in research subjects, management and regulations, and application of research
results. In the step-by-step articulation of ethical requirements, this similarity bene-
fits from a reliable and flexible use of existing research ethics governance resources,
research management, and regulatory functions. In contrast to clinical trials, this
procedural approach to healthcare AI research governance effectively highlights
the distinct characteristics of healthcare AI research in research and development
process, evaluation of results, and modifiability of findings. The model exhibits
limitations, primarily in its reliance on self-regulation and lack of clear delinea-
tion of responsibilities. While formulated through multidisciplinary deliberations,
its application in the research field remains untested. To overcome the limitations,
the researchers’ ongoing efforts for educating AI researchers and public and the
revision of the guidelines are expected to contribute to establish an ethical research
governance framework for healthcare AI research in the South Korean context in
the future.
Asian Bioethics Review (2024) 16:391–406
Introduction
The rapid progress of machine learning and artificial intelligence (AI) poses new
and unprecedented challenges to the entire healthcare sector. In particular, as a critical extension of the foundational discussions on technology adoption in healthcare (Rajpurkar et al. 2022), the focus now shifts towards the practical governance and regulation of AI development and its application in the healthcare landscape. South Korea has swiftly embraced biomedical technologies, showing a clear inclination toward integrating AI into healthcare. The '2022 Medical Device License Report' from the Ministry of Food and Drug Safety (MFDS) of the Republic of Korea reports that a total of 149 AI-based medical devices have obtained approval or certification in the country, with 10 receiving approval and 38 attaining certification in 2022 (MFDS 2023).
Corresponding with this trend, the Korean National Institutes of Health
(KNIH) published the ‘Research Ethics Guidelines for AI Researchers in Health-
care’ in August 2023, marking an initial effort to offer an actionable guidance
to healthcare AI researchers in the country (KNIH 2023). The guidelines aim to
establish ethical standards for all stages of healthcare AI development by presenting ethical principles and detailed values. The researchers developed the guidelines using robust methodologies, including literature reviews, interdisciplinary consultations, and a public hearing, as well as empirical evidence from surveys of the lay public and experts. Consequently, the guidelines present six principles with corresponding codes and explanations. The principles, stemming from the World Health Organization (WHO) report 'Ethics and governance of artificial intelligence for health', are tailored to the national context, providing a framework for researchers to evaluate their research practices.
research practices. Importantly, it is noted that while bioscientists are well-versed
in the ethical procedures and legal regulations related to human subjects research,
those in computer science and data science engaged in healthcare AI research
may lack familiarity with these standards (Metcalf and Crawford 2016; Throne
2022). Consequently, these guidelines are designed to support healthcare AI researchers in conducting ethical research by providing, in part I, principles to consider in relation to research; in part II, relevant research codes, regulations, and related ethical cases; and in part III, an expanded framework that applies the existing governance framework for phase I–IV clinical research to the context of healthcare AI research.
The purpose of this paper is to outline and provide rationales for the ethical governance system introduced in part III of the guidelines. At present, we are
in the process of translating the guidelines for an official English version. Amidst
this ongoing endeavour, this paper preliminarily introduces the final section of
the guidelines, which is under linguistic review. Subsequently, we describe the
governance framework, comprising six steps, accompanied by ethical and insti-
tutional explanations for each stage. In conclusion, this paper presents a health-
care AI research governance system, expanding upon the existing human subjects
research. It advocates for the establishment of a robust, secure, and sustainable
Stage 1. Preliminary ethics review
  Requisites: self-designed ethics framework based on the guidelines
  Reviewers: researchers and developers; research institutions
  Related principles: respect and protect human autonomy; promoting human happiness, safety, and the public interest; transparency, explainability, and reliability; accountability and legal obligations; inclusivity and equity; responsiveness and sustainability

Stage 2. Creating datasets
  Requisites: data management protocols; Data Review Board (DRB) review
  Reviewers: data creator; DRB
  Related principles: respect and protect human autonomy; inclusivity and equity; responsiveness and sustainability

Stage 3. Model development
  Requisites: set-up algorithms; model development plans; Institutional Review Board (IRB) review
  Reviewers: researchers and developers; research institutions; IRB
  Related principles: transparency, explainability, and reliability; inclusivity and equity

Stage 4. Training, validation, and evaluation
  Requisites: model training and testing; internal validation
  Reviewers: researchers and developers; research institutions
  Related principles: transparency, explainability, and reliability; accountability and legal obligations; inclusivity and equity

Stage 5. Application
  Requisites: external validation; compliance with ethical and legal regulations
  Reviewers: researchers and developers; government
  Related principles: respect and protect human autonomy; transparency, explainability, and reliability; accountability and legal obligations

Stage 6. Post-deployment monitoring
  Requisites: open communications; continuous monitoring through feedback
  Reviewers: researchers and developers; suppliers; users; government
  Related principles: respect and protect human autonomy; promoting human happiness, safety, and the public interest; transparency, explainability, and reliability; accountability and legal obligations; inclusivity and equity; responsiveness and sustainability
and informing stakeholders about these regulations, and maintaining open commu-
nication for ongoing revisions and amendments as required.
Furthermore, through such feedback and societal discussion, the developers of these guidelines strive for continuous refinement, aiming to foster a research environment that esteems ethical principles and values.
(a) Does the plan include sensitive objectives? Is the objective to develop a medical device, or does it address other health and public health objectives? (Specify clinical diagnosis-treatment decision, patient decision support, prevention, behavioural intervention, or public health; if others, additional descriptions should be included in the protocol.)
(b) Is it human subject research or research utilizing datasets? (check bioethics
exemptions and compliance requirements.) If human subjects research, does the
plan include interventions or interactions?
(c) Does the plan address potential or manifest harms? (Provide a risk-benefit analy-
sis.)
(d) Is there evidence or potential for sample bias in the plan?
In the process of collecting and processing data for healthcare AI model develop-
ment, several key considerations must be addressed. Initially, it is essential to evalu-
ate the collectability, availability, and intended use of the data. Depending on the
potential risk for privacy infringement, appropriate measures such as anonymization
or pseudonymization should be employed for the dataset. A detailed data collection
plan is crucial to outline the methods and objectives clearly. Additionally, conduct-
ing ongoing quality control is imperative to minimize data bias and ensure the diver-
sity and representativeness of the datasets, which are fundamental for the develop-
ment of fair and effective healthcare AI systems.
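To make the anonymization and pseudonymization measures discussed above concrete, the following sketch (a minimal illustration with hypothetical field names, not part of the guidelines) replaces a direct identifier with a salted hash so that records remain linkable across datasets without exposing identity. In practice the salt must be managed separately from the data, and stronger technical and administrative measures would be required.

```python
import hashlib

# Illustrative salt; in a real institution this would be a managed secret.
SALT = "institution-secret-salt"

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted hash and drop quasi-identifiers.

    Field names ('patient_id', 'name', 'address') are hypothetical examples.
    """
    token = hashlib.sha256((SALT + record["patient_id"]).encode()).hexdigest()[:16]
    safe = {k: v for k, v in record.items() if k not in {"patient_id", "name", "address"}}
    safe["pseudo_id"] = token  # stable token: the same patient_id maps to the same pseudonym
    return safe

record = {"patient_id": "P001", "name": "Jane Doe", "address": "Seoul", "age": 54, "dx": "I10"}
print(pseudonymize(record))
```

Because the token is deterministic, records from the same data subject can still be joined for analysis, which is why the salt itself must be protected as rigorously as the identifiers it replaces.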
Related questions:
(a) Is the data collection plan comprehensive? (identification and consultation with
data subjects or maintaining organizations, data types and details, collection tech-
niques, frequency selection, inclusion and appropriateness of purposes of use)
(b) Are anonymization measures considered? (detailed technical and administra-
tive/physical measures; if not anonymized, justification and additional measures
required)
(c) Is the dataset size aligned with the learning task and model complexity?
(d) Is the data quality recognized as high?
(e) Are the data appropriately visualized and exploratory analyses conducted?
(f) Is the raw data collected according to approved clinical standards and protocols,
utilizing valid and reliable techniques?
(g) Are regular and continuous data quality control measures implemented?
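Several of these checks, (d) and (g) in particular, can be partly automated. The sketch below (illustrative only; field names and the plausible-age range are assumptions, and real pipelines would use dedicated data-validation tooling) counts missing values, exact duplicates, and out-of-range entries in a batch of records:

```python
def quality_report(rows, required=("age", "dx"), age_range=(0, 120)):
    """Count basic quality problems in a list of record dicts (hypothetical schema)."""
    problems = {"missing": 0, "duplicate": 0, "out_of_range": 0}
    seen = set()
    for row in rows:
        # Missing required fields
        if any(row.get(f) is None for f in required):
            problems["missing"] += 1
        # Exact duplicate records
        key = tuple(sorted(row.items()))
        if key in seen:
            problems["duplicate"] += 1
        seen.add(key)
        # Implausible values
        age = row.get("age")
        if age is not None and not (age_range[0] <= age <= age_range[1]):
            problems["out_of_range"] += 1
    return problems

rows = [
    {"age": 54, "dx": "I10"},
    {"age": 54, "dx": "I10"},   # exact duplicate
    {"age": None, "dx": "E11"}, # missing age
    {"age": 150, "dx": "J45"},  # implausible age
]
print(quality_report(rows))  # {'missing': 1, 'duplicate': 1, 'out_of_range': 1}
```

Running such a report at every collection cycle is one way to satisfy the "regular and continuous" requirement in (g) rather than relying on a one-off inspection.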
(a) Does the plan provide an adequate accounting of human subjects and data sub-
jects?
(b) Are the methods of split cross-validation of datasets and datasets utilized in the
plan appropriate? (correcting erroneous data, resolving inconsistencies in data,
deleting unnecessary data, ensuring quality assurance and accuracy of data)
(c) Are potential issues with privacy addressed? (review for possible data breach)
(d) Does the plan assess the sources or likelihood of sampling/evaluation/algorith-
mic bias? (considering resampling, algorithmic fairness, etc.)
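Question (d) on sampling bias can be given a simple operational form. The following sketch (illustrative only; the subgroup labels are hypothetical, and real audits would use proper fairness tooling and statistical tests) compares subgroup proportions between the full dataset and a candidate split, flagging splits that over- or under-represent a group:

```python
from collections import Counter

def subgroup_proportions(labels):
    """Share of each subgroup (e.g., a sex or age band label) in a list."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def max_proportion_gap(full, subset):
    """Largest absolute difference in subgroup share between dataset and split."""
    p_full, p_sub = subgroup_proportions(full), subgroup_proportions(subset)
    return max(abs(p_full.get(k, 0.0) - p_sub.get(k, 0.0)) for k in set(p_full) | set(p_sub))

full = ["F"] * 50 + ["M"] * 50          # balanced source dataset
biased_split = ["F"] * 40 + ["M"] * 10  # candidate training split
print(max_proportion_gap(full, biased_split))  # a gap of about 0.3: "F" is over-represented
```

A gap above a pre-agreed threshold would trigger the resampling or stratification measures mentioned in the question before the split is accepted.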
The phase of training and validating algorithms using the collected data, followed
by an evaluation of their applicability for research purposes, is crucial for crafting
robust AI systems. Training AI models meticulously is fundamental to boost their
reliability and accuracy. It is also critical to ensure that the AI models undergo thorough internal validation through appropriate procedures to confirm their effectiveness
and safety in practical applications. Moreover, implementing measures to assess
clinical reliability is necessary for healthcare AI development. This includes evalu-
ating the AI’s accuracy, its relevance to clinical applications, the fairness of its deci-
sion-making processes, and the level of trust or acceptance these systems receive
from both patients and healthcare professionals.
Related questions:
(a) Does the model use a transparent methodology for AI data mining and project implementation? (e.g., CRISP-DM,¹ KDD,² SEMMA,³ CPMAI⁴)
(b) What is the model’s purpose? (specify predictive models, text mining, automa-
tion, record abstraction, biometrics, and if others, additional descriptions should
be in the protocol)
(c) What kind of technology is utilized? (specify machine learning, deep learning,
natural language processing, unsupervised learning, reinforcement learning, and
if others, additional descriptions should be included in the protocol.)
(d) Can any unexpected results be analysed or tracked?
Stage 5. Application
(a) Is there a match between the dataset and the population setting for model appli-
cation?
(b) Are the results interpretable?
¹ Cross-Industry Standard Process for Data Mining
² Knowledge Discovery in Databases
³ Sample, Explore, Modify, Model, and Assess
⁴ Cognitive Project Management for AI
(c) Have they been assessed for major biases? (e.g., gender, race)
(d) Has the model been externally validated using datasets from other settings?
(e) Has the model been empirically evaluated for validity, clinical utility, and cost-
effectiveness?
Continuing engagement with model users and refining the model based on their
feedback is essential in this stage. It involves regularly reviewing the model’s perfor-
mance in real-world applications, aligning with the self-constructed ethical frame-
work previously established. Maintaining open communication and collaboration
with all stakeholders, including AI providers, users, patients, the public, and govern-
ment agencies, is crucial for ongoing development and alignment with user needs
and ethical standards. Furthermore, ensuring that the models can be seamlessly
integrated into existing production environments is vital for effective decision-mak-
ing based on real data. This stage emphasizes the importance of adaptability and
responsiveness to the evolving landscape of AI applications and societal impacts.
Related questions:
(a) Do you regularly monitor whether the entire data process of the product remains correctly aligned, including when the process is performed automatically without human intervention?
(b) Do the user (healthcare provider) and the user organization (healthcare organization) regularly disclose usage results, both positive and negative?
(c) Are there communication and recovery protocols established for model applica-
tion errors?
(d) Are there improvements needed in the relevant ethical framework and guide-
lines?
(c) application of research results. On the other hand, there are differences between
clinical trials and healthcare AI research, including (a) the research and develop-
ment process, (b) evaluation of research results, and (c) the modifiability of research
results.
Firstly, human subjects, biospecimens, or populations in clinical trials share qualitative similarity with health data, their constructs, or databases utilized in healthcare AI research. For instance, biospecimens are recognized for their uniqueness, that is, characteristics derived from the individuals they originate from; likewise, health data collected from human subjects possess the same ontological nature as derivatives of individuals. They inherently refer to persons and are intricately connected to them (Cha and Kim 2022). Health datasets encapsulate various biological, behavioural, and socioeconomic records of a specific data subject, directly linked with the human body. The linkage of whole genome sequencing (WGS) data to personal identity likewise intertwines the human body with the data representing it (Li et al. 2014). In population studies, the population database reflects the target population group, and the two should eventually become ontologically and practically identical.
Secondly, both clinical trials and healthcare AI research aim to derive results that
benefit humans—whether it is treatments, new drugs, medical technologies, and bio-
materials in clinical trials, or algorithms and applications in healthcare AI research.
Just as clinical research with human subjects has established protocols to ensure
respect and protection of individuals involved and affected by research process and
its outcome (National Commission for the Protection of Human Subjects of Biomedical & Behavioral Research 1978), healthcare AI research must also address ethical considerations arising from both the research process and the utilization of its outcomes. These considerations range from the respect and protection of individuals to issues of accountability and sustainability. Similar
to the human subjects research oversight by Institutional Review Boards (IRBs),
which review and monitor all biomedical research, healthcare AI research neces-
sitates a robust review and monitoring process. This process is crucial even when
certain research activities might be exempt from regulatory requirements, acknowledging the unique challenges and potential risks associated with AI. A tailored oversight mechanism for healthcare AI is imperative to ensure that all research involving human subjects, or their data, is conducted responsibly and ethically. As human clinical
trials aim to apply developed treatments and new drugs to humans by assessing effi-
cacy and safety, healthcare AI research endeavours to apply developed algorithms
and applications to humans to demonstrate effectiveness.
Recognizing the identified similarities, it could be argued that the governance
framework established for human clinical research can be directly applied to health-
care AI research. However, significant differences between human clinical research
and healthcare AI research necessitate a tailored approach.
Primarily, a distinction lies in the development process between human clinical
research and healthcare AI research. Human clinical research focuses on the development of treatments or new drugs, validated through assessments of safety and effectiveness and comparative benefit analyses. Upon affirming these steps, a treatment or
drug is considered developed, thereafter maintained through post-marketing/appli-
cation monitoring or management. Conversely, healthcare AI research entails an
Given these considerations, this guidance extends the traditional four-phase clinical research process (phase I: safety; phase II: efficacy and side effects; phase III: large trials; phase IV: post-market surveillance) by introducing a six-stage process for healthcare AI research. The introduction of Stage 1 (preliminary ethics review) and Stage 2 (creating datasets) reflects the unique nature of healthcare AI research and emphasizes the necessity of comprehensive and sustainable research guidelines from the data collection stage onwards. Stage 3 (model development), Stage 4 (training, validation, and evaluation), Stage 5 (application), and Stage 6 (post-deployment monitoring) align with the concepts of phases I–IV of clinical research but are specifically tailored to the characteristic process of developing and applying healthcare AI algorithms.
necessary. Their collective input serves to establish guiding principles and rules
crucial for the ethical conduct of research. This proactive approach aims to pro-
mote self-regulated ethical practices among researchers, distinct from mere com-
pliance with legal regulations. Notably, the established ethical framework in stage
1 should be consistently referenced in most subsequent documentation.
Stage 2 specifies plans for data collection and processing, mandating the creation of suitable datasets by a designated data creator or "data curator" responsible for assembling and maintaining datasets (Leonelli 2016). The data collection and processing activities of researchers undergo review by a Data Review Board (DRB). This board, established to oversee the ethical conduct of data-related procedures, evaluates the data collection plan, anonymization methods, dataset size, quality, and management. The DRB operates within the research institution or as an independent body. Proposed by the Ministry of Health and Welfare of South Korea in the "Guidelines for Utilization of Healthcare Data," the DRB functions as a committee of five or more individuals. Its responsibilities include assessing the suitability of processing pseudonymized information within an institution, reviewing the adequacy of pseudonymization, and managing the use of pseudonymized information within and outside the institution (Ministry of Health and Welfare of South Korea 2022). This paper proposes a DRB or a data-appropriateness review entity comprising researchers, developers, and external members. This entity would review the data collection and management system before commencing healthcare AI research. Such proactive review aims to ensure the safety, appropriateness, and feasibility of, and the absence of biases in, data utilization for healthcare AI research.
and test sets, and a pre-prepared validation set, distinct from the training data, is
essential for validating healthcare AI algorithms to prevent overfitting and assess
real-world applicability. Management of the validation process is imperative to avoid carrying models that are useful only during training over into the actual application phase, and it is recommended that the research and development organization check this process. In the context of healthcare AI
applications such as diagnostic imaging, patient risk prediction, and personalized
treatment planning, each employing base algorithms ranging from deep learn-
ing to decision trees, the need for tailored validation processes becomes clear.
For diagnostic imaging or patient risk prediction models, the validation process
should primarily focus on rigorous statistical evaluation to ensure accuracy and
reliability. Personalized treatment planning systems necessitate validation that
emphasizes clinical relevance and the improvement of patient outcomes. These
validation processes are essential for assessing the reliability of healthcare AI
models. This stage can be seen as akin to phase II in clinical research, the phase
that evaluates the effectiveness of a medical device or drug against a placebo.
The emphasis is particularly placed on validating the trained algorithm and its
relevance to clinical procedures.
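The role of the held-out validation set described above can be sketched in a few lines. This toy example (plain Python, not from the guidelines) uses a deliberately overfitting "model" that memorizes its training pairs; the gap between its perfect training accuracy and its validation accuracy is exactly the signal that internal validation is meant to surface:

```python
import random

def split(data, val_fraction=0.3, seed=0):
    """Shuffle and hold out a validation set distinct from the training data."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    n_val = int(len(data) * val_fraction)
    return data[n_val:], data[:n_val]  # (train, validation)

class MemorizingModel:
    """A deliberately overfitting 'model': it memorizes (input, label) pairs."""
    def fit(self, pairs):
        self.table = dict(pairs)
    def predict(self, x):
        return self.table.get(x, 0)  # fixed guess for unseen inputs

def accuracy(model, pairs):
    return sum(model.predict(x) == y for x, y in pairs) / len(pairs)

data = [(i, i % 2) for i in range(100)]  # hypothetical (input, label) pairs
train, val = split(data)
model = MemorizingModel()
model.fit(train)
print("training accuracy:", accuracy(model, train))  # 1.0 by construction
print("validation accuracy:", accuracy(model, val))  # far lower on unseen inputs
```

A real healthcare AI pipeline would replace the memorizer with the trained algorithm and add the statistical and clinical-relevance evaluations discussed above, but the structural point is the same: performance must be measured on data the model has never seen.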
Stage 6 mandates all parties involved to review the process of the continued
deployment and ongoing development once the developed algorithm or model
has been put into operation in a healthcare setting. Continuous review of use of
the model and the functionality of the ethical framework remains pivotal. Main-
taining transparent and collaborative communication among all stakeholders
emerges as a necessity. In addition, vigilant monitoring of the model's ongoing evolution is imperative to prevent decision-making based on real-world data from leading to unintended harms. This phase emphasizes the follow-up and surveillance of algorithms and models post-launch, analogous to phase IV, post-market surveillance in clinical research, which refers to the follow-up phase after clinical implementation of a medical device or drug.
The governance guidelines bear inherent limitations. Foremost, they do not deci-
sively address the liability associated with possible harm resulting from healthcare
AI applications. In cases of healthcare AI research and application involving multiple parties, it is necessary to examine whether the harm caused can be handled under the existing medical liability process. For example, if a patient is physically harmed in the course of using a healthcare AI device, but the harm turns out to stem from a problem with the algorithm rather than the fault of the medical practitioner or device user, who should be held liable?
Navigating liability questions amidst the overlapping influences of various actors poses challenges (Kim 2017). While the governance of healthcare AI research needs to address the issue of liability, the guidelines in this study focus on proposing an ethical model grounded in self-regulation, so addressing the intricacies of liability remains a significant challenge. Moreover, the procedures are designed to be adjusted to each country's regulatory processes, because they correspond to existing clinical research guidelines; whether they can operate properly in real-life situations still needs to be examined. This is an area that requires empirical verification by applying the guidelines to actual healthcare AI research governance. Therefore, this paper calls for further research on the healthcare AI governance guidelines presented here to address the issues identified above, especially by linking them to legal standards and regulation.
Conclusion
The aims of this study are to present a healthcare AI research governance system
founded on the South Korean ‘Research Ethics Guidelines for AI Researchers
in Healthcare’ and to elucidate each procedural step. The six-stage healthcare AI
research governance framework mirrors the healthcare AI research and development
process, and is designed in harmony with the existing clinical research management
systems. This parallel structure facilitates the utilization of established research
management resources and foster mutual understanding among researchers and
institutions for conducting ethical research procedures. Nonetheless, the guidelines
are likely to reflect the specificities of the Korean healthcare environment, empha-
sizing the need for further international dialogue and refinement.
Acknowledgements The authors wish to thank Dr Jung-Im Lee and Dr Sumin Kim for their contribution in developing the guidelines. The first project (2022-ER0807-00) conducted consultation meetings of two panels of interdisciplinary expert participants from law, public health policy, ethics, AI, and patient groups four times from August 2022 to February 2023, and a public hearing in February 2023. We express our deep gratitude to all participants for their valuable opinions.
Author Contribution J. K. and H. K. were responsible for the conception, design, acquisition of data or
analysis, and interpretation of data. J. K. was responsible for manuscript writing, subsequent revisions
of the manuscript and funding (2023-ER0808-00). H. K. was responsible for reviewing the manuscript,
funding (2022-ER0807-00), and developing the guidelines. S. Y. K., E. A. K., J. A. S., and Y. L. partici-
pated in developing the guidelines and reviewing the manuscript. All authors have read and agreed to the
published version of the manuscript.
Funding This work was supported by the ‘Development of Ethics Guidelines and Education Program
for the Use of Artificial Intelligent in Healthcare Research’ and ‘Operation of Education Program and
Improvement of Ethics Guidelines for the Use of Artificial Intelligent in Healthcare Research’ from the
Korean National Institutes of Health (Grant numbers: 2022-ER0807-00 and 2023-ER0808-00).
Data Availability The framework employed in our research is included in the English version of
“Research Ethics Guidelines for Healthcare AI Researchers” (KNIH 2023). This document is currently in
the process of being published. Upon its publication, we will promptly provide the relevant link.
Declarations
Disclaimer During the research, the main responsibilities of the funding agency included managing the
project progress and making decisions regarding the publication of the guidelines and the agency had
no role in the study design, data collection and analysis, preparation of the manuscript, and decision to
publish it.
Ethics Approval As this study did not involve human participants, ethics approval was not needed.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License,
which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative
Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line
to the material. If material is not included in the article’s Creative Commons licence and your intended
use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permis-
sion directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/
licenses/by/4.0/.
References
Cha, Hyun-Jae., and Junhewk Kim. 2022. The ethical approach to health data donation and sharing:
From the process of human tissue donation. Bio, Ethics and Policy 6 (2): 101–137.
Gerke, Sara, Boris Babic, Theodoros Evgeniou, and I. Glenn Cohen. 2020. The need for a system view to regulate artificial intelligence/machine learning-based software as medical device. NPJ Digital Medicine 3: 53. https://doi.org/10.1038/s41746-020-0262-2.
Higgins, David, and Vince I. Madai. 2020. From bit to bedside: A practical framework for artificial
intelligence product development in healthcare. Advanced Intelligent Systems 2 (10): 2000052.
https://doi.org/10.1002/aisy.202000052.
Kim, Junhewk. 2017. Autonomous decision medical system and moral responsibility. Philosophy of
Medicine 24: 147–182.
Kim, Hannah, Jung Im Lee, Jinah Sim, Yuri Lee, So Yoon Kim, Eun-Ae Kim, Soo Min Kim, and Junhewk Kim. 2023. Ethical guidelines for artificial intelligence research in healthcare: Introducing South Korean perspectives. Korean Journal of Medicine and Law 31 (1): 85–110. https://doi.org/10.17215/kaml.2023.06.31.1.85.
Korean National Institute of Health. 2023. Research ethics guidelines for AI researchers in health-
care. Cheongju: Korea Disease Control and Prevention Agency.
Leonelli, Sabina. 2016. Data-centric biology: A philosophical study. Chicago: The University of Chi-
cago Press.
Li, Hong, Gustavo Glusman, Hao Hu, Shankaracharya, Juan Caballero, and Robert Hubley, et al.
2014. Relationship estimation from whole-genome sequence data. PLoS Genetics 10(1):
e1004144. http://dx.doi.org/10.1371/journal.pgen.1004144.
Metcalf, Jacob, and Kate Crawford. 2016. Where are human subjects in Big Data research? The emerging ethics divide. Big Data & Society 3 (1): 1–14. https://doi.org/10.1177/2053951716650211.
Ministry of Food and Drug Safety of the Republic of Korea. 2023. 2022 Medical device license
report. Cheongju: Ministry of Food and Drug Safety of the Republic of Korea.
Ministry of Health and Welfare of South Korea. 2022. Guidelines for utilization of healthcare data.
Sejong: Ministry of Health and Welfare of South Korea.
National Commission for the Protection of Human Subjects of Biomedical & Behavioral Research.
1978. The Belmont report: Ethical principles and guidelines for the protection of human subjects
of research. Bethesda, MD: National Commission for the Protection of Human Subjects of Bio-
medical & Behavioral Research.
Pianykh, Oleg S., Georg Langs, Marc Dewey, Dieter R. Enzmann, Christian J. Herold, and Stefan O.
Schoenberg, et al. 2020. Continuous learning AI in radiology: Implementation principles and
early application. Radiology 297 (1): 6–14. https://doi.org/10.1148/radiol.2020200038.
Rajpurkar, Pranav, Emma Chen, Oishi Banerjee, and Eric J. Topol. 2022. AI in health and medicine. Nature Medicine 28: 31–38. https://doi.org/10.1038/s41591-021-01614-0.
Sujan, Mark, Cassius Smith-Frazer, Christina Malamateniou, Joseph Connor, Allison Garner, Harriet Unsworth, et al. 2023. Validation framework for the use of AI in healthcare: Overview of the new British standard BS30440. BMJ Health & Care Informatics 30: e100749. https://doi.org/10.1136/bmjhci-2023-100749.
Throne, Robin. 2022. Adverse trends in data ethics: The AI Bill of Rights and Human Subjects Pro-
tections. SSRN, 30 November 2022. https://doi.org/10.2139/ssrn.4279922.
Wu, Eric, Kevin Wu, Roxana Daneshjou, David Ouyang, Daniel E. Ho, and James Zou. 2021. How
medical AI devices are evaluated: Limitations and recommendations from an analysis of FDA
approvals. Nature Medicine 27: 582–584. https://doi.org/10.1038/s41591-021-01312-x.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
Junhewk Kim¹ · So Yoon Kim² · Eun-Ae Kim³ · Jin-Ah Sim⁴ · Yuri Lee⁵ · Hannah Kim²

* Hannah Kim
  [email protected]; [email protected]

¹ Department of Dental Education, College of Dentistry, Yonsei University, Seoul, South Korea
² Asian Institute for Bioethics and Health Law, Department of Medical Humanities and Social Sciences, College of Medicine, Yonsei University, Seoul, South Korea
³ Center for Research Compliance, Ewha Womans University, Seoul, South Korea
⁴ Department of AI Convergence, Hallym University, Chuncheon, South Korea
⁵ Department of Health and Medical Information, Myongji College, Seoul, South Korea