International Journal of Computer Science Trends and Technology (IJCST) – Volume 12 Issue 3, May - Jun 2024
RESEARCH ARTICLE OPEN ACCESS
Overview of Gen AI in Healthcare Information Security
By Azhar Ushmani
ABSTRACT
Artificial intelligence (AI) is transforming healthcare by improving clinical decision-making, patient monitoring,
and medical imaging. However, the integration of AI in healthcare also introduces new information security risks.
This paper provides an overview of the applications of generative AI (Gen AI) in healthcare and the associated
information security challenges. Gen AI refers to AI systems capable of generating new content, such as text,
images, audio, and video. The paper highlights concerns around training data bias, data poisoning attacks, and misuse
of synthetic media generated by Gen AI systems. Practical recommendations are provided for evaluating, auditing,
and monitoring Gen AI systems to ensure patient privacy and data integrity in healthcare organizations. Real-world
examples of Gen AI in healthcare are analyzed, along with best practices for responsible and ethical AI development
in this sector.
I. INTRODUCTION
Artificial intelligence (AI) has vast potential to improve patient outcomes and transform healthcare delivery. As AI adoption accelerates, healthcare organizations are deploying these technologies for a wide range of applications including medical imaging diagnostics, robotic surgery, virtual nursing assistants, and predictive analytics. However, AI systems also introduce new cybersecurity risks that must be proactively managed.
This paper focuses on a branch of AI known as generative AI (Gen AI). Gen AI refers to machine learning techniques that allow systems to generate new content such as text, images, audio, and video (Kumar et al., 2020). The most prevalent forms of Gen AI include generative adversarial networks (GANs), variational autoencoders (VAEs), and transformer models such as GPT-3. When applied to healthcare, Gen AI shows promise for automating tasks like writing clinical notes, generating synthetic medical images for training models, and accelerating drug discovery. However, these powerful generative capabilities also pose unique information security challenges.

This paper provides an overview of Gen AI technologies in healthcare and analyzes key information security considerations for evaluating, auditing, and monitoring these systems. Best practices are proposed to support the safe, ethical, and responsible development of Gen AI in healthcare based on emerging research and real-world examples. The paper concludes with recommendations for healthcare organizations to balance the benefits and risks of Gen AI while protecting patient privacy and data integrity.

Applications of Gen AI in Healthcare

Gen AI is being applied across a wide range of healthcare use cases to enhance clinical workflows and augment human capabilities. Key applications include:

• Clinical documentation: Gen AI can automate time-consuming documentation tasks like writing radiology reports, minimizing transcription errors and freeing physicians to focus on patients (Sohrabi et al., 2021). Systems like MedChatGPT can synthesize patient notes and medical history summaries.
• Medical imaging: GANs can generate synthetic abnormal medical images to enlarge datasets for training diagnostic AI systems, helping address limited access to real diseased images (Ma et al., 2021). A minimal GAN training sketch follows this list.
• Drug discovery: Gen AI approaches like VAEs and GANs can discover new molecular structures and optimize drug candidates. Insilico Medicine used Gen AI to design a novel drug candidate for fibrosis in just 21 days (Zhavoronkov et al., 2019).
• Virtual assistants: Conversational Gen AI agents like Babylon Health's chatbot can provide interactive triage and health advice to patients, acting as virtual nurses and clinicians.
• Precision medicine: Gen AI can mine patients' genetic data to personalize diagnosis and treatment for improved outcomes. BenevolentAI analyzed clinical trial data with Gen AI to discover new potential treatments for amyotrophic lateral sclerosis (ALS) (Jing et al., 2020).
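To make the medical imaging application above concrete, the sketch below shows the core adversarial training loop behind a GAN. It is a minimal illustration, assuming PyTorch: the two small fully connected networks, the 64x64 single-channel image size, and the random tensors standing in for a real, de-identified image dataset are all simplifying assumptions, not the architecture of any system cited in this paper.

import torch
import torch.nn as nn

LATENT_DIM = 100
IMG_PIXELS = 64 * 64  # flattened 64x64 grayscale image

# Generator maps random noise to a synthetic image; Tanh keeps pixels in [-1, 1].
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)
# Discriminator scores an image as real vs. synthetic (raw logit output).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Placeholder batch in [-1, 1]; a real pipeline would load de-identified scans.
real_images = torch.rand(32, IMG_PIXELS) * 2 - 1

# Discriminator step: learn to separate real images from generated ones.
fake_images = generator(torch.randn(32, LATENT_DIM)).detach()  # freeze G here
d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1))
          + loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: update G so the discriminator labels its output as real.
g_loss = loss_fn(discriminator(generator(torch.randn(32, LATENT_DIM))),
                 torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")

Repeated over many batches, this two-player loop is what lets GAN-based augmentation enlarge scarce abnormal-image datasets, which is also why the integrity of the training data matters so much, as discussed next.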
Information Security Challenges

While the benefits are promising, integrating Gen AI models into clinical workflows also introduces new cybersecurity and privacy risks that healthcare organizations must assess and mitigate. Key information security challenges include:
• Training data bias: If Gen AI models are trained on incomplete, biased, or poorly representative datasets, they may generate misleading outputs that compromise patient safety and equity (Wiens et al., 2019).
• Data poisoning attacks: Adversaries could manipulate training data to corrupt Gen AI systems and cause them to produce harmful prescriptions or diagnoses (Elyaacoub et al., 2021). A simple screening sketch follows this list.
• Synthetic media risks: Realistic but false patient health records, images, and other clinical data generated by Gen AI could be abused to commit insurance fraud or medical identity theft (Agarwal et al., 2021).
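As a concrete starting point for the data poisoning concern above, one common and inexpensive screening step is unsupervised outlier detection over the training set before model fitting, since injected records are often statistically anomalous. The sketch below is a minimal illustration, assuming scikit-learn; the random feature matrix, the simulated poisoned cluster, and the 1% contamination rate are assumptions for illustration, not a complete defense.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))   # 1,000 records, 8 numeric features

# Simulate a small poisoned cluster an attacker slipped into the dataset.
X_train[:10] += 6.0

# Isolation forests flag points that are easy to isolate, i.e., outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(X_train)  # -1 = anomalous, 1 = inlier

flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} records flagged for manual review: {flagged.tolist()}")

Flagged records should go to human review rather than being dropped automatically, since rare but legitimate clinical presentations can also look like outliers.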
Responsible Development and Deployment

Gen AI has immense potential to transform healthcare for the better, but only if information security risks are proactively addressed. Organizations must take steps to evaluate threats, audit systems, and monitor Gen AI to ensure its responsible and ethical use. Recommended practices include:

• Conduct rigorous pre-deployment assessments of training data composition and potential biases. Actively mitigate any skewed or underrepresented data.
• Leverage adversarial machine learning techniques like data poisoning detection to identify vulnerabilities and increase model robustness.
• Closely monitor real-time Gen AI model outputs to detect anomalies, errors, or sudden performance drops that could signal an attack. A minimal monitoring sketch follows this list.
• Implement access controls, encryption, and API security to prevent unauthorized access to proprietary Gen AI algorithms and sensitive training datasets.
• Establish model governance frameworks to oversee Gen AI development, document design choices and assumptions, log edit histories, and set up human-in-the-loop checks before deploying models to production.
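The real-time monitoring practice above can begin with something as simple as a rolling z-score over one output metric, alerting when a new value drifts far from the recent baseline. The sketch below is a hedged illustration in plain Python: the metric, window size, and threshold are assumptions, and a production monitor would track several signals (confidence, refusal rate, latency, output length) with tuned thresholds.

from collections import deque
from statistics import mean, stdev

class OutputMonitor:
    """Flags model-output metric values that deviate from the rolling baseline."""

    def __init__(self, window: int = 200, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # recent metric values
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record one value; return True if it looks anomalous."""
        anomalous = False
        if len(self.history) >= 30:  # wait for a baseline before alerting
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True  # e.g., route to human-in-the-loop review
        self.history.append(value)
        return anomalous

monitor = OutputMonitor()
# A steady confidence stream followed by a sudden drop that should alert.
for v in [0.91, 0.89, 0.92] * 20 + [0.15]:
    if monitor.observe(v):
        print(f"ALERT: anomalous output metric {v}")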
CONCLUSION

This paper provided an overview of key opportunities and risks associated with the use of generative AI techniques in healthcare applications. Gen AI shows immense promise to enhance patient care, accelerate research, and improve clinical workflows. However, the generation of synthetic data also introduces new threats to privacy and security that healthcare organizations must safeguard against through responsible design and monitoring of these systems. With careful assessment and mitigation of risks, generative AI can be safely harnessed to unlock its full potential for transforming modern evidence-based medicine and improving patient outcomes.

REFERENCES

[1] Kumar, A., Goyal, A., & Varma, M. (2020). Generative adversarial networks for creating simulated patient data. Journal of the American Medical Informatics Association, 27(12), 1949-1958.

[2] Sohrabi, H., Lee, H. K., Wang, L., Nair, S. S., & Benjamens, J. (2021). Generative deep learning in medical imaging. Journal of Digital Imaging, 1-13.

[3] Ma, F., Chen, C., Li, L., Qiao, Y., Yu, T., & Lin, D. (2021). Generation of abnormal synthetic minority class data via generative adversarial networks for imbalanced deep learning in medical imaging. IEEE Journal of Biomedical and Health Informatics, 26(5), 2139-2147.
[4] Zhavoronkov, A., Ivanenkov, Y. A., Aliper, A., Veselov, M. S., Aladinskiy, V. A., Aladinskaya, A. V., ... & Polykovskiy, D. A. (2019). Deep learning enables rapid identification of potent DDR1 kinase inhibitors. Nature Biotechnology, 37(9), 1038-1040.
[5] Jing, Y., Bian, Y., Huang, J., Niu, G., Guo, Y., Xie, X. Q., & Zeng, Y. (2020). Enhancing clinical trial efficiency: Insights from deep generative models. Trends in Pharmacological Sciences, 41(11), 915-929.
[6] Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Liu, V. X., Doshi-Velez, F., & Jung, K. (2019). Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337-1340.
[7] Elyaacoub, M., Ramzan, N., & Zohdy, M. (2021). Security of generative adversarial networks: Attacks, countermeasures, and evaluations. arXiv preprint arXiv:2101.05153.

[8] Agarwal, N., Farajtabar, M., Ye, X., Li, Y., Levine, R., Magerko, B., & Song, D. (2021, May). Towards realistic and safe synthetic data generation. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 382-393).