
arXiv:2501.13135v1 [q-bio.OT] 22 Jan 2025

Applications and Challenges of AI and Microscopy in Life Science Research: A Review

Himanshu Buckchash, University of Applied Sciences, Krems, Austria ([email protected])
Gyanendra Kumar Verma, National Institute of Technology, Raipur, India ([email protected])
Dilip K. Prasad, UiT The Arctic University of Norway, Tromsø, Norway ([email protected])

Abstract
The complexity of human biology and its intricate systems holds immense potential for advancing
human health, disease treatment, and scientific discovery. However, traditional manual methods
for studying biological interactions are often constrained by the sheer volume and complexity of
biological data. Artificial Intelligence (AI), with its proven ability to analyze vast datasets, offers a
transformative approach to addressing these challenges. This paper explores the intersection of AI
and microscopy in life sciences, emphasizing their potential applications and associated challenges.
We provide a detailed review of how various biological systems can benefit from AI, highlighting
the types of data and labeling requirements unique to this domain. Particular attention is given
to microscopy data, exploring the specific AI techniques required to process and interpret this
information. By addressing challenges such as data heterogeneity and annotation scarcity, we outline
potential solutions and emerging trends in the field. Written primarily from an AI perspective, this
paper aims to serve as a valuable resource for researchers working at the intersection of AI, microscopy,
and biology. It summarizes current advancements, key insights, and open problems, fostering an
understanding that encourages interdisciplinary collaborations. By offering a comprehensive yet
concise synthesis of the field, this paper aspires to catalyze innovation, promote cross-disciplinary
engagement, and accelerate the adoption of AI in life science research.

Keywords microscopy · artificial intelligence · life science research · deep learning · synthetic data generation ·
applications

1 Introduction
Life science is a core research domain with profound implications for human well-being. It addresses myriad questions
across diverse directions, all ultimately geared towards advancing human health. However, resolving these challenges
is time-intensive, as countless experiments are needed to develop effective therapies or robust understanding, each
generating extensive data. Manually analyzing such data demands substantial expertise (already in short supply) and
is exceedingly time-consuming. A notable example is the decades of labor by researchers to empirically determine
around 100,000 protein structures [1]. Recent years, however, have witnessed substantial advances through AI [2, 1].
Utilizing AI to propel life science research is inevitable, given their mutual synergy. AI relies on extensive data to refine
its predictive models, while life science generates vast datasets that exceed feasible manual analysis. This synergy is
reflected in numerous breakthroughs [1, 3, 4, 2]. This paper aims to elucidate key challenges in microscopy and life
science, their interconnections, how AI and microscopy can be harnessed to address them, and prospective research
directions.
As illustrated in Fig. 1, the major challenges in life science research (LSR) can be stratified into three hierarchical tiers.
The first category encompasses sequence- and structure-focused tasks, such as protein or nucleic acid sequence prediction
and omics data analysis, all of which revolve around the molecular level. Algorithms typically employed in these
tasks include recurrent neural networks (RNNs), transformers, or graph neural networks, due to their ability to capture
complex relationships within molecular data. At a higher spatial scale, the second tier involves subcellular components,
including organelles, and aims to elucidate their morphology and dynamics to better understand connections between
Stratification of Problems in Life Science Research

Molecular Level:
• Structure prediction: 3D structure of biomolecules (RNA, proteins)
• Sequence prediction: RNA, DNA, proteins
• Multi-omics data analysis and integration: genomics, proteomics, metabolomics, and transcriptomics
• Drug discovery
• Molecular docking: target protein-drug binding
• Epigenomics analysis: DNA modification for regulation of gene expression

Organelle Level:
• Motion analysis: mitochondria, vesicles
• Morphology analysis
• Organelle segmentation: nucleus, vesicles, microtubules
• Dynamics analysis in protein processing and transport
• Object detection in live-cell imaging: lysosome study

Organ Level:
• Tumor detection: in tissues and organs
• Lesion segmentation: lesion boundary detection
• Organ segmentation: blood vessels, pancreas, liver, prostate, heart, bones
• Classification of medical images: breast cancer, pulmonary disease, diabetic retinopathy, heart disease classification
• 3D modeling of organs: digital twins for simulation and analysis
• Cardiac motion analysis
• Disease-specific tissue analysis: liver fibrosis or lung nodule detection
• Neuroimaging analysis: epilepsy, Alzheimer’s disease

Figure 1: A classification of challenges in life sciences, categorized into molecular, organelle, and organ levels. Each
tier highlights representative problems.

cellular function and the overall well-being of the organism. These investigations predominantly rely on microscopic
techniques, positioning the study of organelle behavior at the organelle tier, as highlighted in Fig. 1. Finally, the third
tier addresses problems at the tissue or organ scale, often employing advanced imaging modalities, such as microscopy
and high-frequency waves, for visualization. These inquiries typically target organ-level health issues, such as tumor
detection, and can therefore be classified under the organ-level tier. Several examples of these challenges are illustrated
in Fig. 1.
In recent years, generative AI has significantly advanced solutions at the molecular level by providing superior
outcomes. Models such as [5, 6] have demonstrated outstanding performance in sequence prediction tasks for proteins,
nucleic acids, and related biomolecules. Moreover, graph-based approaches and recurrent neural networks have been
employed for modeling multi-omics and other complex problems [1], leading to notable progress in disease prognosis,
diagnosis, and omics research. However, the integration of AI into microscopy-based diagnostics has yet to match
these developments, leaving a broad scope of challenges that demand innovative AI-driven solutions. Even existing AI
models can substantially accelerate microscopy-related studies in LSR. This paper aims to explore the intersection of AI
and microscopy, emphasizing how these domains can converge to address significant hurdles in LSR. It discusses the
key challenges, highlights open research questions, and proposes potential strategies. Written primarily from an AI
perspective, this work provides a succinct yet thorough review for researchers in both AI and microscopy, maintaining
a balance between technical depth and accessibility. The main objective is to offer a critical perspective for quick
engagement with the field. The subsequent sections delve deeper into AI-driven imaging, illustrating the synergy
between AI and microscopy, scrutinizing key challenges, examining potential remedies, identifying open research
directions, describing public resources, and finally presenting concluding remarks.

2 Synergistic relation between microscopy and AI

As depicted in Fig. 2a, a typical microscopy setup involves preparing the specimen on a slide and illuminating it
using an appropriate light or wave source. The objective lens then magnifies the resulting signal, and variations in
wave patterns or fluorescence provide insights into the sample’s structural and functional features. These interactions
enable the formation of detailed images that reveal critical information about the underlying biology. Such images
can be exceptionally large and complex, requiring substantial human effort to interpret. Indeed, a single dataset might
encompass millions or even billions of subcellular interactions, reflecting intricate biological dynamics [7]. Microscopy
outputs frequently take the form of 3D, 4D, or 5D tensor data, capturing spatial, temporal, and sometimes spectral
dimensions. The sheer scale and complexity of these multidimensional datasets highlight the growing need for AI,

Figure 2: Left: general assembly of a microscope, a schematic representation [10]. Right: microscopic images of a
culture of human lymphocyte cells: (i) fluorescence image of nuclear envelopes, (j) fluorescence image of the interior
of nuclei (DNA), and (k) phase-contrast image of whole cells [11].

fueled both by the success of unsupervised and self-supervised algorithms and the scarcity of labeled data available for
training.
AI can play a pivotal role in microscopy by modeling the morphological state and dynamics of subcellular structures,
thereby facilitating a deeper understanding of the underlying biological processes. Recent efforts [7] grapple with
the enormous scale of these interactions to elucidate how diseases manifest in living organisms. A common strategy
in such investigations involves applying a perturbation (a controlled alteration or stimulus) to the organism or
system of interest [8, 9]. Subsequently, researchers monitor how this perturbation propagates across one or multiple
tiers of biological organization, from molecules and organelles to entire organs (see Fig. 1). By capturing the resulting
changes at one or multiple levels, this approach provides a more comprehensive perspective on the complex interplay
that governs health and disease.
Types of image-based microscopy. Fig. 3 illustrates a comprehensive hierarchical classification of diverse imaging
methods in microscopy. This classification is structured around three primary functional objectives: (1) visualizing fine
structures through static imaging, (2) tracking dynamic processes such as motion and molecular interactions, and (3)
probing molecular composition to reveal chemical or elemental properties. Techniques are further grouped based on
their underlying principles, including spectroscopy-based, electron-based, force-based, and light-based approaches,
with subdivision into conventional, advanced, and specialized types. This organization captures the broad range of
tools available for structural visualization, molecular analysis, dynamic behavior tracking, and multimodal integration.
Among the commonly used techniques are confocal microscopy, phase-contrast microscopy, fluorescence microscopy,
quantitative phase microscopy, atomic force microscopy (AFM), and scanning electron microscopy (SEM). Each
method has unique advantages and limitations. For instance, confocal microscopy enables optical sectioning for 3D
imaging but requires longer acquisition times, whereas phase-contrast microscopy provides label-free visualization of
live cells but lacks molecular specificity. While the classification provides clarity, the boundaries between categories
are not absolute. Many methods exhibit properties that span multiple functions. For example, total internal reflection
fluorescence (TIRF) microscopy is fluorescence-based and could be grouped under optical microscopy, but it is classified
under dynamic imaging due to its primary application in studying near-surface events. Also, this classification is not
exhaustive. Numerous techniques and their variants, including emerging modalities and niche applications, remain
outside the scope of this review. However, the techniques presented here represent some of the most widely used and
impactful methods in life science research.
There are also other noteworthy binary classifications that offer valuable perspectives. Fluorescence vs label-free:
fluorescence microscopy employs chemical or genetic tags to label specific structures, whereas label-free methods
(e.g. phase-contrast) do not require exogenous markers. In Fig. 2b, for instance, fluorescence distinctly visualizes
nuclear envelopes and nuclei, while phase-contrast records all features in a single image without labels from the

Structural, Molecular, and Functional Imaging Microscopy (Systems)

• Optical
  - Conventional: Brightfield (BF), Phase-Contrast (PC), Fluorescence, Differential Interference Contrast (DIC)
  - Advanced: Super-resolution, including Stimulated Emission Depletion (STED) and Single-Molecule Localization (SML: PALM, STORM)
  - 3D / Volumetric: Confocal, Multiphoton, Light-sheet (LS), X-ray, Holographic
• Functional / Dynamic: Live-cell imaging, Total Internal Reflection Fluorescence (TIRF), Fluorescence Recovery After Photobleaching (FRAP), Photoacoustic, High-speed Microscopy
• Force-based: Scanning Probe Microscopy (SPM), including Atomic Force (AFM), Magnetic Force (MFM), and Scanning Tunneling (STM)
• Hybrid & Correlative: Correlated Light and Electron Microscopy (CLEM), Cryo-CLEM, Integrated Raman and AFM (Raman-SPM)
• Spectroscopy: Raman Microscopy, Infrared (IR), Mass Spectrometry Imaging (MSI), Fluorescence Lifetime Imaging Microscopy (FLIM), Energy Dispersive X-ray Spectroscopy (EDS/EDX)
• Electron Microscopy (EM): Transmission Electron Microscopy (TEM), Scanning Electron Microscopy (SEM), Cryo-EM, 3D Electron Microscopy

Figure 3: A comprehensive classification of diverse imaging and analytical methods, organized into structural, molecular,
functional, and hybrid approaches.

same sample. Transparent vs opaque samples: techniques based on light transmission (e.g. brightfield, phase-contrast,
fluorescence) require minimal preparation and support live specimen imaging but are ineffective for highly scattering
or opaque samples. In such cases, reflection or scattering methods (e.g. SEM, TEM, AFM, TIRF) are more suitable.
Thin vs thick specimens: techniques like TEM excel with thin samples, offering high resolution but limiting live or
intact specimen studies. Confocal and multiphoton microscopy handle thick samples, preserving 3D structures but with
reduced resolution in deeper layers. Live-cell vs static samples: live-cell imaging captures dynamic processes but is
limited by phototoxicity and lower resolution. Fixed-sample methods, such as electron microscopy, provide detailed
static images but lack the ability to track ongoing biological events. Overall, image-based microscopy is essential for
studying biological structures at various scales but faces many challenges due to its complexity, as further discussed in
the next section.

3 Challenges in image-based microscopy and AI


Image-based microscopy holds immense potential for transformative applications and advancements through the
integration of AI. However, the field faces several critical challenges, including the lack of sufficient labeled data,
variability in data quality, and the pervasive presence of noise and artifacts in microscopy datasets. In this section, we
delve into some of the key obstacles in the field, highlighting their implications.
Fluorescence vs label-free microscopy. In microscopy, both label-free and fluorescence techniques are vital for
imaging biological processes, yet each has inherent trade-offs and challenges. Label-free methods, such as phase-
contrast or quantitative phase microscopy, preserve the natural state and vitality of samples, making them ideal for
live-cell imaging. However, they often lack molecular specificity (manifesting, for instance, as blurred structural edges), making it difficult to isolate or
study particular biomolecules or pathways. Conversely, fluorescence microscopy offers high specificity and sensitivity,
enabling the visualization of targeted molecules. Yet, it can disrupt the sample’s natural state, reduce viability through
phototoxicity, and cause off-target binding, potentially compromising reproducibility [12]. These limitations highlight
the complementary nature of the two approaches and the need for innovative methods to bridge the gap between
specificity and sample preservation.
Data labeling problem. Labeling biological data for AI applications, such as segmentation and tracking, is a significant
challenge due to the complexity, size, and dynamic nature of biological systems. Biological entities often interact and
exhibit intricate dynamics, requiring extensive and precise annotations to define distinct features for algorithm training
[13]. This task is labor intensive and prone to errors, as biological images often involve large datasets with subtle
variations that are difficult to interpret consistently. The lack of sufficient labeled data hampers the ability to train and
verify robust AI systems, creating a bottleneck for advancements in biological image analysis [3, 9]. Addressing this
issue requires innovative solutions, such as semi-supervised learning, active learning, or leveraging synthetic data to
reduce the dependency on manual labeling.
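As a sketch of one such direction, uncertainty-based active learning can be illustrated in a few lines: a model scores unlabeled images, and only the most ambiguous ones are forwarded to a human annotator. The scoring function below is a stub, and all names, filenames, and the annotation budget are illustrative assumptions rather than any specific tool's API.

```python
import numpy as np

rng = np.random.default_rng(7)

def predict_proba(images):
    """Stub for a trained classifier's foreground probability per image."""
    return rng.uniform(0, 1, len(images))  # illustrative stand-in

def select_for_annotation(images, budget=3):
    """Uncertainty sampling: pick images whose prediction is closest to 0.5."""
    p = predict_proba(images)
    uncertainty = -np.abs(p - 0.5)   # higher value = more ambiguous prediction
    return np.argsort(uncertainty)[-budget:]

pool = [f"cell_{i:03d}.tif" for i in range(20)]   # hypothetical unlabeled pool
to_label = select_for_annotation(pool)
print(sorted(to_label))   # indices of the most uncertain images
```

Only the selected handful of images goes to the expert annotator, which is how such schemes reduce the manual-labeling burden described above.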

Figure 4: Left: the PSF affects image formation by blurring the real object (the image is the object convolved with the
PSF). Right: different types of empirical PSFs in the green emission range, for a ×100 1.4 NA objective, with an oil
immersion refractive index of n = 1.518 [14].

Point spread function (PSF). It is a fundamental characteristic of a microscopy system that quantifies how a point
source of light is imaged and represented by the system. It describes the response of the optical setup to a point source,
effectively encapsulating how the inherent physical and optical properties of the system distort or blur the representation
of an object. Mathematically, the PSF represents the intensity distribution of light in the image plane resulting from a
point source located at the focal plane [15]. Fig. 4a demonstrates the formation of the output image as the object is
convolved with the PSF, leading to image blurring. Fig. 4b shows different types of PSFs. To estimate the PSF, beads of
known shapes are imaged under the microscope (see details in Sec. 2). Due to the diffraction limit and the nature of the
PSF, multiple distinct objects can produce similar output images, making the inverse problem ill-posed. This ambiguity
presents significant challenges in training AI systems, even with labeled data, as the many-to-one mapping often leads
to suboptimal convergence of AI training algorithms.
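The forward model just described can be sketched numerically. In the minimal NumPy example below (function and parameter names are illustrative, not from the paper), two distinct one-dimensional "objects" are convolved with a Gaussian PSF; after blurring, their images become nearly indistinguishable, which is exactly the many-to-one ambiguity that makes the inverse problem ill-posed.

```python
import numpy as np

def gaussian_psf(size=21, sigma=3.0):
    """Discrete 1D Gaussian PSF, normalized to unit sum."""
    x = np.arange(size) - size // 2
    psf = np.exp(-x**2 / (2 * sigma**2))
    return psf / psf.sum()

def image_of(obj, psf):
    """Forward model: the recorded image is the object convolved with the PSF."""
    return np.convolve(obj, psf, mode="same")

psf = gaussian_psf()

# Two distinct objects: a single point source vs. two closely spaced half-intensity ones.
obj_a = np.zeros(64); obj_a[32] = 1.0
obj_b = np.zeros(64); obj_b[31] = 0.5; obj_b[33] = 0.5

img_a, img_b = image_of(obj_a, psf), image_of(obj_b, psf)

# The objects differ strongly, but their blurred images are almost identical.
print(np.abs(obj_a - obj_b).max())   # large difference between objects
print(np.abs(img_a - img_b).max())   # tiny difference between images
```

A learning algorithm asked to invert this mapping sees nearly identical inputs with very different targets, which is one reason training can converge poorly even with labels.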
Noise types and their impact. Various factors contribute to noise in microscopy images, including variation in PSFs,
the process of image formation, optical aberrations, disturbances caused by mismatched refractive indices within
the sample, and out-of-focus light originating outside the focal plane. Additionally, variations in sample
preparation methods further exacerbate these challenges [14]. Noise characteristics can range from well-defined types,
such as textured noise, which can be modeled and estimated with known approaches [16], to more complex noise that
resists straightforward characterization and modeling [9]. AI algorithms trained to handle specific types of noise often
fail to generalize effectively, resulting in suboptimal performance when applied to images of the same biological sample
but affected by different noise types.
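To make the noise-model dependence concrete, the sketch below simulates the mixed Poisson-Gaussian model commonly assumed for fluorescence detectors (Poisson shot noise from photon counting plus Gaussian read noise). The photon-count and read-noise values are assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_microscopy_noise(clean, photons=100.0, read_sigma=2.0, rng=rng):
    """Mixed Poisson-Gaussian noise model (illustrative parameters).

    `clean` holds normalized intensities in [0, 1]; `photons` scales them to
    expected photon counts (shot noise), and `read_sigma` is additive
    Gaussian camera read noise.
    """
    shot = rng.poisson(clean * photons)              # photon-counting noise
    read = rng.normal(0.0, read_sigma, clean.shape)  # sensor read noise
    return (shot + read) / photons                   # back to normalized scale

clean = np.linspace(0.0, 1.0, 256)
noisy_bright = add_microscopy_noise(clean, photons=1000.0)  # high-signal acquisition
noisy_dim = add_microscopy_noise(clean, photons=10.0)       # low-signal acquisition

# Shot noise is signal-dependent: the dim acquisition is far noisier.
print(np.std(noisy_bright - clean), np.std(noisy_dim - clean))
```

A denoiser trained only on the bright-acquisition statistics would face a very different error distribution on the dim acquisition, illustrating why models tuned to one noise type generalize poorly.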
Dynamic nature of biological events. Modeling interactions among biological structures presents significant challenges
due to their inherently dynamic behavior. These structures often exhibit continuous motion, frequently moving in and
out of the imaging field of view. This characteristic of biological samples complicates their tracking and makes their
interaction analysis highly challenging. Furthermore, the rapid and often non-linear nature of these movements adds
additional complexity to the extraction of meaningful insights [17].
Cellular vs subcellular level microscopy. Cellular-level microscopy typically operates within the diffraction limit,
providing clear boundaries and well-defined structures. In contrast, subcellular imaging often targets structures smaller
than the diffraction limit, leading to challenges in resolving fine structural details and demarcating boundaries. This limitation results
in reduced resolution, complicating data analysis and hindering AI model training. Similar issues arise in nanoscale
imaging of subcellular interactions, which is an active area of research. Advanced techniques, such as Single Molecule
Localization Microscopy and correlative microscopy, have been developed to overcome these challenges and enhance
resolution at the subcellular scale.
Toxicity and bleaching. Phototoxicity and cytotoxicity pose significant challenges in live-cell imaging, as prolonged
exposure to light or staining agents can compromise cellular viability and physiology, leading to artifacts. Additionally,
fluorophore bleaching during imaging results in signal loss, reducing the quality and reliability of long term observations
and quantitative analyses [11].
Multidisciplinary nature of research. Effective collaboration in this multidisciplinary field may be hindered by
communication gaps, including technical jargon and unclear task delegation, such as deciding who should label data for
preliminary analysis. Biologists typically have the domain expertise but limited time, whereas informaticians, while
more available, might lack the necessary expertise. This dilemma can impede project efficiency and compromise data
quality.

4 Possible solutions

Because these challenges are inherently multidisciplinary, strengthening collaboration among researchers from diverse
fields is crucial for addressing them effectively. Collaboration fosters innovative problem solving and efficient sharing
of resources. This section outlines potential solutions to the identified challenges.
Synthetic data. Synthetic data addresses the challenge of limited labeled data by providing an alternative source
for training and testing models. It can be generated through simulators [9, 16], which approximate real-world data
generation processes, or via generative AI models [18], offering flexibility to create abundant datasets with distributions
resembling real data.
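A minimal sketch of the simulator idea, under assumed imaging parameters: random point-like structures are rendered, blurred with a Gaussian PSF, and corrupted with noise, yielding image-mask pairs that could serve as synthetic labeled training data. All names and parameter values here are illustrative, not taken from the cited simulators.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthetic_sample(size=64, n_spots=5, sigma=2.0, noise=0.02, rng=rng):
    """Generate one (image, mask) pair of blurred fluorescent-like spots."""
    mask = np.zeros((size, size))
    ys = rng.integers(5, size - 5, n_spots)
    xs = rng.integers(5, size - 5, n_spots)
    mask[ys, xs] = 1.0                        # ground-truth label: spot centers

    # Blur with a separable Gaussian PSF (simple forward model).
    ax = np.arange(-10, 11)
    g = np.exp(-ax**2 / (2 * sigma**2)); g /= g.sum()
    img = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 0, mask)
    img = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, img)

    img += rng.normal(0.0, noise, img.shape)  # detector noise
    return img, mask

images, masks = zip(*[synthetic_sample() for _ in range(8)])  # tiny labeled dataset
print(len(images), images[0].shape, masks[0].sum())
```

Because the ground truth is known by construction, such pairs can train or test segmentation models without any manual annotation, at the cost of a domain gap between simulated and real images.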
Physics-based AI. Physics-based AI reduces the dependence of AI models on labeled datasets, either by incorporating
physics constraints into the training data generation process [16], or by building physics-based priors into the design
of the model itself [19, 4]. Physics-based models not only overcome data scarcity but also often yield improved
interpretability.
Cross-microscope or cross-noise distribution models. Cross-modal (or cross-microscope) approaches can effectively
address challenges such as data scarcity or the lack of labeled datasets. These methods leverage complementary
modalities either for information fusion [13] or by incorporating priors extracted from one modality into the analysis of
another [4, 20]. Such cross-modality strategies are particularly valuable in tackling subcellular imaging challenges,
where the integration of information across modalities enhances resolution and interpretability, and reduces toxicity.
Additionally, these challenges can be conceptualized as problems of distributional shifts, where domain adaptation
techniques offer robust solutions to align disparate data distributions and mitigate noise [21].
Self supervised learning (SSL). SSL is a machine learning paradigm that leverages large amounts of unlabeled data to
derive meaningful representations through automatically generated supervisory signals. In microscopy, both live-cell
and static imaging applications benefit from SSL by employing tailored objective functions, such as contrastive learning,
to effectively extract biologically relevant features from unlabeled datasets [22, 23, 24].
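As a sketch of the contrastive objective mentioned above, the snippet below computes an NT-Xent-style loss (of the kind used in SimCLR-like methods) on toy embeddings in plain NumPy. The embeddings and temperature are illustrative; a real pipeline would obtain them by passing two augmented views of each unlabeled microscopy crop through an encoder.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    (unlabeled) image; all other pairs in the batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / temperature
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive for row i is its other view: i + n (or i - n).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(0)
views = rng.normal(size=(4, 8))
aligned = nt_xent_loss(views, views + 0.01 * rng.normal(size=(4, 8)))
random_pairs = nt_xent_loss(views, rng.normal(size=(4, 8)))
print(aligned, random_pairs)  # aligned views should yield the lower loss
```

Minimizing this loss pulls embeddings of two views of the same image together and pushes other images apart, which is the supervisory signal SSL derives automatically from unlabeled data.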
Transfer learning. Because of large-scale training on diverse datasets, pretrained models often capture robust and
broadly applicable feature representations, surpassing those learned from scratch. Consequently, they can be efficiently
finetuned for new tasks when domains overlap and labeled data are scarce [25]. Another progressive approach within
transfer learning is active learning, wherein each generation of models benefits from knowledge transferred by the
previous one. This methodology has enabled the development of foundational models such as [2], which demonstrate
high generalizability and can accelerate training in low-labeled-data contexts. Additionally, transfer learning facilitates
multi-task learning across various imaging applications [26].
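A toy sketch of the frozen-backbone variant of transfer learning: a fixed "pretrained" feature extractor (stubbed here with a random projection purely for illustration) is reused, and only a small linear head is trained on scarce labeled data by logistic-regression gradient descent. In practice the extractor would be a network pretrained on large microscopy or natural-image corpora, and every name and value below is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a pretrained backbone: a frozen, fixed feature mapping.
W_frozen = 0.3 * rng.normal(size=(16, 32))
def features(x):
    return np.tanh(x @ W_frozen)   # weights are never updated (frozen)

# Tiny labeled set: two classes separated along the first input dimension.
x = rng.normal(size=(40, 16))
y = (x[:, 0] > 0).astype(float)
f = features(x)

# Train only the linear head (logistic regression by gradient descent).
w, b = np.zeros(32), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))   # sigmoid prediction
    grad = p - y
    w -= 0.5 * f.T @ grad / len(y)
    b -= 0.5 * grad.mean()

acc = float((((f @ w + b) > 0) == (y > 0.5)).mean())
print(acc)  # the head alone fits the small labeled set well
```

Because only the 33 head parameters are trained, far fewer labeled examples are needed than for training a full network from scratch, which is the practical appeal in label-scarce microscopy settings.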

5 Current and future areas of research

In this section, we examine current and prospective research areas in life science. Each area includes its primary
challenge, followed by secondary challenges, then its classification as either current or future, and key references.
Current areas are those undergoing active investigation, whereas future areas are those where minimal or no research
has yet been conducted.
Video analysis. Involves investigation of microscopic videos for activity recognition, tracking, event detection
(including in untrimmed videos), detection of dynamic events (e.g. entities moving in and out of the frame), and
unsupervised event clustering [27]. Future.
Amodal segmentation. Segmentation is an essential precursor to downstream tasks such as tracking or event analysis.
Contemporary methods, including semantic or instance segmentation, often fail to handle overlapping structures
adequately, resulting in incomplete analyses. These limitations can be addressed by developing amodal segmentation
algorithms, which infer hidden object regions beyond visible boundaries. Future.
Morphology analysis. Involves segmenting and quantifying shape and size of diverse biological structures. This
remains especially demanding when conducted fully automatically. Subfields may include segmentation, morphological
characterization, and metric learning [9, 20]. Current.

Anomaly detection. Involves grouping or modeling irregular behaviors to identify anomalous events that may serve as
early disease indicators. Subfields include clustering, anomalous event analysis, and morphological anomaly detection
[28]. Future.
Application of foundation models. Involves leveraging large-scale foundation models (e.g. LLMs, SAM variants) for
captioning, segmentation, and tracking in microscopy, using minimal labeled data. Subfields include refining transfer
learning strategies for diverse imaging applications, language-based annotation of events and morphology, and
specialized segmentation frameworks [2]. Current.
Sustainable and efficient AI for microscopy. Microscopy data requires massive storage and computational power for
processing, necessitating the development of energy-efficient algorithms. Subfields include data compression, efficient
model design, and lightweight models tailored for large-scale image analysis, leading to scalable and environmentally
responsible solutions [29]. Future.
Event grounding. Involves prompt-based (text or visual) retrieval of information within image or video data. Subfields
may include target event grounding, biological entity grounding, symptom grounding (to facilitate confirmatory or
risk-factor analyses), and specific interaction searches. Future.

6 Resources

Open science initiatives have significantly increased the availability of data repositories and tools for researchers in this
field. Prominent research institutions include the European Bioinformatics Institute (EBI), the National Cancer Institute
(NCI), the European Molecular Biology Laboratory (EMBL), the Janelia Research Campus (JRC), the RIKEN Center
for Biosystems Dynamics Research (BDR), and the MRC Laboratory of Molecular Biology (LMB). Below is a list of
useful data repositories and tools.

• BioImage Archive: https://www.ebi.ac.uk/bioimage-archive
• Dataverse (hosted by different countries): https://dataverse.org/installations
• Electron Microscopy Data Bank (EMDB): https://www.ebi.ac.uk/emdb
• The Cancer Imaging Archive (TCIA): https://www.cancerimagingarchive.net
• SciLifeLab: https://www.scilifelab.se/data/repository
• The Cell Image Library: https://www.cellimagelibrary.org/pages/datasets
• Image Data Resource (IDR): https://idr.openmicroscopy.org
• Zenodo: https://zenodo.org
• NeuroVault: https://neurovault.org
• BrainMaps: https://brainmaps.org
• DeepCell: https://www.deepcell.org
• PyTorch: https://pytorch.org
• ImageJ: https://imagej.net/ij
• Royal Microscopical Society (RMS): https://www.rms.org.uk
• MicroscopyDB: https://microscopydb.io
• ZEISS: https://zeiss-campus.magnet.fsu.edu/index.html

7 Conclusion

This work provides a comprehensive introduction to microscopy and its relationship to both life science and AI. A
detailed classification of different microscopy techniques, alongside key research challenges, is offered to establish a
foundational and experimental understanding of the imaging process. We have also explored various obstacles and
proposed potential solutions, while highlighting a number of future application ideas and relevant resources. It is our
hope that this work equips researchers with the necessary background, insights, and tools to accelerate their efforts.
