Çallı Et Al. - 2021
Survey Paper
Article history:
Received 16 March 2021
Revised 17 May 2021
Accepted 27 May 2021
Available online 5 June 2021

Keywords:
Deep learning
Chest radiograph
Chest X-ray analysis
Survey

Abstract
Recent advances in deep learning have led to promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.

© 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
1. Introduction

A cornerstone of radiological imaging for many decades, chest radiography (chest X-ray, CXR) remains the most commonly performed radiological exam in the world, with industrialized countries reporting an average of 238 erect-view chest X-ray images acquired per 1000 of population annually (United Nations, 2008). In 2006, an estimated 129 million CXR images were acquired in the United States alone (Mettler et al., 2009). The demand for, and availability of, CXR images may be attributed to their cost-effectiveness and low radiation dose, combined with a reasonable sensitivity to a wide variety of pathologies. The CXR is often the first imaging study acquired and remains central to screening, diagnosis, and management of a broad range of conditions (Raoof et al., 2012).

Chest X-rays may be divided into three principal types, according to the position and orientation of the patient relative to the X-ray source and detector panel: posteroanterior, anteroposterior, and lateral. The posteroanterior (PA) and anteroposterior (AP) views are both considered frontal, with the X-ray source positioned to the rear or front of the patient, respectively. The AP image is typically acquired from patients in the supine position, while the patient is usually standing erect for the PA image acquisition. The lateral image is usually acquired in combination with a PA image, and projects the X-ray from one side of the patient to the other, typically from right to left. Examples of these image types are depicted in Fig. 1.

The interpretation of the chest radiograph can be challenging due to the superimposition of anatomical structures along the projection direction. This effect can make it very difficult to detect abnormalities in particular locations (for example, a nodule posterior to the heart in a frontal CXR), to detect small or subtle abnormalities, or to accurately distinguish between different pathological patterns. For these reasons, radiologists typically show high inter-observer variability in their analysis of CXR images (Quekel et al., 2001; Balabanova et al., 2005; Young, 1994).

The volume of CXR images acquired, the complexity of their interpretation, and their value in clinical practice have long motivated researchers to build automated algorithms for CXR analysis. Indeed, this has been an area of research interest since the 1960s, when the first papers describing automated abnormality detection systems on CXR images were published (Lodwick et al., 1963; Becker et al., 1964; Meyers et al., 1964; Kruger et al., 1972; Toriwaki et al., 1973). The potential gains from automated CXR analysis include increased sensitivity for subtle findings, prioritization of time-sensitive cases, automation of tedious daily tasks, and provision of analysis in situations where radiologists are not available (e.g., the developing world).

In recent years, deep learning has become the technique of choice for image analysis tasks and has made a tremendous impact in the field of medical imaging (Litjens et al., 2017). Deep learning is notoriously data-hungry and the CXR research community

∗ Corresponding author.
E-mail address: [email protected] (E. Çallı).
1 Erdi Çallı and Ecem Sogancioglu contributed equally.
https://doi.org/10.1016/j.media.2021.102125
1361-8415/© 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125
Fig. 1. Left: posterior-anterior (PA) view frontal chest radiograph. Middle: lateral chest radiograph. Right: Anterior-posterior (AP) view chest radiograph. All three CXRs are
taken from the CheXpert dataset (Irvin et al., 2019), patient 184.
has benefited from the publication of numerous large labeled databases in recent years, predominantly enabled by the generation of labels through automatic parsing of radiology reports. This trend began in 2017 with the release of 112,000 images from the NIH clinical center (Wang et al., 2017b). In 2019 alone, more than 755,000 images were released in 3 labelled databases (CheXpert (Irvin et al., 2019), MIMIC-CXR (Johnson et al., 2019), PadChest (Bustos et al., 2020)). In this work, we demonstrate the impact of these data releases on the number of deep learning publications in the field.

There have been previous reviews on the field of deep learning in medical image analysis (Litjens et al., 2017; van Ginneken, 2017; Sahiner et al., 2018; Feng et al., 2019) and on deep learning or computer-aided diagnosis for CXR (Qin et al., 2018; Kallianos et al., 2019; Anis et al., 2020). However, recent reviews of deep learning in chest radiography are far from exhaustive in terms of the literature and methodology surveyed, the description of the public datasets available, or the discussion of future potential and trends in the field. The literature review in this work includes 296 papers, published between 2015 and March 2021, and categorized by application. A comprehensive list of public datasets is also provided, including numbers and types of images and labels, as well as some discussion and caveats regarding various aspects of these datasets. Trends and gaps in the field are described, important contributions discussed, and potential future research directions identified. We additionally discuss the commercial software available for chest radiograph analysis and consider how research efforts can best be translated to the clinic.

1.1. Literature search

The initial selection of literature to be included in this review was obtained as follows: a selection of papers was created using a PubMed search for papers with the following query.

A systematic search of the titles of conference proceedings from SPIE, MICCAI, ISBI, MIDL and EMBC was also performed, searching paper titles for the same search terms listed above. In the case of multiple publications of the same paper, only the latest publication was included. Relevant peer-reviewed articles suggested by co-authors and colleagues were added. The last search was performed on March 3rd, 2021.

This search strategy resulted in 767 listed papers. Of these, 61 were removed as they were duplicates of others in the list. A further 260 were excluded because their subject matter did not relate to deep learning for CXR, they were commentary or evaluation papers (did not describe a deep-learning architecture, but rather just evaluated one), or they were not written in English. Publications that were not peer-reviewed were also excluded (8). Finally, during the review process 142 papers were excluded as the scientific content was considered unsound, as detailed further in Section 6, leaving 296 papers in the final literature review.

The remainder of this work is structured as follows: Section 2 provides a brief introduction to the concept of deep learning and the main network architectures encountered in the current literature. In Section 3, the public datasets available are described in detail, to provide context for the literature study. The review of the collected literature is provided in Section 4, categorized according to the major themes identified. Commercial systems available for chest radiograph analysis are described in Section 5. The paper concludes in Section 6, with a comprehensive discussion of the current state of the art for deep learning in CXR as well as the potential for future directions in both research and commercial environments.

2. Overview of deep learning methods

This section provides an introduction to deep learning for image analysis, and particularly the network architectures most frequently encountered in the literature reviewed in this work. Formal definitions and more in-depth mathematical explanations of fully-connected and convolutional neural networks are provided in many other works, including a recent review of deep learning in medical image analysis (Litjens et al., 2017). In this work, we provide only a brief overview of these fundamental details and refer the interested reader to previous literature.

Deep learning is a branch of machine learning, which is a general term describing learning algorithms. The algorithm underpinning all deep learning methods is the neural network, in this case constructed with many hidden layers ('deep'). These networks may be constructed in many ways, with different types of layers included, and the overall construction of a network is referred to as its 'architecture'. Sections 2.3 to 2.6 describe commonly used architectures categorized by types of application in the CXR literature.

2.1. Convolutional neural networks

In the 1980s, networks using convolutional layers were first introduced for image analysis (Fukushima and Miyake, 1982), and the idea was formalized over the following years (LeCun and Bengio, 1998). These convolutional layers now form the basis for all deep learning image analysis tasks, almost without exception. Convolutional layers use neurons that connect only to a small 'receptive field' from the previous layer. These neurons are applied to different regions of the previous layer, operating as a sliding window over all regions, and effectively detecting the same local pattern in each location. In this way, spatial information is preserved and the learned weights are shared.

2.2. Transfer learning

Transfer learning investigates how to transfer knowledge extracted from one domain (the source domain) to another (target) domain. One of the most commonly used transfer learning approaches in CXR analysis is pre-training.

With the pre-training approach, the network architecture is first trained on a large dataset for a different task, and the trained weights are then used as an initialization for fine-tuning on the subsequent task (Yosinski et al., 2014). Depending on data availability in the target domain, all layers can be re-trained, or only the final (fully connected) layer. This approach allows neural networks to be trained for new tasks using relatively small datasets, since useful low-level features are learned from the source domain data. It has been shown that pre-training on the ImageNet dataset (for classification of natural images) (Baltruschat et al., 2019b) is beneficial for chest radiography analysis, and this type of transfer learning is prominently used in the research surveyed in this work. ImageNet pre-trained versions of many architectures are publicly available as part of popular deep learning frameworks. The pre-trained architectures may also be used as feature extractors, in combination with more traditional methods such as support vector machines or random forests. Domain adaptation is another subfield of transfer learning and is discussed thoroughly in Section 2.7.

2.3. Image-level prediction networks

In this work we use the term 'image-level prediction' to refer to tasks where prediction of a category label (classification) or continuous value (regression) is implemented by analysis of an entire CXR image. These methods are distinct from those which make predictions regarding small patches or segmented regions of an image. Classification and regression tasks are grouped together in this work since they typically use the same types of architecture, differing only in the final output layer. One of the early successful deep convolutional architectures for image-level prediction was AlexNet (Krizhevsky et al., 2012), which consists of 5 convolutional layers followed by 3 fully connected layers. AlexNet became extremely influential in the literature when it beat all other competitors in the ILSVRC (ImageNet) challenge (Deng et al., 2009) by a large margin in 2012. Since then, many deep convolutional neural network architectures have been proposed. The VGG family of models (Simonyan and Zisserman, 2014) uses 8–19 convolutional layers followed by 3 fully-connected layers. The Inception architecture was first introduced in 2015 (Szegedy et al., 2015), using multiple convolutional filter sizes within layered blocks known as Inception modules. In 2016, the ResNet family of models (He et al., 2016) began to gain popularity and improve upon previous benchmarks. These models define residual blocks consisting of multiple convolution operations, with skip connections which typically improve model performance. After the success of ResNet, skip connections were widely adopted in many architectures. DenseNet models (Huang et al., 2017), introduced in 2017, also use skip connections between blocks, but connect all layers to each other within blocks. A later version of the Inception architecture also added skip connections (Inception-ResNet) (Szegedy et al., 2017). The Xception network architecture (Chollet, 2017) builds upon the Inception architecture but separates the convolutions performed in the 2D image space from those performed across channels. This was demonstrated to improve performance compared to Inception V3.

The majority of works surveyed in this review use one or more of the model architectures discussed here, with varying numbers of hidden layers.

2.4. Segmentation networks

Segmentation is a task where pixels are assigned a category label, and it can also be considered as pixel classification. In natural image analysis, this task is often referred to as 'semantic segmentation' and frequently requires every pixel in the image to have a specified category. In the medical imaging domain, these labels typically correspond to anatomical features (e.g., heart, lungs, ribs), abnormalities (e.g., tumor, opacity) or foreign objects (e.g., tubes, catheters). It is typical in the medical imaging literature to segment just one object of interest, essentially assigning the category 'other' to all remaining pixels.

Early approaches to segmentation using deep learning used standard convolutional architectures designed for classification tasks (Chen et al., 2018b). These were employed to classify each pixel in a patch using a sliding window approach. The main drawback to this approach is that neighboring patches have huge overlap in pixels, resulting in inefficiency caused by repeating the same convolutions many times. It additionally treats each pixel separately, which makes the method computationally expensive and only applicable to small images or patches from an image.

To address these drawbacks, fully convolutional networks (FCNs) were proposed, replacing fully connected layers with convolutional layers (Shelhamer et al., 2017). This results in a network which can take larger images as input and produces a likelihood map output instead of an output for a single pixel. In 2015, a fully convolutional architecture known as the U-Net was proposed (Ronneberger et al., 2015), and this work has become the most cited paper in the history of medical image analysis. The U-Net consists of several convolutional layers in a contracting (downsampling) path, followed by further convolutional layers in an expanding (upsampling) path which restores the result to the input resolution. It additionally uses skip connections (feature forwarding) between the same levels on the contracting and expanding paths to recover fine details that were lost during the pooling operation. The majority of image segmentation works in this review employ a variant of the FCN or the U-Net.

2.5. Localization networks

This survey uses the term localization to refer to identification of a specific region within the image, typically indicated by a bounding box or by a point location. As with the segmentation task, localization in the medical domain can be used to identify anatomical regions, abnormalities, or foreign object structures. There are relatively few papers in the CXR literature reviewed here that deal specifically with a localization method; however, since it is an important task in medical imaging, and may be easier to achieve than a precise segmentation, we categorize these works together.

In 2014, the RCNN (Region Convolutional Neural Network) was introduced (Girshick et al., 2014), identifying regions of interest in the image and using a CNN architecture to extract features of these regions. A support vector machine (SVM) was used to classify the regions based on the extracted features. This method involves several stages and is relatively slow. It was later superseded by fast-RCNN (Girshick, 2015) and subsequently by faster-RCNN (Ren et al., 2017), which streamlined the processing pipeline, removing the need for initial region identification or SVM classification, and improving both speed and performance. In 2017, a further extension
was added to faster-RCNN to additionally enable a precise segmentation of the item identified within the bounding box. This method is referred to as Mask R-CNN (He et al., 2017). While this is technically a segmentation network, we mention it here as part of the RCNN family. Another architecture which has been popular in object localization is YOLO (You Only Look Once), first introduced in 2016 (Redmon et al., 2016) as a single-stage object detection method, and improved in subsequent versions in 2017 and 2018 (Redmon and Farhadi, 2017; 2018). The original YOLO architecture, using a single CNN and an image grid to specify outputs, was significantly faster than its contemporaries but not quite as accurate. The improved versions leveraged both classification and detection training data and introduced a number of training improvements to achieve state-of-the-art performance while remaining faster than its competitors. A final localization network that features in medical imaging literature is RetinaNet (Lin et al., 2017). Like YOLO, this is a single-stage detector, which introduces the concept of a focal loss function, forcing the network to concentrate on more difficult examples during training. Most of the localization works included in this review use one of the architectures described above.

2.6. Image generation networks

One of the tasks deep learning has been commonly used for is the generation of new, realistic images, based on information learned from a training set. There are numerous reasons to generate images in the medical domain, including generation of more easily interpretable images (by increasing resolution, or removal of projected structures impeding analysis), generation of new images for training (data augmentation), or conversion of images to emulate appearances from a different domain (domain adaptation). Various generative schemes have also been used to improve the performance of tasks such as abnormality detection and segmentation.

Image generation was first popularized with the introduction of the generative adversarial network (GAN) in 2014 (Goodfellow et al., 2014). The GAN consists of two network architectures: an image generator, and a discriminator which attempts to differentiate generated images from real ones. These two networks are trained in an adversarial scheme, where the generator attempts to fool the discriminator by learning to generate the most realistic images possible, while the discriminator reacts by progressively learning an improved differentiation between real and generated images.

The training process for GANs can be unstable with no guarantee of convergence, and numerous researchers have investigated stabilization and improvements of the basic method (Salimans et al., 2016; Heusel et al., 2017; Karras et al., 2018; Arjovsky et al., 2017). GANs have also been adapted to conditional data generation (Chen et al., 2016; Odena et al., 2017) by incorporating class labels, image-to-image translation (conditioned on an image in this case) (Isola et al., 2017), and unpaired image-to-image translation (CycleGAN; Zhu et al., 2017).

GANs have received a lot of attention in the medical imaging community, and several papers were published for medical image analysis applications in recent years (Yi et al., 2019b). Many of the image generation works identified in this review employed GAN-based architectures.

2.7. Domain adaptation networks

In this work we use the term 'domain adaptation', which is a subfield of transfer learning, to cover methods attempting to solve the issue that architectures trained on data from a single 'domain' typically perform poorly when tested on data from other domains. The term 'domain' is weakly defined; in medical imaging it may suggest data from a specific hardware (scanner), set of acquisition parameters, reconstruction method or hospital. It could, less frequently, also refer to characteristics of the population included, for example the gender, ethnicity, age, or even the strain of some pathology included in the dataset.

Domain adaptation methods consider a network trained for an image analysis task on data from one domain (the source domain), and how to perform this analysis accurately on a different domain (the target domain). These methods can be categorized as supervised, unsupervised, and semi-supervised, depending on the availability of labels from the target domain, and they have been investigated for a variety of CXR applications, from organ segmentation to multi-label abnormality classification. There is no specific architecture that is typical for domain adaptation; rather, architectures are combined in various ways to achieve the goal of learning to analyze images from unseen domains. The approaches to this problem can be broadly divided into three classes (following the categorization of Wang and Deng, 2018): discrepancy-based, reconstruction-based and adversarial-based.

Discrepancy-based approaches aim to induce alignment between the source and target domain in some feature space by fine-tuning the image analysis network and optimizing a measurement of discrepancy between the two domains. Reconstruction-based approaches, on the other hand, use an auxiliary encoder-decoder reconstruction network that aims to learn a domain-invariant representation through a shared encoder. Adversarial-based approaches are based on the concept of adversarial training from GANs, and use a discriminator network which tries to distinguish between samples from the source and target domains, to encourage the use of domain-invariant features. This category of approaches is the most commonly used in CXR analysis for domain adaptation, and consists of generative and non-generative models. Generative models transform source images to resemble target images by operating directly on pixel space, whereas non-generative models use the labels on the source domain and leverage adversarial training to obtain domain-invariant representations.

3. Datasets

Deep learning relies on large amounts of annotated data. The digitization of radiological workflows enables medical institutions to collate and categorize large sets of digital images. In addition, advances in natural language processing (NLP) algorithms mean that radiological reports can now be automatically analyzed to extract labels of interest for each image. These factors have enabled the construction and release of multiple large labelled CXR datasets in recent years. Other labelling strategies have included the attachment of the entire radiology report and/or labels generated in other ways, such as radiological review of the image, radiological review of the report, or laboratory test results. Some datasets include segmentations of specified structures or localization information.

In this section we detail each public dataset that is encountered in the literature included in this review, as well as any others available to the best of our knowledge. Details are provided in Table 1. Each dataset is given an acronym which is used in the literature review tables (Tables 2 to 7) to indicate that the dataset was used in the specified work.

1. ChestX-ray14 (C) is a dataset consisting of 112,120 CXRs from 30,805 patients (Wang et al., 2017b). The CXRs were collected at the (US) National Institutes of Health. The images are distributed as 8-bit grayscale images scaled to 1024 × 1024 pixels. The dataset was automatically labeled from radiology reports, indicating the existence of 14 types of abnormality.
Table 1
CXR datasets available for research. Values above 10,000 are rounded and shortened using K, indicating thousand (such as 10K for 10,000). Labeling Methods: RP=Report Parsing, RIR=Radiologist Interpretation of Reports, RI=Radiologist Interpretation of Chest X-Rays, RCI=Radiologist Cohort agreement on Chest X-Rays, LT=Laboratory Tests. Annotation Types: BB=Bounding Box, CL=Classification, CLoc=Classification with Location label, R=Report, SE=Segmentation. Gold Standard Data: This refers to the number of images labeled by methods other than Report Parsing.

Dataset | Patients / Studies / Images | Views | Annotations | Format | Labeling | Gold Standard Data
PadChest (P), Bustos et al. (2020) | P: 67K / S: 110K / I: 160K | PA: 96K, AP: 20K, LL: 51K | CL (193 labels): 110K; R: 110K | DICOM | RIR (CL), RP (R) | 27,593
2. CheXpert (X) is a dataset consisting of 224,316 CXRs from 65,240 patients (Irvin et al., 2019). The CXRs were collected at Stanford Hospital between October 2002 and July 2017. The images are distributed as 8-bit grayscale images at original resolution. The dataset was automatically labeled from radiology reports using a rule-based labeler, indicating the presence, absence, uncertainty, and no-mention of 12 abnormalities, no findings, and the existence of support devices.

3. MIMIC-CXR (M) is a dataset consisting of 371,920 CXRs from 64,588 patients (Johnson et al., 2019). The CXRs were collected from patients admitted to the emergency department of Beth Israel Deaconess Medical Center between 2011 and 2016. In version 1 (V1) the images are distributed as 8-bit grayscale images in full resolution. The dataset was automatically labeled from radiology reports using the same rule-based labeler system (described above) as CheXpert. A second version (V2) of
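The rule-based report labeling used by these datasets can be sketched in miniature. The following is an illustrative toy, not the actual CheXpert labeler: the finding names, keyword lists, and negation/uncertainty phrases are simplified assumptions, and real labelers handle far more linguistic variation:

```python
import re

# Illustrative keyword sets (assumptions, not the real labeler's vocabulary).
FINDING_KEYWORDS = {
    "Cardiomegaly": ["cardiomegaly", "enlarged heart"],
    "Effusion": ["effusion"],
    "Pneumothorax": ["pneumothorax"],
}
NEGATIONS = ["no ", "without ", "free of "]
UNCERTAINTY = ["may ", "possible ", "cannot exclude"]

def label_report(report):
    """Map a free-text report to present/absent/uncertain/no-mention labels."""
    labels = {}
    for finding, keywords in FINDING_KEYWORDS.items():
        # Scope negation to the sentence, so "no effusion" does not
        # accidentally negate a finding mentioned elsewhere in the report.
        for sentence in re.split(r"[.;]\s*", report.lower()):
            if any(k in sentence for k in keywords):
                if any(n in sentence for n in NEGATIONS):
                    labels[finding] = "absent"
                elif any(u in sentence for u in UNCERTAINTY):
                    labels[finding] = "uncertain"
                else:
                    labels[finding] = "present"
                break
        else:  # no sentence mentioned the finding at all
            labels[finding] = "no-mention"
    return labels
```

Sentence-level scoping of negation and uncertainty is the key design choice: the four-way output (present/absent/uncertain/no-mention) mirrors the label scheme described for CheXpert above, and the errors of such labelers are one of the caveats on public datasets discussed later in this survey.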
Table 2
Image-Level Prediction Studies (Section 4.1). Tasks: AA=Adversarial Attack, DA=Domain Adaptation, IC=Interval Change, IG=Image Generation, IR=Image Retrieval,
LC=Localization, OT=Other, PR=Preprocessing, RP=Report Parsing, SE=Segmentation, WS=Weak Supervision. Bold font in tasks implies that this additional task is cen-
tral to the work and the study also appears in another table in this paper. Labels: C=ChestX-Ray14, CM=Cardiomegaly, CV=COVID, E=Edema, GA=Gender/Age, L=Lung,
LC=Lung Cancer, LO=Lesion or Opacity, M=MIMIC-CXR, MN=Many, ND=Nodule, OR=Orientation, P=PadChest, PE=Effusion, PL=PLCO, PM=Pneumonia, PT=Pneumothorax,
Q=Image Quality, T=Triage/Abnormal, TB=Tuberculosis, TU=Catheter or Tube, X=CheXpert, Z=Other. Datasets: BL=Belarus, C=ChestX-ray14, CC=COVID-CXR, CG=COVIDGR,
J=JSRT+SCR, M=MIMIC-CXR, MO=Montgomery, O=Open-i, P=PadChest, PL=PLCO, PP=Ped-pneumonia, PR=Private, RP=RSNA-Pneumonia, S=Shenzen, SI=SIIM-ACR,
SM=Simulated CXR from CT, X=CheXpert.
Liu et al. (2019) Combines lung cropped CXR model and a CXR model to improve model performance SE,LC,PR C,L C,J
Sogancioglu et al. (2020) Comparison of image-level prediction and segmentation models for cardiomegaly SE CM C
Que et al. (2018) A network with DenseNet and U-Net for classification of cardiomegaly SE CM C
Li et al. (2019) U-Net based model for heart and lung segmentation for cardiothoracic ratio SE CM PR
Moradi et al. (2020) Combines lung cropped CXR model and a CXR model using the segmentation quality SE E,LO,PE,PT,Z M
E et al. (2019) Pneumonia detection is improved by use of lung segmentation SE PM J,MO,PP,PR
Hurt et al. (2020) U-Net based model to segment pneumonia SE PM RP
Wang et al. (2020c) Multi-scale DenseNet based model for pneumothorax segmentation SE PT PR
Liu et al. (2020b) DenseNet based U-Net for segmentation of the left and right humerus of the infant SE Z PR
Owais et al. (2020) Uses a database of the intermediate ResNet-50 features to find similar studies OT,IR TB MO,S
Ouyang et al. (2020) Uses activation and gradient based attention for localization and classification LC C,X C
Rajaraman et al. (2020b) Detects and localizes COVID-19 using various networks and ensembling LC CV C,CC,PP,RP,X
Samala et al. (2021) GoogleNet trained with CXR patches, correlates with COVID-19 severity score LC CV,PM C,PR
Park et al. (2020) Proposes a segmentation and classification model compares with radiologist cohort LC LO,ND,PE,PT PR
Nam et al. (2019) Trains a semisupervised network on a large CXR dataset with CT-confirmed nodule cases LC ND PR
Pesce et al. (2019) Defines a loss that minimizes the saliency map errors to improve model performance LC ND PR
Taghanaki et al. (2019b) A weakly supervised localization with variational model, leverages attention maps LC PM C
Li et al. (2019) Attention guided CNN for pneumonia detection with bounding boxes LC PM RP
Hwang et al. (2019b) A CNN for identification of abnormal CXRs and localization of abnormalities LC T PR
Lenis et al. (2020) Introduces a visualization method to identify regions of interest from classification LC TB PR
Hwang and Kim (2016) Weakly supervised framework jointly trained with localization and classification LC TB PR
Wang et al. (2018) Combines classification loss and autoencoder reconstruction loss IG,SE T J,MO,O,S
Seah et al. (2019) Wasserstein GAN to permute diseased radiographs to appear healthy IG,LC Z PR
Wolleb et al. (2020) Novel GAN model trained with healthy and abnormal CXR to predict difference map IG PE SM,X
Tang et al. (2019c) GANs with U-Net autoencoder and CNN discriminator and encoder for one-class learning IG T C
Mao et al. (2020) Autoencoder uses uncertainty for reconstruction error in one-class learning setting IG T PP,RP
Lenga et al. (2020) Continual learning methods to classify data from new domains DA C,M C,M
Tang et al. (2019a) CycleGAN model to adapt adult to pediatric CXR for pneumonia classification DA PM PP,RP
Gyawali et al. (2019) Trains a Variational Autoencoder, uses encoded features to train models WS X X
Gyawali et al. (2020) Predicts labels for unlabeled data using latent space similarity for semisupervision WS X X
McManigle et al. (2020) Y-Net to normalize image geometry for preprocessing SE,PR OR C,M,X
Blain et al. (2020) COVID-19 opacity localization and severity detection on CXRs SE,LC CV PR
Oh et al. (2020) ResNet-18 backbone for COVID-19 classification with limited data availability SE,LC CV,PM CC,J,MO
Tabik et al. (2020) Proposes a new dataset COVIDGR and a novel method using transformations with GANs SE,IG CV CG
Ferreira Junior et al. (2021) DenseNet for cardiomegaly detection given lung cropped CXR SE CM O,P
Tartaglione et al. (2020) Multiple models and combinations of CXR datasets used for COVID-19 detection SE CV C,CC,PR,RP
Kusakunniran et al. (2021) ResNet-101 trained for COVID-19, heatmaps are generated for lung-segmented regions SE CV,PM PR
Narayanan et al. (2020) Multiple architectures considered for two-stage classification of pediatric pneumonia SE PM PP
Rajaraman et al. (2019b) Compares visualization methods for pneumonia localization SE PM PP
Blumenfeld et al. (2018) Classifies patches and uses the positive area size to classify the image SE PT PR
Rajaraman et al. (2018b) Feature extraction from CNN models and ensembling methods SE TB MO,PR,S
Subramanian et al. (2019) Detection of central venous catheters using segmentation shape analysis SE TU C
Mansoor et al. (2016) Detection of air-trapping in pediatric CXRs using Stacked Autoencoders SE Z PR
Wang et al. (2020e) Pneumoconiosis detection using Inception-v3 and evaluation against two radiologists SE Z PR
Irvin et al. (2019) Introduces CheXpert dataset and model performance on radiologist labeled test set RP,LC X X
Oh et al. (2019) Curates data for interval change detection, proposes method comparing local features RP,IC Z PR
Daniels and Metaxas (2019) Parses reports to define a topic model and predicts those using CXRs RP CM,PE,Z O
Chauhan et al. (2020) Trains model using image and reports to improve image only performance RP E M
Karargyris et al. (2019b) Extracts ambiguity of labels from reports, proposes model that uses this information RP E,PT,Z M
Syeda-Mahmood et al. (2019) Creates and parses reports for ChestX-ray14 AP data to obtain 73 labels for training RP MN C
Laserson et al. (2018) Obtains findings by tagging common report sentences to train models RP MN PR
Annarumma et al. (2019) An ensemble of two CNNs to predict priority level for CXR queue management RP T PR
Baltruschat et al. (2019b) Evaluates bone suppression and lung segmentation, detection of 8 abnormalities PR,SE CM,PE,PT,Z O
Ferreira et al. (2020) Classification of pediatric pneumonia types using adapted VGG-16 architecture PR,SE PM PP
Vidya et al. (2019) Evaluates various image preprocessing algorithms on the performance of DenseNet-121 PR T C,MO,PR,S
Baltruschat et al. (2020) Detects 8 findings and analyzes how these can improve workflow prioritization OT CM,PE,PT,Z C,O
Hermoza et al. (2020) Proposes a model for weakly supervised classification and localization LC,WS C C
Saednia et al. (2020) Proposes a recurrent attention mechanism to improve model performance LC C C
Baltruschat et al. (2019a) Evaluates the use of various model configurations for classification LC C C
Cai et al. (2018) Attention mining and knowledge preservation for classification with localization LC C C
Wang et al. (2020b) Attention based model compared with well-known architectures LC C C
Ma et al. (2019) Minimizes the encoding differences of a CXR from multiple models LC C C,X
Cohen et al. (2020a) DenseNet used to predict COVID-19 severity as scored by radiologists LC CV CC
Wang et al. (2021b) Uses a ResNet-50 backed segmentation model to detect healthy, pneumonia, COVID-19 LC CV,PM CC,RP
Schwab et al. (2020) Uses multi instance learning for classification with localization LC E,PM,PT M,PR,RP
Yoo et al. (2020) Lung cancer and nodule prediction using ResNet-34 LC LC,ND PR
Kashyap et al. (2020) GradCam based attention mining loss, compared with labels extracted from reports LC LO M
Majkowska et al. (2019) Trains Xception using >750k CXRs, compares results with radiologist labels LC LO,ND,PT,Z C,PR
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125
Table 2 (continued)
Hosch et al. (2020) ResNet and VGG used to distinguish AP from PA images LC OR PR,RP
Rueckel et al. (2020) DenseNet-121 trained on public data evaluated using CT-based labels LC PE,PM C,PL
Ureta et al. (2020) Evaluates the performance of various models trained on pediatric CXRs on adult CXRs LC PM PP
Chen et al. (2020b) Evaluates ensembling methods and visualization on pediatric CXRs LC PM,PT,Z PR
Yi et al. (2020) Compares a ResNet-152 against radiologists and shows the statistical significance LC PT C
Crosby et al. (2020a) Compares GradCAM with radiologist segmentations for evaluation of VGG-19 LC PT C,PR
Crosby et al. (2020c) Apical regions and patches from them extracted to detect pneumothorax LC PT PR
Tang et al. (2020) Detection of abnormality, various networks compared with radiologist labeling LC T C,O,PP,RP
Rajaraman et al. (2020a) Evaluates pre-training on ImageNet and CheXpert on various models/settings LC T RP
Nakao et al. (2021) Proposes a GAN-based model trained only with healthy images for anomaly detection LC T RP
Pasa et al. (2019) Proposes a new model for faster classification of TB LC TB BL,MO,S
Hwang et al. (2019c) Evaluates the use of a ResNet based model on a large gold standard dataset LC TB PR
Singh et al. (2019) Evaluates multiple models for detection of feeding tube malpositioning LC TU PR
Chakravarty et al. (2020) Graph CNN solution with ensembling which models disease dependencies LC X X
Matsumoto et al. (2020) Curates a dataset of heart failure cases and evaluates VGG-16 on it LC Z C
Su et al. (2021) CNN for identifying the presence of subphrenic free air from CXR LC Z PR
Zou et al. (2020) Evaluates several models to predict hypertension and artery systolic pressure LC Z PR
Zucker et al. (2020) Uses ResNet-18 to measure the Brassfield Score, predicts Cystic Fibrosis based on it LC Z PR
Campo et al. (2018) Simulates CXRs from CT scans and predicts emphysema scores LC Z PR
Toba et al. (2020) Inception network to predict pulmonary to systemic flow ratio from pediatric CXR LC Z PR
Li et al. (2020a) Predicts COVID-19 severity by comparing CXRs to previous ones IC,LC CV PR
Luo et al. (2020) Addresses domain and label discrepancies in multi-dataset training DA C,TB,X C,PR,X
Xue et al. (2019) Method to increase robustness of CNN classifiers to adversarial samples AA,IG PM RP
Li and Zhu (2020) Uses the features extracted from the training dataset to detect adversarial CXRs AA C C
Anand et al. (2020) Self-supervision and adversarial training improves on transfer learning AA PM PP
Khatibi et al. (2021) Claims 0.99 AUC for predicting TB, uses complex feature engineering and ensembling
Schroeder et al. (2021) ResNet model trained with frontal and lateral images to predict COPD with PFT results PR
Zhang et al. (2021a) One-class identification of viral pneumonia cases compared with binary classification PR
Balachandar et al. (2020) A distributed learning method that overcomes problems of multi-institutional settings C C
Burwinkel et al. (2019) Geometric deep learning including metadata with graph structure. Application to CXR C C
Nugroho (2021) Proposes a new weighting scheme to improve abnormality classification C C
DSouza et al. (2019) ResNet-34 used with various training settings for multi-label classification C C
Sirazitdinov et al. (2019) Investigates effect of data augmentations on classification with Inception-Resnet-v2 C C
Mao et al. (2018) Proposes a variational/generative architecture, demonstrates performance on CXRs C C
Rajpurkar et al. (2018) Evaluates the performance of an ensemble against many radiologists C C
Kurmann et al. (2019) Novel method for multi-label classification, application to CXR C C
Paul et al. (2020) Defines a few-shot learning method by extracting features from autoencoders C C
Unnikrishnan et al. (2020) Mean-teacher-inspired probabilistic graphical model with a novel loss C C
Michael and Yoon (2020) Examines the effect of denoising on pathology classification using DenseNet-121 C C
Wang et al. (2021a) Proposes integrating three attention mechanisms that work at different levels C C
Paul et al. (2021b) Step-wise trained CNN and saliency-based autoencoder for few-shot learning C C,O
Paul et al. (2021a) Uses CT and CXR reports with CXR images during training to diagnose unseen diseases C C,PR
Bustos et al. (2020) Proposes a new dataset PadChest with multi-label labels and radiology reports C P
Li et al. (2021a) Lesion detection network used to improve image-level classification C PR
Ghesu et al. (2019) Method to produce confidence measure alongside probability, uses DenseNet-121 C,PL C,PL
Haghighi et al. (2020) Uses self-supervised learning for pretraining, compares with ImageNet pretraining C,PT C,SI
Zhou et al. (2020a) Proposes a new CXR pre-training method, compares with pre-training on ImageNet C,X C,RP,X
Chen et al. (2020a) Proposes a graph convolutional network framework which models disease dependencies C,X C,X
Zhou et al. (2019) Compares several models for the detection of cardiomegaly CM C
Bougias et al. (2020) Tests four off-the-shelf networks for prediction of cardiomegaly CM PR
Brestel et al. (2018) Inception v3 trained to detect 4 abnormalities and compared with expert observers CM,E,LO,Z PR
Cicero et al. (2017) GoogLeNet to classify normal and 5 abnormalities on a large proprietary dataset CM,E,PE,PT,Z PR
Bar et al. (2015a) Compares the performance of deep learning with traditional feature extraction methods CM,PE PR
Bar et al. (2015b) ImageNet pre-training and feature extraction methods for pathology detection CM,PE,Z PR
Griner et al. (2021) An ensemble of DenseNet-121 networks used for COVID-19 classification CV C,PR
Hu et al. (2021) Investigates the value of soft tissue CXR for training DenseNet-121 for COVID-19 CV C,PR,RP
Zhu et al. (2020) Labels and predicts COVID-19 severity stage using CNN CV CC
Fricks et al. (2021) Uses a model trained on COVID-19 cases to evaluate the effect of an imaging parameter CV PR
Wehbe et al. (2020) COVID-19 detection based on RT-PCR labels, evaluates an ensemble against radiologists CV PR
Castiglioni et al. (2021) Ensemble of ResNet models for COVID-19 detection CV PR
Zhang et al. (2021c) Compares the performance of a DenseNet-121 ensemble to radiologists CV,PM PR
Wang et al. (2019) Various models and use of semi-supervised labels for edema severity estimation E M
Karargyris et al. (2019a) Age prediction on PA or AP images using DenseNet-169 GA C
Xue et al. (2018b) Gender prediction using features from deep-learning models in traditional classifiers GA J,MO,O,PR,S
Sabottke et al. (2020) Age prediction on AP images using DenseNet-121 and ResNet-50 GA X
Lu et al. (2020a) Combines the CXR with age/sex/smoking history to predict the lung cancer risk LC PL
Thammarach et al. (2020) DenseNet-121 pre-trained with public data used to identify 6 classes LC,T,TB,Z PR
Kuo et al. (2021) Evaluates deep learning on pictures of CXRs captured with mobile phones M,X M,X
Wu et al. (2020) Ensemble of VGGNet and ResNet to detect various findings from AP CXRs MN C,M
Cohen et al. (2020b) Investigates the domain and label shift across publicly available CXR datasets MN C,M,O,P,RP,X
Hashir et al. (2020) Explores the use of the lateral view CXR for classification of 64 different labels MN,P P
Rajkomar et al. (2017) Classification of CXRs as Frontal or Lateral using GoogLeNet architecture OR PR
Crosby et al. (2019) Assesses the effect of imprinted labels on AP/PA classification OR PR
Crosby et al. (2020b) Distinguishes the CXR orientation, bone CXRs and soft tissue CXRs from dual energy OR,Z PR
Bertrand et al. (2019) Compares PA and Lateral images for pathology detection with DenseNet P P
Table 2 (continued)
Chen et al. (2019) Introduces a loss term that uses the label hierarchy to improve model performance PL PL
Shah et al. (2020) Trains VGG-16 on Ped-pneumonia dataset PM PP
Qu et al. (2020) Methods to mitigate imbalanced class sizes. Applied to CXR using ResNet-18 PM PP
Yue et al. (2020) Evaluation of MobileNet to detect pneumonia on pediatric CXRs PM PP
Elshennawy and Ibrahim (2020) Compares multiple architectures for pneumonia detection PM PP
Mittal et al. (2020) Evaluates various capsule network architectures for pediatric pneumonia detection PM PP
Longjiang et al. (2020) Uses ResNet-50 to classify paediatric pneumonia PM PR
Ganesan et al. (2019) Compares traditional and generative data augmentation techniques on CXRs PM RP
Ravishankar et al. (2019) Addresses catastrophic forgetting, application to pneumothorax detection using VGG-13 PT C
Taylor et al. (2018) Construction of large dataset, multiple architectures and hyperparameters optimized PT PR
Kitamura and Deible (2020) Model pre-trained with public data and fine-tuned for pneumothorax detection PT PR
Kashyap et al. (2019) DenseNet-121 used to detect CXRs with acquisition-based defects Q C
Takaki et al. (2020) GoogLeNet combined with a rule-based approach to determine image quality Q PR
Pan et al. (2019a) Detects abnormal CXRs using several models. Evaluates on independent private data T C
Wong et al. (2019) Defines a model on top of features extracted from Inception-ResNet-v2 for triaging T C
Wong et al. (2020) Collects features from pretrained models and adds a CNN on top for triaging T C,M
Jang et al. (2020) Studies the effect of various label noise levels on classification with DenseNet-121 T C,PR,X
Dunnmon et al. (2019) Various models for detection of abnormal CXRs, effect of different training set sizes T PR
Nam et al. (2020) Defines 10 abnormalities to build a triaging model and uses CT-based test labels T PR
Dyer et al. (2021) Ensemble of DenseNet and EfficientNet for identification of normal CXRs T PR
Ogawa et al. (2019) Examines the use of data augmentation in small data setting T PR
Ellis et al. (2020) Evaluation of extra supervision in the form of localized region of interest T PR
Rajaraman et al. (2019a) Evaluates various models and ensembling methods for the triage task T RP
Lakhani and Sundaram (2017) Evaluates deep learning approaches for tuberculosis detection TB BL,MO,PR,S
Hwang et al. (2016) Evaluates the use of transfer learning for tuberculosis detection TB MO,PR,S
Sivaramakrishnan et al. (2018) Extracts features using off-the-shelf models and trains a model using those TB MO,PR,S
Ayaz et al. (2021) Combines hand-crafted features and CNN for tuberculosis diagnosis TB MO,S
Ul Abideen et al. (2020) Evaluates a Bayesian-based CNN for detection of TB TB MO,S
Rajpurkar et al. (2020) Evaluates assisting clinicians with an AI based system to improve diagnosis of TB TB PR
Heo et al. (2019) Various architectures, inclusion of patient demographics in model considered TB PR
Kim et al. (2018) Addresses preservation of learned data, application to TB detection using ResNet-21 TB PR
Gozes and Greenspan (2019) Pre-training using CXR pathology and metadata labels, application to TB detection TB S
Rajaraman and Antani (2020) Compares various models using various pretraining and ensembling strategies TB S
Lakhani (2017) Evaluates models on detecting the position of feeding tube in abdominal and CXRs TU PR
Mitra et al. (2020) Comparison of seven architectures and ensembling for detection of nine pathologies X X
Pham et al. (2020) A method to incorporate label dependencies and uncertainty data during classification X X
Rajan et al. (2021) Proposes self-training and student-teacher model for sample efficiency X X
Calli et al. (2019) Analyses the effect of label noise in training and test datasets Z C
Deshpande et al. (2020) Labels 6 different foreign object types and detects using various architectures Z M
Lu et al. (2019) Evaluates the use of CXRs to predict long term mortality using Inception-v4 Z PL
Zhang et al. (2021b) Low-res segmentation is used to crop high-res lung areas and predict pneumoconiosis Z PR
Devnath et al. (2021) Pneumoconiosis prediction with DenseNet-121 and SVMs applied to extracted features Z PR
Liu et al. (2017) Detection of coronary artery calcification using various CNN architectures Z PR
Hirata et al. (2021) ResNet-50 for detection of the presence of elevated pulmonary arterial wedge pressure Z PR
Kusunose et al. (2020) A network is designed to identify subjects with elevated pulmonary artery pressure Z PR,RP
MIMIC-CXR was later released including the anonymized radiology reports and DICOM files.
4. PadChest (P) is a dataset consisting of 160,868 CXRs from 109,931 studies and 67,000 patients (Bustos et al., 2020). The CXRs are collected at San Juan Hospital (Spain) from 2009 to 2017. The images are stored as 16-bit grayscale images with full resolution. 27,593 of the reports were manually labeled by physicians. Using these labels, an RNN was trained and used to label the rest of the dataset from the reports. The reports were used to extract 174 findings, 19 diagnoses, and 104 anatomic locations. The labels conform to a hierarchical taxonomy based on the standard Unified Medical Language System (UMLS) (Bodenreider, 2004).
5. PLCO (PL) is a screening trial for prostate, lung, colorectal and ovarian (PLCO) cancer (Zhu et al., 2013). The lung arm of this study has 185,421 CXRs from 56,071 patients. The NIH distributes a standard set of 25,000 patients and 88,847 frontal CXRs. This dataset contains 22 disease labels with 4 abnormality levels and the locations of the abnormalities.
6. Open-i (O) is a dataset consisting of 7910 CXRs from 3955 studies and 3955 patients (Demner-Fushman et al., 2012). The CXRs are collected from the Indiana Network for Patient Care (McDonald et al., 2005). The images are distributed as anonymized DICOMs. The radiological findings obtained by radiologist interpretation are available in MeSH format2.
7. Ped-Pneumonia (PP) is a dataset consisting of 5856 pediatric CXRs (Kermany, 2018). The CXRs are collected from Guangzhou Women and Children’s Medical Center, Guangzhou, China. The images are distributed as 8-bit grayscale images scaled in various resolutions. The labels include bacterial and viral pneumonia as well as normal.
8. JSRT dataset (J) consists of 247 images with a resolution of 2048 × 2048, 0.175 mm pixel size and 12-bit depth (Shiraishi et al., 2000). It includes nodule locations (on 154 images) and diagnosis (malignant or benign). The reference standard for heart and lung segmentations of these images is provided by the SCR dataset (van Ginneken et al., 2006) and we group these datasets together in this work.
9. RSNA-Pneumonia (RP) is a dataset consisting of 30,000 CXRs with pneumonia annotations (RSNA, 2018). These images are acquired from ChestX-ray14 and are 8-bit grayscale with 1024 × 1024 resolution. Annotations are added by radiologists using bounding boxes around lung opacities and 3 classes indicating normal, lung opacity, not normal.
2 https://www.nlm.nih.gov/mesh/meshhome.html
Table 3
Segmentation Studies (Section 4.2). Tasks: DA=Domain Adaptation, IG=Image Generation, IL=Image-level Predictions, LC=Localization, PR=Preprocessing, WS=Weak Su-
pervision. Bold font in tasks implies that this additional task is central to the work and the study also appears in another table in this paper. Labels: C=ChestX-Ray14,
CL=Clavicle, CM=Cardiomegaly, CV=COVID, E=Edema, H=Heart, L=Lung, LO=Lesion or Opacity, PE=Effusion, PM=Pneumonia, PT=Pneumothorax, R=Rib, TU=Catheter or
Tube, Z=Other. Datasets: BL=Belarus, C=ChestX-ray14, J=JSRT+SCR, M=MIMIC-CXR, MO=Montgomery, O=Open-i, PP=Ped-pneumonia, PR=Private, RP=RSNA-Pneumonia,
S=Shenzhen, SI=SIIM-ACR, SM=Simulated CXR from CT.
Yu et al. (2020) A model based on U-Net and Faster R-CNN to detect PICC catheter and its tip LC,PR TU PR
Zhang et al. (2019b) Tailored Mask R-CNN for simultaneous detection and segmentation LC L PR
Wessel et al. (2019) Uses Mask R-CNN iteratively to segment and detect ribs. LC R PR
Liu et al. (2019) Combines lung cropped CXR model and a CXR model to improve model performance IL,LC,PR C,L C,J
Sogancioglu et al. (2020) Comparison of image-level prediction and segmentation models for cardiomegaly IL CM C
Que et al. (2018) A network with DenseNet and U-Net for classification of cardiomegaly IL CM C
Li et al. (2019) U-Net based model for heart and lung segmentation for cardiothoracic ratio IL CM PR
Moradi et al. (2020) Combines lung cropped CXR model and a CXR model using the segmentation quality IL E,LO,PE,PT,Z M
E et al. (2019) Pneumonia detection is improved by use of lung segmentation IL PM J,MO,PP,PR
Hurt et al. (2020) U-Net based model to segment pneumonia IL PM RP
Wang et al. (2020c) Multi-scale DenseNet based model for pneumothorax segmentation IL PT PR
Liu et al. (2020b) DenseNet based U-Net for segmentation of the left and right humerus of the infant IL Z PR
Tang et al. (2019b) Attention-based network and CXR synthesis process for data augmentation IG,IG L J,MO,PR
Eslami et al. (2020) Conditional GANs for multi-class segmentation of heart, clavicles and lungs IG CL,H,L J
Onodera et al. (2020) Processing method to produce scatter-corrected CXRs and segments masses with U-Net IG LO SM
Oliveira et al. (2020b) MUNIT based DA model for lung segmentation DA CL,H,L J
Dong et al. (2018) Adversarial training of lung and heart segmentation for DA DA CM J,PR
Zhang et al. (2018) CycleGAN guided by a segmentation module to convert CXR to CT projection images DA H,L,Z PR
Chen et al. (2018a) CycleGAN based DA model with semantic aware loss for lung segmentation DA L MO
Oliveira et al. (2020a) Conditional GANs based DA for bone segmentation DA R SM
Shah et al. (2018) FCN based novel model incorporating weak landmarks and bounding boxes annotations WS CL,H,L J
Bortsova et al. (2019) U-Net segmentation model integrating unlabeled data through consistency loss WS CL,H,L J
Ouyang et al. (2019) Attention masks derived from classification model to guide the segmentation model IL PT PR
Frid-Adar et al. (2019) U-Net based network for classification and segmentation with simulated data IL Z C,J
Sullivan et al. (2020) U-Net based model for segmentation and a classifier for existence of lines IL Z PR
Cardenas et al. (2021) U-Net for bone suppression given lung-segmented CXR image with patches PR
Zhang et al. (2020) Proposes teacher-student based learning with noisy segmentations CL,H,L J
Kholiavchenko et al. (2020) Various FCN based models explored for simultaneous pixel and contour segmentation CL,H,L J
Novikov et al. (2018) Investigates various FCN type architecture including U-Net for organ segmentation CL,H,L J
Bonheur et al. (2019) Capsule networks adapted for multi-class organ segmentation CL,H,L J
Arsalan et al. (2020) U-Net based architecture with residual connections for organ segmentation CL,H,L J,MO,S
Wang et al. (2020d) U-Net based architecture based on dense connections CL,R PR
Mortani Barbosa et al. (2021) CNN trained with CT projection images for quantification of airspace disease CV PR
Larrazabal et al. (2020) Denoising autoencoder as post-processing to improve segmentations H,L J
Holste et al. (2020) Evaluates U-Net performance with various loss functions, and data augmentation H,L PR
Mansoor et al. (2020) Stacked denoising autoencoder model for space and shape parameter estimation L BL,J,PR
Amiri et al. (2020) Investigates the effect of fine-tuning different layers for U-Net based model L J
Lu et al. (2020b) Proposes a human-in-the-loop one shot anatomy segmentor L J
Li et al. (2021b) U-Net with conditional random field post processing for lung segmentation L J
Portela et al. (2020) Investigates U-Net with different optimizer and dropout L J
Yahyatabar et al. (2020) U-Net with dense connections for reducing network parameters for lung segmentation L J,MO
Kim and Lee (2021) U-Net with self attention for lung segmentation L J,MO,S
Arbabshirani et al. (2017) Multi-scale and patch-based CNN to segment lungs L J,PR
Rahman et al. (2021) U-Net based model for lung segmentation trained with CXR patches L MO
Souza et al. (2019) Two stage patch based CNN for refined lung field segmentation L MO
Milletari et al. (2018) Encoder-decoder architecture with ConvLSTM and ResNet for segmentation L MO
Zhang et al. (2019c) Encoder-decoder based CNN with novel edge guidance module for lung segmentation L MO
Mathai et al. (2019) Proposes a convolutional LSTM model for ultrasound, uses CXR as a secondary modality L MO
Kitahara et al. (2019) U-Net based segmentation model for dynamic CXRs L PR
Furutani et al. (2019) U-Net for whole lung region segmentation including where heart overlaps L PR
Xue et al. (2020) Cascaded U-Net with sample selection under imperfect segmentations L S
Wang et al. (2020a) ResNet-50 based architecture with segmentation and classification branches PT SI
Tolkachev et al. (2020) Investigates U-Net based models with various backbone encoders for pneumothorax PT SI
Groza and Kuzin (2020) Ensemble of three LinkNet based networks and with multi-step postprocessing PT SI
Xue et al. (2018d) Cascaded network with Faster R-CNN and U-Net for aortic knuckle Z J
Yi et al. (2019a) Multi-scale U-Net based model with recurrent module for foreign objects Z O
Lee et al. (2018) Two FCN to segment peripherally inserted central catheter line and its tip Z PR
Pan et al. (2019b) Two Mask R-CNN to segment the spine and vertebral bodies and calculate the Cobb angle Z PR
10. Shenzhen (S) is a dataset consisting of 662 CXRs (Jaeger et al., 2014). The CXRs are collected at Shenzhen No.3 Hospital in Shenzhen, Guangdong province, China in September 2012. The images, including some pediatric images, are distributed as 8-bit grayscale with full resolution and are annotated for signs of tuberculosis.
11. Montgomery (MO) is a dataset consisting of 138 CXRs (Jaeger et al., 2014). The CXRs are collected by the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA. The images are distributed as anonymized DICOMs, annotated for signs of tuberculosis and additionally include lung segmentation masks.
12. BIMCV (B) is a COVID-19 dataset released by the Valencian Region Medical ImageBank (BIMCV) in 2020 (Vayá et al., 2020). It includes CXR images as well as CT scans and laboratory test results. The dataset includes 3293 CXRs from 1305 COVID-19 positive subjects. CXR images are 16-bit PNG format with original resolution.
Fig. 3. Number of publications reviewed for each task. 296 studies are included,
each study may perform at most two tasks. Tasks: IL=Image-level Predictions,
SE=Segmentation, LC=Localization, IG=Image Generation, DA=Domain Adaptation,
OT=Other.
Table 4
Localization Studies (Section 4.3). Tasks: IC=Interval Change, IL=Image-level Predictions, PR=Preprocessing, RP=Report Parsing, SE=Segmentation, WS=Weak Supervi-
sion. Bold font in tasks implies that this additional task is central to the work and the study also appears in another table in this paper. Labels: C=ChestX-Ray14,
CM=Cardiomegaly, CV=COVID, L=Lung, LC=Lung Cancer, LO=Lesion or Opacity, ND=Nodule, PE=Effusion, PM=Pneumonia, PT=Pneumothorax, R=Rib, T=Triage/Abnormal,
TB=Tuberculosis, TU=Catheter or Tube, X=CheXpert, Z=Other. Datasets: C=ChestX-ray14, CC=COVID-CXR, J=JSRT+SCR, M=MIMIC-CXR, O=Open-i, PP=Ped-pneumonia,
PR=Private, RP=RSNA-Pneumonia, S=Shenzhen, X=CheXpert.
Yu et al. (2020) A model based on U-Net and Faster R-CNN to detect PICC catheter and its tip SE,PR TU PR
Zhang et al. (2019b) Tailored Mask R-CNN for simultaneous detection and segmentation SE L PR
Wessel et al. (2019) Uses Mask R-CNN iteratively to segment and detect ribs. SE R PR
Ouyang et al. (2020) Uses activation and gradient based attention for localization and classification IL C,X C
Rajaraman et al. (2020b) Detects and localizes COVID-19 using various networks and ensembling IL CV C,CC,PP,RP,X
Samala et al. (2021) GoogLeNet trained with CXR patches, correlates with COVID-19 severity score IL CV,PM C,PR
Park et al. (2020) Proposes a segmentation and classification model, compares with a radiologist cohort IL LO,ND,PE,PT PR
Nam et al. (2019) Trains a semisupervised network on a large CXR dataset with CT-confirmed nodule cases IL ND PR
Pesce et al. (2019) Defines a loss that minimizes the saliency map errors to improve model performance IL ND PR
Taghanaki et al. (2019b) A weakly supervised localization with variational model, leverages attention maps IL PM C
Li et al. (2019) Attention guided CNN for pneumonia detection with bounding boxes IL PM RP
Hwang et al. (2019b) A CNN for identification of abnormal CXRs and localization of abnormalities IL T PR
Lenis et al. (2020) Introduces a visualization method to identify regions of interest from classification IL TB PR
Hwang and Kim (2016) Weakly supervised framework jointly trained with localization and classification IL TB PR
Chen et al. (2020c) Extracts nodule candidates using traditional methods and trains GoogLeNet SE,PR ND J
Schultheiss et al. (2020) RetinaNet for detecting nodules incorporating lung segmentation SE ND J,PR
Tam et al. (2020) Combines reports and CXRs for weakly supervised localization and classification RP,WS PM,PT C,M
Moradi et al. (2018) Proposes a model using LSTM and CNN, combining reports and images as inputs IL,RP CM,ND C,O
Khakzar et al. (2019) Adversarially trained weakly supervised localization framework for interpretability IL C C
Kim et al. (2020) Evaluates the effect of image size for nodule detection with Mask R-CNN and RetinaNet IL ND PR
Cho et al. (2020) Evaluates the reproducibility of YOLO for disease localization in follow-up exams IC LO,ND,PE,PT,Z PR
Kim et al. (2019) Evaluates the reproducibility of various detection architectures in follow-up exams IC ND PR
Cha et al. (2019) ResNet model using CT and surgery based annotations for lung cancer prediction LC PR
Takemiya et al. (2019) R-CNN for localization of lung nodules ND J
Wang et al. (2017a) Fuses AlexNet and hand-crafted features to improve random forest performance ND J
Li et al. (2020c) Patch-based nodule detection, combines features from different resolutions ND J,PR
Park et al. (2019) Evaluates the detection of pneumothorax before, 3h and 1d after biopsy PT PR
Mader et al. (2018) Proposes a U-Net based model for localizing and labeling individual ribs R O
Xue et al. (2018c) AlexNet for localizing tuberculosis with patch-based approach TB S
von Berg et al. (2020) Localizes anatomical features for image quality check Z PR
nal dataset with CXRs from 1,319 patients which were obtained after percutaneous transthoracic needle biopsy (PTNB) for pulmonary lesions; it achieved an AUC of 0.898 and 0.905 on 3-h and 1-day follow-up chest radiographs, respectively. Similarly, other studies (Kim et al., 2020; Schultheiss et al., 2020; Takemiya et al., 2019; Kim et al., 2019) harnessed architectures like RetinaNet, Mask R-CNN and RCNN for localization of nodules and masses. Kim et al. (2020) trained RetinaNet and Mask R-CNN for detection of nodules and masses and investigated the optimal input size. The authors showed that, using a square image with 896 pixels as the edge length, RetinaNet and Mask R-CNN achieved FROC scores of 0.906 and 0.869, respectively.
Other approaches perform localization using only image-level labels during training. This research area is referred to as weakly supervised learning, and has been investigated by numerous works (Hwang and Kim, 2016; Hwang et al., 2019b; Nam et al., 2019; Pesce et al., 2019; Taghanaki et al., 2019b) for localization of a variety of abnormalities in CXR. Most of the works (Hwang et al., 2019b; Pesce et al., 2019; Nam et al., 2019; Hwang and Kim, 2016) leveraged weak image-level labels by adapting a CNN architecture to create two branches for localization (heatmap predictions) and classification. A hybrid loss function was used, combining localization and classification losses, which enabled training of the networks using images without localization annotations.
A number of papers adapted classification architectures (e.g., 4.4. Image generation
ResNet, DenseNet) to directly regress landmark locations for CXR
localization tasks (Hwang et al., 2019b; Cha et al., 2019). One com- There are 35 studies identified in this work whose main fo-
mon way of tackling this is to adapt the networks to produce cus is Image Generation, as detailed in Table 5. Image generation
heatmap predictions and draw boxes around the areas that created techniques have been harnessed for a wide variety of purposes in-
the highest signals. For example, Hwang et al. (2019b) tailored a cluding data augmentation (Salehinejad et al., 2019), visualization
DenseNet-based classifier to produce heatmap predictions for each (Bigolin Lanfredi et al., 2019; Seah et al., 2019), abnormality de-
of four types of CXR abnormalities. The network was trained with tection through reconstruction (Tang et al., 2019c; Wolleb et al.,
pixel-wise cross entropy between the predictions and annotations. 2020), domain adaptation (Zhang et al., 2018) or image enhance-
Similarly, Cha et al. (2019) adapted ResNet-50 and ResNet-101 ar- ment techniques (Lee et al., 2019).
chitectures for localization of nodules and masses on CXR. Other The generative adversarial network (GAN) (Goodfellow et al.,
studies (Xue et al., 2018c; Li et al., 2020c) tackled this problem 2014; Yi et al., 2019b) has became the method of choice for im-
using patch-based approaches, commonly referred as multiple in- age generation in CXR and over 50% of the works reviewed here
stance learning, creating patches from chest X-rays and evaluating used GAN-based models.
these for the presence of abnormalities. A number of works focused on CXR generation to aug-
One challenge in building robust deep learning localization sys- ment training datasets (Madani et al., 2018b; Zhang et al.,
tems is to collect large annotated datasets. Collecting such an- 2019a; Salehinejad et al., 2019) by using unconditional GANs
notations is time-consuming and costly which has motivated re- which synthesize images from random noise. For example,
searchers to build systems incorporating weaker labels during Salehinejad et al. (2019) trained a DCGAN model, similar to
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125
Table 5
Image Generation Studies (Section 4.4). Tasks: DA=Domain Adaptation, IC=Interval Change, IG=Image Generation, IL=Image-level Predictions, LC=Localization,
PR=Preprocessing, RE=Registration, SE=Segmentation, SR=Super Resolution. Bold font in tasks implies that this additional task is central to the work and the study also
appears in another table in this paper. Labels: BS=Bone Suppression, C=ChestX-Ray14, CL=Clavicle, CM=Cardiomegaly, CV=COVID, E=Edema, H=Heart, L=Lung, LO=Lesion
or Opacity, PE=Effusion, PT=Pneumothorax, T=Triage/Abnormal, TB=Tuberculosis, Z=Other. Datasets: C=ChestX-ray14, CC=COVID-CXR, J=JSRT+SCR, MO=Montgomery,
O=Open-i, PL=PLCO, PP=Ped-pneumonia, PR=Private, RP=RSNA-Pneumonia, S=Shenzen, SM=Simulated CXR from CT, X=CheXpert.
Tang et al. (2019b) Attention-based network and CXR synthesis process for data augmentation SE,IG L J,MO,PR
Eslami et al. (2020) Conditional GANs for multi-class segmentation of heart, clavicles and lungs SE CL,H,L J
Onodera et al. (2020) Processing method to produce scatter-corrected CXRs and segments masses with U-Net SE LO SM
Wang et al. (2018) Combines classification loss and autoencoder reconstruction loss IL,SE T J,MO,O,S
Seah et al. (2019) Wasserstein GAN to permute diseased radiographs to appear healthy IL,LC Z PR
Wolleb et al. (2020) Novel GAN model trained with healthy and abnormal CXR to predict difference map IL PE SM,X
Tang et al. (2019c) GANs with U-Net autoencoder and CNN discriminator and encoder for one-class learning IL T C
Mao et al. (2020) Autoencoder uses uncertainty for reconstruction error in one-class learning setting IL T PP,RP
Mahapatra and Ge (2019) Conditional GAN based DA for image registration using segmentation guidance DA,RE,SE L C
Madani et al. (2018a) Adversarial based method adapting new domains for abnormality classification DA,IL CM PL
Umehara et al. (2017) Proposes a patch-based CNN super resolution method SR Z J
Uzunova et al. (2019) Generates high resolution CXRs using multi-scale, patch based GANs SR Z O
Zhang et al. (2019a) Novel GAN model with sketch guidance module for high resolution CXR generation SR Z PP
Lin et al. (2020) AutoEncoder for bone suppression and segmentation with statistical similarity losses SE,PR BS J
Dong et al. (2019) Uses neural architecture search to find a discriminator network for GANs SE H,L J,PR
Taghanaki et al. (2019a) Proposes an iterative gradient based input preprocessing for improved performance SE L S
Fang et al. (2020) Learns transformations to register two CXRs, uses the difference for interval change RE,IC Z PR
Yang et al. (2017) Generates bone and soft tissue (dual energy) images from CXRs PR BS PR
Zarshenas et al. (2019) Proposes a CNN with multi-resolution decomposition for bone suppression images PR BS PR
Gozes and Greenspan (2020) U-Net for bone generation with CT projection images, used for CXR enhancement PR BS SM
Lee et al. (2019) U-Net based network to generate dual energy CXR PR Z PR
Liu et al. (2020a) GAN integrates edges of ribs and clavicles to guide DES-like images generation PR Z PR
Xing et al. (2019) Generates diseased CXRs, evaluates their realness with radiologists and trains models LC C C
Li et al. (2019) Novel CycleGAN model to decompose CXR images incorporating CT projection images IL C C,PR,SM
Salehinejad et al. (2019) Uses DCGAN model to generate CXR with abnormalities for data augmentation IL CM,E,PE,PT PR
Albarqouni et al. (2017) U-Net based architecture to decompose CXR structures, application to TB detection IL TB PR
Madani et al. (2018b) Two DCGAN trained with normal and abnormal images for data augmentation IL Z PL
Bigolin Lanfredi et al. (2019) Novel conditional GAN using lung function test results to visualize COPD progression IL Z PR
Zarei et al. (2021) Conditional GAN and two variational autoencoders designed for CXR generation PR
Gomi et al. (2020) Novel reconstruction algorithm for CXR enhancement PR
Zhou et al. (2020b) Bone shadow suppression using conditional GANs with dilated U-Net variant BS J
Matsubara et al. (2020) Generates CXRs from CT to train CNN for bone suppression BS PR
Zunair and Hamza (2021) Generates COVID-19 CXR images to improve network training and performance CV CC,RP
Bayat et al. (2020) 2D-to-3D encoder-decoder network for generating 3D spine models from CXR studies Z PR
Bigolin Lanfredi et al. (2020) Generates normal from abnormal CXRs, uses the deformations as disease evidence Z PR
Madani et al. (2018b), independently for each class, to generate chest radiographs with five different abnormalities. The authors demonstrated that this augmentation process improved the abnormality classification performance of DCNN classifiers (ResNet, GoogLeNet, AlexNet) by balancing the dataset classes. Another work (Zhang et al., 2019a) proposed a novel GAN architecture to improve the quality of generated CXR by forcing the generator to learn different image representations. The authors proposed SkrGAN, where a sketch prior constraint is introduced by decomposing the generator into two modules for generating a sketched structural representation and the CXR image, respectively.

Abnormality detection is another task which has been addressed through a combination of image generation and one-class learning methods (Tang et al., 2019c; Mao et al., 2020). The underlying idea of these methods is that a generative model trained to reconstruct healthy images will have a high reconstruction error if abnormal images are input at test time, allowing them to be identified. Tang et al. (2019c) harnessed GANs and employed a U-Net type autoencoder to reconstruct images (as the generator), and a CNN-based discriminator and encoder. The discriminator received both reconstructed images and real images to provide a supervisory signal for realistic reconstruction through adversarial training. Similarly, Mao et al. (2020) proposed an autoencoder for abnormality detection which was trained only with healthy images. In this case the autoencoder was tailored to not only reconstruct healthy images but also produce uncertainty predictions. By leveraging uncertainty, the authors proposed a normalized reconstruction error to distinguish abnormal CXR images from normal ones.

The most widely studied subject in the image generation literature is image enhancement. Several researchers investigated bone suppression (Liu et al., 2020a; Matsubara et al., 2020; Zarshenas et al., 2019; Gozes and Greenspan, 2020; Lin et al., 2020; Zhou et al., 2020b) and lung enhancement (Li et al., 2019; Gozes and Greenspan, 2020) techniques to improve image interpretability. A number of works (Liu et al., 2020a; Zhou et al., 2020b) employed GANs to generate bone-suppressed images. For example, Liu et al. (2020a) employed GANs and leveraged additional input to the generator to guide the dual-energy subtraction (DES) soft-tissue image generation process. In this study, bones, edges and clavicles were first segmented by a CNN model, and the resulting edge maps were fed to the generator with the original CXR image as prior knowledge. Building a deep learning model for bone-suppressed CXR generation requires paired dual-energy (DE) imaging, which is not always available in abundance. Several other studies (Li et al., 2019; Gozes and Greenspan, 2020) addressed this by leveraging digitally reconstructed radiographs (DRR) for enhancing the lungs and bones in CXR. For instance, Li et al. (2019) trained an autoencoder for generating CXR with bone suppression and lung enhancement, and the knowledge obtained from DRR images was integrated through the encoder.

4.5. Domain adaptation

Most of the papers surveyed in this work train and test their method on data from the same domain. This finding is in line with previously reported studies (Kim et al., 2019; Prevedello et al.,
Table 6
Domain Adaptation Studies (Section 4.5). Tasks: IG=Image Generation, IL=Image-level Predictions, RE=Registration, SE=Segmentation. Bold font in tasks implies that this
additional task is central to the work and the study also appears in another table in this paper. Labels: C=ChestX-Ray14, CL=Clavicle, CM=Cardiomegaly, H=Heart, L=Lung,
M=MIMIC-CXR, PM=Pneumonia, R=Rib, TB=Tuberculosis, Z=Other. Datasets: C=ChestX-ray14, J=JSRT+SCR, M=MIMIC-CXR, MO=Montgomery, O=Open-i, PL=PLCO, PP=Ped-
pneumonia, PR=Private, RP=RSNA-Pneumonia, S=Shenzen, SM=Simulated CXR from CT.
Oliveira et al. (2020b) MUNIT based DA model for lung segmentation SE CL,H,L J
Dong et al. (2018) Adversarial training of lung and heart segmentation for DA SE CM J,PR
Zhang et al. (2018) CycleGAN guided by a segmentation module to convert CXR to CT projection images SE H,L,Z PR
Chen et al. (2018a) CycleGAN based DA model with semantic aware loss for lung segmentation SE L MO
Oliveira et al. (2020a) Conditional GANs based DA for bone segmentation SE R SM
Lenga et al. (2020) Continual learning methods to classify data from new domains IL C,M C,M
Tang et al. (2019a) CycleGAN model to adapt adult to pediatric CXR for pneumonia classification IL PM PP,RP
Mahapatra and Ge (2019) Conditional GAN based DA for image registration using segmentation guidance IG,RE,SE L C
Madani et al. (2018a) Adversarial based method adapting new domains for abnormality classification IG,IL CM PL
Zech et al. (2018) Assessment of generalization to data from different institutes IL PM C,O
Sathitratanacheewin et al. (2020) Demonstrates the effect of training and test on data from different domains IL TB S
Table 7
Other Studies (Section 4.6). Tasks: IL=Image-level Predictions, IR=Image Retrieval, OD=Out-of-Distribution, RE=Registration, RG=Report Generation, RP=Report Parsing.
Bold font in tasks implies that this additional task is central to the work and the study also appears in another table in this paper. Labels: C=ChestX-Ray14, H=Heart,
L=Lung, Q=Image Quality, T=Triage/Abnormal, TB=Tuberculosis, X=CheXpert, Z=Other. Datasets: C=ChestX-ray14, J=JSRT+SCR, M=MIMIC-CXR, MO=Montgomery, O=Open-
i, PR=Private, S=Shenzen, X=CheXpert.
Owais et al. (2020) Uses a database of the intermediate ResNet-50 features to find similar studies IL,IR TB MO,S
Syeda-Mahmood et al. (2020) Generate reports by classifying CXRs, and finding and modifying similar reports RG,RP Z C,M
Li et al. (2020b) Extracts features from Chest X-rays and uses another network to write reports. RG,IL C C,O
Yuan et al. (2019) Generates radiology reports by training on classification labels and report text RG,IL Z O,X
Xue et al. (2018a) A novel recurrent generation network with attention mechanism RG Z O
Mansilla et al. (2020) Anatomical priors to improve deep learning based image registration RE H,L J,MO,S
Márquez-Neila and Proposes a method to reject out-of-distribution images during test time OD,IL Z C
Sznitman (2019)
Bozorgtabar et al. (2020) Proposes to detect anomalies based on a dataset of autoencoder features OD Q,T C
Çallı et al. (2019) Mahalanobis distance on network layers to detect out-of-distribution samples OD Z C
Anavi et al. (2015) Compares the extracted feature and classification similarities for ranking IR PR
Haq et al. (2021) Uses extracted features to cluster similarly labeled CXRs across datasets IR C,X C,X
Chen et al. (2018c) Proposes a learnable hash to retrieve CXRs with similar pathologies IR Z C
Conjeti et al. (2017) Residual network to retrieve images with similar abnormalities IR Z O
Anavi et al. (2016) Combines features extracted from CXRs and metadata for image retrieval IR Z PR
Silva et al. (2020) Proposes to use the saliency maps as a similarity measure for image retrieval IR Z X
2019) and highlights an important concern: most of the performance levels reported in the literature might not generalize well to data from other domains (Zech et al., 2018). Several studies (Yao et al., 2019; Zech et al., 2018; Cohen et al., 2020b) demonstrated that there was a significant drop in performance when deep learning systems were tested on datasets outside their training domain for a variety of CXR applications. For example, Yao et al. (2019) investigated the performance of a DenseNet model for abnormality classification on CXR images using 10 diverse datasets varied by their location and patient distributions. The authors empirically demonstrated that there was a substantial drop in performance when a model was trained on a single dataset and tested on the other domains. Zech et al. (2018) observed a similar finding for pneumonia detection on chest radiographs.

Domain adaptation (DA) methods investigate how to improve the performance of a model on a dataset from a different domain than the training set. In CXR analysis, DA methods have been investigated in three main settings: adaptation of CXR images acquired from different hardware, adaptation of pediatric to adult CXR, and adaptation of digitally reconstructed radiographs (generated by average intensity projections from CT) to real CXR images. All domain adaptation studies, and studies on generalization, reviewed in this work are detailed in Table 6.

Most of the research on DA for CXR analysis harnessed adversarial DA methods, which either use generative models (e.g., CycleGANs) or non-generative models to adapt to new domains using a variety of different approaches. For example, Dong et al. (2018) investigated an unsupervised domain adaptation based on adversarial training for lung and heart segmentation. In this approach, a discriminator network, a ResNet, learned to discriminate between segmentation predictions (heart and lung) from the target domain and reference standard segmentations from the source domain. This approach forced the FCN-based segmentation network to learn domain-invariant features and produce realistic segmentation maps. A number of works (Chen et al., 2018a; Zhang et al., 2018; Oliveira and dos Santos, 2018) addressed unsupervised DA using CycleGAN-based models to transform source images to resemble those from the target domain. For example, Zhang et al. (2018) used a CycleGAN-based architecture to adapt CXR images to digitally reconstructed radiographs (DRR, generated from CT scans) for anatomy segmentation in CXR. A CycleGAN-based model was employed to convert the CXR image appearance and a U-Net variant architecture to simultaneously segment organs of interest. Similarly, CycleGAN-based models were adapted to transfer DRR images to resemble CXR images for bone segmentation (Oliveira et al., 2020a) and to transform adult CXR to pediatric CXR for pneumonia classification (Tang et al., 2019a).

Unlike most of the studies, which utilized DA methods in an unsupervised setting, a few studies considered supervised and semi-supervised approaches to adapt to the target domain. Oliveira et al. (2020b) employed a MUNIT-based architecture (Huang et al., 2018) to map target images to resemble source images, subsequently feeding the transformed images to the segmentation model. The authors investigated both unsupervised and semi-supervised approaches in this work, where some labels from the target domain were available. Another work by
Lenga et al. (2020) studied several recently proposed continual learning approaches, namely joint training, elastic weight consolidation and learning without forgetting, to improve the performance on a target domain and to effectively mitigate catastrophic forgetting for the source domain. The authors evaluated these methods on two publicly available datasets, ChestX-ray14 and MIMIC-CXR, for a multi-class abnormality classification task and demonstrated that joint training achieved the best performance.

4.6. Other applications

In this section we review articles with a primary application that does not fit into any of the categories detailed in Sections 4.1 to 4.5 (14 studies). These works are detailed fully in Table 7.

Image retrieval is a task investigated by a number of authors (Anavi et al., 2015, 2016; Conjeti et al., 2017; Chen et al., 2018c; Silva et al., 2020; Owais et al., 2020; Haq et al., 2021). The aim of image retrieval tools is to search an image archive to find cases similar to a particular index image. Such algorithms are envisaged as a tool for radiologists in their daily workflow. Chen et al. (2018c) proposed a ranked feature extraction and hashing model, while Silva et al. (2020) proposed to use saliency maps as a similarity measure.

Another task that does not belong to the previously defined categories is out-of-distribution detection. Studies working on this (Márquez-Neila and Sznitman, 2019; Çallı et al., 2019; Bozorgtabar et al., 2020) aim to verify whether a test sample belongs to the distribution of the training dataset, as model performance is otherwise expected to be sub-optimal. Çallı et al. (2019) propose computing training dataset statistics on different layers of a deep learning model and applying the Mahalanobis distance to measure how far a sample lies from the training distribution. Bozorgtabar et al. (2020) approach the problem differently and train an unsupervised autoencoder. They then use the feature encodings extracted from CXRs to define a database of known encodings and compare new samples to this database.

Report generation is another task which has attracted interest in deep learning for CXR (Li et al., 2020b; Yuan et al., 2019; Syeda-Mahmood et al., 2020; Xue et al., 2018a). These studies aim to partially automate the radiology workflow by evaluating the chest X-ray and producing a text radiology report. For example, Syeda-Mahmood et al. (2020) first determines the findings to be reported and then makes use of a large dataset of existing reports to find a similar case. This case report is then customized to produce the final output.

One other task of interest is image registration (Mansilla et al., 2020). This task aims to find the geometric transformation to convert a CXR so that it anatomically aligns with another CXR image or a statistically defined shape. The clinical goal of this task is typically to illustrate interval change between two images. Detecting new findings, tracking the course of a disease, or evaluating the efficacy of a treatment are among the many uses of image registration (Viergever et al., 2016). To that end, Mansilla et al. (2020) aims to create an anatomically plausible registration by using heart and lung segmentations to guide the registration process.

5. Commercial products

Computer-aided analysis of CXR images has been researched for many years, and in fact CXR was one of the first modalities for which a commercial product for automatic analysis became available, in 2008. In spite of this promising start, and of the advances in the field achieved by deep learning, translation to clinical practice, even as an assistant to the reader, is relatively slow. There are a variety of legal and ethical considerations which may partly account for this (Recht et al., 2020; Strohm et al., 2020); however, there is growing acceptance that artificial intelligence (AI) products have a place in the radiological workflow and attempts are underway to understand and address the issues to be overcome (Chokshi et al., 2019). In this section we examine the currently available commercial products for CXR analysis.

An up-to-date list of commercial products for medical image analysis (Grand-challenge, 2021; van Leeuwen et al., 2021) was searched for products applicable to chest X-ray. One product was excluded as it is not specifically a CXR diagnostic tool, but a texture analysis product for many modalities. The 21 remaining products are listed in Table 8. A number of these products have already been evaluated in peer-reviewed publications, as shown in Table 8; it is beyond the scope of this work to assess their performance. All of the listed products are CE marked (Europe) and/or FDA cleared (United States) and are thus available for clinical use (Grand-challenge, 2021; van Leeuwen et al., 2021).

The commercial products include applications for a wide range of abnormalities, with 6 of them reporting results for more than 5 (and up to 30) different labels. The most commonly addressed task is pneumothorax identification (8 products), followed by pleural effusion (7), nodules (6) and tuberculosis (4). In contrast with the literature, which is dominated by image-level prediction algorithms, 17 of the 21 products in Table 8 claim to provide localization of one or more of the abnormalities they are designed to detect, usually visualized with heatmaps or contouring of abnormalities. Two further products are designed for generation of bone suppression images, one for interval change visualization and one for identification and reporting of healthy images. Products contribute differently to the workflow of the radiologist. Five products focus on detecting acute cases to prioritize the worklist and speed up time to diagnosis. Draft reports are produced by five other products, for either the normal (healthy) cases only or for all cases. The production of draft reports, like workflow prioritization, is aimed at optimizing the speed and efficiency of the radiologist.

6. Discussion

In this work we have detailed datasets, literature and commercial products relevant to deep learning in CXR analysis. For researchers entering the field this study categorizes the existing data and literature for ease of reference. In this section we further discuss how future research should be directed for higher quality and better clinical relevance.

It is clear that CXR deep learning research has thrived on the release of multiple large, public, labeled datasets in recent years, with 210 of the 296 publications reviewed here using one or more public datasets in their research. The number of publications in the field has grown consistently as more public data becomes available, as demonstrated in Fig. 2. However, although these datasets are extremely valuable, there are multiple caveats to be considered in relation to their use, as described in Section 3. In particular, the caution required in the use of NLP-extracted labels is often overlooked by researchers, especially for the evaluation and comparison of models. For accurate assessment of model performance, the use of ‘gold-standard’ test data labels is recommended. These labels can be acquired through expert radiological interpretation of CXRs (preferably with multiple readers) or via associated CT scans, laboratory test results, or other appropriate measurements.

Other important factors to be considered when using public data include the image quality (if it has been reduced prior to release, is this a limiting factor for the application?) and the potential overlap between labels. Although a few publications address label dependencies, this is most often overlooked, frequently resulting in the loss of valuable diagnostic information.
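Label overlap of this kind can be inspected directly before training: given a binary multi-label matrix, a simple co-occurrence count shows how often findings appear together. A minimal sketch with a hypothetical three-label matrix (the finding names and values are illustrative only, not taken from any dataset described here):

```python
import numpy as np

# Hypothetical multi-label matrix: rows = images, columns = findings.
# A 1 means the (e.g. NLP-extracted) label is present for that image.
labels = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
])
findings = ["Infiltration", "Effusion", "Pneumothorax"]

# Co-occurrence matrix: entry (i, j) counts images carrying both
# finding i and finding j; the diagonal holds per-label prevalence.
cooc = labels.T @ labels

for i, name_i in enumerate(findings):
    for j in range(i + 1, len(findings)):
        print(f"{name_i} & {findings[j]}: {cooc[i, j]} images")
```

Inspecting such counts makes explicit which label pairs are strongly dependent and should not be treated as mutually exclusive classes.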
Table 8
Commercial Products for CXR analysis. (Section 5) Labels: T=Triage/Abnormal, PM=Pneumonia, CV=COVID, TB=Tuberculosis, LO=Lesion or Opacity, CM=Cardiomegaly,
ND=Nodule, PE=Effusion, PT=Pneumothorax, TU=Catheter or Tube, LC=Lung Cancer, BS=Bone Suppression, E=Edema, Z=Other Output: LOC=Localization,
PRI=Prioritization, REP=Report, SCOR=Scoring.
Siemens Healthineers AI-Rad Companion Chest X-Ray Fischer et al. (2020) LO PE PT Z (5) LOC, SCOR, REP
Samsung Healthcare Auto Lung Nodule Detection Sim et al. (2019) ND (1) LOC
Thirona CAD4COVID-XRay Murphy et al. (2020b) CV (1) LOC, SCOR
Thirona CAD4TB Murphy et al. (2020a); Habib et al. (2020); Qin et al. (2019); Santos et al. (2020) TB (1) LOC, SCOR
Oxipit ChestEye CAD T (1) REP (healthy)
Arterys Chest | MSK AI LO, ND, PE, PT (4) LOC, SCOR, PRI
Quibim Chest X-Ray Classifier Liang et al. (2020) PM CM ND PE PT E Z (16) LOC, SCOR, REP
GE Critical Care Suite PT (1) LOC, SCOR
InferVision InferRead DR Chest TB PE PT LC Z (9) LOC, SCOR
JLK JLD-O2K LC Z (16) LOC, SCOR
Lunit Lunit INSIGHT CXR Hwang et al. (2019b, 2019c, 2019a); Qin et al. (2019) TB CM ND PE PT Z (11) LOC, SCOR, PRI, REP
qure.ai qXR Singh et al. (2018); Nash et al. (2020); Engle et al. (2020); Qin et al. (2019) T CV TB Z (30) LOC, SCOR, PRI, REP
Digitec TIRESYA BS (1) Bone Suppressed Image
VUNO VUNO Med-Chest X-Ray Kim et al. (2017) LO ND PE PT Z (5) LOC, SCOR
Riverain Technologies ClearRead Xray - Bone Suppress Homayounieh et al. (2020); Dellios et al. (2017); Schalekamp et al. (2016, 2014b) BS (1) Bone Suppressed Image
Riverain Technologies ClearRead Xray - Compare LC (1) Subtraction Image
Riverain Technologies ClearRead Xray - Confirm TU (1) LOC
Riverain Technologies ClearRead Xray - Detect Dellios et al. (2017); Schalekamp et al. (2014a); Szucs-Farkas et al. (2013) ND LC (2) LOC
behold.ai Red Dot T PT (2) LOC
Zebra Medical Vision Triage Pleural Effusion PE LOC, PRI
Zebra Medical Vision Triage Pneumothorax PT LOC, PRI
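One dataset-construction pitfall discussed in this section is the inclusion of the same image multiple times in a dataset prior to splitting train and test sets. A minimal sketch of a content-hash check that drops byte-identical duplicates before splitting (the helper names are illustrative, not from any cited work):

```python
import hashlib
from pathlib import Path


def file_digest(path):
    """Content hash of a file; byte-identical files share a digest."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def deduplicate(paths):
    """Keep the first path for each unique file content, preserving order."""
    seen = set()
    unique = []
    for p in paths:
        digest = file_digest(p)
        if digest not in seen:
            seen.add(digest)
            unique.append(p)
    return unique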
While the increased interest in CXR analysis following the release of public datasets is a positive development in the field, a secondary consequence of this readily available labeled data is the appearance of many publications from researchers with limited experience or understanding of deep learning or CXR analysis. The literature reviewed during the preparation of this paper was very variable in quality. A substantial number of the papers included offer limited novel contributions although they are technically sound. Many of these studies report experiments predicting the labels on public datasets using off-the-shelf architectures and without regard to the label inaccuracies and overlap, or the clinical utility of such generic image-level algorithms. A large number of works (142) were excluded for reasons of poor scientific quality. In 112 of these the construction of the dataset gave cause for concern, the most common example being that the training dataset was constructed such that images with certain labels came from different data sources, meaning that the images could be easily differentiated by factors other than the label of interest. In particular, a large number of papers (61) combined adult COVID-19 subjects with pediatric (healthy and other-pneumonia) subjects in an attempt to classify COVID-19. Other reasons for exclusion included the presentation of results optimized on a validation set (without a held-out test set), or the inclusion of the same images multiple times in the dataset prior to splitting train and test sets. This latter issue has been exacerbated by the publication of several COVID-19 related datasets which combine data from multiple public sources in one location, and are then themselves combined by authors building deep-learning systems. Such concerns about dataset construction for COVID-19 studies have been discussed in several other works (López-Cabrera et al., 2021; DeGrave et al., 2020; Cruz et al., 2021; Maguolo and Nanni, 2020; Tartaglione et al., 2020).

Although a broad range of off-the-shelf architectures is employed in the literature surveyed for this review, there is little evidence to suggest that one architecture outperforms another for any specific task. Many papers evaluate multiple architectures for their task, but the differences between the various architecture results are typically small, proper hyperparameter optimization is not usually performed, and statistical significance or data-selection influence are rarely considered. Many such evaluations use inaccurate NLP-extracted labels, which serves to muddy the waters even further.

While it is not possible to suggest an optimal architecture for a specific task, it is observed that ensembles of networks typically perform better than individual models (Dietterich, 2000). At the time of writing, most of the top-10 submissions in the public challenges (CheXpert (Irvin et al., 2019), SIIM-ACR (ACR, 2019), and RSNA-Pneumonia (RSNA, 2018)) consist of network ensembles. There is also promise in the development of self-adapting frameworks such as nnU-Net (Isensee et al., 2021), which has achieved excellent performance in many medical image segmentation challenges. This framework adapts specifically to the task at hand by selecting the optimal choice for a number of steps such as preprocessing, hyperparameter optimization and architecture, and it is likely that a similar optimization framework would perform well for classification or localization tasks, including those for CXR images.

In spite of the pervasiveness of CXR in clinics worldwide, translation of AI systems for clinical use has been relatively slow. Apart
18
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125
from legal and ethical considerations regarding the use of AI in medical decision making (Recht et al., 2020; Strohm et al., 2020), a discussion which is outside the scope of this work, there are still a number of technical hurdles where progress can be made towards the goal of clinical translation. Firstly, the generalizability of AI algorithms is an important issue which needs further work. A large majority of papers in this review draw training, validation and test samples from the same dataset. However, it is well known that such models tend to have a weaker performance on datasets from external domains. If access to reliable data from multiple domains remains problematic then domain adaptation or active learning methods could be considered to address the generalization issue. An alternative method to utilize data from multiple hospitals without breaching regulatory and privacy codes is federated learning, whereby an algorithm can be trained using data from multiple remote locations (Sheller et al., 2019). Further research is required to determine how this type of system will work in clinical practice.

A final issue for deep learning researchers to consider is frequently referred to as 'explainable AI'. Systems which produce classification labels without any indication of reasoning raise concerns of trustworthiness for radiologists. It is also significantly faster for experts to accept or reject the findings of an AI system if there is some indication of how the finding was reached (e.g., identification of nodule location with a bounding box, identification of cardiac and thoracic diameters for cardiomegaly detection). Every commercial product for detection of abnormality in CXR provides a localization feature to indicate the abnormal location; however, the literature is heavily focused on image-level predictions with relatively few publications where localization is evaluated. Many studies provide an unvalidated visualization of the area of interest (Rajaraman et al., 2018a; 2019b; Pasa et al., 2019; Mitra et al., 2020; Zou et al., 2020; Saednia et al., 2020; Hosch et al., 2020), using methods like Grad-CAM (Selvaraju et al., 2020) or saliency maps (Simonyan et al., 2014) which output heatmaps indicating which regions are important in the network result. Although these heatmaps may be useful for conditions that are indicated by localized patterns or signs, the lack of comprehensive evaluation of their accuracy is problematic. Furthermore, many conditions may be difficult to explain with a heatmap, for example emphysema, which is identified by irregular radiolucency throughout the entire lung (among other features). One possible way to achieve clinically useful systems in such cases is to label an image (e.g. positive or negative) for a series of known radiological features relating to the condition being identified, or to use other (e.g. segmentation) information in the classification (Sogancioglu et al., 2020).

Beyond the resolution of technical issues, researchers aiming to produce clinically useful systems need to consider the workflow and requirements of the end-user, the radiologist or clinician, more carefully. At present, in the industrialized world, it is expected that an AI system will act, at least initially, as an assistant to (not a replacement for) a radiologist. As a 2D image, the CXR is already relatively quickly interpreted by a radiologist, and so the challenge for AI researchers is to produce systems that will save the radiologist time, prioritize urgent cases or improve the sensitivity/specificity of their findings. Image-level classification for a long list of (somewhat arbitrarily defined) labels is unlikely to be clinically useful. Reviewing such a list of labels and associated probabilities for every CXR would require substantial time and effort, without a proportional improvement in diagnostic accuracy. A simple system with bounding boxes indicating abnormal regions is likely to be more helpful in directing the attention of the radiologist and has the potential to increase sensitivity to subtle findings or in difficult regions with many projected structures. Similarly, a system to quickly identify normal cases has the potential to speed up the workflow as identified by multiple vendors and in the literature (Dyer et al., 2021; Dunnmon et al., 2019; Baltruschat et al., 2020).

To further understand how AI could assist with CXR interpretation, we first must consider the current typical workflow of the radiologist, which notably involves a number of additional inputs beyond the CXR image, that are rarely considered in the research literature. In most scenarios (excluding bedside/AP imaging) both a frontal and lateral CXR are acquired as part of standard imaging protocol, to reduce the interpretation difficulties associated with projected anatomy. Very few studies included in this review made use of the lateral image, although there are indications that it can improve classification accuracy (Hashir et al., 2020). Furthermore, the reviewing radiologist has access to the clinical question being asked, the patient history and symptoms and in many cases other supporting data from blood tests or other investigations. All of this information assists the radiologist to not only identify the visible abnormalities on CXR (e.g., consolidation), but to infer likely causes of these abnormalities (e.g., pneumonia). Incorporation of data from multiple sources along with the CXR image information will almost certainly improve sensitivity and specificity and avoid an algorithm erroneously suggesting labels which are not compatible with data from external sources. Another extremely important and time-consuming element in the radiological review of CXR is comparison with previous images from the same patient, to assess changes over time. Interval change is a topic studied by very few authors and addressed by only a single commercial vendor (by provision of a subtraction image). Innovative AI systems for the visualization and quantification of interval change with one or more previous images could substantially improve the efficiency of the radiologist. Finally, the radiologist is required to produce a report as a result of the CXR review, which is another time-consuming process addressed by very few researchers and just a handful of commercial vendors. A system which can convert radiological findings to a preliminary report has the potential to save time and cost for the care provider.

In many areas of the world, medical facilities that do perform CXR imaging do not have access to radiological expertise. This presents a further opportunity for AI to play a role in diagnostic pathways, as an assistant to the clinician who is not trained in the interpretation of CXR. Researchers and commercial vendors have already identified the need for AI systems to detect signs of tuberculosis (TB), a condition which is endemic in many parts of the world, and frequently in low-resource settings where radiologists are not available. While such regions of the world could potentially benefit from AI systems to detect other conditions, it is important to identify in advance what conditions could be feasibly both detected and treated in these areas where resources are severely limited.

The findings of this work suggest that while the deep learning community has benefited from large numbers of publicly available CXR images, the direction of the research has been largely determined by the available data and labels, rather than the needs of the clinician or radiologist. Future work, in data provision and labelling, and in deep learning, should have a more direct focus on the clinical needs for AI in CXR interpretation. More accurate comparison and benchmarking of algorithms would be enabled by additional public challenges using appropriately annotated data for clinically relevant tasks.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
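One of the dataset-construction pitfalls raised in the discussion above — the same image ending up in both the training and test splits after several public COVID-19 collections are merged — can be guarded against with a simple exact-duplicate check performed before splitting. The sketch below is illustrative rather than taken from any surveyed work (the function names are the author's own); it fingerprints files with byte-level SHA-256, so it only catches exact file duplicates, and re-encoded or resized copies would additionally require perceptual hashing.

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 digest of a file's raw bytes: an exact-duplicate fingerprint."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def cross_split_duplicates(train_paths, test_paths):
    """Return the test files whose exact byte content also occurs in the training split."""
    train_digests = {file_digest(p) for p in train_paths}
    return [p for p in test_paths if file_digest(p) in train_digests]
```

A check of this kind (or a perceptual-hash variant) is most useful when run over the pooled images of all merged sources before the train/test split is made, rather than after models have been trained.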
CRediT authorship contribution statement

Erdi Çallı: Conceptualization, Methodology, Software, Data curation, Writing - original draft, Visualization. Ecem Sogancioglu: Conceptualization, Methodology, Data curation, Writing - original draft. Bram van Ginneken: Conceptualization, Writing - review & editing, Supervision, Funding acquisition. Kicky G. van Leeuwen: Data curation, Writing - original draft, Writing - review & editing. Keelin Murphy: Conceptualization, Methodology, Data curation, Writing - original draft, Writing - review & editing, Supervision, Project administration.

Acknowledgements

This work was supported by the Dutch Technology Foundation STW, which formed the NWO Domain Applied and Engineering Sciences and partly funded by the Ministry of Economic Affairs (Perspectief programme P15-26 'DLMedIA: Deep Learning for Medical Image Analysis').

We would like to acknowledge and thank Gabrielle Ras and Gizem Sogancioglu who have supported us with their wise counsel and sympathetic ears.

References

ACR, 2019. SIIM-ACR Pneumothorax Segmentation.
Albarqouni, S., Fotouhi, J., Navab, N., 2017. X-ray in-depth decomposition: revealing the latent structures. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2017, 10435. Springer, pp. 444–452. doi:10.1007/978-3-319-66179-7_51.
Amiri, M., Brooks, R., Rivaz, H., 2020. Fine-tuning U-Net for ultrasound image segmentation: different layers, different outcomes. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 67 (12), 2510–2518. doi:10.1109/TUFFC.2020.3015081.
Anand, D., Tank, D., Tibrewal, H., Sethi, A., 2020. Self-supervision vs. transfer learning: robust biomedical image analysis against adversarial attacks. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 1159–1163. doi:10.1109/ISBI45749.2020.9098369.
Anavi, Y., Kogan, I., Gelbart, E., Geva, O., Greenspan, H., 2015. A comparative study for chest radiograph image retrieval using binary texture and deep learning classification. Int. Conf. IEEE Eng. Med. Biol. Soc. 2015, 2940–2943. doi:10.1109/EMBC.2015.7319008.
Anavi, Y., Kogan, I., Gelbart, E., Geva, O., Greenspan, H., 2016. Visualizing and enhancing a deep learning framework using patients age and gender for chest x-ray image retrieval. In: Medical Imaging 2016: Computer-Aided Diagnosis. SPIE, p. 978510. doi:10.1117/12.2217587.
Anis, S., Lai, K.W., Chuah, J.H., Shoaib, M.A., Mohafez, H., Hadizadeh, M., Ding, Y., Ong, Z.C., 2020. An overview of deep learning approaches in chest radiograph. IEEE Access. doi:10.1109/access.2020.3028390.
Annarumma, M., Withey, S.J., Bakewell, R.J., Pesce, E., Goh, V., Montana, G., 2019. Automated triaging of adult chest radiographs with deep artificial neural networks. Radiology 291 (1), 272–272. doi:10.1148/radiol.2019194005.
Arbabshirani, M.R., Dallal, A.H., Agarwal, C., Patel, A., Moore, G., 2017. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks. In: Medical Imaging 2017: Image Processing. SPIE, p. 1013305. doi:10.1117/12.2254526.
Arjovsky, M., Chintala, S., Bottou, L., 2017. Wasserstein generative adversarial networks. In: Proceedings of the 34th International Conference on Machine Learning. PMLR, pp. 214–223.
Arsalan, M., Owais, M., Mahmood, T., Choi, J., Park, K.R., 2020. Artificial intelligence-based diagnosis of cardiac and related diseases. J. Clin. Med. 9 (3), 871. doi:10.3390/jcm9030871.
Ayaz, M., Shaukat, F., Raja, G., 2021. Ensemble learning based automatic detection of tuberculosis in chest X-ray images using hybrid feature descriptors. Phys. Eng. Sci. Med. doi:10.1007/s13246-020-00966-0.
Balabanova, Y., Coker, R., Fedorin, I., Zakharova, S., Plavinskij, S., Krukov, N., Atun, R., Drobniewski, F., 2005. Variability in interpretation of chest radiographs among Russian clinicians and implications for screening programmes: observational study. BMJ 331 (7513), 379–382. doi:10.1136/bmj.331.7513.379.
Balachandar, N., Chang, K., Kalpathy-Cramer, J., Rubin, D.L., 2020. Accounting for data variability in multi-institutional distributed deep learning for medical imaging. J. Am. Med. Inform. Assoc. 27 (5), 700–708. doi:10.1093/jamia/ocaa017.
Baltruschat, I., Steinmeister, L., Nickisch, H., Saalbach, A., Grass, M., Adam, G., Knopp, T., Ittrich, H., 2020. Smart chest X-ray worklist prioritization using artificial intelligence: a clinical workflow simulation. Eur. Radiol. doi:10.1007/s00330-020-07480-7.
Baltruschat, I.M., Nickisch, H., Grass, M., Knopp, T., Saalbach, A., 2019. Comparison of deep learning approaches for multi-label chest X-Ray classification. Sci. Rep. 9 (1), 6381. doi:10.1038/s41598-019-42294-8.
Baltruschat, I.M., Steinmeister, L., Ittrich, H., Adam, G., Nickisch, H., Saalbach, A., von Berg, J., Grass, M., Knopp, T., 2019. When does bone suppression and lung field segmentation improve chest X-ray disease classification? In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 1362–1366. doi:10.1109/ISBI.2019.8759510.
Bar, Y., Diamant, I., Wolf, L., Greenspan, H., 2015. Deep learning with non-medical training used for chest pathology identification. In: Medical Imaging 2015: Computer-Aided Diagnosis. SPIE, p. 94140V. doi:10.1117/12.2083124.
Bar, Y., Diamant, I., Wolf, L., Lieberman, S., Konen, E., Greenspan, H., 2015. Chest pathology detection using deep learning with non-medical training. In: 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 294–297. doi:10.1109/ISBI.2015.7163871. ISSN: 1945-8452.
Bayat, A., Sekuboyina, A., Paetzold, J.C., Payer, C., Stern, D., Urschler, M., Kirschke, J.S., Menze, B.H., 2020. Inferring the 3D standing spine posture from 2D radiographs. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2020, 12266. Springer, pp. 775–784. doi:10.1007/978-3-030-59725-2_75.
Becker, H.C., Nettleton, W.J., Meyers, P.H., Sweeney, J.W., Nice, C.M., 1964. Digital computer determination of a medical diagnostic index directly from chest x-ray images. IEEE Trans. Biomed. Eng. BME-11 (3), 67–72. doi:10.1109/tbme.1964.4502309.
Behzadi-khormouji, H., Rostami, H., Salehi, S., Derakhshande-Rishehri, T., Masoumi, M., Salemi, S., Keshavarz, A., Gholamrezanezhad, A., Assadi, M., Batouli, A., 2020. Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed. 185, 105162. doi:10.1016/j.cmpb.2019.105162.
von Berg, J., Krönke, S., Gooßen, A., Bystrov, D., Brück, M., Harder, T., Wieberneit, N., Young, S., 2020. Robust chest X-ray quality assessment using convolutional neural networks and atlas regularization. In: Medical Imaging 2020: Image Processing. SPIE, p. 56. doi:10.1117/12.2549541.
Bertrand, H., Hashir, M., Cohen, J.P., 2019. Do lateral views help automated chest X-ray predictions? In: International Conference on Medical Imaging with Deep Learning – Extended Abstract Track, p. 1.
Bigolin Lanfredi, R., Schroeder, J.D., Vachet, C., Tasdizen, T., 2019. Adversarial regression training for visualizing the progression of chronic obstructive pulmonary disease with chest X-rays. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, 11769. Springer, pp. 685–693. doi:10.1007/978-3-030-32226-7_76.
Bigolin Lanfredi, R., Schroeder, J.D., Vachet, C., Tasdizen, T., 2020. Interpretation of disease evidence for medical images using adversarial deformation fields. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2020, 12262. Springer, pp. 738–748. doi:10.1007/978-3-030-59713-9_71.
Blain, M., T Kassin, M., Varble, N., Wang, X., Xu, Z., Xu, D., Carrafiello, G., Vespro, V., Stellato, E., Ierardi, A.M., Di Meglio, L., D Suh, R., A Walker, S., Xu, S., H Sanford, T., B Turkbey, E., Harmon, S., Turkbey, B., J Wood, B., 2020. Determination of disease severity in COVID-19 patients using deep learning in chest X-ray images. Diagn. Interv. Radiol. doi:10.5152/dir.2020.20205.
Blumenfeld, A., Konen, E., Greenspan, H., 2018. Pneumothorax detection in chest radiographs using convolutional neural networks. In: Medical Imaging 2018: Computer-Aided Diagnosis. SPIE, p. 3. doi:10.1117/12.2292540.
Bodenreider, O., 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Res. 32 (Database issue), D267–D270. doi:10.1093/nar/gkh061.
Bonheur, S., Štern, D., Payer, C., Pienn, M., Olschewski, H., Urschler, M., 2019. Matwo-CapsNet: a multi-label semantic segmentation capsules network. In: Medical Image Computing and Computer Assisted Intervention - MICCAI 2019, 11768. Springer, pp. 664–672. doi:10.1007/978-3-030-32254-0_74.
Bortsova, G., Dubost, F., Hogeweg, L., Katramados, I., de Bruijne, M., 2019. Semi-supervised medical image segmentation via learning consistency under transformations. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 810–818. doi:10.1007/978-3-030-32226-7_90.
Bougias, H., Georgiadou, E., Malamateniou, C., Stogiannos, N., 2020. Identifying cardiomegaly in chest X-rays: a cross-sectional study of evaluation and comparison between different transfer learning methods. Acta Radiol. doi:10.1177/0284185120973630.
Bozorgtabar, B., Mahapatra, D., Vray, G., Thiran, J.-P., 2020. SALAD: self-supervised aggregation learning for anomaly detection on X-rays. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12261. Springer, pp. 468–478. doi:10.1007/978-3-030-59710-8_46.
Brestel, C., Shadmi, R., Tamir, I., Cohen-Sfaty, M., Elnekave, E., 2018. RadBot-CXR: classification of four clinical finding categories in chest X-ray using deep learning. In: International Conference on Medical Imaging with Deep Learning, pp. 1–8.
Burwinkel, H., Kazi, A., Vivar, G., Albarqouni, S., Zahnd, G., Navab, N., Ahmadi, S.-A., 2019. Adaptive image-feature learning for disease classification using inductive graph networks. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 640–648. doi:10.1007/978-3-030-32226-7_71.
Bustos, A., Pertusa, A., Salinas, J.-M., de la Iglesia-Vayá, M., 2020. PadChest: A large chest x-ray image dataset with multi-label annotated reports. Medical Image Analysis 66, 101797. doi:10.1016/j.media.2020.101797.
Cai, J., Lu, L., Harrison, A.P., Shi, X., Chen, P., Yang, L., 2018. Iterative attention mining for weakly supervised thoracic disease pattern localization in chest X-rays. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11071. Springer, pp. 589–598. doi:10.1007/978-3-030-00934-2_66.
Çallı, E., Murphy, K., Sogancioglu, E., van Ginneken, B., 2019. FRODO: free rejection of out-of-distribution samples: application to chest X-ray analysis. In: Interna-
tional Conference on Medical Imaging with Deep Learning – Extended Abstract Track, pp. 1–4.
Calli, E., Sogancioglu, E., Scholten, E.T., Murphy, K., Ginneken, B.v., 2019. Handling label noise through model confidence and uncertainty: application to chest radiograph classification. In: Medical Imaging 2019: Computer-Aided Diagnosis. SPIE, p. 41. doi:10.1117/12.2514290.
Campo, M.I., Pascau, J., Estepar, R.S.J., 2018. Emphysema quantification on simulated X-rays through deep learning techniques. In: 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, pp. 273–276. doi:10.1109/ISBI.2018.8363572.
Cardenas, D.A.C., Jr, J.R.F., Moreno, R.A., Rebelo, M.d.F.d.S., Krieger, J.E., Gutierrez, M.A., 2021. Automated radiographic bone suppression with deep convolutional neural networks. In: Medical Imaging 2021: Biomedical Applications in Molecular, Structural, and Functional Imaging. International Society for Optics and Photonics, p. 116001D. doi:10.1117/12.2582210.
Castiglioni, I., Ippolito, D., Interlenghi, M., Monti, C.B., Salvatore, C., Schiaffino, S., Polidori, A., Gandola, D., Messa, C., Sardanelli, F., 2021. Machine learning applied on chest X-ray can aid in the diagnosis of COVID-19: a first experience from Lombardy, Italy. Eur. Radiol. Exp. 5 (1), 7. doi:10.1186/s41747-020-00203-z.
Cha, M.J., Chung, M.J., Lee, J.H., Lee, K.S., 2019. Performance of deep learning model in detecting operable lung cancer with chest radiographs. J. Thorac. Imaging 34 (2), 86–91. doi:10.1097/rti.0000000000000388.
Chakravarty, A., Sarkar, T., Ghosh, N., Sethuraman, R., Sheet, D., 2020. Learning decision ensemble using a graph neural network for comorbidity aware chest radiograph screening. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1234–1237. doi:10.1109/EMBC44109.2020.9176693.
Chauhan, G., Liao, R., Wells, W., Andreas, J., Wang, X., Berkowitz, S., Horng, S., Szolovits, P., Golland, P., 2020. Joint modeling of chest radiographs and radiology reports for pulmonary edema assessment. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12262. Springer, pp. 529–539. doi:10.1007/978-3-030-59713-9_51.
Chen, B., Li, J., Lu, G., Yu, H., Zhang, D., 2020. Label co-occurrence learning with graph convolutional networks for multi-label chest X-ray image classification. IEEE J. Biomed. Health Inform. 24 (8), 2292–2302. doi:10.1109/JBHI.2020.2967084.
Chen, C., Dou, Q., Chen, H., Heng, P.-A., 2018. Semantic-aware generative adversarial nets for unsupervised domain adaptation in chest x-ray segmentation. In: Machine Learning in Medical Imaging. Springer, pp. 143–151. doi:10.1007/978-3-030-00919-9_17.
Chen, H., Miao, S., Xu, D., Hager, G.D., Harrison, A.P., 2019. Deep hierarchical multi-label classification of chest X-ray images. In: International Conference on Medical Imaging with Deep Learning. PMLR, pp. 109–120.
Chen, K.-C., Yu, H.-R., Chen, W.-S., Lin, W.-C., Lee, Y.-C., Chen, H.-H., Jiang, J.-H., Su, T.-Y., Tsai, C.-K., Tsai, T.-A., Tsai, C.-M., Lu, H.H.-S., 2020. Diagnosis of common pulmonary diseases in children by X-ray images and deep learning. Sci. Rep. 10 (1), 17374. doi:10.1038/s41598-020-73831-5.
Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L., 2018. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40 (4), 834–848. doi:10.1109/tpami.2017.2699184.
Chen, S., Han, Y., Lin, J., Zhao, X., Kong, P., 2020. Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Artif. Intell. Med. 107, 101881. doi:10.1016/j.artmed.2020.101881.
Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., Abbeel, P., 2016. Infogan: interpretable representation learning by information maximizing generative adversarial nets. In: Advances in Neural Information Processing Systems, 29, pp. 2172–2180.
Chen, Z., Cai, R., Lu, J., Feng, J., Zhou, J., 2018. Order-sensitive deep hashing for multi-morbidity medical image retrieval. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11070. Springer, pp. 620–628. doi:10.1007/978-3-030-00928-1_70.
Cho, Y., Kim, Y.-G., Lee, S.M., Seo, J.B., Kim, N., 2020. Reproducibility of abnormality detection on chest radiographs using convolutional neural network in paired radiographs obtained within a short-term interval. Sci. Rep. 10 (1), 17417. doi:10.1038/s41598-020-74626-4.
Chokshi, F.H., Flanders, A.E., Prevedello, L.M., Langlotz, C.P., 2019. Fostering a healthy AI ecosystem for radiology: conclusions of the 2018 RSNA summit on AI in radiology. Radiology 1 (2), 190021. doi:10.1148/ryai.2019190021.
Chollet, F., 2017. Xception: deep learning with depthwise separable convolutions. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 1251–1258. doi:10.1109/cvpr.2017.195.
Cicero, M., Bilbily, A., Colak, E., Dowdell, T., Gray, B., Perampaladas, K., Barfett, J., 2017. Training and validating a deep convolutional neural network for computer-aided detection and classification of abnormalities on frontal chest radiographs. Investig. Radiol. 52 (5), 281–287. doi:10.1097/RLI.0000000000000341.
Cohen, J.P., Dao, L., Roth, K., Morrison, P., Bengio, Y., Abbasi, A.F., Shen, B., Mahsa, H.K., Ghassemi, M., Li, H., Duong, T., 2020. Predicting COVID-19 pneumonia severity on chest X-ray with deep learning. Cureus. doi:10.7759/cureus.9448.
Cohen, J.P., Hashir, M., Brooks, R., Bertrand, H., 2020. On the limits of cross-domain generalization in automated x-ray prediction. In: Proceedings of the Third Conference on Medical Imaging with Deep Learning. PMLR, 121:136–155. arXiv:2002.02497.
Cohen, J.P., Morrison, P., Dao, L., 2020c. Covid-19 image data collection: prospective predictions are the future. arXiv:2006.11988.
Conjeti, S., Roy, A.G., Katouzian, A., Navab, N., 2017. Hashing with residual networks for image retrieval. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2017, 10435. Springer, pp. 541–549. doi:10.1007/978-3-319-66179-7_62.
Crosby, J., Chen, S., Li, F., MacMahon, H., Giger, M., 2020. Network output visualization to uncover limitations of deep learning detection of pneumothorax. In: Medical Imaging 2020: Image Perception, Observer Performance, and Technology Assessment. SPIE, p. 22. doi:10.1117/12.2550066.
Crosby, J., Rhines, T., Duan, C., Li, F., MacMahon, H., Giger, M., 2019. Impact of imprinted labels on deep learning classification of AP and PA thoracic radiographs. In: Medical Imaging 2019: Imaging Informatics for Healthcare, Research, and Applications. SPIE, p. 13. doi:10.1117/12.2513026.
Crosby, J., Rhines, T., Li, F., MacMahon, H., Giger, M., 2020. Deep convolutional neural networks in the classification of dual-energy thoracic radiographic views for efficient workflow: analysis on over 6500 clinical radiographs. J. Med. Imaging 7 (01), 1. doi:10.1117/1.JMI.7.1.016501.
Crosby, J., Rhines, T., Li, F., MacMahon, H., Giger, M., 2020. Deep learning for pneumothorax detection and localization using networks fine-tuned with multiple institutional datasets. In: Medical Imaging 2020: Computer-Aided Diagnosis. SPIE, p. 11. doi:10.1117/12.2549709.
Cruz, B.G.S., Bossa, M.N., Sölter, J., Husch, A.D., 2021. Public covid-19 x-ray datasets and their impact on model bias - a systematic review of a significant problem. medRxiv. doi:10.1101/2021.02.15.21251775.
Daniels, Z.A., Metaxas, D.N., 2019. Exploiting visual and report-based information for chest X-ray analysis by jointly learning visual classifiers and topic models. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 1270–1274. doi:10.1109/ISBI.2019.8759548.
DeGrave, A.J., Janizek, J.D., Lee, S.-I., 2020. AI for radiographic COVID-19 detection selects shortcuts over signal. medRxiv. doi:10.1101/2020.09.13.20193565.
Dellios, N., Teichgraeber, U., Chelaru, R., Malich, A., Papageorgiou, I.E., 2017. Computer-aided detection fidelity of pulmonary nodules in chest radiograph. J. Clin. Imaging Sci. 7. doi:10.4103/jcis.JCIS_75_16.
Demner-Fushman, D., Antani, S., Simpson, M., Thoma, G.R., 2012. Design and development of a multimodal biomedical information retrieval system. J. Comput. Sci. Eng. 6 (2), 168–177. doi:10.5626/JCSE.2012.6.2.168.
Deng, J., Dong, W., Socher, R., Li, L., Kai Li, Li Fei-Fei, 2009. ImageNet: a large-scale hierarchical image database. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. doi:10.1109/CVPR.2009.5206848. ISSN: 1063-6919.
Deshpande, H., Harder, T., Saalbach, A., Sawarkar, A., Buelow, T., 2020. Detection of foreign objects in chest radiographs using deep learning. In: 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops). IEEE, pp. 1–4. doi:10.1109/ISBIWorkshops50223.2020.9153350.
Devnath, L., Luo, S., Summons, P., Wang, D., 2021. Automated detection of pneumoconiosis with multilevel deep features learned from chest X-Ray radiographs. Comput. Biol. Med. 129, 104125. doi:10.1016/j.compbiomed.2020.104125.
Dietterich, T.G., 2000. Ensemble methods in machine learning. In: Multiple Classifier Systems. Springer, pp. 1–15. doi:10.1007/3-540-45014-9_1.
Dong, N., Kampffmeyer, M., Liang, X., Wang, Z., Dai, W., Xing, E., 2018. Unsupervised domain adaptation for automatic estimation of cardiothoracic ratio. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11071. Springer, pp. 544–552. doi:10.1007/978-3-030-00934-2_61.
Dong, N., Xu, M., Liang, X., Jiang, Y., Dai, W., Xing, E., 2019. Neural architecture search for adversarial medical image segmentation. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 828–836. doi:10.1007/978-3-030-32226-7_92.
DSouza, A.M., Abidin, A.Z., Wismüller, A., 2019. Automated identification of thoracic pathology from chest radiographs with enhanced training pipeline. In: Medical Imaging 2019: Computer-Aided Diagnosis. SPIE, p. 123. doi:10.1117/12.2512600.
Dunnmon, J.A., Yi, D., Langlotz, C.P., Ré, C., Rubin, D.L., Lungren, M.P., 2019. Assessment of convolutional neural networks for automated classification of chest radiographs. Radiology 290 (2), 537–544. doi:10.1148/radiol.2018181422.
Dyer, T., Dillard, L., Harrison, M., Morgan, T.N., Tappouni, R., Malik, Q., Rasalingham, S., 2021. Diagnosis of normal chest radiographs using an autonomous deep-learning algorithm. Clin. Radiol. doi:10.1016/j.crad.2021.01.015.
E, L., Zhao, B., Guo, Y., Zheng, C., Zhang, M., Lin, J., Luo, Y., Cai, Y., Song, X., Liang, H., 2019. Using deep-learning techniques for pulmonary-thoracic segmentations and improvement of pneumonia diagnosis in pediatric chest radiographs. Pediatr. Pulmonol. 54 (10), 1617–1626. doi:10.1002/ppul.24431.
Ellis, R., Ellestad, E., Elicker, B., Hope, M.D., Tosun, D., 2020. Impact of hybrid supervision approaches on the performance of artificial intelligence for the classification of chest radiographs. Comput. Biol. Med. 120, 103699. doi:10.1016/j.compbiomed.2020.103699.
Elshennawy, N.M., Ibrahim, D.M., 2020. Deep-pneumonia framework using deep learning models based on chest X-ray images. Diagnostics 10 (9), 649. doi:10.3390/diagnostics10090649.
Engle, E., Gabrielian, A., Long, A., Hurt, D.E., Rosenthal, A., 2020. Performance of Qure.ai automatic classifiers against a large annotated database of patients with diverse forms of tuberculosis. PLoS One 15 (1), e0224445. doi:10.1371/journal.pone.0224445.
Eslami, M., Tabarestani, S., Albarqouni, S., Adeli, E., Navab, N., Adjouadi, M., 2020. Image-to-images translation for multi-task organ segmentation and bone sup-
pression in chest x-ray radiography. IEEE Trans. Med. Imaging 39 (7), 1–1. doi:10.1109/tmi.2020.2974159.
Fang, Q., Yan, J., Gu, X., Zhao, J., Li, Q., 2020. Unsupervised learning-based deformable registration of temporal chest radiographs to detect interval change. In: Medical Imaging 2020: Image Processing. SPIE, p. 104. doi:10.1117/12.2549211.
Feng, Y., Teh, H.S., Cai, Y., 2019. Deep learning for chest radiology: a review. Curr. Radiol. Rep. 7 (8). doi:10.1007/s40134-019-0333-9.
Ferreira, J.R., Armando Cardona Cardenas, D., Moreno, R.A., de Fatima de Sa Rebelo, M., Krieger, J.E., Antonio Gutierrez, M., 2020. Multi-view ensemble convolutional neural network to improve classification of pneumonia in low contrast chest X-ray images. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1238–1241. doi:10.1109/EMBC44109.2020.9176517.
Ferreira Junior, J.R., Cardenas, D.A.C., Moreno, R.A., Rebelo, M.d.F.d.S., Krieger, J.E., Gutierrez, M.A., 2021. A general fully automated deep-learning method to detect cardiomegaly in chest x-rays. In: Medical Imaging 2021: Computer-Aided Diagnosis. International Society for Optics and Photonics, p. 115972B. doi:10.1117/12.2581980.
Fischer, A.M., Varga-Szemes, A., Martin, S.S., Sperl, J.I., Sahbaee, P., Neu-
Groza, V., Kuzin, A., 2020. Pneumothorax segmentation with effective conditioned post-processing in chest X-ray. In: 2020 IEEE 17th International Symposium on Biomedical Imaging Workshops (ISBI Workshops). IEEE, pp. 1–4. doi:10.1109/ISBIWorkshops50223.2020.9153444.
Gyawali, P.K., Ghimire, S., Bajracharya, P., Li, Z., Wang, L., 2020. Semi-supervised medical image classification with global latent mixing. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12261. Springer, pp. 604–613. doi:10.1007/978-3-030-59710-8_59.
Gyawali, P.K., Li, Z., Ghimire, S., Wang, L., 2019. Semi-supervised learning by disentangling and self-ensembling over stochastic latent space. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 766–774. doi:10.1007/978-3-030-32226-7_85.
Habib, S.S., Rafiq, S., Zaidi, S.M.A., Ferrand, R.A., Creswell, J., Van Ginneken, B., Jamal, W.Z., Azeemi, K.S., Khowaja, S., Khan, A., 2020. Evaluation of computer aided detection of tuberculosis on chest radiography among people with diabetes in Karachi Pakistan. Sci. Rep. 10 (1), 6276. doi:10.1038/s41598-020-63084-7.
Haghighi, F., Hosseinzadeh Taher, M.R., Zhou, Z., Gotway, M.B., Liang, J., 2020. Learning semantics-enriched representation via self-discovery, self-classification, and self-restoration. In: Medical Image Computing and Computer Assisted
mann, D., Gawlitza, J., Henzler, T., Johnson, C.M., Nance, J.W., Schoenberg, S.O., Intervention – MICCAI 2020, 12261. Springer, pp. 137–147. doi:10.1007/
Schoepf, U.J., 2020. Artificial intelligence-based fully automated per lobe seg- 978- 3- 030- 59710- 8_14.
mentation and emphysema-quantification based on chest computed tomog- Haq, N.F., Moradi, M., Wang, Z.J., 2021. A deep community based approach for large
raphy compared with global initiative for chronic obstructive lung disease scale content based X-ray image retrieval. Med. Image Anal. 68, 101847. doi:10.
severity of smokers. J. Thorac. Imaging 35 Suppl 1, S28–S34. doi:10.1097/RTI. 1016/j.media.2020.101847.
0 0 0 0 0 0 0 0 0 0 0 0 050 0. Hashir, M., Bertrand, H., Cohen, J.P., 2020. Quantifying the value of lateral views
Fricks, R.B., Abadi, E., Ria, F., Samei, E., 2021. Classification of COVID-19 in chest ra- in deep learning for chest x-rays. In: Proceedings of the Third Conference on
diographs: assessing the impact of imaging parameters using clinical and sim- Medical Imaging with Deep Learning. PMLR, pp. 288–303.
ulated images. In: Medical Imaging 2021: Computer-Aided Diagnosis. Interna- He, K., Gkioxari, G., Dollar, P., Girshick, R., 2017. Mask r-CNN. In: 2017 IEEE Interna-
tional Society for Optics and Photonics, p. 115970A. doi:10.1117/12.2582223. tional Conference on Computer Vision (ICCV). IEEE, pp. 2961–2969. doi:10.1109/
Frid-Adar, M., Amer, R., Greenspan, H., 2019. Endotracheal tube detection and seg- iccv.2017.322.
mentation in chest radiographs using synthetic data. In: Medical Image Com- He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition.
puting and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
pp. 784–792. doi:10.1007/978- 3- 030- 32226- 7_87. doi:10.1109/CVPR.2016.90.
Fukushima, K., Miyake, S., 1982. Neocognitron: a self-organizing neural network Heo, S.-J., Kim, Y., Yun, S., Lim, S.-S., Kim, J., Nam, C.-M., Park, E.-C., Jung, I., Yoon, J.-
model for a mechanism of visual pattern recognition. In: Competition and Coop- H., 2019. Deep learning algorithms with demographic information help to detect
eration in Neural Nets. Springer, pp. 267–285. doi:10.1007/978- 3- 642- 46466- 9_ tuberculosis in chest radiographs in annual workers’ health examination data.
18. Int. J. Environ. Res. Public Health 16 (2), 250. doi:10.3390/ijerph16020250.
Furutani, K., Hirano, Y., Kido, S., 2019. Segmentation of lung region from chest x-ray Hermoza, R., Maicas, G., Nascimento, J.C., Carneiro, G., 2020. Region proposals for
images using U-net. In: International Forum on Medical Imaging in Asia 2019. saliency map refinement for weakly-supervised disease localisation and classi-
SPIE, p. 48. doi:10.1117/12.2521594. fication. In: Medical Image Computing and Computer Assisted Intervention –
Ganesan, P., Rajaraman, S., Long, R., Ghoraani, B., Antani, S., 2019. Assessment of MICCAI 2020, 12266. Springer, pp. 539–549. doi:10.1007/978- 3- 030- 59725- 2_
data augmentation strategies toward performance improvement of abnormal- 52.
ity classification in chest radiographs. In: 2019 41st Annual International Con- Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., Hochreiter, S., 2017. Gans
ference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, trained by a two time-scale update rule converge to a local NASH equilibrium.
pp. 841–844. doi:10.1109/EMBC.2019.8857516. In: Proceedings of the 31st International Conference on Neural Information Pro-
Ghesu, F.C., Georgescu, B., Gibson, E., Guendel, S., Kalra, M.K., Singh, R., Digu- cessing Systems, pp. 6629–6640.
marthy, S.R., Grbic, S., Comaniciu, D., 2019. Quantifying and leveraging classifica- Hirata, Y., Kusunose, K., Tsuji, T., Fujimori, K., Kotoku, J., Sata, M., 2021. Deep learning
tion uncertainty for chest radiograph assessment. In: Medical Image Computing for detection of elevated pulmonary artery wedge pressure using standard chest
and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 676– X-ray. Can. J. Cardiol. doi:10.1016/j.cjca.2021.02.007. S0828282X21001094
684. doi:10.1007/978- 3- 030- 32226- 7_75. HMHospitales, 2020. COVIDDSL, Covid Data Save Lives. https://www.hmhospitales.
van Ginneken, B., 2017. Fifty years of computer analysis in chest imaging: rule- com/coronavirus/covid- data- save- lives/english- version.
based, machine learning, deep learning. Radiol. Phys. Technol. 10 (1), 23–32. Holste, G., Sullivan, R.P., Bindschadler, M., Nagy, N., Alessio, A., 2020. Multi-class se-
doi:10.1007/s12194- 017- 0394- 5. mantic segmentation of pediatric chest radiographs. In: Medical Imaging 2020:
van Ginneken, B., Stegmann, M.B., Loog, M., 2006. Segmentation of anatomical Image Processing. SPIE, p. 49. doi:10.1117/12.2544426.
structures in chest radiographs using supervised methods: a comparative study Homayounieh, F., Digumarthy, S.R., Febbo, J.A., Garrana, S., Nitiwarangkul, C.,
on a public database. Med. Image Anal. 10 (1), 19–40. doi:10.1016/j.media.2005. Singh, R., Khera, R.D., Gilman, M., Kalra, M.K., 2020. Comparison of baseline,
02.002. bone-subtracted, and enhanced chest radiographs for detection of pneumotho-
Girshick, R., 2015. Fast r-CNN. In: 2015 IEEE International Conference on Computer rax. Can. Assoc. Radiol. J. doi:10.1177/0846537120908852. 846537120908852
Vision (ICCV). IEEE, pp. 1440–1448. doi:10.1109/iccv.2015.169. Hosch, R., Kroll, L., Nensa, F., Koitka, S., 2020. Differentiation between anteropos-
Girshick, R., Donahue, J., Darrell, T., Malik, J., 2014. Rich feature hierarchies for accu- terior and posteroanterior chest X-ray view position with convolutional neu-
rate object detection and semantic segmentation. In: 2014 IEEE Conference on ral networks. RöFo - Fortschritte auf dem Gebiet der Röntgenstrahlen und der
Computer Vision and Pattern Recognition. IEEE, pp. 580–587. doi:10.1109/cvpr. bildgebenden Verfahren, a–1183–5227 doi:10.1055/a- 1183- 5227.
2014.81. Hu, Q., Drukker, K., Giger, M.L., 2021. Role of standard and soft tissue chest radio-
Gomi, T., Hara, H., Watanabe, Y., Mizukami, S., 2020. Improved digital chest to- graphy images in COVID-19 diagnosis using deep learning. In: Medical Imaging
mosynthesis image quality by use of a projection-based dual-energy virtual 2021: Computer-Aided Diagnosis. International Society for Optics and Photonics,
monochromatic convolutional neural network with super resolution. PLoS One p. 1159704. doi:10.1117/12.2581977.
15 (12), e0244745. doi:10.1371/journal.pone.0244745. Huang, G., Liu, Z., v. d. Maaten, L., Weinberger, K.Q., 2017. Densely connected convo-
Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., lutional networks. In: IEEE Conference on Computer Vision and Pattern Recog-
Courville, A., Bengio, Y., 2014. Generative adversarial nets. In: Proceedings of nition, pp. 2261–2269. doi:10.1109/CVPR.2017.243.
the 27th International Conference on Neural Information Processing Systems - Huang, X., Liu, M.-Y., Belongie, S., Kautz, J., 2018. Multimodal unsupervised image-
Volume 2. MIT Press, pp. 2672–2680. to-image translation. In: Computer Vision – ECCV 2018. Springer, pp. 179–196.
Gozes, O., Greenspan, H., 2019. Deep feature learning from a hospital-scale chest X- doi:10.1007/978- 3- 030- 01219- 9_11.
ray dataset with application to TB detection on a small-scale dataset. In: 2019 Hurt, B., Yen, A., Kligerman, S., Hsiao, A., 2020. Augmenting interpretation of chest
41st Annual International Conference of the IEEE Engineering in Medicine and radiographs with deep learning probability maps. J. Thorac. Imaging 35 (5),
Biology Society (EMBC). IEEE, pp. 4076–4079. doi:10.1109/EMBC.2019.8856729. 285–293. doi:10.1097/RTI.0 0 0 0 0 0 0 0 0 0 0 0 0505.
Gozes, O., Greenspan, H., 2020. Bone structures extraction and enhancement in Hwang, E.J., Nam, J.G., Lim, W.H., Park, S.J., Jeong, Y.S., Kang, J.H., Hong, E.K.,
chest radiographs via CNN trained on synthetic data. In: 2020 IEEE 17th In- Kim, T.M., Goo, J.M., Park, S., Kim, K.H., Park, C.M., 2019. Deep learning for chest
ternational Symposium on Biomedical Imaging (ISBI). IEEE, pp. 858–861. doi:10. radiograph diagnosis in the emergency department. Radiology 293 (3), 573–580.
1109/ISBI45749.2020.9098738. doi:10.1148/radiol.2019191225.
Grand-challenge, 2021. Grand challenge: AI for radiology. https://grand-challenge. Hwang, E.J., Park, S., Jin, K.-N., Kim, J.I., Choi, S.Y., Lee, J.H., Goo, J.M., Aum, J.,
org/aiforradiology/. Yim, J.-J., Cohen, J.G., Ferretti, G.R., and, C.M.P., 2019. Development and valida-
Griner, D., Zhang, R., Tie, X., Zhang, C., Garrett, J.W., Li, K., Chen, G.-H., 2021. COVID- tion of a deep learning–based automated detection algorithm for major thoracic
19 pneumonia diagnosis using chest X-ray radiograph and deep learning. In: diseases on chest radiographs. JAMA Netw. Open 2 (3), e191095. doi:10.1001/
Medical Imaging 2021: Computer-Aided Diagnosis. International Society for Op- jamanetworkopen.2019.1095.
tics and Photonics, p. 1159706. doi:10.1117/12.2581972.
Hwang, E.J., Park, S., Jin, K.-N., Kim, J.I., Choi, S.Y., Lee, J.H., Goo, J.M., Aum, J., Yim, J.-J., Park, C.M., Deep Learning-Based Automatic Detection Algorithm Development and Evaluation Group, Kim, D.H., Woo, W., Choi, C., Hwang, I.P., Song, Y.S., Lim, L., Kim, K., Wi, J.Y., Oh, S.S., Kang, M.-J., 2019. Development and validation of a deep learning–based automatic detection algorithm for active pulmonary tuberculosis on chest radiographs. Clin. Infect. Dis. 69 (5), 739–747. doi:10.1093/cid/ciy967.
Hwang, S., Kim, H.-E., 2016. Self-transfer learning for weakly supervised lesion localization. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, 9901. Springer, pp. 239–246. doi:10.1007/978-3-319-46723-8_28.
Hwang, S., Kim, H.-E., Jeong, J., Kim, H.-J., 2016. A novel approach for tuberculosis screening based on deep convolutional neural networks. In: Medical Imaging 2016: Computer-Aided Diagnosis. International Society for Optics and Photonics, p. 97852W. doi:10.1117/12.2216198.
Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., Marklund, H., Haghgoo, B., Ball, R.L., Shpanskaya, K.S., Seekins, J., Mong, D.A., Halabi, S.S., Sandberg, J.K., Jones, R., Larson, D.B., Langlotz, C.P., Patel, B.N., Lungren, M.P., Ng, A.Y., 2019. CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. In: AAAI Conference on Artificial Intelligence, 33, pp. 590–597.
Isensee, F., Jaeger, P.F., Kohl, S.A.A., Petersen, J., Maier-Hein, K.H., 2021. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18 (2), 203–211. doi:10.1038/s41592-020-01008-z.
Isola, P., Zhu, J.-Y., Zhou, T., Efros, A.A., 2017. Image-to-image translation with conditional adversarial networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 1125–1134. doi:10.1109/cvpr.2017.632.
Jaeger, S., Candemir, S., Antani, S., Wáng, Y.-X.J., Lu, P.-X., Thoma, G., 2014. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant. Imaging Med. Surg. 4 (6), 475–477. doi:10.3978/j.issn.2223-4292.2014.11.20.
Jang, R., Kim, N., Jang, M., Lee, K.H., Lee, S.M., Lee, K.H., Noh, H.N., Seo, J.B., 2020. Assessment of the robustness of convolutional neural networks in labeling noise by using chest X-ray images from multiple centers. JMIR Med. Inform. 8 (8), e18089. doi:10.2196/18089.
Johnson, A.E.W., Pollard, T.J., Berkowitz, S.J., Greenbaum, N.R., Lungren, M.P., Deng, C.-y., Mark, R.G., Horng, S., 2019. MIMIC-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 6 (1). doi:10.1038/s41597-019-0322-0.
Kallianos, K., Mongan, J., Antani, S., Henry, T., Taylor, A., Abuya, J., Kohli, M., 2019. How far have we come? Artificial intelligence for chest radiograph interpretation. Clin. Radiol. 74 (5), 338–345. doi:10.1016/j.crad.2018.12.015.
Karargyris, A., Kashyap, S., Wu, J.T., Sharma, A., Moradi, M., Syeda-Mahmood, T., 2019. Age prediction using a large chest X-ray dataset. In: Medical Imaging 2019: Computer-Aided Diagnosis. SPIE, p. 66. doi:10.1117/12.2512922.
Karargyris, A., Wong, K.C.L., Wu, J.T., Moradi, M., Syeda-Mahmood, T., 2019. Boosting the rule-out accuracy of deep disease detection using class weight modifiers. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 877–881. doi:10.1109/ISBI.2019.8759532.
Karras, T., Aila, T., Laine, S., Lehtinen, J., 2018. Progressive growing of GANs for improved quality, stability, and variation. In: International Conference on Learning Representations, pp. 1–8.
Kashyap, S., Karargyris, A., Wu, J., Gur, Y., Sharma, A., Wong, K.C.L., Moradi, M., Syeda-Mahmood, T., 2020. Looking in the right place for anomalies: explainable AI through automatic location learning. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 1125–1129. doi:10.1109/ISBI45749.2020.9098370.
Kashyap, S., Moradi, M., Karargyris, A., Wu, J.T., Morris, M., Saboury, B., Siegel, E., Syeda-Mahmood, T., 2019. Artificial intelligence for point of care radiograph quality assessment. In: Medical Imaging 2019: Computer-Aided Diagnosis. International Society for Optics and Photonics, p. 109503K. doi:10.1117/12.2513092.
Kermany, D., 2018. Large dataset of labeled optical coherence tomography (OCT) and chest X-ray images. doi:10.17632/RSCBJBR9SJ.3.
Khakzar, A., Albarqouni, S., Navab, N., 2019. Learning interpretable features via adversarially robust optimization. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 793–800. doi:10.1007/978-3-030-32226-7_88.
Khatibi, T., Shahsavari, A., Farahani, A., 2021. Proposing a novel multi-instance learning model for tuberculosis recognition from chest X-ray images based on CNNs, complex networks and stacked ensemble. Phys. Eng. Sci. Med. doi:10.1007/s13246-021-00980-w.
Kholiavchenko, M., Sirazitdinov, I., Kubrak, K., Badrutdinova, R., Kuleev, R., Yuan, Y., Vrtovec, T., Ibragimov, B., 2020. Contour-aware multi-label chest X-ray organ segmentation. Int. J. Comput. Assist. Radiol. Surg. 15 (3), 425–436. doi:10.1007/s11548-019-02115-9.
Kim, H.-E., Kim, S., Lee, J., 2018. Keep and learn: continual learning by constraining the latent space for knowledge preservation in neural networks. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11070. Springer, pp. 520–528. doi:10.1007/978-3-030-00928-1_59.
Kim, J.R., Shim, W.H., Yoon, H.M., Hong, S.H., Lee, J.S., Cho, Y.A., Kim, S., 2017. Computerized bone age estimation using deep learning based program: evaluation of the accuracy and efficiency. AJR Am. J. Roentgenol. 209 (6), 1374–1380. doi:10.2214/AJR.17.18224.
Kim, M., Lee, B.-D., 2021. Automatic lung segmentation on chest X-rays using self-attention deep neural network. Sensors 21 (2), 369. doi:10.3390/s21020369.
Kim, Y.-G., Cho, Y., Wu, C.-J., Park, S., Jung, K.-H., Seo, J.B., Lee, H.J., Hwang, H.J., Lee, S.M., Kim, N., 2019. Short-term reproducibility of pulmonary nodule and mass detection in chest radiographs: comparison among radiologists and four different computer-aided detections with convolutional neural net. Sci. Rep. 9 (1), 18738. doi:10.1038/s41598-019-55373-7.
Kim, Y.-G., Lee, S.M., Lee, K.H., Jang, R., Seo, J.B., Kim, N., 2020. Optimal matrix size of chest radiographs for computer-aided detection on lung nodule or mass with deep learning. Eur. Radiol. 30 (9), 4943–4951. doi:10.1007/s00330-020-06892-9.
Kitahara, Y., Tanaka, R., Roth, H., Oda, H., Mori, K., Kasahara, K., Matsumoto, I., 2019. Lung segmentation based on a deep learning approach for dynamic chest radiography. In: Medical Imaging 2019: Computer-Aided Diagnosis. SPIE, p. 130. doi:10.1117/12.2512711.
Kitamura, G., Deible, C., 2020. Retraining an open-source pneumothorax detecting machine learning algorithm for improved performance to medical images. Clin. Imaging 61, 15–19. doi:10.1016/j.clinimag.2020.01.008.
Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105.
Kruger, R.P., Townes, J.R., Hall, D.L., Dwyer, S.J., Lodwick, G.S., 1972. Automated radiographic diagnosis via feature extraction and classification of cardiac size and shape descriptors. IEEE Trans. Biomed. Eng. BME-19 (3), 174–186. doi:10.1109/tbme.1972.324115.
Kuo, P.-C., Tsai, C.C., López, D.M., Karargyris, A., Pollard, T.J., Johnson, A.E.W., Celi, L.A., 2021. Recalibration of deep learning models for abnormality detection in smartphone-captured chest radiograph. npj Digit. Med. 4 (1), 25. doi:10.1038/s41746-021-00393-9.
Kurmann, T., Márquez-Neila, P., Wolf, S., Sznitman, R., 2019. Deep multi-label classification in affine subspaces. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11764. Springer, pp. 165–173. doi:10.1007/978-3-030-32239-7_19.
Kusakunniran, W., Karnjanapreechakorn, S., Siriapisith, T., Borwarnginn, P., Sutassananon, K., Tongdee, T., Saiviroonporn, P., 2021. COVID-19 detection and heatmap generation in chest X-ray images. J. Med. Imaging 8 (S1), 014001. doi:10.1117/1.JMI.8.S1.014001.
Kusunose, K., Hirata, Y., Tsuji, T., Kotoku, J., Sata, M., 2020. Deep learning to predict elevated pulmonary artery pressure in patients with suspected pulmonary hypertension using standard chest X-ray. Sci. Rep. 10 (1), 19311. doi:10.1038/s41598-020-76359-w.
Lakhani, P., 2017. Deep convolutional neural networks for endotracheal tube position and X-ray image classification: challenges and opportunities. J. Digit. Imaging 30 (4), 460–468. doi:10.1007/s10278-017-9980-7.
Lakhani, P., Sundaram, B., 2017. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 284 (2), 574–582. doi:10.1148/radiol.2017162326.
Larrazabal, A.J., Martinez, C., Glocker, B., Ferrante, E., 2020. Post-DAE: anatomically plausible segmentation via post-processing with denoising autoencoders. IEEE Trans. Med. Imaging 39 (12), 3813–3820. doi:10.1109/TMI.2020.3005297.
Laserson, J., Lantsman, C.D., Cohen-Sfady, M., Tamir, I., Goz, E., Brestel, C., Bar, S., Atar, M., Elnekave, E., 2018. TextRay: mining clinical reports to gain a broad understanding of chest X-rays. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11071. Springer, pp. 553–561. doi:10.1007/978-3-030-00934-2_62.
LeCun, Y., Bengio, Y., 1998. Convolutional networks for images, speech, and time series. In: The Handbook of Brain Theory and Neural Networks. MIT Press, pp. 255–258.
Lee, D., Kim, H., Choi, B., Kim, H.-J., 2019. Development of a deep neural network for generating synthetic dual-energy chest X-ray images with single X-ray exposure. Phys. Med. Biol. 64 (11), 115017. doi:10.1088/1361-6560/ab1cee.
Lee, H., Mansouri, M., Tajmir, S., Lev, M.H., Do, S., 2018. A deep-learning system for fully-automated peripherally inserted central catheter (PICC) tip detection. J. Digit. Imaging 31 (4), 393–402. doi:10.1007/s10278-017-0025-z.
van Leeuwen, K.G., Schalekamp, S., Rutten, M.J.C.M., van Ginneken, B., de Rooij, M., 2021. Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur. Radiol. doi:10.1007/s00330-021-07892-z.
Lenga, M., Schulz, H., Saalbach, A., 2020. Continual learning for domain adaptation in chest X-ray classification. In: Proceedings of the Third Conference on Medical Imaging with Deep Learning. PMLR, 121:413–423.
Lenis, D., Major, D., Wimmer, M., Berg, A., Sluiter, G., Bühler, K., 2020. Domain aware medical image classifier interpretation by counterfactual impact analysis. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12261. Springer, pp. 315–325. doi:10.1007/978-3-030-59710-8_31.
Li, B., Kang, G., Cheng, K., Zhang, N., 2019. Attention-guided convolutional neural network for detecting pneumonia on chest X-rays. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 4851–4854. doi:10.1109/EMBC.2019.8857277.
Li, F., Shi, J.-X., Yan, L., Wang, Y.-G., Zhang, X.-D., Jiang, M.-S., Wu, Z.-Z., Zhou, K.-Q., 2021. Lesion-aware convolutional neural network for chest radiograph classification. Clin. Radiol. 76 (2), 155.e1–155.e14. doi:10.1016/j.crad.2020.08.027.
Li, M.D., Arun, N.T., Gidwani, M., Chang, K., Deng, F., Little, B.P., Mendoza, D.P., Lang, M., Lee, S.I., O'Shea, A., Parakh, A., Singh, P., Kalpathy-Cramer, J., 2020. Automated assessment of COVID-19 pulmonary disease severity on chest radiographs using convolutional siamese neural networks. medRxiv. doi:10.1101/2020.05.20.20108159.
Li, X., Cao, R., Zhu, D., 2020. Vispi: automatic visual perception and interpretation of chest X-rays. In: Medical Imaging with Deep Learning, pp. 1–8.
Li, X., Shen, L., Xie, X., Huang, S., Xie, Z., Hong, X., Yu, J., 2020. Multi-resolution convolutional networks for chest X-ray radiograph based lung nodule detection. Artif. Intell. Med. 103, 101744. doi:10.1016/j.artmed.2019.101744.
Li, X., Zhu, D., 2020. Robust detection of adversarial attacks on medical images. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 1154–1158. doi:10.1109/ISBI45749.2020.9098628.
Li, Y., Dong, X., Shi, W., Miao, Y., Yang, H., Jiang, Z., 2021. Lung fields segmentation in chest radiographs using Dense-U-Net and fully connected CRF. In: Twelfth International Conference on Graphics and Image Processing (ICGIP 2020). International Society for Optics and Photonics, p. 1172011. doi:10.1117/12.2589384.
Li, Z., Hou, Z., Chen, C., Hao, Z., An, Y., Liang, S., Lu, B., 2019. Automatic cardiothoracic ratio calculation with deep learning. IEEE Access 7, 37749–37756. doi:10.1109/ACCESS.2019.2900053.
Li, Z., Li, H., Han, H., Shi, G., Wang, J., Zhou, S.K., 2019. Encoding CT anatomy knowledge for unpaired chest X-ray image decomposition. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 275–283. doi:10.1007/978-3-030-32226-7_31.
Liang, C.-H., Liu, Y.-C., Wu, M.-T., Garcia-Castro, F., Alberich-Bayarri, A., Wu, F.-Z., 2020. Identifying pulmonary nodules or masses on chest radiography using deep learning: external validation and strategies to improve clinical practice. Clin. Radiol. 75 (1), 38–45. doi:10.1016/j.crad.2019.08.005.
Liang, G., Zheng, L., 2020. A transfer learning method with deep residual network for pediatric pneumonia diagnosis. Comput. Methods Programs Biomed. 187, 104964. doi:10.1016/j.cmpb.2019.06.023.
Lin, C., Tang, R., Lin, D.D., Liu, L., Lu, J., Chen, Y., Gao, D., Zhou, J., 2020. Deep feature disentanglement learning for bone suppression in chest radiographs. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 795–798. doi:10.1109/ISBI45749.2020.9098399.
Lin, T.-Y., Goyal, P., Girshick, R., He, K., Dollar, P., 2017. Focal loss for dense object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE, pp. 2980–2988. doi:10.1109/iccv.2017.324.
Litjens, G., Kooi, T., Bejnordi, B.E., Setio, A.A.A., Ciompi, F., Ghafoorian, M., van der Laak, J.A., van Ginneken, B., Sánchez, C.I., 2017. A survey on deep learning in medical image analysis. Med. Image Anal. 42, 60–88. doi:10.1016/j.media.2017.07.005.
Liu, H., Wang, L., Nan, Y., Jin, F., Wang, Q., Pu, J., 2019. SDFN: segmentation-based deep fusion network for thoracic disease classification in chest X-ray images. Comput. Med. Imaging Graph. 75, 66–73. doi:10.1016/j.compmedimag.2019.05.005.
Liu, X., Wang, S., Deng, Y., Chen, K., 2017. Coronary artery calcification (CAC) classification with deep convolutional neural networks. In: Medical Imaging 2017: Computer-Aided Diagnosis. SPIE, p. 101340M. doi:10.1117/12.2253974.
Liu, Y., Liu, M., Xi, Y., Qin, G., Shen, D., Yang, W., 2020. Generating dual-energy subtraction soft-tissue images from chest radiographs via bone edge-guided GAN. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12262. Springer, pp. 678–687. doi:10.1007/978-3-030-59713-9_65.
Liu, Y.-C., Lin, Y.-C., Tsai, P.-Y., Iwata, O., Chuang, C.-C., Huang, Y.-H., Tsai, Y.-S., Sun, Y.-N., 2020. Convolutional neural network-based humerus segmentation and application to bone mineral density estimation from chest X-ray images of critical infants. Diagnostics 10 (12), 1028. doi:10.3390/diagnostics10121028.
Lodwick, G.S., Keats, T.E., Dorst, J.P., 1963. The coding of roentgen images for computer analysis as applied to lung cancer. Radiology 81 (2), 185–200. doi:10.1148/81.2.185.
Longjiang, E., Zhao, B., Liu, H., Zheng, C., Song, X., Cai, Y., Liang, H., 2020. Image-based deep learning in diagnosing the etiology of pneumonia on pediatric chest X-rays. Pediatr. Pulmonol. doi:10.1002/ppul.25229.
Lu, M.T., Ivanov, A., Mayrhofer, T., Hosny, A., Aerts, H.J.W.L., Hoffmann, U., 2019. Deep learning to assess long-term mortality from chest radiographs. JAMA Netw. Open 2 (7), e197416. doi:10.1001/jamanetworkopen.2019.7416.
Lu, M.T., Raghu, V.K., Mayrhofer, T., Aerts, H.J., Hoffmann, U., 2020. Deep learning using chest radiographs to identify high-risk smokers for lung cancer screening computed tomography: development and validation of a prediction model. Ann. Intern. Med. 173 (9), 704–713. doi:10.7326/M20-1868.
Lu, Y., Li, W., Zheng, K., Wang, Y., Harrison, A.P., Lin, C., Wang, S., Xiao, J., Lu, L., Kuo, C.-F., Miao, S., 2020. Learning to segment anatomical structures accurately from one exemplar. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12261. Springer, pp. 678–688. doi:10.1007/978-3-030-59710-8_66.
Luo, L., Yu, L., Chen, H., Liu, Q., Wang, X., Xu, J., Heng, P.-A., 2020. Deep mining external imperfect data for chest X-ray disease screening. IEEE Trans. Med. Imaging 39 (11), 3583–3594. doi:10.1109/TMI.2020.3000949.
López-Cabrera, J.D., Orozco-Morales, R., Portal-Diaz, J.A., Lovelle-Enríquez, O., Pérez-Díaz, M., 2021. Current limitations to identify COVID-19 using artificial intelligence with chest X-ray imaging. Health Technol. 11 (2), 411–424. doi:10.1007/s12553-021-00520-2.
FCN with local refinement. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11071. Springer, pp. 562–570. doi:10.1007/978-3-030-00934-2_63.
Maguolo, G., Nanni, L., 2020. A critic evaluation of methods for COVID-19 automatic detection from X-ray images. arXiv preprint arXiv:2004.12823.
Mahapatra, D., Ge, Z., 2019. Training data independent image registration with GANs using transfer learning and segmentation information. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 709–713. doi:10.1109/ISBI.2019.8759247.
Majkowska, A., Mittal, S., Steiner, D.F., Reicher, J.J., McKinney, S.M., Duggan, G.E., Eswaran, K., Cameron Chen, P.-H., Liu, Y., Kalidindi, S.R., Ding, A., Corrado, G.S., Tse, D., Shetty, S., 2019. Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation. Radiology 294 (2), 421–431. doi:10.1148/radiol.2019191293.
Mansilla, L., Milone, D.H., Ferrante, E., 2020. Learning deformable registration of medical images with anatomical constraints. Neural Netw. 124, 269–279. doi:10.1016/j.neunet.2020.01.023.
Mansoor, A., Cerrolaza, J.J., Perez, G., Biggs, E., Okada, K., Nino, G., Linguraru, M.G., 2020. A generic approach to lung field segmentation from chest radiographs using deep space and shape learning. IEEE Trans. Biomed. Eng. 67 (4), 1206–1220. doi:10.1109/TBME.2019.2933508.
Mansoor, A., Perez, G., Nino, G., Linguraru, M.G., 2016. Automatic tissue characterization of air trapping in chest radiographs using deep neural networks. In: 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 97–100. doi:10.1109/EMBC.2016.7590649.
Mao, C., Yao, L., Pan, Y., Luo, Y., Zeng, Z., 2018. Deep generative classifiers for thoracic disease diagnosis with chest X-ray images. In: 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, pp. 1209–1214. doi:10.1109/BIBM.2018.8621107.
Mao, Y., Xue, F.-F., Wang, R., Zhang, J., Zheng, W.-S., Liu, H., 2020. Abnormality detection in chest X-ray images using uncertainty prediction autoencoders. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12266. Springer, pp. 529–538. doi:10.1007/978-3-030-59725-2_51.
Mathai, T.S., Gorantla, V., Galeotti, J., 2019. Segmentation of vessels in ultra high frequency ultrasound sequences using contextual memory. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11765. Springer, pp. 173–181. doi:10.1007/978-3-030-32245-8_20.
Matsubara, N., Teramoto, A., Saito, K., Fujita, H., 2020. Bone suppression for chest X-ray image using a convolutional neural filter. Phys. Eng. Sci. Med. 43 (1), 97–108. doi:10.1007/s13246-019-00822-w.
Matsumoto, T., Kodera, S., Shinohara, H., Ieki, H., Yamaguchi, T., Higashikuni, Y., Kiyosue, A., Ito, K., Ando, J., Takimoto, E., Akazawa, H., Morita, H., Komuro, I., 2020. Diagnosing heart failure from chest X-ray images using deep learning. Int. Heart J. 61 (4), 781–786. doi:10.1536/ihj.19-714.
McDonald, C.J., Overhage, J.M., Barnes, M., Schadow, G., Blevins, L., Dexter, P.R., Mamlin, B., 2005. The Indiana Network for Patient Care: a working local health information infrastructure. Health Aff. 24 (5), 1214–1220. doi:10.1377/hlthaff.24.5.1214.
McManigle, J.E., Bartz, R.R., Carin, L., 2020. Y-Net for chest X-ray preprocessing: simultaneous classification of geometry and segmentation of annotations. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1266–1269. doi:10.1109/EMBC44109.2020.9176334.
Mettler, F.A., Bhargavan, M., Faulkner, K., Gilley, D.B., Gray, J.E., Ibbott, G.S., Lipoti, J.A., Mahesh, M., McCrohan, J.L., Stabin, M.G., Thomadsen, B.R., Yoshizumi, T.T., 2009. Radiologic and nuclear medicine studies in the United States and worldwide: frequency, radiation dose, and comparison with other radiation sources, 1950–2007. Radiology 253 (2), 520–531. doi:10.1148/radiol.2532082010.
Meyers, P.H., Nice, C.M., Becker, H.C., Nettleton, W.J., Sweeney, J.W., Meckstroth, G.R., 1964. Automated computer analysis of radiographic images. Radiology 83 (6), 1029–1034. doi:10.1148/83.6.1029.
Michael, P., Yoon, H.-J., 2020. Survey of image denoising methods for medical image classification. In: Medical Imaging 2020: Computer-Aided Diagnosis. SPIE, p. 132. doi:10.1117/12.2549695.
Milletari, F., Rieke, N., Baust, M., Esposito, M., Navab, N., 2018. CFCM: segmentation via coarse to fine context memory. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11073. Springer, pp. 667–674. doi:10.1007/978-3-030-00937-3_76.
Mitra, A., Chakravarty, A., Ghosh, N., Sarkar, T., Sethuraman, R., Sheet, D., 2020. A systematic search over deep convolutional neural network architectures for
Ma, C., Wang, H., Hoi, S.C.H., 2019. Multi-label Thoracic Disease Image Classifica- Screening Chest Radiographs. In: 2020 42nd Annual International Conference
tion with Cross-Attention Networks. In: Medical Image Computing and Com- of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1225–
puter Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 730–738. doi:10. 1228. doi:10.1109/EMBC44109.2020.9175246.
1007/978- 3- 030- 32226- 7_81. Mittal, A., Kumar, D., Mittal, M., Saba, T., Abunadi, I., Rehman, A., Roy, S., 2020. De-
Madani, A., Moradi, M., Karargyris, A., Syeda-Mahmood, T., 2018. Semi-supervised tecting Pneumonia Using Convolutions and Dynamic Capsule Routing for Chest
learning with generative adversarial networks for chest X-ray classification with X-ray Images. Sensors 20 (4), 1068. doi:10.3390/s20041068.
ability of data domain adaptation. In: 2018 IEEE 15th International Symposium Moradi, M., Madani, A., Gur, Y., Guo, Y., Syeda-Mahmood, T., 2018. Bimodal Net-
on Biomedical Imaging (ISBI 2018). IEEE, pp. 1038–1042. doi:10.1109/ISBI.2018. work Architectures for Automatic Generation of Image Annotation from Text. In:
8363749. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018,
Madani, A., Moradi, M., Karargyris, A., Syeda-Mahmood, T.F., 2018. Chest x-ray gen- 11070. Springer, pp. 449–456. doi:10.1007/978- 3- 030- 00928- 1_51.
eration and data augmentation for cardiovascular abnormality classification. In: Moradi, M., Wong, K.L., Karargyris, A., Syeda-Mahmood, T., 2020. Quality controlled
Medical Imaging 2018: Image Processing. SPIE, p. 57. doi:10.1117/12.2293971. segmentation to aid disease detection. In: Medical Imaging 2020: Computer-
Mader, A.O., von Berg, J., Fabritz, A., Lorenz, C., Meyer, C., 2018. Localization Aided Diagnosis. SPIE, p. 138. doi:10.1117/12.2549426.
and Labeling of Posterior Ribs in Chest Radiographs Using a CRF-regularized Mortani Barbosa, E.J., Gefter, W.B., Ghesu, F.C., Liu, S., Mailhe, B., Mansoor, A.,
24
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125
Grbic, S., Vogt, S., 2021. Automated Detection and Quantification of COVID-19 Airspace Disease on Chest Radiographs: A Novel Approach Achieving Expert Radiologist-Level Performance Using a Deep Convolutional Neural Network Trained on Digital Reconstructed Radiographs From Computed Tomography–Derived Ground Truth. Invest. Radiol. Publish Ahead of Print. doi:10.1097/RLI.0000000000000763.
Murphy, K., Habib, S.S., Zaidi, S.M.A., Khowaja, S., Khan, A., Melendez, J., Scholten, E.T., Amad, F., Schalekamp, S., Verhagen, M., Philipsen, R.H.H.M., Meijers, A., van Ginneken, B., 2020. Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system. Sci. Rep. 10 (1), 5492. doi:10.1038/s41598-020-62148-y.
Murphy, K., Smits, H., Knoops, A.J.G., Korst, M.B.J.M., Samson, T., Scholten, E.T., Schalekamp, S., Schaefer-Prokop, C.M., Philipsen, R.H.H.M., Meijers, A., Melendez, J., van Ginneken, B., Rutten, M., 2020. COVID-19 on Chest Radiographs: A Multireader Evaluation of an Artificial Intelligence System. Radiology 296 (3), E166–E172. doi:10.1148/radiol.2020201874.
Márquez-Neila, P., Sznitman, R., 2019. Image Data Validation for Medical Systems. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11767. Springer, pp. 329–337. doi:10.1007/978-3-030-32251-9_36.
Nakao, T., Hanaoka, S., Nomura, Y., Murata, M., Takenaga, T., Miki, S., Watadani, T., Yoshikawa, T., Hayashi, N., Abe, O., 2021. Unsupervised Deep Anomaly Detection in Chest Radiographs. J. Digit. Imaging doi:10.1007/s10278-020-00413-2.
Nam, J.G., Kim, M., Park, J., Hwang, E.J., Lee, J.H., Hong, J.H., Goo, J.M., Park, C.M., 2020. Development and validation of a deep learning algorithm detecting 10 common abnormalities on chest radiographs. Eur. Respir. J. 2003061. doi:10.1183/13993003.03061-2020.
Nam, J.G., Park, S., Hwang, E.J., Lee, J.H., Jin, K.-N., Lim, K.Y., Vu, T.H., Sohn, J.H., Hwang, S., Goo, J.M., Park, C.M., 2019. Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology 290 (1), 218–228. doi:10.1148/radiol.2018180237.
Narayanan, B.N., Davuluru, V.S.P., Hardie, R.C., 2020. Two-stage deep learning architecture for pneumonia detection and its diagnosis in chest radiographs. In: Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications. SPIE, p. 15. doi:10.1117/12.2547635.
Nash, M., Kadavigere, R., Andrade, J., Sukumar, C.A., Chawla, K., Shenoy, V.P., Pande, T., Huddart, S., Pai, M., Saravu, K., 2020. Deep learning, computer-aided radiography reading for tuberculosis: a diagnostic accuracy study from a tertiary hospital in India. Sci. Rep. 10 (1), 210. doi:10.1038/s41598-019-56589-3.
National Lung Screening Trial Research Team, Aberle, D.R., Adams, A.M., Berg, C.D., Black, W.C., Clapp, J.D., Fagerstrom, R.M., Gareen, I.F., Gatsonis, C., Marcus, P.M., Sicks, J.D., 2011. Reduced lung-cancer mortality with low-dose computed tomographic screening. The New England Journal of Medicine 365 (5), 395–409. doi:10.1056/NEJMoa1102873.
Novikov, A.A., Lenis, D., Major, D., Hladuvka, J., Wimmer, M., Buhler, K., 2018. Fully convolutional architectures for multiclass segmentation in chest radiographs. IEEE Transactions on Medical Imaging 37 (8), 1865–1876. doi:10.1109/tmi.2018.2806086.
Nugroho, B.A., 2021. An aggregate method for thorax diseases classification. Sci. Rep. 11 (1), 3242. doi:10.1038/s41598-021-81765-9.
Oakden-Rayner, L., 2019. Half a million X-rays! First impressions of the Stanford and MIT chest X-ray datasets. https://lukeoakdenrayner.wordpress.com/2019/02/25/half-a-million-x-rays-first-impressions-of-the-stanford-and-mit-chest-x-ray-datasets/.
Oakden-Rayner, L., 2020. Exploring large-scale public medical image datasets. Academic Radiology 27 (1), 106–112. doi:10.1016/j.acra.2019.10.006.
Odena, A., Olah, C., Shlens, J., 2017. Conditional image synthesis with auxiliary classifier GANs. In: Proceedings of the 34th International Conference on Machine Learning. In: Proceedings of Machine Learning Research, 70, pp. 2642–2651.
Ogawa, R., Kido, T., Kido, T., Mochizuki, T., 2019. Effect of augmented datasets on deep convolutional neural networks applied to chest radiographs. Clinical Radiology 74 (9), 697–701. doi:10.1016/j.crad.2019.04.025.
Oh, D.Y., Kim, J., Lee, K.J., 2019. Longitudinal Change Detection on Chest X-rays Using Geometric Correlation Maps. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 748–756. doi:10.1007/978-3-030-32226-7_83.
Oh, Y., Park, S., Ye, J.C., 2020. Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets. IEEE Transactions on Medical Imaging 39 (8), 2688–2700. doi:10.1109/TMI.2020.2993291.
Olatunji, T., Yao, L., Covington, B., Upton, A., 2019. Caveats in generating medical imaging labels from radiology reports with natural language processing. In: International Conference on Medical Imaging with Deep Learning – Extended Abstract Track, pp. 1–4.
Oliveira, H., Mota, V., Machado, A.M., dos Santos, J.A., 2020. From 3d to 2d: Transferring knowledge for rib segmentation in chest x-rays. Pattern Recognit. Lett. 140, 10–17. doi:10.1016/j.patrec.2020.09.021.
Oliveira, H., dos Santos, J., 2018. Deep transfer learning for segmentation of anatomical structures in chest radiographs. In: 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, pp. 204–211. doi:10.1109/sibgrapi.2018.00033.
Oliveira, H.N., Ferreira, E., Santos, J.A.D., 2020. Truly generalizable radiograph segmentation with conditional domain adaptation. IEEE Access 8, 84037–84062. doi:10.1109/access.2020.2991688.
Onodera, S., Lee, Y., Tanaka, Y., 2020. Evaluation of dose reduction potential in scatter-corrected bedside chest radiography using U-net. Radiol. Phys. Technol. 13 (4), 336–347. doi:10.1007/s12194-020-00586-z.
Ouyang, X., Karanam, S., Wu, Z., Chen, T., Huo, J., Zhou, X.S., Wang, Q., Cheng, J.-Z., 2020. Learning hierarchical attention for weakly-supervised chest X-ray abnormality localization and diagnosis. IEEE Trans. Med. Imaging doi:10.1109/TMI.2020.3042773.
Ouyang, X., Xue, Z., Zhan, Y., Zhou, X.S., Wang, Q., Zhou, Y., Wang, Q., Cheng, J.-Z., 2019. Weakly supervised segmentation framework with uncertainty: a study on pneumothorax segmentation in chest X-ray. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 613–621. doi:10.1007/978-3-030-32226-7_68.
Owais, M., Arsalan, M., Mahmood, T., Kim, Y.H., Park, K.R., 2020. Comprehensive computer-aided decision support framework to diagnose tuberculosis from chest X-ray images: data mining study. JMIR Med. Inform. 8 (12), e21790. doi:10.2196/21790.
Pan, I., Agarwal, S., Merck, D., 2019. Generalizable inter-institutional classification of abnormal chest radiographs using efficient convolutional neural networks. J. Digit. Imaging 32 (5), 888–896. doi:10.1007/s10278-019-00180-9.
Pan, Y., Chen, Q., Chen, T., Wang, H., Zhu, X., Fang, Z., Lu, Y., 2019. Evaluation of a computer-aided method for measuring the Cobb angle on chest X-rays. Eur. Spine J. 28 (12), 3035–3043. doi:10.1007/s00586-019-06115-w.
Park, S., Lee, S.M., Kim, N., Choe, J., Cho, Y., Do, K.-H., Seo, J.B., 2019. Application of deep learning–based computer-aided detection system: detecting pneumothorax on chest radiograph after biopsy. Eur. Radiol. 29 (10), 5341–5348. doi:10.1007/s00330-019-06130-x.
Park, S., Lee, S.M., Lee, K.H., Jung, K.-H., Bae, W., Choe, J., Seo, J.B., 2020. Deep learning-based detection system for multiclass lesions on chest radiographs: comparison with observer readings. European Radiology 30 (3), 1359–1368. doi:10.1007/s00330-019-06532-x.
Pasa, F., Golkov, V., Pfeiffer, F., Cremers, D., Pfeiffer, D., 2019. Efficient deep network architectures for fast chest X-ray tuberculosis screening and visualization. Sci. Rep. 9 (1), 6268. doi:10.1038/s41598-019-42557-4.
Paul, A., Shen, T.C., Lee, S., Balachandar, N., Peng, Y., Lu, Z., Summers, R.M., 2021. Generalized zero-shot chest X-ray diagnosis through trait-guided multi-view semantic embedding with self-training. IEEE Trans. Med. Imaging doi:10.1109/TMI.2021.3054817.
Paul, A., Tang, Y.-X., Shen, T.C., Summers, R.M., 2021. Discriminative ensemble learning for few-shot chest x-ray diagnosis. Med. Image Anal. 68, 101911. doi:10.1016/j.media.2020.101911.
Paul, A., Tang, Y.-X., Summers, R.M., 2020. Fast few-shot transfer learning for disease identification from chest x-ray images using autoencoder ensemble. In: Medical Imaging 2020: Computer-Aided Diagnosis. SPIE, p. 6. doi:10.1117/12.2549060.
Pesce, E., Joseph Withey, S., Ypsilantis, P.-P., Bakewell, R., Goh, V., Montana, G., 2019. Learning to detect chest radiographs containing pulmonary lesions using visual attention networks. Med. Image Anal. 53, 26–38. doi:10.1016/j.media.2018.12.007.
Pham, H.H., Le, T.T., Ngo, D.T., Tran, D.Q., Nguyen, H.Q., 2020. Interpreting chest X-rays via CNNs that exploit hierarchical disease dependencies and uncertainty labels. In: Medical Imaging with Deep Learning, pp. 1–8.
Portela, R.D.S., Pereira, J.R.G., Costa, M.G.F., Filho, C.F.F.C., 2020. Lung region segmentation in chest X-ray images using deep convolutional neural networks. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1246–1249. doi:10.1109/EMBC44109.2020.9175478.
Prevedello, L.M., Halabi, S.S., Shih, G., Wu, C.C., Kohli, M.D., Chokshi, F.H., Erickson, B.J., Kalpathy-Cramer, J., Andriole, K.P., Flanders, A.E., 2019. Challenges related to artificial intelligence research in medical imaging and the importance of image analysis competitions. Radiology: Artificial Intelligence 1 (1), e180031. doi:10.1148/ryai.2019180031.
Qin, C., Yao, D., Shi, Y., Song, Z., 2018. Computer-aided detection in chest radiography based on artificial intelligence: a survey. BioMed. Eng. OnLine 17 (1). doi:10.1186/s12938-018-0544-y.
Qin, Z.Z., Sander, M.S., Rai, B., Titahong, C.N., Sudrungrot, S., Laah, S.N., Adhikari, L.M., Carter, E.J., Puri, L., Codlin, A.J., Creswell, J., 2019. Using artificial intelligence to read chest radiographs for tuberculosis detection: a multi-site evaluation of the diagnostic accuracy of three deep learning systems. Sci. Rep. 9 (1), 15000. doi:10.1038/s41598-019-51503-3.
Qu, W., Balki, I., Mendez, M., Valen, J., Levman, J., Tyrrell, P.N., 2020. Assessing and mitigating the effects of class imbalance in machine learning with application to X-ray imaging. Int. J. Comput. Assist. Radiol. Surg. 15 (12), 2041–2048. doi:10.1007/s11548-020-02260-6.
Que, Q., Tang, Z., Wang, R., Zeng, Z., Wang, J., Chua, M., Gee, T.S., Yang, X., Veeravalli, B., 2018. CardioXNet: automated detection for cardiomegaly based on deep learning. In: International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE, pp. 612–615. doi:10.1109/embc.2018.8512374.
Quekel, L.G., Kessels, A.G., Goei, R., van Engelshoven, J.M., 2001. Detection of lung cancer on the chest radiograph: a study on observer performance. Eur. J. Radiol. 39 (2), 111–116. doi:10.1016/s0720-048x(01)00301-1.
Rahman, M.F., Tseng, T.-L.B., Pokojovy, M., Qian, W., Totada, B., Xu, H., 2021. An automatic approach to lung region segmentation in chest X-ray images using adapted U-Net architecture. In: Medical Imaging 2021: Physics of Medical Imaging. International Society for Optics and Photonics, p. 115953I. doi:10.1117/12.2581882.
Rajan, D., Thiagarajan, J.J., Karargyris, A., Kashyap, S., 2021. Self-training with improved regularization for sample-efficient chest X-ray classification. In: Medical Imaging 2021: Computer-Aided Diagnosis. International Society for Optics and Photonics, p. 115971S. doi:10.1117/12.2582290.
Rajaraman, S., Antani, S.K., 2020. Modality-specific deep learning model ensem-
bles toward improving TB detection in chest radiographs. IEEE Access 8, 27318–27326. doi:10.1109/ACCESS.2020.2971257.
Rajaraman, S., Candemir, S., Kim, I., Thoma, G., Antani, S., 2018. Visualization and interpretation of convolutional neural network predictions in detecting pneumonia in pediatric chest radiographs. Appl. Sci. 8 (10), 1715. doi:10.3390/app8101715.
Rajaraman, S., Candemir, S., Xue, Z., Alderson, P.O., Kohli, M., Abuya, J., Thoma, G.R., Antani, S., 2018. A novel stacked generalization of models for improved TB detection in chest radiographs. In: 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 718–721. doi:10.1109/EMBC.2018.8512337.
Rajaraman, S., Kim, I., Antani, S.K., 2020. Detection and visualization of abnormality in chest radiographs using modality-specific convolutional neural network ensembles. PeerJ 8, e8693. doi:10.7717/peerj.8693.
Rajaraman, S., Sornapudi, S., Alderson, P.O., Folio, L.R., Antani, S.K., 2020. Analyzing inter-reader variability affecting deep ensemble learning for COVID-19 detection in chest radiographs. PLoS One 15 (11), e0242301. doi:10.1371/journal.pone.0242301.
Rajaraman, S., Sornapudi, S., Kohli, M., Antani, S., 2019. Assessment of an ensemble of machine learning models toward abnormality detection in chest radiographs. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, pp. 3689–3692. doi:10.1109/EMBC.2019.8856715.
Rajaraman, S., Thoma, G., Antani, S., Candemir, S., 2019. Visualizing and explaining deep learning predictions for pneumonia detection in pediatric chest radiographs. In: Medical Imaging 2019: Computer-Aided Diagnosis. SPIE, p. 27. doi:10.1117/12.2512752.
Rajkomar, A., Lingam, S., Taylor, A.G., Blum, M., Mongan, J., 2017. High-throughput classification of radiographs using deep convolutional neural networks. J. Digit. Imaging 30 (1), 95–101. doi:10.1007/s10278-016-9914-9.
Rajpurkar, P., Irvin, J., Ball, R.L., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C.P., Patel, B.N., Yeom, K.W., Shpanskaya, K., Blankenberg, F.G., Seekins, J., Amrhein, T.J., Mong, D.A., Halabi, S.S., Zucker, E.J., Ng, A.Y., Lungren, M.P., 2018. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 15 (11), e1002686. doi:10.1371/journal.pmed.1002686.
Rajpurkar, P., O'Connell, C., Schechter, A., Asnani, N., Li, J., Kiani, A., Ball, R.L., Mendelson, M., Maartens, G., van Hoving, D.J., Griesel, R., Ng, A.Y., Boyles, T.H., Lungren, M.P., 2020. CheXaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with HIV. npj Digit. Med. 3 (1), 115. doi:10.1038/s41746-020-00322-2.
Raoof, S., Feigin, D., Sung, A., Raoof, S., Irugulpati, L., Rosenow, E.C., 2012. Interpretation of plain chest roentgenogram. Chest 141 (2), 545–558. doi:10.1378/chest.10-1302.
Ravishankar, H., Venkataramani, R., Anamandra, S., Sudhakar, P., Annangi, P., 2019. Feature transformers: privacy preserving lifelong learners for medical imaging. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11767. Springer, pp. 347–355. doi:10.1007/978-3-030-32251-9_38.
Recht, M.P., Dewey, M., Dreyer, K., Langlotz, C., Niessen, W., Prainsack, B., Smith, J.J., 2020. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur. Radiol. 30 (6), 3576–3584. doi:10.1007/s00330-020-06672-5.
Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 779–788. doi:10.1109/cvpr.2016.91.
Redmon, J., Farhadi, A., 2017. YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 7263–7271. doi:10.1109/cvpr.2017.690.
Redmon, J., Farhadi, A., 2018. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.
Ren, S., He, K., Girshick, R., Sun, J., 2017. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 39 (6), 1137–1149. doi:10.1109/tpami.2016.2577031.
Rolnick, D., Veit, A., Belongie, S., Shavit, N., 2018. Deep learning is robust to massive label noise. arXiv:1705.10694.
Ronneberger, O., Fischer, P., Brox, T., 2015. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer Assisted Intervention. In: LNCS, 9351, pp. 234–241.
RSNA, 2018. RSNA pneumonia detection challenge. http://www.kaggle.com.
Rueckel, J., Kunz, W.G., Hoppe, B.F., Patzig, M., Notohamiprodjo, M., Meinel, F.G., Cyran, C.C., Ingrisch, M., Ricke, J., Sabel, B.O., 2020. Artificial intelligence algorithm detecting lung infection in supine chest radiographs of critically ill patients with a diagnostic accuracy similar to board-certified radiologists. Crit. Care Med. Publish Ahead of Print. doi:10.1097/CCM.0000000000004397.
Sabottke, C.F., Breaux, M.A., Spieler, B.M., 2020. Estimation of age in unidentified patients via chest radiography using convolutional neural network regression. Emerg. Radiol. 27 (5), 463–468. doi:10.1007/s10140-020-01782-5.
Saednia, K., Jalalifar, A., Ebrahimi, S., Sadeghi-Naini, A., 2020. An attention-guided deep neural network for annotating abnormalities in chest X-ray images: visualization of network decision basis. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). IEEE, pp. 1258–1261. doi:10.1109/EMBC44109.2020.9175378.
Sahiner, B., Pezeshk, A., Hadjiiski, L.M., Wang, X., Drukker, K., Cha, K.H., Summers, R.M., Giger, M.L., 2018. Deep learning in medical imaging and radiation therapy. Med. Phys. 46 (1), e1–e36. doi:10.1002/mp.13264.
Salehinejad, H., Colak, E., Dowdell, T., Barfett, J., Valaee, S., 2019. Synthesizing chest X-ray pathology for training deep convolutional neural networks. IEEE Trans. Med. Imaging 38 (5), 1197–1206. doi:10.1109/TMI.2018.2881415.
Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X., 2016. Improved techniques for training GANs. In: Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 2234–2242.
Samala, R.K., Hadjiiski, L., Chan, H.-P., Zhou, C., Stojanovska, J., Agarwal, P., Fung, C., 2021. Severity assessment of COVID-19 using imaging descriptors: a deep-learning transfer learning approach from non-COVID-19 pneumonia. In: Medical Imaging 2021: Computer-Aided Diagnosis. International Society for Optics and Photonics, p. 115971T. doi:10.1117/12.2582115.
Santos, A.d.S., Oliveira, R.D.d., Lemos, E.F., Lima, F., Cohen, T., Cords, O., Martinez, L., Gonçalves, C., Ko, A., Andrews, J.R., Croda, J., 2020. Yield, efficiency and costs of mass screening algorithms for tuberculosis in Brazilian prisons. Clin. Infect. Dis. doi:10.1093/cid/ciaa135.
Sathitratanacheewin, S., Sunanta, P., Pongpirul, K., 2020. Deep learning for automated classification of tuberculosis-related chest X-Ray: dataset distribution shift limits diagnostic performance generalizability. Heliyon 6 (8), e04614. doi:10.1016/j.heliyon.2020.e04614.
Schalekamp, S., van Ginneken, B., Koedam, E., Snoeren, M.M., Tiehuis, A.M., Wittenberg, R., Karssemeijer, N., Schaefer-Prokop, C.M., 2014. Computer-aided detection improves detection of pulmonary nodules in chest radiographs beyond the support by bone-suppressed images. Radiology 272 (1), 252–261. doi:10.1148/radiol.14131315.
Schalekamp, S., Ginneken, B.v., Berk, I.A.H.v.d., Hartmann, I.J.C., Snoeren, M.M., Odink, A.E., Lankeren, W.v., Pegge, S.A.H., Schijf, L.J., Karssemeijer, N., Schaefer-Prokop, C.M., 2014. Bone suppression increases the visibility of invasive pulmonary aspergillosis in chest radiographs. PLoS One 9 (10), e108551. doi:10.1371/journal.pone.0108551.
Schalekamp, S., Karssemeijer, N., Cats, A.M., De Hoop, B., Geurts, B.H.J., Berger-Hartog, O., van Ginneken, B., Schaefer-Prokop, C.M., 2016. The effect of supplementary bone-suppressed chest radiographs on the assessment of a variety of common pulmonary abnormalities: results of an observer study. J. Thorac. Imaging 31 (2), 119–125. doi:10.1097/RTI.0000000000000195.
Schroeder, J.D., Bigolin Lanfredi, R., Li, T., Chan, J., Vachet, C., Paine, R., Srikumar, V., Tasdizen, T., 2021. Prediction of obstructive lung disease from chest radiographs via deep learning trained on pulmonary function data. Int. J. Chron. Obstr. Pulmonary Dis. 15, 3455–3466. doi:10.2147/COPD.S279850.
Schultheiss, M., Schober, S.A., Lodde, M., Bodden, J., Aichele, J., Müller-Leisse, C., Renger, B., Pfeiffer, F., Pfeiffer, D., 2020. A robust convolutional neural network for lung nodule detection in the presence of foreign bodies. Sci. Rep. 10 (1), 12987. doi:10.1038/s41598-020-69789-z.
Schwab, E., Goossen, A., Deshpande, H., Saalbach, A., 2020. Localization of critical findings in chest X-ray without local annotations using multi-instance learning. In: 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI). IEEE, pp. 1879–1882. doi:10.1109/ISBI45749.2020.9098551.
Seah, J.C.Y., Tang, J.S.N., Kitchen, A., Gaillard, F., Dixon, A.F., 2019. Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 290 (2), 514–522. doi:10.1148/radiol.2018180887.
Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D., 2020. Grad-CAM: visual explanations from deep networks via gradient-based localization. Int. J. Comput. Vis. 128 (2), 336–359. doi:10.1007/s11263-019-01228-7. arXiv:1610.02391.
Shah, M.P., Merchant, S.N., Awate, S.P., 2018. MS-Net: mixed-supervision fully-convolutional networks for full-resolution segmentation. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 11073. Springer, pp. 379–387. doi:10.1007/978-3-030-00937-3_44.
Shah, U., Abd-Alrazeq, A., Alam, T., Househ, M., Shah, Z., 2020. An efficient method to predict pneumonia from chest X-rays using deep learning approach. Stud. Health Technol. Inform. 272, 457–460. doi:10.3233/SHTI200594.
Shelhamer, E., Long, J., Darrell, T., 2017. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39 (4), 640–651. doi:10.1109/tpami.2016.2572683.
Sheller, M.J., Reina, G.A., Edwards, B., Martin, J., Bakas, S., 2019. Multi-institutional deep learning modeling without sharing patient data: a feasibility study on brain tumor segmentation. In: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. Springer, pp. 92–104. doi:10.1007/978-3-030-11723-8_9.
Shiraishi, J., Katsuragawa, S., Ikezoe, J., Matsumoto, T., Kobayashi, T., Komatsu, K.-i., Matsui, M., Fujita, H., Kodera, Y., Doi, K., 2000. Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists' detection of pulmonary nodules. Am. J. Roentgenol. 174 (1), 71–74. doi:10.2214/ajr.174.1.1740071.
Silva, W., Poellinger, A., Cardoso, J.S., Reyes, M., 2020. Interpretability-guided content-based medical image retrieval. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 12261. Springer, pp. 305–314. doi:10.1007/978-3-030-59710-8_30.
Sim, Y., Chung, M.J., Kotter, E., Yune, S., Kim, M., Do, S., Han, K., Kim, H., Yang, S., Lee, D.-J., Choi, B.W., 2019. Deep convolutional neural network–based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology 294 (1), 199–209. doi:10.1148/radiol.2019182465.
Simonyan, K., Vedaldi, A., Zisserman, A., 2014. Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv:1312.6034.
Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
Singh, R., Kalra, M.K., Nitiwarangkul, C., Patti, J.A., Homayounieh, F., Padole, A.,
Rao, P., Putha, P., Muse, V.V., Sharma, A., Digumarthy, S.R., 2018. Deep learning in chest radiography: detection of findings and presence of change. PLoS One 13 (10), e0204155. doi:10.1371/journal.pone.0204155.
Singh, V., Danda, V., Gorniak, R., Flanders, A., Lakhani, P., 2019. Assessment of critical feeding tube malpositions on radiographs using deep learning. J. Digit. Imaging 32 (4), 651–655. doi:10.1007/s10278-019-00229-9.
Sirazitdinov, I., Kholiavchenko, M., Kuleev, R., Ibragimov, B., 2019. Data augmentation for chest pathologies classification. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 1216–1219. doi:10.1109/ISBI.2019.8759573.
Sivaramakrishnan, R., Antani, S., Candemir, S., Xue, Z., Thoma, G., Alderson, P., Abuya, J., Kohli, M., 2018. Comparing deep learning models for population screening using chest radiography. In: Medical Imaging 2018: Computer-Aided Diagnosis. SPIE, p. 49. doi:10.1117/12.2293140.
Sogancioglu, E., Murphy, K., Calli, E., Scholten, E.T., Schalekamp, S., Ginneken, B.V., 2020. Cardiomegaly detection on chest radiographs: segmentation versus classification. IEEE Access 8, 94631–94642. doi:10.1109/access.2020.2995567.
Souza, J.C., Bandeira Diniz, J.O., Ferreira, J.L., França da Silva, G.L., Corrêa Silva, A., de Paiva, A.C., 2019. An automatic method for lung segmentation and reconstruction in chest X-ray using deep neural networks. Comput. Methods Programs Biomed. 177, 285–296. doi:10.1016/j.cmpb.2019.06.005.
Strohm, L., Hehakaya, C., Ranschaert, E.R., Boon, W.P.C., Moors, E.H.M., 2020. Implementation of artificial intelligence (AI) applications in radiology: hindering and facilitating factors. Eur. Radiol. 30 (10), 5525–5532. doi:10.1007/s00330-020-06946-y.
Su, C.-Y., Tsai, T.-Y., Tseng, C.-Y., Liu, K.-H., Lee, C.-W., 2021. A Deep Learning Method for Alerting Emergency Physicians about the Presence of Subphrenic Free Air on Chest Radiographs. J. Clin. Med. 10 (2), 254. doi:10.3390/jcm10020254.
Subramanian, V., Wang, H., Wu, J.T., Wong, K.C.L., Sharma, A., Syeda-Mahmood, T., 2019. Automated detection and type classification of central venous catheters in chest X-rays. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, 11769. Springer, pp. 522–530. doi:10.1007/978-3-030-32226-7_58.
Sullivan, R.P., Holste, G., Burkow, J., Alessio, A., 2020. Deep learning methods for segmentation of lines in pediatric chest radiographs. In: Medical Imaging 2020: Computer-Aided Diagnosis. SPIE, p. 87. doi:10.1117/12.2550686.
Syeda-Mahmood, T., Ahmad, H., Ansari, N., Gur, Y., Kashyap, S., Karargyris, A., Moradi, M., Pillai, A., Seshadhri, K., Wang, W., Wong, K.C.L., Wu, J., 2019. Building a benchmark dataset and classifiers for sentence-level findings in AP chest
sisted Intervention – MICCAI 2019, 11769. Springer, pp. 431–440. doi:10.1007/978-3-030-32226-7_48.
Tang, Y.-B., Tang, Y.-X., Xiao, J., Summers, R.M., 2019. XLSor: a robust and accurate lung segmentor on chest x-rays using criss-cross attention and customized radiorealistic abnormalities generation. In: International Conference on Medical Imaging with Deep Learning. PMLR, pp. 457–467.
Tang, Y.-X., Tang, Y.-B., Han, M., Xiao, J., Summers, R.M., 2019. Abnormal chest X-ray identification with generative adversarial one-class classifier. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019). IEEE, pp. 1358–1361. doi:10.1109/ISBI.2019.8759442.
Tang, Y.-X., Tang, Y.-B., Peng, Y., Yan, K., Bagheri, M., Redd, B.A., Brandon, C.J., Lu, Z., Han, M., Xiao, J., Summers, R.M., 2020. Automated abnormality classification of chest radiographs using deep convolutional neural networks. npj Digit. Med. 3 (1), 70. doi:10.1038/s41746-020-0273-z.
Tartaglione, E., Barbano, C.A., Berzovini, C., Calandri, M., Grangetto, M., 2020. Unveiling COVID-19 from CHEST X-ray with deep learning: a hurdles race with small data. Int. J. Environ. Res. Public Health 17 (18), 6933. doi:10.3390/ijerph17186933.
Taylor, A.G., Mielke, C., Mongan, J., 2018. Automated detection of moderate and large pneumothorax on frontal chest X-rays using deep convolutional neural networks: a retrospective study. PLoS Med. 15 (11), e1002697. doi:10.1371/journal.pmed.1002697.
Thammarach, P., Khaengthanyakan, S., Vongsurakrai, S., Phienphanich, P., Pooprasert, P., Yaemsuk, A., Vanichvarodom, P., Munpolsri, N., Khwayotha, S., Lertkowit, M., Tungsagunwattana, S., Vijitsanguan, C., Lertrojanapunya, S., Noisiri, W., Chiawiriyabunya, I., Aphikulvanich, N., Tantibundhit, C., 2020. AI chest 4 all. In: 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 1229–1233. doi:10.1109/EMBC44109.2020.9175862.
Toba, S., Mitani, Y., Yodoya, N., Ohashi, H., Sawada, H., Hayakawa, H., Hirayama, M., Futsuki, A., Yamamoto, N., Ito, H., Konuma, T., Shimpo, H., Takao, M., 2020. Prediction of pulmonary to systemic flow ratio in patients with congenital heart disease using deep learning–based analysis of chest radiographs. JAMA Cardiol. 5 (4), 449. doi:10.1001/jamacardio.2019.5620.
Tolkachev, A., Sirazitdinov, I., Kholiavchenko, M., Mustafaev, T., Ibragimov, B., 2020. Deep learning for diagnosis and segmentation of pneumothorax: the results on the Kaggle competition and validation against radiologists. IEEE J. Biomed. Health Inform. doi:10.1109/JBHI.2020.3023476.
Toriwaki, J.-I., Suenaga, Y., Negoro, T., Fukumura, T., 1973. Pattern recognition of
X-rays. In: 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI chest X-ray images. Comput. Graph. Image Process. 2 (3–4), 252–271. doi:10.
2019). IEEE, pp. 863–867. doi:10.1109/ISBI.2019.8759162. 1016/0146-664x(73)90 0 05-1.
Syeda-Mahmood, T., Wong, K.C.L., Gur, Y., Wu, J.T., Jadhav, A., Kashyap, S., Karar- Ul Abideen, Z., Ghafoor, M., Munir, K., Saqib, M., Ullah, A., Zia, T., Tariq, S.A.,
gyris, A., Pillai, A., Sharma, A., Syed, A.B., Boyko, O., Moradi, M., 2020. Chest Ahmed, G., Zahra, A., 2020. Uncertainty assisted robust tuberculosis identifica-
X-ray report generation through fine-grained label learning. In: Medical Image tion with Bayesian convolutional neural networks. IEEE Access 8, 22812–22825.
Computing and Computer Assisted Intervention – MICCAI 2020, 12262. Springer, doi:10.1109/ACCESS.2020.2970023.
pp. 561–571. doi:10.1007/978- 3- 030- 59713- 9_54. Umehara, K., Ota, J., Ishimaru, N., Ohno, S., Okamoto, K., Suzuki, T., Shirai, N.,
Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A., 2017. Inception-v4, inception-resnet Ishida, T., 2017. Super-resolution convolutional neural network for the improve-
and the impact of residual connections on learning. In: Proceedings of the Thir- ment of the image quality of magnified images in chest radiographs. In: Medical
ty-First AAAI Conference on Artificial Intelligence. AAAI Press, pp. 4278–4284. Imaging 2017: Image Processing. International Society for Optics and Photonics,
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Van- p. 101331P. doi:10.1117/12.2249969.
houcke, V., Rabinovich, A., 2015. Going deeper with convolutions. In: IEEE Con- United Nations, 2008. United nations scientific committee on the effects of
ference on Computer Vision and Pattern Recognition, pp. 1–9. atomic radiation (UNSCEAR), 2008 report on sources and effects of ioniz-
Szucs-Farkas, Z., Schick, A., Cullmann, J.L., Ebner, L., Megyeri, B., Vock, P., Christe, A., ing radiation. http://www.unscear.org/docs/publications/20 08/UNSCEAR_20 08_
2013. Comparison of dual-energy subtraction and electronic bone suppression Annex- A- CORR.pdf.
combined with computer-aided detection on chest radiographs: effect on hu- Unnikrishnan, B., Nguyen, C.M., Balaram, S., Foo, C.S., Krishnaswamy, P., 2020.
man observers’ performance in nodule detection. AJR. Am. J. Roentgenol. 200 Semi-supervised classification of diagnostic radiographs with NoTeacher: a
(5), 1006–1013. doi:10.2214/AJR.12.8877. teacher that is not mean. In: Medical Image Computing and Computer As-
Tabik, S., Gomez-Rios, A., Martin-Rodriguez, J.L., Sevillano-Garcia, I., Rey-Area, M., sisted Intervention – MICCAI 2020, 12261. Springer, pp. 624–634. doi:10.1007/
Charte, D., Guirado, E., Suarez, J.L., Luengo, J., Valero-Gonzalez, M.A., Garcia- 978- 3- 030- 59710- 8_61.
Villanova, P., Olmedo-Sanchez, E., Herrera, F., 2020. COVIDGR dataset and Ureta, J., Aran, O., Rivera, J.P., 2020. Detecting pneumonia in chest radiographs using
COVID-SDNet methodology for predicting COVID-19 based on chest X-ray im- convolutional neural networks. In: Twelfth International Conference on Machine
ages. IEEE J. Biomed. Health Inform. 24 (12), 3595–3605. doi:10.1109/JBHI.2020. Vision (ICMV 2019). SPIE, p. 116. doi:10.1117/12.2559527.
3037127. Uzunova, H., Ehrhardt, J., Jacob, F., Frydrychowicz, A., Handels, H., 2019. Multi-scale
Taghanaki, S.A., Abhishek, K., Hamarneh, G., 2019. Improved inference via deep in- GANs for memory-efficient generation of high resolution medical images. In:
put transfer. In: Medical Image Computing and Computer Assisted Intervention Medical Image Computing and Computer Assisted Intervention – MICCAI 2019,
– MICCAI 2019, 11769. Springer, pp. 819–827. doi:10.1007/978- 3- 030- 32226- 7_ 11769. Springer, pp. 112–120. doi:10.1007/978- 3- 030- 32226- 7_13.
91. Vayá, M. d. l. I., Saborit, J. M., Montell, J. A., Pertusa, A., Bustos, A., Cazorla, M.,
Taghanaki, S.A., Havaei, M., Berthier, T., Dutil, F., Di Jorio, L., Hamarneh, G., Bengio, Y., Galant, J., Barber, X., Orozco-Beltrán, D., García-García, F., Caparrós, M., González,
2019. InfoMask: masked variational latent representation to localize chest dis- G., Salinas, J. M., 2020. BIMCV COVID-19+: a large annotated dataset of RX and
ease. In: Medical Image Computing and Computer Assisted Intervention – MIC- CT images from COVID-19 patients. arXiv:2006.01174.
CAI 2019, 11769. Springer, pp. 739–747. doi:10.1007/978- 3- 030- 32226- 7_82. Vidya, M.S., Manikanda, K.V., Anirudh, G., Srinivasa, R.K., Vijayananda, J., 2019. Lo-
Takaki, T., Murakami, S., Watanabe, R., Aoki, T., Fujibuchi, T., 2020. Calculating the cal and global transformations to improve learning of medical images applied
target exposure index using a deep convolutional neural network and a rule to chest radiographs. In: Angelini, E.D., Landman, B.A. (Eds.), Medical Imaging
base. Phys. Med. 71, 108–114. doi:10.1016/j.ejmp.2020.02.012. 2019: Image Processing. SPIE, p. 114. doi:10.1117/12.2512717.
Takemiya, R., Kido, S., Hirano, Y., Mabu, S., 2019. Detection of pulmonary nodules on Viergever, M.A., Maintz, J.A., Klein, S., Murphy, K., Staring, M., Pluim, J.P., 2016. A
chest x-ray images using R-CNN. In: International Forum on Medical Imaging in survey of medical image registration – under review. Med. Image Anal. 33, 140–
Asia 2019. SPIE, p. 58. doi:10.1117/12.2521652. 144. doi:10.1016/j.media.2016.06.030.
Tam, L.K., Wang, X., Turkbey, E., Lu, K., Wen, Y., Xu, D., 2020. Weakly supervised Wang, C., Elazab, A., Jia, F., Wu, J., Hu, Q., 2018. Automated chest screening based
one-stage vision and language disease detection using large scale pneumonia on a hybrid model of transfer learning and convolutional sparse denoising au-
and pneumothorax studies. In: Medical Image Computing and Computer As- toencoder. BioMed. Eng. OnLine 17 (1), 63. doi:10.1186/s12938-018-0496-2.
sisted Intervention – MICCAI 2020, 12264. Springer, pp. 45–55. doi:10.1007/ Wang, C., Elazab, A., Wu, J., Hu, Q., 2017. Lung nodule classification using deep
978- 3- 030- 59719- 1_5. feature fusion in chest radiography. Comput. Med. Imaging Graph. 57, 10–18.
Tang, Y., Tang, Y., Sandfort, V., Xiao, J., Summers, R.M., 2019. TUNA-Net: task- doi:10.1016/j.compmedimag.2016.11.004.
oriented unsupervised adversarial network for disease recognition in cross- Wang, H., Gu, H., Qin, P., Wang, J., 2020. CheXLocNet: automatic localization of
domain chest X-rays. In: Medical Image Computing and Computer As- pneumothorax in chest radiographs using deep convolutional neural networks.
PLoS One 15 (11), e0242013. doi:10.1371/journal.pone.0242013.
Wang, H., Jia, H., Lu, L., Xia, Y., 2020. Thorax-Net: an attention regularized deep
27
E. Çallı, E. Sogancioglu, B. van Ginneken et al. Medical Image Analysis 72 (2021) 102125