
Saood and Hatem

TECHNICAL ADVANCE

COVID-19 Lung CT Image Segmentation Using Deep Learning Methods: UNET Vs. SegNET
Adnan Saood1 and Iyad Hatem2*

Abstract
Background: Currently, there is an urgent need for efficient tools to assist in the diagnosis of COVID-19
patients. In this paper, we present feasible solutions for detecting and labeling infected tissues on CT lung
images of such patients. Two structurally-different deep learning techniques, SegNet and UNET, are
investigated for semantically segmenting infected tissue regions in CT lung images.
Methods: We propose to use two known deep learning networks, SegNet and UNET, for image tissue
classification. SegNet is characterized as a scene segmentation network and UNET as a medical segmentation
tool. Both networks were exploited as binary segmentors to discriminate between infected and healthy lung
tissue, and as multi-class segmentors to learn the infection type on the lung. Each network is trained using 72
data images, validated on 10 images, and tested against the remaining 18 images. Several statistical scores are
calculated for the results and tabulated accordingly.
Results: The results show the superior ability of SegNet in classifying infected/non-infected tissues compared
to the other methods (with 0.95 mean accuracy), while the UNET shows better results as a multi-class
segmentor (with 0.91 mean accuracy).
Conclusion: Semantically segmenting CT scan images of COVID-19 patients is a crucial goal because it would
not only assist in disease diagnosis, but also help in quantifying the severity of the disease, and hence,
prioritize the population treatment accordingly. We propose computer-based techniques that prove to be
reliable as detectors for infected tissue in lung CT scans. The availability of such a method in today's pandemic
would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally.
Keywords: COVID-19; pneumonia; SegNet; UNET; Computerized Tomography; Semantic Segmentation

*Correspondence: [email protected]. 2 Mechatronics Program for the Distinguished in Tishreen University, Distinction and Creativity Agency, Latakia, SY. Full list of author information is available at the end of the article.

Background
COVID-19 is a widespread disease causing thousands of deaths daily. Early diagnosis of this disease proved to be one of the most effective methods for infection tree pruning [1]. The large number of COVID-19 patients is rendering health care systems in many countries overwhelmed. Hence, a trusted automated technique for identifying and quantifying the infected lung regions would be quite advantageous.

Radiologists have identified three types of irregularities related to COVID-19 in Computed Tomography (CT) lung images: (1) Ground Glass Opacification (GGO), (2) Consolidation, and (3) Pleural Effusion [2], [3]. Developing a tool for semantically segmenting medical lung images of COVID-19 patients would contribute to and assist in quantifying those three irregularities. It would help the front-liners of the pandemic to better manage the situation of overloaded hospitals.

Deep learning (DL) has become a very popular method for constructing networks capable of successfully modeling higher-order systems to achieve human-like performance. Tumors have been direct targets for DL-assisted segmentation of medical images. In [4], a lung cancer screening tool was implemented using DL structures aiming to lower the false positive rate in lung cancer screening with low-dose CT scans. Also, in [5], researchers attempted to segment brain tumors from MRI images with a hybrid network of UNET and SegNet, reaching an accuracy of 0.99. Breast tumors were also a target for segmentation in [6], using Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) and resulting in a mean accuracy of 0.90. Body parts were subject to segmentation
also; researchers attempted to segment kidneys in [7], lungs in [8, 9], the liver in [10], brain tissue in [11] and [12], temporal bones in [13], and arterial walls in [14].

Until today, many research projects have been conducted for COVID-19 detection using DL analysis of medical images such as X-Ray and Computerized Tomography (CT) scans, and revealed significant results. However, semantically segmenting those images has been less appealing.

Many DL structures were considered by researchers to detect COVID-19 patients using medical images. A recent study designed a binary classifier (COVID-19, No information) and a multi-class classifier (COVID-19, No information, Pneumonia) using a CNN with X-Ray images as an input, reaching 0.98 for binary classes and 0.87 for the multi-class classifier [15]. Another study employed Xception and ResNet50V2 networks for COVID-19 detection from CT scans, resulting in an accuracy of 0.99 for the target class [16]. References [17, 18, 19, 20, 21] used various DL systems with medical images and obtained results with accuracy values ranging from 0.83 to 0.98.

Few attempts at semantically segmenting medical images of COVID-19 patients were published recently. A study [22] employed a deep CNN as a binary segmentor and compared it to other structures (FCN, UNET, VNET, UNET++). The authors reached a Sorensen-Dice score of 0.73, a sensitivity of 0.75, and a precision of 0.73. Another usage of DL as a binary segmentation tool was presented in [23]. The study reached a Dice score of 0.78, an accuracy of 0.86, and a sensitivity of 0.94. Reference [24] implemented a Fully Convolutional Network (FCN) and a UNET as binary segmentation tools; their work performed well in terms of precision and accuracy, but less so in terms of recall and Dice score.

Researchers in [25] detailed the design of novel DNN structures named Inf-Net and Semi-Inf-Net to semantically segment infected regions and to segment GGO and consolidation, omitting pleural effusion. Their work utilized the same dataset that this research is using.

Methods
The Dataset
The images of the dataset used in this work are a collection of the Italian Society of Medical and Interventional Radiology [26]. One hundred one-slice CT scans are provided, resized to 512 × 512 dimensions. Region labels are already compiled into a NIFTI file with proper documentation by the author.

In manual labeling, the class pixel count (total number of pixels in a class) and the image pixel count (total number of pixels in images that had an instance of the class) show an extensive disparity in representation; the dominant class is larger by an order of 1e+3 than the least represented class. See Table 1. We note here that the class C0 not only represents the portions of the lungs unaffected by pneumonia, but also the lung-enclosing tissue.

Table 1 Dataset Class Sizes. Pixel Count denotes the total number of pixels of the class, and Image Pixel Count is the total number of pixels of images that had an instance of the class.

Class   Pixel Count   Image Pixel Count
C0      2.4394E+07    2.6214E+07
C1      1.1965E+06    2.5166E+07
C2      5.8921E+05    2.0447E+07
C3      3.4265E+04    6.5536E+06

The dataset source website offers image masks to segment the lungs. Figure 1 shows images for one sample.

Figure 1 Dataset sample. CT scan (left), masked lungs (middle), and labeled classes (right), where black is class C0, dark gray is C1, light gray is C2, and white is C3.

By visual inspection of the dataset images, we notice that the infected areas of the lungs are localized in specific regions. To illustrate the correlation between infected tissue and its relative location, all the labels of the dataset were summed and plotted with a hot colormap in Fig. 2. It is clear from the "accumulation" image that some portions of the lungs are more prone to infection than others. Therefore, the spatial values of pixels tend to be a key feature in this research.

Deep Neural Networks
The overall methodology of semantically segmenting images is to design a structure that extracts features through successive convolutions and uses that information to create a segmentation map as an output. See Fig. 3. In the following two paragraphs, we present a brief description of the two DL networks used in this research.

UNET architecture
The architecture of this network includes two main parts: contractive and expansive. The contracting path consists of several patches of convolutions with filters
of size 3 × 3 and unity strides in both directions, followed by ReLU layers. This path extracts the key features of the input and results in a feature vector of a specific length. The second path pulls information from the contractive path via copying and cropping, and from the feature vector via up-convolutions, and generates, by a successive operation, an output segmentation map. The key component of this architecture is the operation linking the first path with the second one; it allows the network to attain highly accurate information from the contractive path to help generate a segmentation mask as close as possible to the intended output. A detailed overview of the architecture can be found in [27].

SegNet Architecture
SegNet is a Deep Neural Network originally designed to model scene segmentors, such as the segmentation of road images. This task requires the network to converge using highly imbalanced datasets, since large areas of road images consist of classes such as road, sidewalk, sky, etc. The SegNet network is a DNN with an encoder-decoder depth of three. The encoder layers are identical to the convolutional layers of the VGG16 network. The decoder constructs the segmentation mask by utilizing pooling indices from the max-pooling of the corresponding encoder. The creators removed the fully connected layers to reduce complexity; this reduces the number of parameters of the encoder sector from 1.34e+8 to 1.47e+7. See [28].

Network Training
Training the neural networks is done using the ADAM stochastic optimizer due to its fast convergence rate compared to other optimizers [29]. The input images are resized to 256 × 256 to reduce the training time and the memory requirements. The one-hundred-image dataset is divided into three sets for training, validation, and testing with proportions of 0.72, 0.10, and 0.18, respectively. To counter the class imbalance discussed earlier, class weights are handed over to the pixel classification layer in the networks. Weights are calculated using median frequency balancing. Each network is trained nine times using different hyperparameters to find the best configuration possible. Table 2 lists the hyperparameters used for training.

Table 2 Hyperparameters used for training the DNNs. Nine experiments for each network with different initial learning rates (ILR) and mini-batch sizes.

        SegNet             UNET
Exp.    ILR    MiniBatch   ILR    MiniBatch
1       1e-4   4           1e-4   2
2       1e-4   4           1e-4   2
3       1e-4   4           1e-4   2
4       1e-3   8           5e-4   8
5       1e-3   8           5e-4   8
6       1e-3   8           5e-4   8
7       3e-3   12          1e-3   12
8       3e-3   12          1e-3   12
9       3e-3   12          1e-3   12

Figure 2 Accumulation of the dataset's labels. All the labels of the dataset were summed up to form a graphic that illustrates the regions of the lungs most prone to infection.

The training process was done using the Deep Learning Toolbox version 14.0 in MATLAB R2020a (9.8.0.1323502) on a Windows 10 version 10.0.18363 machine with an INTEL Core i5-9400F and an NVIDIA 1050ti 4GB VRAM GPU using CUDA 10.0.130. Usage of the GPU reduced training times by a factor of 35 on average.

Evaluation Criteria and Procedure
To fully quantify the performance of our models, we utilized five known classification criteria: sensitivity, specificity, G-mean, Sorensen-Dice (a.k.a. F1), and the F2 score. The following equations (1) to (5) describe these criteria:

sensitivity = TP / (TP + FN)   (1)

specificity = TN / (TN + FP)   (2)

Sorensen-Dice = 2 × TP / (2 × TP + FP + FN)   (3)
G-mean = √(sensitivity × specificity)   (4)

F2-score = 5 × Precision × Sensitivity / (4 × Precision + Sensitivity)   (5)

These criteria are selected because of the imbalanced nature of the dataset discussed in the "Methods" section. The evaluation was carried out as follows: the global accuracy of the classifier was calculated for each test image and averaged over all the images. Using the mean values of global accuracies, the best experiment of each network was chosen for a "Class Level" assessment. Then, the statistical scores (1) to (5) were calculated for each class and tabulated properly.

Figure 3 The DNN architectures. The SegNet (top), where the encoder-decoder of the network is illustrated using the gray and white bubbles, and UNET (bottom), where the contractive and expansive layer patches are encapsulated in blue and yellow bubbles.

Results
Binary Segmentation
Test images results
Table 3 shows the results for both models of binary classifiers after evaluating every experiment of each network. We can see from the results that our networks achieve accuracy values larger than 0.90 in all cases, and 0.954 accuracy in the best case (experiment 4 of the SegNet network). The standard deviation of experiment 4 is 0.029. The second best network is experiment 4 of the UNET architecture, with an accuracy of 0.95 and a standard deviation of 0.043. The best experiment of each architecture is selected for further performance investigation on the class level.

Class Level
Based on the criteria discussed in the "Evaluation Criteria and Procedure" subsection of the "Methods" section, the best two networks found in the previous section are evaluated. We can see that the SegNet network surpasses UNET with noticeable margins for all metrics except sensitivity and G-mean, where both networks produce similar results. See table 5.

Multi Class Segmentation
Test images results
Similarly, we obtain the best experiment for each multi-classification network. The best experiment of the SegNet architecture is number 7, giving an accuracy of 0.907 with a standard deviation of 0.06. We also find that the overall best accuracy of 0.908 is given by the fourth experiment of the UNET network, with a standard deviation of 0.065. All the experiments achieve accuracy higher than 0.8 except for the first three experiments of SegNet. Refer to table 4.

Class level
In the same manner as the binary segmentation results section, the best experiment of each architecture is evaluated, as presented in table 6. Both networks struggled to recognize the C3 class. Nevertheless, they achieve good results for C1 and C2. We also notice the high specificity rate regarding all the classes. The UNET architecture recorded higher values for all parameters except the specificity.
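As a concrete illustration, criteria (1) to (5) can be computed directly from per-class confusion counts. The sketch below is in Python for illustration only (the paper's experiments were run with MATLAB's Deep Learning Toolbox); the function and variable names are ours, not the paper's:

```python
from math import sqrt

def criteria(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Evaluate equations (1)-(5) from the confusion counts of one class."""
    sensitivity = tp / (tp + fn)                # eq. (1), a.k.a. recall
    specificity = tn / (tn + fp)                # eq. (2)
    dice = 2 * tp / (2 * tp + fp + fn)          # eq. (3), Sorensen-Dice (F1)
    g_mean = sqrt(sensitivity * specificity)    # eq. (4)
    precision = tp / (tp + fp)
    f2 = 5 * precision * sensitivity / (4 * precision + sensitivity)  # eq. (5)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "dice": dice, "g_mean": g_mean, "f2": f2}

# Hypothetical counts: 8 infected pixels detected, 2 missed, 2 false alarms,
# and 88 true negatives.
scores = criteria(tp=8, tn=88, fp=2, fn=2)
```

Note that Dice and F2 coincide whenever precision equals sensitivity (as in the hypothetical counts above, where both come out to 0.8); they diverge exactly when false positives and false negatives are unbalanced, which is why reporting both is informative on an imbalanced dataset.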
Table 3 Global accuracy metrics of the test data images calculated for the nine experiments of the UNET and SegNet networks as binary class segmentors.

        SegNet                   UNET
Exp.    mean    σ      var       mean    σ      var
1       0.934   0.053  0.004     0.921   0.056  0.003
2       0.919   0.061  0.004     0.901   0.067  0.004
3       0.919   0.066  0.007     0.896   0.069  0.005
4       0.954   0.029  0.004     0.949   0.043  0.002
5       0.940   0.052  0.005     0.933   0.057  0.003
6       0.947   0.042  0.005     0.900   0.069  0.005
7       0.941   0.033  0.003     0.939   0.059  0.003
8       0.935   0.056  0.004     0.927   0.061  0.003
9       0.948   0.046  0.005     0.894   0.070  0.005

Table 4 Global accuracy metrics of the test data images calculated for the nine experiments of the UNET and SegNet networks as multi-class segmentors.

        SegNet                   UNET
Exp.    mean    σ      var       mean    σ      var
1       0.703   0.063  0.004     0.860   0.079  0.006
2       0.657   0.067  0.004     0.852   0.080  0.006
3       0.652   0.086  0.007     0.844   0.084  0.007
4       0.894   0.068  0.004     0.908   0.065  0.004
5       0.877   0.071  0.005     0.880   0.071  0.005
6       0.870   0.072  0.005     0.881   0.073  0.005
7       0.907   0.060  0.003     0.903   0.067  0.004
8       0.891   0.069  0.004     0.883   0.074  0.005
9       0.881   0.075  0.005     0.899   0.075  0.005
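The "best experiment" selection used in the Results (highest mean global accuracy per network) can be reproduced from the mean columns of Tables 3 and 4. A small Python sketch, with the table values transcribed by hand and a helper name of our own choosing:

```python
# Mean global accuracies from Table 3 (binary) and Table 4 (multi-class),
# listed for experiments 1 through 9.
binary = {
    "SegNet": [0.934, 0.919, 0.919, 0.954, 0.940, 0.947, 0.941, 0.935, 0.948],
    "UNET":   [0.921, 0.901, 0.896, 0.949, 0.933, 0.900, 0.939, 0.927, 0.894],
}
multi = {
    "SegNet": [0.703, 0.657, 0.652, 0.894, 0.877, 0.870, 0.907, 0.891, 0.881],
    "UNET":   [0.860, 0.852, 0.844, 0.908, 0.880, 0.881, 0.903, 0.883, 0.899],
}

def best_experiment(means):
    """Return (1-based experiment number, mean accuracy) of the best run."""
    idx = max(range(len(means)), key=means.__getitem__)
    return idx + 1, means[idx]

best_binary = {net: best_experiment(m) for net, m in binary.items()}
best_multi = {net: best_experiment(m) for net, m in multi.items()}
```

Running this reproduces the selections quoted in the text: experiment 4 for both binary segmentors (0.954 for SegNet, 0.949 for UNET), experiment 7 for the multi-class SegNet (0.907), and experiment 4 for the multi-class UNET (0.908).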

Table 5 Statistical results for the binary segmentor. SegNet and UNET binary segmentation results in terms of sensitivity, specificity, Dice, G-mean, and F2 score.

Net.     Sens.   Spec.    Dice    G-mean   F2
SegNet   0.956   0.9542   0.749   0.955    0.861
UNET     0.964   0.948    0.733   0.956    0.856

Table 6 Statistical results for the multi-class segmentor. SegNet and UNET multi-class segmentation results in terms of sensitivity, specificity, Dice, G-mean, and F2 score, per class.

Net.     Class   Sens.   Spec.   Dice    G-mean   F2
SegNet   C1      0.638   0.952   0.479   0.780    0.562
SegNet   C2      0.672   0.965   0.454   0.806    0.564
SegNet   C3      0.574   0.988   0.121   0.753    0.231
UNET     C1      0.804   0.930   0.483   0.865    0.636
UNET     C2      0.694   0.983   0.597   0.826    0.652
UNET     C3      0.684   0.993   0.225   0.824    0.377

Discussion
Binary classification problem
It can be inferred from table 5 that SegNet outperforms the UNET architecture by a noticeable margin. Both networks have an exceptionally high true positive count for the "Not Infected" class. The results state in a quantifiable manner how reliable the DNN models are in distinguishing between the non-infected and infected classes, i.e. the ill portions of the lungs. Further experiments involving a larger dataset are likely to confirm this. The high sensitivity (0.956) and specificity (0.954) of the best network (SegNet) indicate its goodness in modeling a trained radiologist for the task at hand.

Regarding the standard deviation of the results demonstrated in table 3, the values ranged from 0.060 to 0.086. These low values indicate highly consistent accuracies in the test partition of the dataset.

The results of our SegNet show enhancements over the results presented in [25] in terms of the Dice metric, specificity, and sensitivity, while the UNET outperforms it only in terms of sensitivity. Notably, both works utilize the same dataset. SegNet also surpasses the COVID-NET architecture proposed in [22] in the sensitivity and Dice metrics. A more detailed comparison is necessary to further generalize this result.

Multi Class problem
Table 6 shows how good the UNET is at segmenting the Ground Glass Opacification and the Consolidation. The UNET produced moderate results in segmenting the pleural effusion: a Dice of 0.23 and an F2 score of 0.38, which downplays its role as a reliable tool for pleural effusion segmentation.

It should be noted here that increasing the mini-batch size has a negative effect on the networks' performance; further tests may lead to a generalized statement regarding this matter.

The C3 class, as discussed in the "Methods" section, is the least represented class in the dataset, and such a result is expected from a multi-class segmentation model constructed using 72 image instances only.

The standard deviation values were, on average, a little higher than those of the binary segmentors. Yet, they still indicate that the networks are solid performers in terms of accuracy. The high specificity rates clearly state that the models are reliable in identifying non-infected tissue (class C0).

Network Feature Visualization
Deep Dream is a method used to visualize the features extracted by the network after the training process [30]. Since the SegNet proved to be a reliable segmentor considering its high statistical scores, the generated Deep Dream image should lay out the key features distinguishing each class (non-infected, infected). We plotted the Deep Dream image in Fig. 4. We can clearly visualize a discerning pattern between the two classes in this image.

Figure 4 SegNet binary segmentor Deep Dream image. Deep Dream image laying out the key features the network uses to segment the CT scans: infected tissue (right), non-infected (left).

Conclusions
In this paper, the performance of two deep learning networks (SegNet and UNET) was compared in terms of their ability to detect diseased areas in medical images of the lungs of COVID-19 patients. The results demonstrated the ability of the SegNet network to distinguish between infected and healthy tissues in these images. A comparison of these two networks was also performed in a multiple classification procedure of infected areas in lung images, and the results showed the UNET network's ability to distinguish between these areas. The results obtained in this paper represent promising prospects for the possibility of using deep learning to assist in an objective diagnosis of COVID-19 disease through CT images of the lung.

List of abbreviations
GGO: Ground Glass Opacification; DL: Deep Learning; DNN: Deep Neural Network; COVID-19: COrona VIrus Disease 2019; CT: Computerized Tomography; CNN: Convolutional Neural Network; GAN: Generative Adversarial Networks; FCN: Fully Convolutional Network

Declarations
Ethics approval and consent to participate
The dataset used in this work is openly accessible and free to the public. No direct interaction with a human or animal entity was conducted in this work.

Consent for publication
Not applicable for this paper.

Availability of data and materials
The data is openly accessible in [26], and the networks used in this work are freely available at https://github.com/adnan-saood/COVID19-DL.

Competing interests
The authors declare that they have no competing interests.

Funding
This research did not require funding.

Authors' contributions
IH proposed the research idea, the dataset, and the overall methodology. AS developed the methods, performed the experiments, collected the results, and drafted the paper. Both authors drew the conclusions.

Acknowledgements
Data for this study come from the Italian Society of Medical and Interventional Radiology [26].

Author details
1 Mechatronics Program for the Distinguished in Tishreen University, Distinction and Creativity Agency, Latakia, SY. 2 Mechatronics Program for the Distinguished in Tishreen University, Distinction and Creativity Agency, Latakia, SY.

References
1. Chen M, Tu C, Tan C, Zheng X, Wang X, Wu J, et al. Key to successful treatment of COVID-19: accurate identification of severe risks and early intervention of disease progression. 2020 Apr.
2. Shi H, Han X, Jiang N, Cao Y, Alwalid O, Gu J, et al. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. The Lancet Infectious Diseases. 2020 Apr;20(4):425–434.
3. Ye Z, Zhang Y, Wang Y, Huang Z, Song B. Chest CT manifestations of new coronavirus disease 2019 (COVID-19): a pictorial review. European Radiology. 2020 Mar.
4. Causey JL, Guan Y, Dong W, Walker K, Qualls JA, Prior F, et al. Lung cancer screening with low-dose CT scans using a deep learning approach; 2019. Available from: https://arxiv.org/abs/1906.00240.
5. Daimary D, Bora MB, Amitab K, Kandar D. Brain Tumor Segmentation from MRI Images using Hybrid Convolutional Neural Networks. Procedia Computer Science. 2020;167:2419–2428.
6. Singh VK, Rashwan HA, Romani S, Akram F, Pandey N, Sarker MMK, et al. Breast tumor segmentation and shape classification in mammograms using generative adversarial and convolutional neural network. Expert Systems with Applications. 2020 Jan;139:112855.
7. Zhao W, Jiang D, Queralta JP, Westerlund T. MSS U-Net: 3D segmentation of kidneys and tumors from CT images with a multi-scale supervised U-Net. Informatics in Medicine Unlocked. 2020;19:100357.
8. Skourt BA, Hassani AE, Majda A. Lung CT Image Segmentation Using Deep Neural Networks. Procedia Computer Science. 2018;127:109–113.
9. Huidrom R, Chanu YJ, Singh KM. Automated Lung Segmentation on Computed Tomography Image for the Diagnosis of Lung Cancer. CyS. 2018 Sep;22(3).
10. Almotairi S, Kareem G, Aouf M, Almutairi B, Salem MAM. Liver Tumor Segmentation in CT Scans Using Modified SegNet. Sensors. 2020 Mar;20(5):1516.
11. Kumar P, Nagar P, Arora C, Gupta A. U-SegNet: Fully Convolutional
Neural Network based Automated Brain tissue segmentation Tool;
2018. Available from: https://arXiv.org/abs/1806.04429.
12. Akkus Z, Kostandy P, Philbrick KA, Erickson BJ. Robust brain
extraction tool for CT head images. Neurocomputing. 2020
6;392:189–195.
13. Li X, Gong Z, Yin H, Zhang H, Wang Z, Zhuo L. A 3D deep
supervised densely network for small organs of human temporal bone
segmentation in CT images. Neural Networks. 2020 4;124:75–85.
14. Yang J, Faraji M, Basu A. Robust segmentation of arterial walls in
intravascular ultrasound images using Dual Path U-Net. Ultrasonics.
2019 7;96:24–33.
15. Ozturk T, Talo M, Yildirim EA, Baloglu UB, Yildirim O, Acharya UR.
Automated detection of COVID-19 cases using deep neural networks
with X-ray images. Computers in Biology and Medicine. 2020
6;121:103792.
16. Rahimzadeh M, Attar A. A modified deep convolutional neural
network for detecting COVID-19 and pneumonia from chest X-ray
images based on the concatenation of Xception and ResNet50V2.
Informatics in Medicine Unlocked. 2020;19:100360.
17. Xu X, Jiang X, Ma C, Du P, Li X, Lv S, et al. A Deep Learning
System to Screen Novel Coronavirus Disease 2019 Pneumonia.
Engineering. 2020 Jun;.
18. Wang S, Kang B, Ma J, Zeng X, Xiao M, Guo J, et al. A deep
learning algorithm using CT images to screen for Corona Virus Disease
(COVID-19). 2020 Feb;.
19. Zheng C, Deng X, Fu Q, Zhou Q, Feng J, Ma H, et al. Deep
Learning-based Detection for COVID-19 from Chest CT using Weak
Label. 2020 Mar;.
20. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from
X-ray images utilizing transfer learning with convolutional neural
networks. Phys Eng Sci Med. 2020 4;43(2):635–640.
21. Narin A, Kaya C, Pamuk Z. Automatic Detection of Coronavirus
Disease (COVID-19) Using X-ray Images and Deep Convolutional
Neural Networks; 2020. Available from:
https://arxiv.org/abs/2003.10849.
22. Yan Q, Wang B, Gong D, Luo C, Zhao W, Shen J, et al.. COVID-19
Chest CT Image Segmentation – A Deep Convolutional Neural
Network Solution; 2020. Available from:
https://arxiv.org/abs/2004.10987.
23. Amyar A, Modzelewski R, Ruan S. Multi-task Deep Learning Based
CT Imaging Analysis For COVID-19: Classification and Segmentation.
Cold Spring Harbor Laboratory, Apr. 2020;21.
24. Voulodimos A, Protopapadakis E, Katsamenis I, Doulamis A, Doulamis
N. Deep learning models for COVID-19 infected area segmentation in
CT images. Cold Spring Harbor Laboratory. 2020 5;.
25. Fan DP, Zhou T, Ji GP, Zhou Y, Chen G, Fu H, et al. Inf-Net:
Automatic COVID-19 Lung Infection Segmentation from CT Images.
2020 Apr;.
26. COVID-19, Medical segmentation;. Available from:
http://medicalsegmentation.com/covid19/.
27. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for
Biomedical Image Segmentation. In: Lecture Notes in Computer
Science. Springer International Publishing; 2015. p. 234–241.
28. Badrinarayanan V, Kendall A, Cipolla R. SegNet: A Deep
Convolutional Encoder-Decoder Architecture for Image Segmentation.
IEEE Transactions on Pattern Analysis and Machine Intelligence. 2017
Dec;39(12):2481–2495.
29. Kingma DP, Ba J. Adam: A Method for Stochastic Optimization;
2014. Available from: https://arxiv.org/abs/1412.6980.
30. Mordvintsev A, Olah C, Tyka M. Inceptionism: Going Deeper into
Neural Networks. Google Research; 2015. Archived from the original
on 2015-07-03.
