Design of Novel Convolution Neural Network Model For Lung Cancer Detection by Using Sensitivity Maps
Corresponding Author:
Sarappadi Narasimha Prasad
Department of Electrical and Electronics Engineering, Manipal Institute of Technology Bengaluru
Manipal Academy of Higher Education
Manipal, Karnataka, India
Email: [email protected]
1. INTRODUCTION
Accurate diagnosis of disease is a challenge in medical research. Among the many types of cancer, lung cancer has been identified as one of the most dangerous, with severe negative effects [1]-[3]. According to India's National Cancer Registry Programme, cancer accounts for about 784,821 fatalities each year [4], [5]. Neoplasms, which are abnormal cell growths, can cause benign or malignant tumours or nodules to develop in the lungs. The effectiveness of treatment can be greatly improved, and patient survival rates subsequently raised, by early detection of lung (pulmonary) nodules. The best diagnostic technique for detecting lung cancer is a computed tomography (CT) scan: its high resolution, low distortion, and high contrast allow a more thorough study of the lungs and quicker detection of lung nodules. However, noise in CT scan images can degrade image clarity and make it more difficult for radiologists to identify lung cancer in its early stages. Therefore, models based on deep neural networks have been developed to detect the cancer with great accuracy. In recent years, the field of artificial intelligence (AI) has shown significant progress, particularly in image recognition and speech recognition [6]-[8]. In this paper, a deep convolution neural network called the maximum sensitivity neural network (MSNN) is proposed, which finds and learns patterns in lung CT scan images to detect lung cancer. The model employs 512×512 grayscale images and can be trained on a dataset with a small number of images. MSNN extracts deep features of the image, which are then fed into a k-nearest neighbor (k-NN) classifier for classification. As a result, the proposed model achieves high accuracy for lung cancer detection.
Another important feature of this model is the use of batch normalization (BN) and max pooling layers to reduce model complexity. Sensitivity maps have been plotted to visualize which parts of the image contribute most to its classification. Detecting a lung nodule with high accuracy is essential, yet it is a difficult task for radiologists and requires long reading times. Therefore, a variety of deep neural networks have been developed for lung cancer detection and have proved better than classical artificial neural networks [9]-[11]. Some of them are studied and summarized here. Zhang et al. [12] used a deep neural network to extract shallow and deep features for ovarian cancer classification. Wu et al. [13] designed a convolutional neural network (CNN) based on the AlexNet architecture for categorization of ovarian cancer pathological images; the model reached an accuracy of 78.2%. Tajbakhsh and Suzuki [14] applied artificial neural networks and CNNs to classify benign and malignant nodules in lung CT scan images; the experimental results showed that the CNN outperformed the other artificial neural networks at categorising lung lesions and tumours. Shen et al. [15] designed a convolutional network named multi-crop CNN for classification of lung nodule malignancy using learned deep features. Masood et al. [16] proposed the deep fully convolutional neural network (DFCNet) model, a convolution neural network used for classifying lung nodules into four stages; the network gave an accuracy of 84.58%, indicating the effectiveness of the method. Govindarajan and Swaminathan [17] proposed a CNN model with optimized parameters to detect COVID-19 with a sensitivity of 97.63% and an F-measure of 97.1%; the model proved efficient in providing a visual diagnostic solution. Shelhamer et al. [18] developed a new CNN model in which a fully convolutional layer is used in place of the fully connected (FC) layer; this modification enabled precise pixel-by-pixel prediction of the entire image in a single forward pass. Christ et al. [19] segmented the liver by cascading two fully convolutional networks (FCN): the first FCN segments the liver as the region of interest for the second FCN, which segments the lesions within the liver. This method successfully segmented lesions in CT images with a Dice score of 0.823 and in magnetic resonance imaging (MRI) images with a Dice score of 0.85. To limit the number of false positives (FP) caused by the imbalanced ratio of background and foreground pixels in medical images, Zhou et al. [20] applied a focal loss to the FCN; in this structure, intermediate segmentation results were generated by the FCN, and the FP were subsequently eliminated using the focal FCN. This paper is organised as follows: section 2 describes the proposed MSNN method, section 3 presents the experimental results and compares them with other models, and finally, concluding remarks and future work are discussed in section 4.
2. PROPOSED METHOD
2.1. Maximum sensitivity neural network architecture
A private database of lung CT scan images is used in this study. The goal of the research is to create a novel, effective convolution neural network architecture that detects lung cancer and outputs a two-dimensional vector for binary classification (cancerous vs. noncancerous). Therefore, the MSNN has been designed for the diagnosis of lung cancer from lung CT scan images. It is designed based on the pretrained deep neural network AlexNet [21]. Each input CT scan image belongs to a specific class, and a probability score is assigned to it as output.
AlexNet is a deep convolution neural network well suited for image classification, making it a viable choice for lung cancer detection. Owing to its deep architecture, it uses a large number of parameters and is prone to overfitting, particularly when the dataset is small. The architecture consists of five convolution layers, three max pooling layers, three FC layers, and a softmax (SM) layer at the output. Hence, the MSNN model has been proposed as a modified version of AlexNet that utilizes a global average pooling (GAP) layer to tackle the overfitting caused by a small dataset. Figure 1 presents the MSNN architectural layout, which is made up of five successive blocks [21]. Block 1 to block 4 each consist of four layers, namely convolution (conv), BN, rectified linear unit (ReLU), and max pooling. Block 5 consists of convolution, BN, ReLU, and GAP layers, followed by an FC layer and a softmax layer. To classify the test images, the network learns and identifies patterns from lung CT scan images. The network input layer accepts grayscale images of 512×512 pixels.
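For illustration only, this block structure can be sketched in Keras as follows. The 512×512 grayscale input, the five conv-BN-ReLU blocks, the GAP layer, and the FC plus softmax head follow the text; the filter counts in blocks 2-5 and the kernel size beyond the first convolution are assumptions, since the excerpt specifies only the 8 filters of size 6×6 used in the first convolution layer.

```python
# A minimal Keras sketch of the MSNN layout described above (not the
# authors' exact configuration; several hyperparameters are assumed).
import tensorflow as tf
from tensorflow.keras import layers

def build_msnn(input_shape=(512, 512, 1), num_classes=2):
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    # Blocks 1-4: conv -> BN -> ReLU -> max pooling.
    for filters in (8, 16, 32, 64):  # only the first value (8 filters) is from the text
        x = layers.Conv2D(filters, kernel_size=6, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling2D(pool_size=2)(x)
    # Block 5: conv -> BN -> ReLU -> global average pooling (no max pooling).
    x = layers.Conv2D(128, kernel_size=6, padding="same")(x)  # filter count assumed
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.GlobalAveragePooling2D()(x)
    # FC layer followed by the softmax output layer (two classes).
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs, name="msnn_sketch")

model = build_msnn()
model.summary()
```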
Firstly, MSNN is trained to distinguish between malignant and noncancerous lesions in lung CT scan images. Secondly, the network extracts features from its deep layers, which are then fed to a k-NN classifier for classification. Although a CNN can perform the image classification task very well, medical applications require fine-grained classification, where the distinctions between classes are subtle. Therefore, an additional k-NN classifier has been utilized to refine the classification and make more precise distinctions.
The combined architecture of MSNN with a k-NN classifier has been designed carefully. The preprocessed grayscale CT scan image is passed through the successive convolution, BN, ReLU, and max pooling layers of block 1 to block 4, and through block 5, which uses GAP in place of max pooling. Each convolution layer applies multiple filters to its input and captures different feature maps, such as edges and patterns. The extracted features are applied to the GAP layer, which calculates the average of all elements of each feature map, resulting in a single value per map. The output of the GAP layer is a one-dimensional vector, which is fed as input to the FC layer. Each neuron in the FC layer takes this input vector, multiplies each input value by a corresponding weight, adds a bias term, and computes a weighted sum of the input values. At last, the output of the FC layer is applied to the softmax layer to produce class probabilities. After the MSNN has performed its feature extraction, the features extracted from the FC layer can be used as inputs for the k-NN classifier. The k-NN algorithm works as follows (a minimal code sketch is given after the list):
− Firstly, take all features extracted from the FC layer as input to the k-NN classifier, where each image has a feature vector and a corresponding class label.
− For a new, unlabeled test image, calculate the distances between the test image and all the images in the training dataset; the distance metric used is the Euclidean distance.
− Sort all the calculated distances in ascending order.
− Identify the k images with the shortest distances to the test image; these images are the k nearest neighbors.
− Look at the class labels of these k neighbors and determine the majority class.
− At last, assign the test image to this majority class.
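A minimal numpy sketch of these steps follows, under the assumption that the FC-layer features have already been extracted into arrays; the feature dimension and labels used here are synthetic placeholders.

```python
# Sketch of the k-NN steps above: Euclidean distances, ascending sort,
# k nearest neighbors, majority vote.
import numpy as np
from collections import Counter

def knn_predict(train_feats, train_labels, test_feat, k=300):
    # Distance between the test image and every training image.
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    # Sort distances in ascending order and keep the k nearest neighbors.
    nearest = np.argsort(dists)[:k]
    # Majority vote over the neighbors' class labels.
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical usage with random features (0 = healthy, 1 = cancerous).
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(300, 64))
train_labels = rng.integers(0, 2, size=300)
print(knn_predict(train_feats, train_labels, rng.normal(size=64), k=5))
```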
The efficacy of the k-NN classification technique hinges on the selected k value. In this work, the elbow method was employed to ascertain the ideal k value for the dataset. This involves plotting the sum of squared error (SSE) values against various k values and identifying the point on the graph where increasing the k value ceases to yield substantial change. This inflection point, known as the "elbow", designates the optimal k value, which for this dataset was determined to be 300.
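One plausible realisation of this elbow procedure is sketched below (the excerpt does not state exactly how the SSE was computed): for each candidate k, a k-NN model is fitted and the sum of squared errors between predicted and true labels on held-out data is recorded, then plotted against k.

```python
# Elbow-method sketch: plot an SSE-style error against candidate k values
# and pick the point where the curve flattens. Data arrays are synthetic
# stand-ins for the extracted FC-layer features and binary labels.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

def elbow_curve(X_train, y_train, X_val, y_val, k_values):
    sse = []
    for k in k_values:
        knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
        knn.fit(X_train, y_train)
        # Sum of squared errors between predicted and true labels.
        sse.append(np.sum((knn.predict(X_val) - y_val) ** 2))
    return sse

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(400, 64)), rng.integers(0, 2, 400)
X_val, y_val = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)
ks = [1, 5, 25, 100, 200, 300, 400]
plt.plot(ks, elbow_curve(X_train, y_train, X_val, y_val, ks), marker="o")
plt.xlabel("k"); plt.ylabel("SSE"); plt.title("Elbow method for k selection")
plt.show()
```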
An appropriate batch size contributes to the learning process by balancing the convergence rate of the network against the accuracy of its estimates [22]. However, the batch size has not been set high, since this would be costly in terms of time consumption and memory usage. It is observed that the network will experience overfitting if the chosen number of epochs is too high; conversely, a low epoch value will cause training to end early, preventing the network from converging fully. The dataset is divided randomly into training and validation sets. For split 1, 70% of the images from the dataset are used for training and the remaining images are used for testing. Splits 2, 3, and 4 use 75%, 80%, and 85% of the database images for training, respectively, while the remaining images are used for testing. The training process has therefore been performed four times, each with a different split of the dataset, to assess the model's performance across different training scenarios and to obtain a more robust estimate of the model's generalization capability.
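A minimal sketch of these four random splits, assuming the images and labels have already been loaded into arrays (all names are illustrative):

```python
# Four train/test splits at the ratios stated in the text.
from sklearn.model_selection import train_test_split

def make_splits(images, labels, seed=42):
    splits = {}
    for name, train_frac in [("split1", 0.70), ("split2", 0.75),
                             ("split3", 0.80), ("split4", 0.85)]:
        X_tr, X_te, y_tr, y_te = train_test_split(
            images, labels, train_size=train_frac,
            random_state=seed, shuffle=True)
        splits[name] = (X_tr, X_te, y_tr, y_te)
    return splits
```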
In this work, lung CT scan images in DICOM format have been acquired from A. J. Hospital and
Research Centre to test the performance of MSNN. The dataset consists of 434 lung CT scan images of
patients, in which 249 lung CT scan images belong to patients with cancer and the remaining 185 lung CT
scan images belong to patients with healthy lungs. Sample images from the dataset are displayed in Figure 2.
The performance of MSNN has been evaluated by obtaining the confusion matrix, from which all of the network's performance metrics can be calculated. The different layers used in the architecture are described as follows:
‒ Input layer: the MSNN architecture accepts grayscale CT scan images of 512×512 pixels.
‒ Convolution layer: it performs the convolution operation between the input image f and the filter g using (1) [23].
$f(x) * g(x) = \sum_{k=-\infty}^{\infty} f(k)\, g(x-k)$ (1)
where x and k are spatial variables. In general, a smaller filter size may lead to an overfitting issue, while a bigger filter size may increase the underfitting issue. Therefore, this layer uses 8 filters with an ideal filter size of 6×6.
‒ BN layer: the next successive layer is the BN layer, which expedites training and lessens the network's sensitivity to initialization. Normalization over a batch v of m instances for unit i is performed in the following steps (a numpy sketch of steps (2)-(6) follows the layer descriptions). First, compute the batch mean for unit i using (2) [23]: sum the values of unit i from all instances (ranging from 1 to m) in the batch and divide by the total number of instances m.
$\mu_i = \frac{1}{m}\sum_{r=1}^{m} v_i^{r}$ (2)
The second step is to compute the batch variance for unit i using (3) [23]:

$\sigma_i^{2} = \frac{1}{m}\sum_{r=1}^{m} \left(v_i^{r} - \mu_i\right)^{2}$ (3)
The third step is to normalize each instance's value ($v_n^{r}$) in the batch using the calculated batch mean ($\mu_i$) and batch variance ($\sigma_i^{2}$) by using (4) [23]. For each instance r, the batch mean ($\mu_i$) is subtracted from the value of unit i, and the result is divided by the batch standard deviation (the square root of the variance, with a small constant $\epsilon$ added for numerical stability):

$v_n^{r} = \frac{v_i^{r} - \mu_i}{\sqrt{\sigma_i^{2} + \epsilon}}$ (4)

Lastly, scale with learnable parameters by using (5) [23]. The normalized batch instances ($v_n^{r}$) are scaled and shifted using the learnable parameters $\gamma_i$ and $\beta_i$, which allow the network to learn the appropriate scale and shift for each unit i in the batch normalization process:

$\tilde{v}_i^{r} = \gamma_i v_n^{r} + \beta_i$ (5)

After applying batch normalization to each unit i in the batch, the network continues with further layers and activations.
‒ ReLU layer: it is an activation function used to add nonlinearity to the network by applying a rectifier to the linear outputs of the convolution. The function is defined by (6) [24]. The ReLU function keeps positive values unchanged (identity function) and sets negative values to zero. This non-linear activation helps the network learn complex relationships in the data and makes it capable of learning more complex functions.

$f(x) = \begin{cases} 0, & x < 0 \\ x, & x \geq 0 \end{cases}$ (6)
‒ Max pooling layer: it helps to decrease the size of the convolved feature map to reduce computational
costs.
‒ GAP layer: applying this layer to the feature maps summarizes the spatial information within each
channel by taking the average value across all spatial locations. This operation retains the channel-wise
information while discarding the spatial dimensions, resulting in a compressed representation of the
feature maps.
‒ FC layer: it helps in classifying the images.
‒ SM layer: it converts the output of the last layer into a probability distribution.
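The numpy sketch below ties together the batch-normalization steps (2)-(5), the ReLU of (6), and global average pooling; the array shapes and the value of ε are illustrative assumptions.

```python
# Per-unit batch normalization (eqs. 2-5), ReLU (eq. 6), and GAP in numpy.
import numpy as np

def batch_norm(v, gamma, beta, eps=1e-5):
    # v has shape (m, units): m instances in the batch, one column per unit i.
    mu = v.mean(axis=0)                      # batch mean, eq. (2)
    var = ((v - mu) ** 2).mean(axis=0)       # batch variance, eq. (3)
    v_n = (v - mu) / np.sqrt(var + eps)      # normalize, eq. (4)
    return gamma * v_n + beta                # scale and shift, eq. (5)

def relu(x):
    return np.maximum(0, x)                  # eq. (6)

def global_average_pool(fmaps):
    # fmaps has shape (height, width, channels): average over the spatial
    # dimensions, keeping one value per channel.
    return fmaps.mean(axis=(0, 1))

v = np.random.randn(16, 8)                   # batch of 16 instances, 8 units
out = relu(batch_norm(v, gamma=np.ones(8), beta=np.zeros(8)))
print(out.shape)                             # (16, 8)
```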
2.2.1. Accuracy
This parameter measures the true positive (TP) and true negative (TN), i.e., correct, cases over the total number of cases, which includes the false cases too. Equation (7) can be used for accuracy calculation. TP is the number of lung cancer cases correctly predicted by the model (71 in this case). TN is the number of non-lung cancer cases correctly predicted by the model (55 in this case). FP is the number of non-lung cancer cases incorrectly classified as lung cancer cases (0 in this case). False negatives (FN) are the number of lung cancer cases incorrectly classified as non-lung cancer cases (4 in this case). A high accuracy value indicates that the model is making correct predictions for a large proportion of lung cancer and non-lung cancer cases [25].

$\text{Accuracy} = \frac{TN + TP}{TN + TP + FN + FP}$ (7)
2.2.2. Precision
This parameter determines the TP cases over all predicted positive cases (TP plus FP). Equation (8) can be used for precision calculation. It is essential to calculate in medical applications because it helps assess the model's ability to minimize FP. A high precision value indicates that the model has a low rate of FP, meaning it correctly identifies most positive cases. Conversely, a low precision value indicates that the model incorrectly classifies non-lung cancer cases as positive, leading to a higher rate of FP [26].

$\text{Precision} = \frac{TP}{TP + FP}$ (8)
2.2.3. Recall/sensitivity
This parameter measures the TP rate and is also known as sensitivity. It evaluates the model's ability to correctly identify positive cases (lung cancer cases) out of all instances that belong to the positive class. A recall close to 100% means that nearly every patient who has the disease receives a positive test result. Conversely, a low recall value suggests that the model is missing many positive cases, leading to a higher rate of FN. Equation (9) can be used for recall calculation [26].

$\text{Recall} = \frac{TP}{TP + FN}$ (9)
2.2.4. F-score
This metric combines both precision and recall, providing a single score that balances their trade-off. A high F-score indicates that the model has achieved a good balance between precision and recall, making it effective at correctly identifying positive cases (lung cancer) while keeping FP and FN in check. Equation (10) can be used for calculating this parameter [26].

$\text{F-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$ (10)
2.2.5. Specificity
Specificity is also known as the TN rate. This parameter determines how many cases are correctly classified as negative. It is essential to calculate in medical applications because it evaluates the model's ability to minimize FP specifically for the negative class. A high specificity value means that the model correctly identifies most of the non-lung cancer cases, reducing the rate of FP for patients who do not have the disease, whereas a low specificity value indicates that the model misclassifies many negative cases as positive, leading to a higher rate of FP. Equation (11) can be used for calculating this parameter [26].

$\text{Specificity} = \frac{TN}{TN + FP}$ (11)
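For concreteness, the metrics (7)-(11) can be computed directly from the confusion-matrix counts quoted in the accuracy discussion above (TP = 71, TN = 55, FP = 0, FN = 4):

```python
# Metrics (7)-(11) from the confusion-matrix counts stated in the text.
TP, TN, FP, FN = 71, 55, 0, 4

accuracy    = (TN + TP) / (TN + TP + FN + FP)                 # eq. (7)
precision   = TP / (TP + FP)                                  # eq. (8)
recall      = TP / (TP + FN)                                  # eq. (9)
f_score     = 2 * precision * recall / (precision + recall)   # eq. (10)
specificity = TN / (TN + FP)                                  # eq. (11)

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, "
      f"recall={recall:.3f}, f_score={f_score:.4f}, "
      f"specificity={specificity:.3f}")
# accuracy ≈ 0.969, precision = 1.0, recall ≈ 0.947, specificity = 1.0
```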
Table 1. Performance comparison of MSNN with other existing deep learning models

Deep learning models       Accuracy (%)  Sensitivity (%)  Precision (%)  F-score (%)  Specificity (%)
Faster R-CNN [27]          80.1          -                -              -            -
Cascade R-CNN [28]         84            -                -              -            -
SC-dynamic R-CNN [29]      88.1          -                -              -            -
MSNN (proposed method)     96.9          94.6             100            97.22        100
Figure 4. MSNN model classifying lung cancer with a probability score and plotting of the sensitivity map
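This excerpt does not spell out the algorithm behind the sensitivity map of Figure 4; the sketch below shows one common technique, occlusion sensitivity, purely as an assumed illustration: a patch is blanked out at each position and the drop in the predicted class probability is recorded, so regions whose occlusion hurts the prediction most (such as the nodule area) light up in the map.

```python
# Occlusion-sensitivity sketch (an assumption, not the paper's stated method).
# `model` is assumed to map a (1, 512, 512, 1) image to class probabilities,
# and the image height/width are assumed divisible by the patch size.
import numpy as np

def occlusion_sensitivity(model, image, target_class, patch=32):
    h, w = image.shape
    base = model.predict(image[None, ..., None], verbose=0)[0][target_class]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # blank out one patch
            p = model.predict(occluded[None, ..., None],
                              verbose=0)[0][target_class]
            heatmap[i // patch, j // patch] = base - p  # large drop = sensitive
    return heatmap
```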
In this research work, the proposed model extracts deep-layer features, which contain a great deal of distinguishing information about the image. Therefore, using a lung CT scan image, features of different convolution layers have been extracted, as shown in Figure 5. Convolution layers can extract features at various levels: the first convolution layer extracts fundamental information such as spots and edges, while the results showed that the deeper layers extract high-level, abstract features by merging earlier ones. Hence, features retrieved from the deeper layers are better suited for classification [31]. The extracted features are then fed as input to the k-NN classifier; when using the classifier, different parameters need to be set manually (this feature-extraction step is sketched below).
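A sketch of this feature-extraction step, reusing the hypothetical Keras model from earlier (the layer name used here is illustrative, not taken from the paper):

```python
# Extract deep-layer features by truncating the trained network at a chosen
# layer, then classify them with k-NN.
import tensorflow as tf
from sklearn.neighbors import KNeighborsClassifier

def extract_features(model, images, layer_name):
    # Truncated model: input image -> activations of the chosen deep layer.
    feature_model = tf.keras.Model(inputs=model.input,
                                   outputs=model.get_layer(layer_name).output)
    feats = feature_model.predict(images, verbose=0)
    return feats.reshape(len(images), -1)   # one flat feature vector per image

# Hypothetical usage (arrays of shape (N, 512, 512, 1) and binary labels):
# feats_tr = extract_features(model, train_imgs, "global_average_pooling2d")
# knn = KNeighborsClassifier(n_neighbors=300).fit(feats_tr, train_labels)
# preds = knn.predict(extract_features(model, test_imgs,
#                                      "global_average_pooling2d"))
```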
4. CONCLUSION
When building a deep learning model for lung cancer diagnosis, high accuracy is essential. To this end, a rigorously designed MSNN model has been proposed that achieves an impressive level of efficiency. In comparison with previous methods, the proposed model outperformed them with an accuracy of 96.9% and a sensitivity of 94.6%. The model considerably aids the classification process by successfully extracting features from multiple convolution layers. On the input lung CT scan image, a sensitivity map has been created that shows the nodule area in red; this map makes it easier to distinguish between cancerous and non-cancerous areas of the image. Certain factors typically need manual adjustment to attain high accuracy with a classifier, which can take time; fixing this issue could be the main goal of future work. One possible strategy is Bayesian optimisation, a tool for automatically choosing the best parameters, which would simplify the parameter selection procedure and increase the classifier's effectiveness.
ACKNOWLEDGEMENTS
The authors thank the Interventional Radiology Division, Department of Radio-Diagnosis, A. J. Institute of Medical Sciences in Mangalore, India, for providing the patient database and for assistance in analysing the medical images. The authors are also grateful to REVA University for its support in carrying out this study.
REFERENCES
[1] R. L. Siegel, K. D. Miller, H. E. Fuchs, and A. Jemal, “Cancer statistics, 2022,” CA: A Cancer Journal for Clinicians, vol. 72, no.
1, pp. 7–33, Jan. 2022, doi: 10.3322/caac.21708.
[2] National Lung Screening Trial Research Team, “Reduced lung-cancer mortality with low-dose computed tomographic screening,”
New England Journal of Medicine, vol. 365, no. 5, pp. 395–409, Aug. 2011, doi: 10.1056/NEJMoa1102873.
[3] F. Bray, J. Ferlay, I. Soerjomataram, R. L. Siegel, L. A. Torre, and A. Jemal, “Global cancer statistics 2018: GLOBOCAN
estimates of incidence and mortality worldwide for 36 cancers in 185 countries,” CA: A Cancer Journal for Clinicians, vol. 68,
no. 6, pp. 394–424, Nov. 2018, doi: 10.3322/caac.21492.
[4] P. Mathur et al., “Cancer statistics, 2020: report from National Cancer Registry Programme, India,” JCO Global Oncology, no. 6,
pp. 1063–1075, Nov. 2020, doi: 10.1200/GO.20.00122.
[5] D. Ravi et al., “Deep learning for health informatics,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 1, pp. 4–
21, Jan. 2017, doi: 10.1109/JBHI.2016.2636665.
[6] D. Singh, V. Kumar, V. Yadav, and M. Kaur, “Deep neural network-based screening model for covid-19-infected patients using
chest x-ray images,” International Journal of Pattern Recognition and Artificial Intelligence, vol. 35, no. 3, Mar. 2021, doi:
10.1142/S0218001421510046.
[7] E. Montagnon et al., “Deep learning workflow in radiology: a primer,” Insights Into Imaging, vol. 11, no. 1, pp. 1–15, Dec. 2020,
doi: 10.1186/s13244-019-0832-5.
[8] G. Chartrand et al., “Deep learning: a primer for radiologists,” Radio Graphics, vol. 37, no. 7, pp. 2113–2131, Nov. 2017, doi:
10.1148/rg.2017170077.
[9] M. Savic, Y. Ma, G. Ramponi, W. Du, and Y. Peng, “Lung nodule segmentation with a region-based fast marching method,”
Sensors, vol. 21, no. 5, Mar. 2021, doi: 10.3390/s21051908.
[10] M. F. Abdullah et al., “A comparative study of image segmentation technique applied for lung cancer detection,” in 2019 9th
IEEE International Conference on Control System, Computing and Engineering (ICCSCE), Nov. 2019, pp. 72–77, doi:
10.1109/ICCSCE47578.2019.9068574.
[11] M. Vas and A. Dessai, “Lung cancer detection system using lung CT image processing,” in 2017 International Conference on
Computing, Communication, Control and Automation (ICCUBEA), Aug. 2017, pp. 1–5, doi: 10.1109/ICCUBEA.2017.8463851.
[12] L. Zhang, J. Huang, and L. Liu, “Improved deep learning network based in combination with cost-sensitive learning for early
detection of ovarian cancer in color ultrasound detecting system,” Journal of Medical Systems, vol. 43, no. 8, Aug. 2019, doi:
10.1007/s10916-019-1356-8.
[13] Z. Wu et al., “DeepLRHE: a deep convolutional neural network framework to evaluate the risk of lung cancer recurrence and
metastasis from histopathology images,” Frontiers in Genetics, vol. 11, Aug. 2020, doi: 10.3389/fgene.2020.00768.
[14] N. Tajbakhsh and K. Suzuki, “Comparing two classes of end-to-end machine-learning models in lung nodule detection and
classification: MTANNs vs. CNNs,” Pattern Recognition, vol. 63, pp. 476–486, Mar. 2017, doi: 10.1016/j.patcog.2016.09.029.
[15] W. Shen et al., “Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification,” Pattern
Recognition, vol. 61, pp. 663–673, Jan. 2017, doi: 10.1016/j.patcog.2016.05.029.
[16] A. Masood et al., “Computer-assisted decision support system in pulmonary cancer detection and stage classification on CT
images,” Journal of Biomedical Informatics, vol. 79, pp. 117–128, Mar. 2018, doi: 10.1016/j.jbi.2018.01.005.
[17] S. Govindarajan and R. Swaminathan, “Differentiation of COVID-19 conditions in planar chest radiographs using optimized
convolutional neural networks,” Applied Intelligence, vol. 51, no. 5, pp. 2764–2775, May 2021, doi: 10.1007/s10489-020-01941-8.
[18] E. Shelhamer, J. Long, and T. Darrell, “Fully convolutional networks for semantic segmentation,” IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640–651, Apr. 2017, doi: 10.1109/TPAMI.2016.2572683.
[19] P. F. Christ et al., “Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D
conditional random fields,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, 2016,
pp. 415–423, doi: 10.1007/978-3-319-46723-8_48.
[20] X.-Y. Zhou, M. Shen, C. Riga, G.-Z. Yang, and S.-L. Lee, “Focal FCN: towards small object segmentation with limited training
data,” arXiv-Computer Science, pp. 1-17, 2017.
[21] M. Toğaçar, B. Ergen, and Z. Cömert, “Detection of lung cancer on chest CT images using minimum redundancy maximum
relevance feature selection method with convolutional neural networks,” Biocybernetics and Biomedical Engineering, vol. 40, no.
1, pp. 23–39, Jan. 2020, doi: 10.1016/j.bbe.2019.11.004.
[22] T. Hinz, N. Navarro-Guerrero, S. Magg, and S. Wermter, “Speeding up the hyperparameter optimization of deep convolutional neural
networks,” International Journal of Computational Intelligence and Applications, vol. 17, no. 2, Jun. 2018, doi:
10.1142/S1469026818500086.
[23] C. Garbin, X. Zhu, and O. Marques, “Dropout vs. batch normalization: an empirical study of their impact to deep learning,”
Multimedia Tools and Applications, vol. 79, no. 19–20, pp. 12777–12815, May 2020, doi: 10.1007/s11042-019-08453-9.
[24] H. Ide and T. Kurita, “Improvement of learning for CNN with ReLU activation by sparse regularization,” in 2017 International
Joint Conference on Neural Networks (IJCNN), May 2017, pp. 2684–2691, doi: 10.1109/IJCNN.2017.7966185.
[25] D. L. M. and P. M., “Performance evaluation of convolutional neural network for lung cancer detection,” in 2022 International
Conference on Electronic Systems and Intelligent Computing (ICESIC), Apr. 2022, pp. 293–298, doi:
10.1109/ICESIC53714.2022.9783533.
[26] L. Wang, “Deep learning techniques to diagnose lung cancer,” Cancers, vol. 14, no. 22, Nov. 2022, doi:
10.3390/cancers14225569.
[27] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: towards real-time object detection with region proposal networks,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, Jun. 2017, doi:
10.1109/TPAMI.2016.2577031.
[28] Z. Cai and N. Vasconcelos, “Cascade R-CNN: delving into high quality object detection,” in 2018 IEEE/CVF Conference on
Computer Vision and Pattern Recognition, Jun. 2018, pp. 6154–6162, doi: 10.1109/CVPR.2018.00644.
[29] X. Wang, L. Wang, and P. Zheng, “SC-Dynamic R-CNN: a self-calibrated dynamic R-CNN model for lung cancer lesion
detection,” Computational and Mathematical Methods in Medicine, vol. 2022, pp. 1–9, Mar. 2022, doi: 10.1155/2022/9452157.
[30] N. S. Nadkarni and S. Borkar, “Detection of lung cancer in CT images using image processing,” in 2019 3rd International
Conference on Trends in Electronics and Informatics (ICOEI), Apr. 2019, pp. 863–866, doi: 10.1109/ICOEI.2019.8862577.
[31] A. I. Khan, J. L. Shah, and M. M. Bhat, “CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest x-
ray images,” Computer Methods and Programs in Biomedicine, vol. 196, Nov. 2020, doi: 10.1016/j.cmpb.2020.105581.
BIOGRAPHIES OF AUTHORS