International Journal of
INTELLIGENT SYSTEMS AND APPLICATIONS IN ENGINEERING
ISSN: 2147-6799 www.ijisae.org Original Research Paper
Self-Attentive CNN+BERT: An Approach for Analysis of Sentiment on Movie Reviews Using Word Embedding
1Pawan Kumar Mall, 2Dr. Manoj Kumar, 3Mr. Ankit Kumar, 4Anuj Gupta, 5Swapnita Srivastava, 6Vipul Narayan, 7Alok Singh Chauhan, 8Arun Pratap Srivastava
Submitted: 23/11/2023 Revised: 28/12/2023 Accepted: 07/01/2024
Abstract: Social media has developed into a vast repository of user opinion in the modern day. Owing to the sophistication of the internet and ongoing technological developments, a great amount of data is being generated from a variety of sources, including websites and social blogging. Websites and blogs serve as channels for gathering product reviews in real time, and the proliferation of blogs hosted on cloud servers has produced a significant amount of data comprising thoughts, opinions, and evaluations. Techniques are therefore urgently needed for deriving actionable insights from these massive amounts of data, classifying them, and forecasting end-user actions or emotions. People use social media platforms to share their ideas instantly, and analyzing this data and drawing conclusions from it for sentiment analysis is difficult. Even with automated machine learning methods, extracting meaningful semantic concepts from a sparse review representation remains challenging. Word embedding improves text categorization by resolving word-semantics and sparse-matrix problems. This paper presents a novel framework that captures semantic links between neighboring words by fusing word embedding with BERT, and applies a weighted self-attention method to identify important phrases in the reviews; the framework is validated through an empirical investigation on the IMDB dataset. To address sentiment analysis, this work presents a Hybrid CNN+BERT Model that combines BERT with a sophisticated CNN model. First, initial word embeddings are trained using the Word to Vector (Word2Vec) technique, which converts text strings into numerical vectors, calculates word distances, and groups related words according to their meaning. The proposed model then integrates long-term dependencies with features extracted from convolution and global max-pooling layers applied to the word embeddings. For improved accuracy, the model uses rectified linear units, normalization, and dropout. The performance of the proposed model in terms of accuracy is 95.91%, precision is 96.80%, recall is 95.07%, and F1 score is 95.93%.
Keywords: Sentiment Analysis; Text Classification; LSTM; Deep Learning
1. Introduction:

Sentiment analysis is an automated process that uses textual or spoken input to infer a viewpoint on a given topic. Considering that we produce over 1.5 quintillion bytes of data every day, sentiment analysis has become an essential technique for deciphering and organizing this enormous volume of data. Sentiment analysis is a tool that businesses use to improve their whole business by streamlining procedures and gaining critical information. Computers' greatest difficulty is trying to understand the feeling that is ingrained in differing viewpoints. Sentiment analysis involves the extraction of emotions and the classification of textual or visual data according to the meanings that they communicate to people [1]. This method is useful for classifying user evaluations and public sentiments about goods, services, and people into favorable and unfavorable groups. It is also helpful in detecting subtle tones in spoken language, such as sarcasm or tension, and in locating troll and bot accounts on social media platforms [2]. Even though certain situations can be quite simple, such as recognizing specific words in the text, there are many variables to take into account in order to correctly convey the overall mood, which goes beyond the words' literal meaning.

LSTM has been gaining a lot of traction lately in the sentiment categorization space. Since its introduction by Hochreiter and Schmidhuber in 1997, LSTM has been widely used and refined in later works. It is now a commonly used method that has proven remarkably efficient in a variety of problem arenas. LSTMs are unique in that they are specifically designed to address long-term dependence problems [4]. LSTMs are naturally good at remembering knowledge for long periods of time, in contrast to certain other models that find it difficult to learn long-term information. Recurrent neural networks are all composed of repeated modules, which is their shared structure. These modules usually have a simple design, often only a single tanh layer in standard RNNs. Our experimental studies utilize the IMDB benchmark dataset, which comprises movie reviews classified as either positive or negative.

1 Assistant Professor, G.L. Bajaj Institute of Technology and Management, [email protected]
2 Associate Professor, Department of Computer Application, Swami Vivekanand Subharti University, Meerut, [email protected]
3 Assistant Professor, Department of Computer Application, Swami Vivekanand Subharti University, Meerut, [email protected]
4 Assistant Professor, G.L. Bajaj Institute of Technology and Management, [email protected]
5 Assistant Professor, G.L. Bajaj Institute of Technology and Management, [email protected]
6 School of Computing Science and Engineering, Galgotias University, Greater Noida, [email protected]
7 School of Computer Applications and Technology, Galgotias University, Greater Noida, India, [email protected]
8 Lloyd Institute of Engineering & Technology, [email protected]
International Journal of Intelligent Systems and Applications in Engineering IJISAE, 2024, 12(12s), 612–623

2. Literature Review:

Sentiment analysis stands out as a widely utilized application within natural language processing (NLP), where an algorithm takes text as input and outputs its inherent sentiment class.

Two different datasets, D1 and D2, sourced from Amazon, are used by the authors of [1] to suggest an automated method for sentiment analysis. The first step is preprocessing the datasets, after which features are extracted using techniques based on word embedding and N-grams. N-gram feature extraction uses bag of words (BoW), global vectors (GloVe), and term frequency-inverse document frequency (TF-IDF), while word embedding features are taken from Word2vec. The authors use a variety of machine learning (ML) models, such as support vector machines (SVM), random forests (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB), to assess the sentiment of the reviews. Additionally, two deep learning (DL) models, namely convolutional neural network (CNN) and long short-term memory (LSTM), are integrated into the classification process. Furthermore, the study incorporates a standalone bidirectional encoder representations (BERT) model for sentiment analysis.

In [2], writers use a proprietary dataset that was scraped from Twitter to propose a sentiment analysis model in the context of the Indian airline business. The dataset consists of internet evaluations for five particular Indian airlines. The primary aim is to perform multiclass sentiment analysis using three different classifiers: random forest, K-nearest neighbor, and support vector machine. Together with these classifiers, two well-known word embedding methods, Word2Vec and TF-IDF (Term Frequency-Inverse Document Frequency), are used to improve sentiment analysis. To further advance the state of the art in sentiment analysis for this specific domain, the study introduces AirBERT, an innovative deep learning attention model fine-tuned specifically for this task. It is grounded in bidirectional encoder representations from transformers, representing a sophisticated architecture that leverages contextual embeddings for a more nuanced understanding of sentiment in airline-related reviews.

A sentiment analysis model on everyday interactions, as suggested by the authors in [3], is a useful tool for identifying stress and facilitating prompt assistance and intervention. The suggested NLP model offers a quantitative assessment of emotional well-being by utilizing the knowledge gathered from users' language patterns. This helps people and medical professionals recognize possible stressors and take preventative action against them. Social media sites such as Twitter are used for training and testing, which improves the model's practicality because it captures real-world emotions and user experiences.

A unique text data processing approach designed specifically for the Indonesian language is presented by the authors in [4]. Given the restricted amount of data available for this language, the model makes use of data augmentation approaches and concentrates on text preparation. Interestingly, the augmentation is done on a selective basis by adding words that are derived from IndoBERT, a BERT model that is particular to the Indonesian language. The goal of this creative IndoBERT-based augmentation is to provide more data while maintaining the original mood and meaning. The experimental evaluation conducted on a Twitter text dataset demonstrates the efficacy of the proposed augmentation technique. The results indicate a notable improvement in accuracy, showcasing the model's ability to outperform the Random Insert augmentation technique. This suggests that the IndoBERT-driven data augmentation strategy contributes positively to the performance of the text data processing model, enhancing its capability to handle limited data resources effectively.

A paradigm for unstructured data in its natural forms, such as audio recordings, movies, and images, has been presented by authors in [5]. The particular goal is to investigate data analysis techniques that may reliably forecast debtors' job status, with an emphasis on using audio call records as the main source of data. The study makes use of Automatic Speech Recognition (ASR) technology to make the analysis of audio call records more convenient. Through the use of this technology, spoken language in the recordings is transformed into text, enabling additional data processing. After transcribing, the study engages in data cleaning to enhance the quality and reliability of the transcribed text. The transcribed text is then transformed into numerical representations using two distinct methods: Term Frequency-Inverse Document Frequency (TF-IDF) and Count Vectorizer. These techniques convert the text data into a format suitable for quantitative analysis, enabling the application of machine learning or statistical models for predictive purposes.

On the given dataset, the authors' refined BERT-based model outperforms the SVM classifier as a baseline, exhibiting state-of-the-art accuracy in [6]. Notably, when applied to data that has never been seen before, the BERT-based model has strong generalization skills. Realizing that
training Domain Adaptation (DA) classifiers was difficult due to the lack of data, the authors addressed this problem by using several data augmentation strategies and comparing how well they worked. The successful outperformance of the BERT-based model underscores the efficacy of leveraging advanced natural language processing techniques for classification tasks. BERT's contextualized embeddings and deep learning architecture contribute to its ability to capture intricate patterns within the data, leading to superior performance compared to traditional classifiers such as SVM.

The authors of [7] have put forward a methodology that aims to optimize the automobile rental procedure while customizing the encounter to each customer's unique requirements and preferences. It considers a number of variables, such as the pick-up time, the kind of car, the destination, and even more specific needs like carrying racks for sporting goods and infant car seats. The algorithm strives to improve the entire rental vehicle experience by meeting a wide range of client demands. Car rental companies are modifying their procedures in response to changing consumer expectations for customer care in order to draw in and keep loyal clients. The algorithm aligns with this trend by prioritizing customization and flexibility in the rental process. This approach not only caters to the immediate demands of clients but also contributes to building long-term relationships by providing a personalized and convenient service. The shift towards customer-centric practices reflects a broader industry acknowledgment of the importance of meeting individual preferences and requirements. Through the implementation of this algorithm, car rental agencies can not only optimize their operational processes but also differentiate themselves in a competitive market, fostering customer loyalty through a more tailored and responsive service.

With a particular focus on assessing public opinion on juvenile criminality, the authors in [8] employ sentiment analysis. A Twitter post dataset is used to fine-tune BERT transformer models in this technique. The primary aim of this study is to evaluate how well BERT-based models capture the complex emotions related to juvenile misbehavior in social media discourse. Apart from the optimized BERT model, the project presents a comparison study between BERT and conventional Machine Learning models. Specifically, Random Forest and Support Vector Classifier models are considered, with the latter utilizing BERT embeddings for sentiment classification. This comparative evaluation aims to determine whether the process of fine-tuning BERT models offers significant advantages over established Machine Learning techniques in the context of sentiment analysis for juvenile delinquency discussions on Twitter.

A methodology has been developed by the authors in [9] to solve common issues with sentiment analysis, especially when working with big datasets of customer reviews. The goal was to build an accurate sentiment prediction model that was both highly performant and reasonably priced. The project made use of Facebook's AI Research (FAIR) Lab's fastText package to do this. For text categorization and word embedding, conventional techniques like Linear Support Vector Machine (LSVM) were also used. The proposed model was subjected to comparisons with a custom multi-layer Sentiment Analysis (SA) Bi-directional Long Short-Term Memory (SA-BLSTM) model developed by the author. These comparisons aimed to assess the performance of the fastText-based model against a more complex and custom-designed deep learning architecture.

The POCA (Poetry and associated Audio) dataset, which includes both written poems and their associated recitals, was proposed by the authors in [10]. It is sourced from an online poetry database. These recitals, which are either given by the poet or an authorized performer, are carefully selected by the website. The dataset is made to make emotion analysis easier and offers a thorough resource for comprehending the subtle emotional messages in poetry. There are 330 poems in the dataset, and each one has textual and audio accompaniments. The dataset's multi-modal structure and variety of emotional annotations offer a solid basis for the development and assessment of sentiment analysis and emotion detection models for creative material, especially poetry.

3. Methodology:

This study utilized a substantial dataset comprising movie reviews, sourced from the publicly available Internet Movie Database (IMDB) review dataset. Access to this data is possible through Kaggle or directly from Stanford. This experiment employs a comprehensive plan for sentiment analysis on the IMDB dataset. In Stage 0, data preprocessing is initiated, beginning with the crucial step of data cleaning. In this phase, we meticulously address missing values, if any, and eliminate HTML tags, special characters, and numbers. Subsequently, the text is transformed to lowercase, stop words are removed, and lemmatization is performed to refine the dataset. Moving to the second step, we delve into Exploratory Data Analysis (EDA). Here, we scrutinize the distribution of sentiments within the dataset and employ word clouds to explore the most prevalent words in both positive and negative reviews. The third step of data cleaning involves the exploration of tokenization followed by embedding. Concluding this phase, the dataset is split into two categories: a training set and a test set. In Stage 1, feature extraction is undertaken using two distinct models, namely BERT and CNN. Transitioning to Stage 2, a hybrid model combining CNN and BERT is generated for the purpose of
testing the dataset, followed by a comprehensive performance evaluation. This multifaceted approach ensures a robust analysis of sentiment in the IMDB dataset, combining meticulous data preprocessing with advanced feature extraction and evaluation methodologies [11][12].
Fig 1: Proposed Model
Algorithm: Sentiment Analysis on IMDB Dataset
Stage 0: Data Preprocessing
1. Initialize the dataset with movie reviews from the IMDB dataset.
2. Perform data cleaning:
a. Address missing values, if any.
b. Eliminate HTML tags, special characters, and numbers.
c. Transform text to lowercase.
d. Remove stop words.
e. Perform lemmatization to refine the dataset.
3. Move to Exploratory Data Analysis (EDA):
a. Scrutinize the distribution of sentiments within the dataset.
b. Utilize word clouds to explore prevalent words in positive and negative reviews.
4. Further Data Cleaning:
a. Explore tokenization.
b. Implement embedding techniques.
5. Split the dataset into two categories:
a. Training set.
b. Test set.
Stage 1: Feature Extraction
6. Undertake feature extraction using two distinct models:
a. BERT (Bidirectional Encoder Representations from Transformers).
b. CNN (Convolutional Neural Network).
Stage 2: Hybrid Model Testing and Performance Evaluation
7. Generate a hybrid model by combining CNN and BERT.
8. Apply the hybrid model to test the dataset.
9. Perform comprehensive performance evaluation:
a. Analyze accuracy, precision, recall, and F1-score.
b. Explore confusion matrices and other relevant metrics.
10. Conclude the experiment, ensuring a robust sentiment analysis of the IMDB dataset.
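The Stage 0 cleaning steps above can be sketched in a few lines of Python. The `clean_review` helper and the tiny stop-word list are illustrative assumptions (the paper does not name its libraries); a full pipeline would typically use NLTK or spaCy stop words and a lemmatizer such as NLTK's WordNetLemmatizer for step 2e.

```python
import re

# Illustrative stop-word list; a real pipeline would use NLTK/spaCy's full list.
STOP_WORDS = {"a", "an", "the", "is", "was", "it", "this", "and", "of", "to"}

def clean_review(text: str) -> str:
    """Apply the Stage 0 cleaning steps: strip HTML tags, drop special
    characters and numbers, lowercase, and remove stop words."""
    text = re.sub(r"<[^>]+>", " ", text)       # 2b: eliminate HTML tags
    text = re.sub(r"[^A-Za-z\s]", " ", text)   # 2b: special characters and numbers
    tokens = text.lower().split()              # 2c: lowercase + tokenize
    tokens = [t for t in tokens if t not in STOP_WORDS]  # 2d: stop words
    return " ".join(tokens)

print(clean_review("This movie was <br />GREAT!!! 10/10"))  # → movie great
```

The lemmatization step is deliberately omitted here since it requires an external lexical resource.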
4. Result:

Detailed breakdown of the steps involved in the data preprocessing and exploratory analysis based on the information provided, followed by prediction analysis based on different machine learning techniques and the proposed model [13][14].

1. Handling Missing Values: There are 0 missing values for both the "review" and "sentiment" columns.
Fig 2: Overview of IMDB dataset
2. Duplicate Removal: Identified and removed 418 duplicate reviews from the dataset.
Fig 3: IMDB dataset description details
3. Dataset Statistics After Cleaning: The dataset now contains 49,582 rows and 2 columns after eliminating duplicates.
Fig 4: Dataset statistics after cleaning
4. Sentiment Distribution: Positive reviews: 24,698 (49.81% of the dataset). Negative reviews: 24,884 (50.19% of the
dataset).
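Steps 1 through 4 above correspond to a short pandas routine. The sketch below uses a toy frame standing in for the IMDB CSV (the column names `review` and `sentiment` follow the dataset description); on the real data it reproduces the 0 missing values, 418 removed duplicates, 49,582 remaining rows, and the positive/negative counts reported above.

```python
import pandas as pd

# Toy stand-in for the IMDB frame with columns "review" and "sentiment".
df = pd.DataFrame({
    "review": ["great film", "awful plot", "great film", "fine acting"],
    "sentiment": ["positive", "negative", "positive", "positive"],
})

print(df.isnull().sum().sum())            # step 1: total missing values
df = df.drop_duplicates(subset="review")  # step 2: remove duplicate reviews
print(df.shape)                           # step 3: rows/columns after cleaning
print(df["sentiment"].value_counts())     # step 4: sentiment distribution
```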
Fig 5: Sentiment positive and negative count
5. Distribution of Number of Words per Review: Analyze and visualize the distribution of the number of words per review to understand the text length patterns in the dataset.
Fig 7: Distribution of number of words per review
Fig 8: Number of characters in text count.
Fig 9: Number of characters in text count.
Fig 10: Number of characters in text count.
6. Unigram Analysis: Conduct unigram analysis for both positive and negative reviews to identify the most frequent single
words [15][16].
Fig 11: Most frequent unigrams in positive and negative reviews
7. Bigram Analysis: Perform bigram analysis for positive and negative reviews to identify common pairs of adjacent words.
Fig 12: Most frequent bigrams in positive and negative reviews
8. Trigram Analysis: Conduct trigram analysis for positive and negative reviews to identify common triplets of words [17].
Fig 13: Most frequent trigrams in positive and negative reviews
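The unigram, bigram, and trigram counts behind steps 6-8 can be reproduced with the standard library alone; the two sample sentences below are placeholders for the cleaned reviews.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Placeholder reviews; in the experiment these are the cleaned IMDB reviews.
reviews = ["one of the best movies ever", "one of the worst movies ever"]

counts = Counter()
for r in reviews:                      # count per review to avoid n-grams
    counts.update(ngrams(r.split(), 3))  # spanning review boundaries

print(counts.most_common(1))  # → [(('one', 'of', 'the'), 2)]
```

Replacing `3` with `1` or `2` yields the unigram and bigram tallies of the same figures.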
Logistic Regression:

Logistic Regression is used for classification tasks, not regression. It models the probability that a given input belongs to a particular class, and it is particularly well-suited for problems with two classes. The performance of Logistic Regression in terms of accuracy is 89.03%, precision is 87.37%, recall is 90.29%, and F1 score is 88.80%.
Fig 14: Confusion matrix for Logistic regression
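The accuracy, precision, recall, and F1 figures reported for each classifier follow directly from its confusion matrix. The sketch below shows the standard binary-classification formulas on made-up cell counts (the paper's exact counts appear only in the figures).

```python
def metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix cells."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only.
acc, prec, rec, f1 = metrics(tp=90, fp=10, fn=10, tn=90)
print(acc, prec, rec, f1)
```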
Multinomial Naive Bayes Classifier:

The Multinomial Naive Bayes classifier is a probabilistic classification algorithm based on Bayes' theorem, particularly suited for text classification problems where the features are discrete and represent the frequency of terms. It is commonly used for tasks such as document classification, spam filtering, and sentiment analysis. The performance of the Multinomial Naive Bayes classifier in terms of accuracy is 86.79%, precision is 87.35%, recall is 86.30%, and F1 score is 86.24%.
Fig 15: Confusion matrix for Multinomial Naive Bayes Classifier
Linear Support Vector Classifier:

A Linear Support Vector Classifier (Linear SVC) is a type of Support Vector Machine (SVM) that is particularly well-suited for linearly separable datasets. The Linear SVC specifically constructs a hyperplane that best separates the classes in the feature space. The performance of the Linear SVC in terms of accuracy is 89.57%, precision is 88.16%, recall is 90.65%, and F1 score is 89.39%.
Fig 16: Confusion matrix for Linear Support Vector Classifier
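A baseline of this kind is a few lines in scikit-learn. The sketch below pairs a TF-IDF vectorizer with a Linear SVC on toy data standing in for the IMDB training split; the paper does not state its exact vectorizer settings, so the defaults here are an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data standing in for the IMDB training split.
train_texts = [
    "great wonderful film",
    "loved this movie",
    "terrible boring plot",
    "awful waste of time",
]
train_labels = ["positive", "positive", "negative", "negative"]

# TF-IDF features feeding a linear support vector classifier.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

print(clf.predict(["wonderful movie"])[0])  # → positive
```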
XGBoost:

XGBoost is particularly effective for both classification and regression tasks. The performance of the XGBoost classifier in terms of accuracy is 84.63%, precision is 81.99%, recall is 86.54%, and F1 score is 84.14%.
Fig 17: Confusion matrix for XGBoost
Proposed Model evaluation: Combining Convolutional Neural Networks (CNNs) with BERT (Bidirectional Encoder Representations from Transformers) is a powerful approach for natural language processing tasks. This hybrid model leverages the strengths of both CNNs, which are effective in capturing local patterns, and BERT, which excels in capturing global context and semantics. Here is an overview of the CNN+BERT architecture:

Architecture Overview:

Input Encoding: The input text is tokenized and encoded using BERT's pre-trained embeddings to capture rich contextual information.

BERT Embeddings: BERT provides contextual embeddings for each token in the input sequence, considering both left and right context.

CNN Feature Extraction: Convolutional layers are applied to the BERT embeddings to capture local patterns and features. The convolutional filters slide over the embeddings, detecting specific patterns.

Pooling Layers: Max pooling or average pooling layers are often used to extract the most relevant features from the convolutional outputs.

Flattening: The pooled features are flattened into a vector representation to be fed into subsequent layers.

Dense Layers: Fully connected layers are added for further abstraction and to capture complex relationships between features.

Output Layer: The final output layer, usually a softmax layer, produces probabilities for classification tasks or regression values for regression tasks. The performance of the proposed model in terms of accuracy is 95.91%, precision is 96.80%, recall is 95.07%, and F1 score is 95.93%.
Fig 18: Confusion matrix for proposed model
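The architecture overview above can be made concrete with a shape-level sketch of the forward pass. A random matrix stands in for the BERT token embeddings (hidden size 768), followed by 1-D convolution over the sequence, ReLU, global max pooling, a dense layer, and a softmax output; all sizes (sequence length 128, 64 filters, kernel width 3) are illustrative assumptions, and global max pooling already yields a flat vector, so no separate flattening step is needed here. This illustrates the data flow only, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

seq_len, hidden = 128, 768        # BERT-base token embeddings (stand-in: random)
bert_out = rng.standard_normal((seq_len, hidden))

num_filters, kernel = 64, 3       # CNN feature extraction over the embeddings
filters = rng.standard_normal((num_filters, kernel, hidden)) * 0.01

# Convolution: each filter slides over windows of `kernel` consecutive tokens.
conv = np.array([
    [np.sum(bert_out[i:i + kernel] * f) for i in range(seq_len - kernel + 1)]
    for f in filters
])                                 # shape: (num_filters, seq_len - kernel + 1)
conv = np.maximum(conv, 0.0)       # rectified linear units (ReLU)

pooled = conv.max(axis=1)          # global max pooling -> (num_filters,)

W = rng.standard_normal((2, num_filters)) * 0.01   # dense layer, 2 classes
logits = W @ pooled
probs = np.exp(logits - logits.max())
probs /= probs.sum()               # softmax: positive/negative probabilities

print(pooled.shape, probs.shape)   # → (64,) (2,)
```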
5. Conclusion:

In conclusion, the sentiment analysis experiment conducted on the IMDB dataset employing a comprehensive plan showcased the effectiveness of the CNN+BERT hybrid model. The initial data preprocessing stages, including thorough cleaning and exploratory data analysis, set a solid foundation for subsequent stages. The incorporation of both BERT and CNN in feature extraction during Stages 1 and 2, respectively, demonstrated the model's capability to capture both local and global contextual information, leading to a robust sentiment analysis framework. The distribution analysis revealed a balanced dataset with an almost equal number of positive and negative reviews. The CNN+BERT model exhibited superior performance during evaluation, showcasing its ability to discern sentiments accurately. This hybrid approach capitalized on the strengths of both architectures, providing a holistic understanding of the dataset. Further enhancements could involve fine-tuning hyperparameters, exploring additional preprocessing techniques, or considering other advanced architectures. Overall, the results signify the potential of integrating state-of-the-art models like BERT with traditional architectures such as CNNs for improved sentiment analysis on complex datasets. This experiment not only contributes to the growing field of sentiment analysis but also underscores the significance of thoughtful model selection and hybridization for achieving optimal results in natural language processing tasks.

References:

[1] Iftikhar, S., Alluhaybi, B., Suliman, M., Saeed, A., & Fatima, K. (2023). Amazon products reviews classification based on machine learning, deep learning methods and BERT. TELKOMNIKA (Telecommunication Computing Electronics and Control), 21(5), 1084-1101.
[2] Yenkikar, A., & Babu, C. N. (2023). AirBERT: A fine-tuned language representation model for airlines tweet sentiment analysis. Intelligent Decision Technologies, 17(2), 435-455.
[3] Muftie, F., & Haris, M. (2023, August). IndoBERT Based Data Augmentation for Indonesian Text Classification. In 2023 International Conference on Information Technology Research and Innovation (ICITRI) (pp. 128-132). IEEE.
[4] Muftie, F., & Haris, M. (2023, August). IndoBERT Based Data Augmentation for Indonesian Text Classification. In 2023 International Conference on Information Technology Research and Innovation (ICITRI) (pp. 128-132). IEEE.
[5] Motitswane, O. G. (2023). Machine learning and deep learning techniques for natural language processing with application to audio recordings (Doctoral dissertation, North-West University (South Africa)).
[6] Sultana, M. (2023). An Exploration of Dialog Act Classification in Open-domain Conversational Agents and the Applicability of Text Data Augmentation.
[7] S. Nayak, Sonia and Y. K. Sharma, "Efficient Machine Learning Algorithms for Sentiment Analysis in Car Rental Service," 2023 International Conference on Computational Intelligence and Sustainable Engineering Solutions (CISES), Greater Noida, India, 2023, pp. 452-463, doi: 10.1109/CISES58720.2023.10183435.
[8] Khan, A., Hopkins, J., & Gunes, H. (2021, September). Multi-dimensional Affect in Poetry (POCA) Dataset: Acquisition, Annotation and Baseline Results. In 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 1-8). IEEE.
[9] Thapa, A. (2023). Sentiment Analysis on Juvenile Delinquency Using BERT Embeddings (Doctoral dissertation, Dublin, National College of Ireland).
[10] Chinnalagu, A., & Durairaj, A. K. (2021). Context-based sentiment analysis on customer reviews using machine learning linear models. PeerJ Computer Science, 7, e813.
[11] Narayan, Vipul, et al. "Extracting business methodology: using artificial intelligence-based method." Semantic Intelligent Computing and Applications 16 (2023): 123.
[12] Narayan, Vipul, et al. "A Comprehensive Review of Various Approach for Medical Image Segmentation and Disease Prediction." Wireless Personal Communications 132.3 (2023): 1819-1848.
[13] Mall, Pawan Kumar, et al. "Rank Based Two Stage Semi-Supervised Deep Learning Model for X-Ray Images Classification: An Approach toward Tagging Unlabeled Medical Dataset." Journal of Scientific & Industrial Research (JSIR) 82.08 (2023): 818-830.
[14] Narayan, Vipul, et al. "Severity of Lumpy Disease detection based on Deep Learning Technique." 2023 International Conference on Disruptive Technologies (ICDT). IEEE, 2023.
[15] Saxena, Aditya, et al. "Comparative Analysis of AI Regression and Classification Models for Predicting House Damages in Nepal: Proposed Architectures and Techniques." Journal of Pharmaceutical Negative Results (2022): 6203-6215.
[16] Kumar, Vaibhav, et al. "A Machine Learning Approach for Predicting Onset and Progression: Towards Early Detection of Chronic Diseases." Journal of Pharmaceutical Negative Results (2022): 6195-6202.
[17] Chaturvedi, Pooja, Ajai Kumar Daniel, and Vipul Narayan. "Coverage Prediction for Target Coverage in WSN Using Machine Learning Approaches." (2021).
[18] Chaturvedi, Pooja, A. K. Daniel, and Vipul Narayan. "A Novel Heuristic for Maximizing Lifetime of Target Coverage in Wireless Sensor Networks." Advanced Wireless Communication and Sensor Networks. Chapman and Hall/CRC, 227-242.