
Embedding-Based Framework for Disaster Tweets Classification With
Explainable AI Insights for Machine Learning Models
2025 8th International Conference on Electronics, Materials Engineering & Nano-Technology (IEMENTech) | 979-8-3315-2076-2/25/$31.00 ©2025 IEEE | DOI: 10.1109/IEMENTECH65115.2025.10959533

Sana Akhila, Kallepalli Rahul Varma, Kurukundu Dhanush Sai, Susmitha Vekkot*, Kirti S. Pande
Department of Electronics & Communication Engineering
Amrita School of Engineering, Bengaluru
Amrita Vishwa Vidyapeetham, India
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—Societies are significantly impacted by disasters; fast and accurate information is needed to respond. Despite challenges like unstructured data and fake news, Twitter can be used to share real-time updates. This paper presents a robust framework for categorizing disaster-related tweets using dense embeddings such as GloVe, BERT, and FastText, along with machine learning models such as Support Vector Machine, Decision Tree, XGBoost, Gradient Boost, LightGBM, and Multi-Layer Perceptron. The GloVe embeddings with the SVM model outperformed all other models with an accuracy of 90.09%. LIME and SHAP analyses are implemented to evaluate the significance of each word in a given tweet, providing insights into its contribution to the model's prediction.

Index Terms—Text Classification, Natural Language Processing, Embeddings, SHAP, LIME, Synthetic Minority Oversampling Technique

I. INTRODUCTION

Natural and technological disasters, such as hurricanes, earthquakes, floods, and wildfires, bring widespread destruction, displace people, and disrupt communities [1]. Because timely and accurate information is crucial for disaster response, Twitter has developed into an essential platform for sharing real-time updates, safety alerts, and requests for assistance. Users turn to social media to report incidents, raise safety issues, and request help, making it a useful radar for disasters [2]. However, the fast-paced nature of social media makes it susceptible to misinformation, which can lead to confusion, panic, and misallocation of resources during emergencies.

While social media is a fast and free way to share information, its speed and accessibility also let rumors and fake news spread quickly. In such a situation, misinformation can make a disaster worse by confusing responders and redirecting resources, delaying an effective response. Imagine a false post [3] claiming that part of the city has been evacuated because of a flood, resulting in pointless panic or bad decisions. This shows the urgent need for automated systems that filter and prioritize disaster-related information amongst the enormous volume of social media posts.

The purpose of this work is to build a sound model that decides whether a tweet is about a disaster or not. Since social media data is generally informal, loosely structured, and noisy, it is essential to distinguish disaster-related tweets from unrelated ones. The task is not easy because of differences in language, tone, and structure, and the specificity of certain types of disasters. Employing Natural Language Processing (NLP) and Machine Learning (ML), the model converts tweets into rich features using high-level embeddings such as transformer embeddings [4]. As a result of training on labeled data, the model filters noise and correctly flags relevant posts. Tools such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) make the model more dependable and reliable while providing transparency. Ultimately, the model seeks to support disaster response by surfacing potentially helpful tweets from the massive social media feed.

This paper focuses on the classification of disaster tweets into Real Disaster or Not Real Disaster categories. Prior to classification, the dataset undergoes preprocessing; several ML models are then trained on the dataset, and SHAP and LIME analyses are applied to understand the contribution of each word in a disaster tweet.

Key contributions of this paper are:
• Enhanced Semantic Understanding: Utilizes higher-level embeddings such as BERT, FastText, and GloVe to increase the contextual accuracy of tweet classification.
• Improved Model Accuracy and Scalability: Uses LightGBM and XGBoost models for accuracy and efficient operation, which real-time applications require.
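As a rough illustration of the pipeline the paper describes (tweet → dense embedding → classifier), the toy sketch below mean-pools word vectors and feeds them to an SVM. The tiny hand-made vectors and vocabulary are invented stand-ins for pretrained 300-dimensional GloVe/FastText vectors, not the paper's actual setup.

```python
# Toy sketch of the tweet -> embedding -> classifier pipeline.
# EMB is a stand-in "embedding table"; real GloVe vectors are 300-dimensional.
import numpy as np
from sklearn.svm import SVC

EMB = {
    "flood":   np.array([0.9, 0.1, 0.0]),
    "fire":    np.array([0.8, 0.2, 0.1]),
    "rescue":  np.array([0.7, 0.3, 0.2]),
    "party":   np.array([0.1, 0.9, 0.8]),
    "movie":   np.array([0.0, 0.8, 0.9]),
    "concert": np.array([0.2, 0.7, 0.9]),
}

def embed(tweet: str) -> np.ndarray:
    """Mean-pool the vectors of all in-vocabulary tokens of the tweet."""
    vecs = [EMB[w] for w in tweet.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

tweets = ["flood rescue", "fire flood", "party movie", "movie concert"]
labels = [1, 1, 0, 0]  # 1 = real disaster, 0 = not
X = np.stack([embed(t) for t in tweets])

clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([embed("fire rescue")])[0])  # -> 1
```

Mean-pooling is the same idea the paper later calls feature engineering: one fixed-length vector per tweet, regardless of tweet length.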



Authorized licensed use limited to: Siddhartha Academy of Higher Education Deemed To Be University. Downloaded on May 10,2025 at 10:05:37 UTC from IEEE Xplore. Restrictions apply.
• Transparent Decision-Making: Employs SHAP and LIME so that the model's decisions can be easily explained.

This paper starts with an Introduction describing the challenges of disaster-related misinformation on social media. It is followed by a Related Work section that synthesizes the existing literature on disaster tweet classification and contemporary work in NLP and ML. Data preprocessing, embedding techniques, ML models, and interpretability tools like SHAP and LIME are covered in the Proposed Work section. The Results & Discussion section presents the effectiveness and correctness of the models. Finally, the Conclusion and Future Work section summarizes the findings and suggests further work in automating disaster response.

II. RELATED WORK

Disaster tweet categorization is a challenging yet essential activity for enhancing disaster response. This section examines recent advances in classifying disaster-related tweets to support emergency response using ML and DL models such as SVM, BERT, and ensemble approaches. It also reviews key feature extraction methods, including TF-IDF, Word2Vec, and FastText, and interpretability tools such as LIME and SHAP that increase model transparency and utility in real-time crisis response.

Meghna Jayan et al. [1] compared ML and DL methods for classifying disaster tweets using Support Vector Machine (SVM), Naïve Bayes, and LSTM models. Their results highlighted the importance of sentiment analysis and emotions in understanding disaster intensity, using techniques like TfidfVectorizer and Word2Vec for feature extraction. Windu Gata et al. [2] classified tweets as informative or non-informative regarding disasters in Indonesia, addressing class imbalance with the Synthetic Minority Over-sampling Technique (SMOTE), with SVM outperforming Naïve Bayes at 81.03% accuracy.

To improve disaster tweet classification, Ademola Adesokan et al. [3] created TweetACE, which uses BERT to reach 66% accuracy on event types; they suggested expanding the datasets and combining hybrid models to improve accuracy. Anurag Singh et al. [4] employed FastText, BERT, and GloVe embeddings along with LSTM, tested the resulting model on different datasets, achieved better performance than baseline models, and suggested multilingual applications for real-time disaster management. In a recent work, Raghuram M et al. [5] used DL models with N-gram features to classify disaster-related tweets, but pointed out the limitation of using only English tweets and suggested word embeddings for better accuracy.

A. K. Ningsih et al. [6] obtained 79% accuracy for disaster tweet classification by combining BERT, sentiment analysis, TF-IDF, and LinearSVC, which can be beneficial in a real-time disaster response system as an assistant to emergency management. Finally, Modafar Ati et al. [7], who analyzed 264,800 COVID-19-related tweets, used Logistic Regression to achieve 82% accuracy; they applied LIME and SHAP to improve model transparency and aid decision-making during crises.

In conclusion, this section emphasizes the importance of high-quality datasets and tailored methods for accurate disaster tweet classification. The proposed work experiments with ML models and embeddings to arrive at the best model for tweet classification.

III. PROPOSED WORK

The proposed work includes data description, data preprocessing, embedding generation, model training, model evaluation, and LIME and SHAP analysis, as shown in Fig. 1.

Fig. 1. Proposed Workflow for Disaster Classification

A. Data Description and Preprocessing

1) Dataset Description: The dataset "NLP with Disaster Tweets" from Kaggle [9] contains 11,369

records and five attributes: keyword, location, text, target, and ID. The target attribute classifies tweets as either disaster-related (1) or not (0), and the text attribute contains the actual content of the tweet.

2) Data Preprocessing: Preprocessing begins with a series of steps to ensure data integrity: dropping unnecessary columns (ID, keyword, and location), text normalization, tokenization, lemmatization, and stop-word removal. Normalization steps such as converting all text to lower case and removing non-alphabetic characters such as punctuation and numbers were performed. To address class imbalance, SMOTE was applied to generate new samples for the minority class, achieving a balanced distribution in the training set.

B. Embeddings

Text data was converted to numerical form, which is essential for most NLP tasks such as classification and sentiment analysis. The number of features was limited to 300 to optimize computational efficiency, as additional features resulted in minimal information gain but significantly increased computational time.

Types of embeddings:
1) Traditional embeddings: methods that use fixed, pre-trained vectors for words based on co-occurrence in large corpora.
• FastText: captures sub-word information, useful for handling out-of-vocabulary (OOV) words and morphologically rich languages.
• GloVe: a count-based method that relies on global co-occurrence counts from the corpus to compute word representations.
• Word2Vec (CBOW & Skip-gram):
a) CBOW (Continuous Bag of Words): predicts the center word from its surrounding context words.
b) Skip-gram: predicts context words from the center word.
2) Transformer-based embeddings: contextual embeddings that adjust based on sentence context [10].
• BERT: bidirectional context enables deep language understanding and improves accuracy on downstream tasks.
• ALBERT: a lighter, more efficient version of BERT with reduced model size, making it faster while retaining performance.
• TinyBERT: a compact BERT variant for real-time applications.
• XLM: embeds multilingual context, allowing cross-lingual understanding.

C. Feature Engineering

Feature engineering involved extracting keyword vectors from the embedding and generating a feature vector as the mean of all the keyword vectors of a given tweet. This process enhanced classifier performance by providing compact numerical representations that capture semantic meaning.

D. Data Split

The tweets dataset is divided into training and testing data: 80% is fed to the models for training and the remaining 20% is used for testing and evaluating the model.

E. Model Training

Several ML models were considered, namely SVM, MLP, Gradient Boost, XGBoost, LightGBM, and Decision Tree. For each model, hyperparameter tuning was done using RandomizedSearchCV to find the best hyperparameters. The training data was used to fit each model, which was then evaluated for best performance.

F. Evaluation Metrics

The model's performance is evaluated using metrics such as Accuracy, Precision, Recall, and F1 score. Accuracy is the overall measure; precision assesses the relevance of the positive predictions; recall captures all actual positives; and the F1 score combines precision and recall for a complete evaluation of the model's performance.

G. SHAP and LIME Analysis

SHAP and LIME were incorporated to interpret the model's decisions and improve understanding of feature contributions.
1) SHAP: explores how features influence the model's predictions through their contribution to the final decision.
2) LIME: approximates the model locally in order to determine the most favorable feature influence [11].

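The four metrics described above can be computed as follows; the labels and scores here are toy values, not the paper's results.

```python
# Computing Accuracy, Precision, Recall, F1, and ROC AUC on toy predictions.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]    # ground truth
y_pred  = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]    # hard predictions
y_score = [0.9, 0.8, 0.4, 0.2, 0.1, 0.7, 0.6, 0.3, 0.85, 0.15]  # P(class 1)

print(f"Accuracy : {accuracy_score(y_true, y_pred):.2f}")   # 0.80
print(f"Precision: {precision_score(y_true, y_pred):.2f}")  # 0.80
print(f"Recall   : {recall_score(y_true, y_pred):.2f}")     # 0.80
print(f"F1 score : {f1_score(y_true, y_pred):.2f}")         # 0.80
print(f"ROC AUC  : {roc_auc_score(y_true, y_score):.2f}")   # 0.96
```

Note that AUC is computed from the continuous scores at all thresholds, while the other four metrics depend on the hard predictions at a single threshold.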
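To make the word-attribution idea behind SHAP and LIME concrete, here is a toy sketch: drop one word at a time from a tweet and measure how the classifier's probability moves. This is only a rough approximation of what the lime and shap libraries actually do, and the mini-corpus is invented.

```python
# Toy leave-one-word-out attribution, loosely mimicking LIME-style analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["found wreckage after the crash", "storm survivor rescued",
          "please help flood victims", "apologize for the late reply",
          "who wants pizza tonight", "great concert last night"]
labels = [1, 1, 1, 0, 0, 0]  # 1 = real disaster

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(tweets, labels)

tweet = "found wreckage near the coast"
words = tweet.split()
base = model.predict_proba([tweet])[0, 1]   # P(disaster) for the full tweet
for w in words:
    reduced = " ".join(x for x in words if x != w)
    drop = base - model.predict_proba([reduced])[0, 1]
    print(f"{w:10s} contribution ~ {drop:+.3f}")
```

A disaster-indicative word like "wreckage" produces a positive drop (removing it lowers the disaster probability), which is the same directional reading the paper takes from its SHAP and LIME plots.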
IV. RESULTS & DISCUSSION

The experimental results are analyzed in this section, assessing the performance of the proposed methodology. A variety of ML models were used; evaluation was based on the ROC curve together with Precision, Recall, and F1 score. LIME and SHAP analyses were conducted to understand the important features responsible for the outcomes.

A. Overall Model Performance

Six distinct ML models (MLP, SVM, Decision Tree, XGBoost, Gradient Boost, and LightGBM) are evaluated using Accuracy, F1 score, Precision, and Recall with different word embeddings: FastText, GloVe, Skip-gram, ALBERT, TinyBERT, and BERT. The efficiency of these models is presented in Table I.

TABLE I
PERFORMANCE COMPARISON OF ML MODELS WITH DIFFERENT EMBEDDINGS IN %

Model         | Embedding | Accuracy | Precision | Recall | F1
MLP           | FastText  | 86.95    | 87        | 87     | 87
MLP           | GloVe     | 88.13    | 88        | 88     | 88
MLP           | Skip-gram | 77.46    | 85        | 77     | 80
MLP           | ALBERT    | 89.01    | 89        | 89     | 89
MLP           | TinyBERT  | 88.45    | 88        | 88     | 88
MLP           | BERT      | 88.83    | 89        | 89     | 89
SVM           | FastText  | 88.65    | 89        | 89     | 89
SVM           | GloVe     | 90.09    | 90        | 90     | 90
SVM           | Skip-gram | 79.30    | 85        | 79     | 81
SVM           | ALBERT    | 89.06    | 89        | 89     | 89
SVM           | TinyBERT  | 88.89    | 89        | 89     | 89
SVM           | BERT      | 86.19    | 88        | 86     | 87
Decision Tree | FastText  | 78.63    | 83        | 79     | 80
Decision Tree | GloVe     | 77.75    | 81        | 78     | 79
Decision Tree | Skip-gram | 75.32    | 80        | 75     | 77
Decision Tree | ALBERT    | 79.89    | 83        | 80     | 81
Decision Tree | TinyBERT  | 79.39    | 83        | 79     | 81
Decision Tree | BERT      | 77.54    | 84        | 78     | 80
XGBoost       | FastText  | 89.53    | 89        | 90     | 89
XGBoost       | GloVe     | 89.83    | 90        | 90     | 90
XGBoost       | Skip-gram | 86.10    | 86        | 86     | 86
XGBoost       | ALBERT    | 89.65    | 90        | 90     | 90
XGBoost       | TinyBERT  | 89.50    | 88        | 90     | 89
XGBoost       | BERT      | 89.56    | 89        | 90     | 89
LightGBM      | FastText  | 89.65    | 89        | 90     | 89
LightGBM      | GloVe     | 89.89    | 90        | 90     | 90
LightGBM      | Skip-gram | 86.40    | 86        | 86     | 86
LightGBM      | ALBERT    | 89.89    | 89        | 89     | 89
LightGBM      | TinyBERT  | 89.09    | 89        | 89     | 89
LightGBM      | BERT      | 88.21    | 89        | 88     | 88

In Table I, the six ML models are evaluated using the different word embeddings, with each model's performance assessed on four metrics: Accuracy, Precision, Recall, and F1 score. Both LightGBM and XGBoost are top-performing models across the different embeddings. LightGBM performs best (89.89%) when using either GloVe (89.89%) or ALBERT (89.89%) embeddings, demonstrating its versatility with various embeddings.

B. Embeddings Performance

This research evaluates ten embeddings (FastText, GloVe, CBOW, Skip-gram, RoBERTa, BERT, TinyBERT, ALBERT, XLM, and DistilBERT) across multiple classifiers, including LightGBM, XGBoost, Gradient Boosting, SVM, and MLP, using the evaluation metrics shown in Table II. Based on the F1 score, which provides a balanced view of Precision and Recall, six embeddings (FastText, GloVe, Skip-gram, BERT, TinyBERT, and ALBERT) are selected for their consistent performance. These embeddings produced top results, indicating robust suitability for high-accuracy text classification tasks. The ROC curves for the top three embeddings (GloVe, ALBERT, and FastText), based on their AUC values, are displayed in the next section for further analysis.

TABLE II
EMBEDDINGS PERFORMANCE ACROSS CLASSIFIERS IN %

Embedding | Classifier | Accuracy | Precision | Recall | F1
FastText  | LightGBM   | 89.65    | 89        | 90     | 89
FastText  | XGBoost    | 89.50    | 89        | 90     | 89
GloVe     | SVM        | 90.09    | 90        | 90     | 90
GloVe     | LightGBM   | 89.89    | 90        | 90     | 90
Skip-gram | LightGBM   | 86.40    | 86        | 86     | 86
Skip-gram | XGBoost    | 86.10    | 86        | 86     | 86
BERT      | XGBoost    | 89       | 90        | 90     | 89
BERT      | MLP        | 89       | 89        | 89     | 89
ALBERT    | XGBoost    | 89.65    | 90        | 90     | 90
ALBERT    | LightGBM   | 89.89    | 89        | 89     | 89
TinyBERT  | XGBoost    | 89.50    | 88        | 90     | 89
TinyBERT  | LightGBM   | 89.09    | 89        | 89     | 89

C. ROC Curve

The Receiver Operating Characteristic (ROC) curve is a graphical tool that shows how well classification models perform by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR) at various threshold levels; the Area Under the Curve (AUC) indicates the model's ability to separate the classes.

For GloVe embeddings, the highest AUC of 92% was reached by XGBoost, SVM, and LightGBM; Gradient Boosting was at 91%, and MLP at 90%. The lowest AUC of 71% was recorded by the Decision Tree, as shown in Fig. 2. In Fig. 3, ALBERT embeddings achieved the highest AUC of 92% with XGBoost and Gradient Boosting; LightGBM and SVM followed closely with an AUC of 91%, and MLP achieved 89%. The lowest AUC of 72% was recorded by the Decision Tree. For FastText embeddings (Fig. 4), the highest AUC of 91.75% was achieved by XGBoost, followed

closely by LightGBM at 91.63%. Gradient Boosting and SVM attained AUCs of 91.35% and 91.45%, respectively, and MLP reached 89.66%. The lowest AUC of 68.63% was recorded by the Decision Tree.

Ensemble methods [12] (XGBoost, Gradient Boosting, LightGBM) consistently outperformed the other models, with Decision Trees showing poor performance across all embeddings.

Fig. 2. ROC Curve for GloVe Embeddings

Fig. 3. ROC Curve for ALBERT Embeddings

Fig. 4. ROC Curve for FastText Embeddings

D. Explainable AI Analysis

Explainable AI (XAI) is a relatively new field concerned with understanding the decisions made by AI models. Two of the most widely used XAI methods are LIME and SHAP [13]. LIME approximates the model locally with an interpretable one, showing how feature changes affect predictions, while SHAP assigns an importance value to each feature in relation to the model's output, giving a consistent importance measure. Combined, LIME and SHAP provide a comprehensive understanding of feature contributions, highlighting the interpretability and reliability of the model.

Fig. 5. SHAP Analysis for Disaster-Related Tweet Classification

Fig. 5 uses SHAP values to visualize the impact of specific words on the predictions of the best-performing GloVe-based SVM model for disaster-related tweet classification. The terms "please," "burning," "storm," and "survivor" have positive SHAP values, indicating that they make strong contributions to disaster-related predictions. In contrast, SHAP values are lower for words such as "who" and "apologize," which make a tweet less likely to be classified as a disaster.

Fig. 6. Prediction for a Real Disaster Instance in LIME Analysis

Fig. 7. Prediction for a Not Real Disaster Instance in LIME Analysis

Fig. 6 and Fig. 7 use LIME analysis to visualize prediction probabilities and highlight the contribution of individual words to the classification decision [14]. In Fig. 6, a "Real Disaster" instance, words like "found" and "wreckage" positively influence the prediction, while "unnoticed" has a negative impact. As

shown in Fig. 7, words such as "cause," "structural," and "failure," which do not contribute to "Real Disaster," support the classification as "Not Real Disaster."

This analysis underscores the interpretability and effectiveness of the GloVe-based SVM model, demonstrating how LIME and SHAP help in understanding the key features that drive predictions.

V. CONCLUSION AND FUTURE WORK

In conclusion, this research presents a framework for the classification of disaster tweets using ML and NLP techniques. By leveraging embeddings such as FastText, GloVe, and BERT, along with ML classifiers like LightGBM, XGBoost, and SVM, the work achieves significant improvements in classification accuracy and interpretability. Notably, the highest accuracy of 90.09% was achieved by SVM with GloVe embeddings. LIME and SHAP are incorporated to improve the interpretability of the models, detailing how features contribute to predictions, with a focus on the SVM model. These results highlight the potential of combining semantically rich embeddings with robust classifiers to effectively identify relevant tweets during disaster events.

Future work could involve extending the analysis to multilingual datasets and incorporating tweets from multiple disasters into the model to enhance its effectiveness. Furthermore, combining multimodal data such as images and location could deepen the understanding of disaster-related content.

REFERENCES

[1] K. Asinthara, M. Jayan and L. Jacob, "Classification of Disaster Tweets using Machine Learning and Deep Learning Techniques," in *2022 International Conference on Trends in Quantum Computing and Emerging Business Technologies (TQCEBT)*, Pune, India, 2022, pp. 1–5, doi: 10.1109/TQCEBT54229.2022.10041629.
[2] W. Gata, F. Amsury, N. K. Wardhani, I. Sugiyarto, D. N. Sulistyowati and I. Saputra, "Informative Tweet Classification of the Earthquake Disaster Situation In Indonesia," in *2019 5th International Conference on Computing Engineering and Design (ICCED)*, Singapore, 2019, pp. 1–6, doi: 10.1109/ICCED46541.2019.9161135.
[3] A. Adesokan, S. Madria and L. Nguyen, "TweetACE: A Fine-grained Classification of Disaster Tweets using Transformer Model," in *2023 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)*, St. Louis, MO, USA, 2023, pp. 1–9, doi: 10.1109/AIPR60534.2023.10440656.
[4] A. Singh, P. Soni, A. Singh, I. Potle and A. Singh, "Enhancing Disaster Tweet Classification with Ensemble Models and Multiple Embeddings," in *2023 4th IEEE Global Conference for Advancement in Technology (GCAT)*, Bangalore, India, 2023, pp. 1–7, doi: 10.1109/GCAT59970.2023.10353510.
[5] R. M, S. M, N. OS and E. T, "An Enhanced Framework for Disaster-Related Tweet Classification using Machine Learning Techniques," in *2023 International Conference on Inventive Computation Technologies (ICICT)*, Lalitpur, Nepal, 2023, pp. 108–111, doi: 10.1109/ICICT57646.2023.10134249.
[6] A. K. Ningsih and A. I. Hadiana, "An Investigation into Disaster Classification," in *IOP Conf. Ser.: Mater. Sci. Eng.*, vol. 1115, p. 012032, 2021, doi: 10.1088/1757-899X/1115/1/012032.
[7] M. Ati, H. Farok, R. Al-Bostami and K. Yahya, "Sentiment Analysis of COVID-19 Tweets: Combining Explainable Artificial Intelligence and Traditional Machine Learning for Business and Entrepreneurship Insights," in *Journal for International Business and Entrepreneurship Development*, 2023.
[8] A. Khattar and S. M. K. Quadri, "CAMM: Cross-Attention Multimodal Classification of Disaster-Related Tweets," in *IEEE Access*, vol. 10, pp. 92889–92902, 2022, doi: 10.1109/ACCESS.2022.3202976.
[9] "NLP with Disaster Tweets" dataset, Kaggle: https://www.kaggle.com/datasets/shailjakanttiwari/nlp-with-disaster-tweets
[10] S. S. H. Samudrala, J. Thambi, S. R. Vadluri, A. Mahalingam and P. B. Pati, "Enhancing Parkinson's Disease Diagnosis using Speech Analysis: A Feature Subset Selection Approach with LIME and SHAP," in *2024 3rd International Conference for Innovation in Technology (INOCON)*, Bangalore, India, 2024, pp. 1–5, doi: 10.1109/INOCON60754.2024.10511805.
[11] P. D. Reddy et al., "Enhancing Content Based Collaborative Filtering Recommendations Using Weighted Word Embeddings," in *2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)*, Kamand, India, 2024, pp. 1–7, doi: 10.1109/ICCCNT61001.2024.10724582.
[12] A. H. Meti, B. U. Maheswari and A. Vijjapu, "Advancing Alpha-Thalassemia Carrier Screening for Better Predictions Using Explainable AI," in *2023 4th International Conference on Communication, Computing and Industry 6.0 (C216)*, Bangalore, India, 2023, pp. 1–5, doi: 10.1109/C2I659362.2023.10430520.
[13] B. U. Maheswari, A. A, A. Avvaru, A. Tandon and R. P. de Prado, "Interpretable Machine Learning Model for Breast Cancer Prediction Using LIME and SHAP," in *2024 IEEE 9th International Conference for Convergence in Technology (I2CT)*, Pune, India, 2024, pp. 1–6, doi: 10.1109/I2CT61223.2024.10543965.
[14] A. Aswathy, R. Prabha, L. S. Gopal, D. Pullarkatt and M. V. Ramesh, "An efficient twitter data collection and analytics framework for effective disaster management," in *2022 IEEE Delhi Section Conference (DELCON)*, New Delhi, India, 2022, pp. 1–6, doi: 10.1109/DELCON54057.2022.9753627.

