AI System that Detects and Explains Fake News (2025-26)
Abstract
Fake news has become one of the most damaging digital threats today, influencing politics,
public health, economics, and social stability. Although AI-based models have achieved
significant accuracy in detecting misinformation, most systems operate as opaque “black
boxes,” limiting user trust. This research proposes a hybrid AI system that not only detects
fake news but also generates clear explanations for its predictions, using Machine Learning
(ML), Deep Learning (DL), Natural Language Processing (NLP), stance detection,
propagation modeling, and Explainable AI (XAI). The proposed model integrates linguistic
features, propagation behavior, and LLM-based justification to enhance both accuracy and
transparency.
1. Introduction
The widespread use of social media has transformed information sharing. However,
the same platforms enable rapid dissemination of fake news, intentionally created to mislead
readers [1]. Traditional fact-checking approaches are time-consuming, while automated AI
methods, though accurate, generally lack transparency, making it difficult for users to
understand why a piece of news is classified as fake.
Machine Learning and Deep Learning methods have been widely applied to fake
news detection [1], [3], but they still face challenges in interpretability. Recent works
emphasize the importance of enabling models to justify their decisions, especially in high-
impact scenarios such as political misinformation [4]. This motivates the development of an
AI system capable of both detecting and explaining fake news, bridging the gap between
prediction accuracy and interpretability.
2. Problem Statement
Fake news has become one of the most harmful digital threats worldwide, influencing
politics, public health, finance, and societal stability. The rapid spread of misinformation
through social media platforms misguides millions of people and creates large-scale
misunderstanding. Existing automated fake news detection systems primarily focus on
classification and often function as black-box models, providing no explanation for their
decisions.
This lack of transparency creates a trust deficit among users, especially in critical
cases like political misinformation and health-related fake news. There is a clear gap in the
Dept. of CSE, GEC Hassan
availability of AI systems that can both detect fake news accurately and justify the
classification with understandable explanations.
This research aims to address this gap by developing an AI-based system capable of
detecting fake news using ML/DL techniques while simultaneously generating human-
understandable explanations using Explainable AI (XAI) and Large Language Models
(LLMs).
3. Objectives of the Study
The primary objectives are:
1. To design an AI model capable of detecting fake news using ML/DL and NLP-
based techniques.
2. To integrate stance analysis and propagation behavior to improve classification
accuracy.
3. To incorporate Explainable AI (XAI) methods such as LIME, SHAP, and attention
visualization to justify model predictions.
4. To develop an LLM-based explanation module that provides simple and
meaningful reasoning for users.
5. To evaluate the accuracy, interpretability, and usefulness of the proposed system
using standard datasets.
4. Literature Survey
M. Desamsetti et al. [1] conclude that AI-based techniques—especially
machine learning and deep learning—are highly promising for detecting fake news, but no
single method is sufficient on its own. Effective detection requires hybrid approaches that
combine text analysis, social-context features, and advanced models to handle the evolving
and sophisticated nature of misinformation. The authors emphasize that real-world fake-news
detection remains challenging due to limited quality datasets and the dynamic strategies of
misinformation creators, and they recommend developing more robust, integrated
frameworks for future research.
H. Rathod et al. [2], in “Building an Explainable and Scalable AI System for Fake
News Detection,” conclude that their AI-powered system — which combines
deep-learning and NLP (textual cues, sentiment/emotion analysis), propagation-pattern
tracking, and audience response modelling — can robustly detect fake news across various
digital platforms. The system is scalable (able to handle large volumes of content) and
supports real-time detection, addressing key limitations of manual fact-checking. Moreover,
by emphasizing explainability and transparency in its design, the system seeks to make its
decisions more interpretable — a step toward more trustworthy and responsible
misinformation detection tools.
A. Ahmed et al. [3] conclude that machine learning techniques are effective for fake
news detection, with models such as SVM, Random Forest, and Naïve Bayes showing strong
performance depending on the dataset and feature representation. The paper highlights that
proper feature extraction—especially using linguistic and content-based features—is crucial
for accuracy. Overall, the research demonstrates that ML-based approaches can significantly
reduce misinformation but still require improvements in handling diverse writing styles and
rapidly evolving fake news patterns.
B. Hu et al. [4], in “An Overview of Fake News Detection,” conclude that fake news
detection has evolved into a multidisciplinary field integrating machine learning, deep
learning, natural language processing, and social network analysis. Their overview highlights
that while current models achieve high accuracy, challenges remain in generalizing across
domains, detecting multimodal misinformation (text, images, videos), and adapting to rapidly
changing deception strategies. The paper emphasizes the need for more robust, explainable,
and scalable detection systems to effectively address the growing complexity of fake news in
digital environments.
X. Wang et al. [5] conclude that integrating Large Language Models (LLMs) into fake
news detection significantly enhances both accuracy and interpretability. Their study shows
that LLMs can not only classify misinformation effectively but also provide human-readable
explanations, improving trust and transparency. The authors emphasize that explainability is
essential for real-world deployment, especially in sensitive domains. They also note that
future work should focus on reducing model biases, improving computational efficiency, and
refining explanation quality for diverse news formats.
S. Alnabhan et al. [6] conclude that deep learning models—particularly CNNs,
RNNs, and transformer-based architectures—significantly improve fake news detection
accuracy compared to traditional machine learning approaches. Their study highlights that
deep learning excels at capturing semantic, contextual, and linguistic patterns in news
content. However, the authors also note challenges such as the need for large, high-quality
datasets, high computational cost, and limited model interpretability. They suggest that future
research should focus on hybrid models, multimodal data, and explainable deep learning
techniques to enhance reliability and practical deployment.
S. Akter et al. [7] conclude that advanced AI techniques, including predictive
modeling and temporal analysis, can not only detect fake news but also forecast future
misinformation trends. Their work shows that integrating deep learning, network analysis,
and time-series forecasting improves early detection and prevention. The paper emphasizes
that proactive models are essential for combating rapidly evolving fake news and
recommends further research into real-time systems and cross-platform data integration.
H. F. Villela et al. [8] conclude that although a wide range of machine learning, deep
learning, and hybrid approaches have been applied to fake news detection, no single method
consistently outperforms others across all datasets. Their systematic review highlights key
challenges such as dataset imbalance, lack of standard benchmarks, and limited generalization
to real-world scenarios. They stress the need for more diverse datasets, explainable models,
and unified evaluation frameworks to advance the field.
5. Proposed Methodology
The proposed methodology integrates ML, DL, NLP, stance detection, propagation
modeling, and XAI tools to build a robust and interpretable fake news detection system.
5.1 Techniques / Tools
• Machine Learning: SVM, Logistic Regression, Random Forest
• Deep Learning: LSTM, CNN, Transformers (BERT, RoBERTa)
• NLP Tools: Tokenization, Lemmatization, TF-IDF, Word2Vec
• XAI Tools: LIME, SHAP, Attention Visualization
• Programming Language: Python
• Libraries: TensorFlow, PyTorch, Scikit-Learn, NLTK, Hugging Face Transformers
• Database: MongoDB / MySQL
• Environment: Jupyter Notebook / Google Colab
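As a concrete illustration of the ML baseline listed above, the sketch below wires a TF-IDF vectorizer into a Logistic Regression classifier with scikit-learn. The four in-line headlines and their labels are invented for the example; a real experiment would load LIAR, FakeNewsNet, or ISOT instead.

```python
# Minimal ML baseline sketch: TF-IDF features + Logistic Regression.
# The tiny in-line dataset is illustrative only; real experiments would
# load a benchmark corpus such as LIAR, FakeNewsNet, or ISOT.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Government announces new public health guidelines",
    "Miracle cure discovered doctors hate this trick",
    "Stock markets close higher after policy review",
    "Celebrity secretly controls world banking system",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

# Unigram + bigram TF-IDF feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

pred = model.predict(["Doctors hate this miracle trick"])
print(pred[0])
```

The same pipeline object can be swapped to SVM or Random Forest by replacing the final estimator, which is why linear baselines of this shape are a common starting point before moving to LSTM or transformer models.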
5.2 Workflow / Approach
1. Data Collection
• Use benchmark datasets such as LIAR, FakeNewsNet, and ISOT.
2. Data Preprocessing
• Text cleaning and stopword removal
• Tokenization and lemmatization
• Embedding creation (TF-IDF, Word2Vec, BERT)
3. Model Development
• Train ML models as baselines
• Train DL models: LSTM, Bi-LSTM, or transformer-based models
• Integrate stance detection and propagation modeling
4. Explainability Integration
• Apply LIME/SHAP for feature-level explanations
• Use attention heatmaps
• Generate LLM-based natural-language justifications
5. Testing & Validation
• 80/20 train-test split
• Cross-validation
6. Evaluation Metrics
• Accuracy
• Precision, Recall, F1-score
6. Experiment Design
1. Experiments:
• Compare ML vs. DL performance
• Evaluate the impact of stance/propagation features
• Analyze the quality of XAI explanations
• Assess user satisfaction with LLM-based explanations
2. Parameters to Vary:
• Embedding type (TF-IDF, Word2Vec, BERT)
• Model architecture depth
• Learning rate, batch size, epochs
3. Analysis Approach:
• Plot accuracy curves
• Compare confusion matrices
• Conduct a user study on explanation clarity
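The planned comparison of confusion matrices can be sketched as follows; the two prediction vectors are made-up stand-ins for an ML baseline and a DL model, chosen only to show how scikit-learn reports the counts and summary metrics side by side.

```python
# Illustrative analysis step: confusion matrices and summary metrics for
# two hypothetical models (labels and predictions invented for the sketch).
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

y_true    = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = fake, 0 = real
y_pred_ml = [1, 0, 0, 1, 0, 1, 1, 0]   # baseline ML predictions (toy)
y_pred_dl = [1, 0, 1, 1, 0, 0, 1, 1]   # DL model predictions (toy)

results = {}
for name, y_pred in [("ML", y_pred_ml), ("DL", y_pred_dl)]:
    # ravel() on a 2x2 matrix yields (tn, fp, fn, tp) for labels [0, 1].
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary"
    )
    results[name] = (tp, fp, fn, tn, prec, rec, f1)
    print(f"{name}: TP={tp} FP={fp} FN={fn} TN={tn} "
          f"P={prec:.2f} R={rec:.2f} F1={f1:.2f}")
```

Reporting TP/FP/FN/TN alongside precision, recall, and F1 makes it visible whether a model's gains come from catching more fake items (higher recall) or from fewer false alarms (higher precision), which matters when comparing ML and DL variants.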
7. Expected Outcomes
The proposed work is expected to produce:
• A fully functional AI system that detects fake news with high accuracy.
• An integrated XAI module providing clear, understandable explanations.
• Improved performance through stance and propagation modeling.
• A dashboard/report visualizing results and explanations.
• A contribution to trustworthy and transparent misinformation detection systems.
These outcomes are measurable and align with the research objectives.
8. References
[1] M. Desamsetti et al., “Fake News Detection Using AI Techniques,” 2023.
[2] H. Rathod et al., “Building an Explainable and Scalable AI System for Fake News
Detection,” 2025.
[3] A. Ahmed et al., “Detecting Fake News Using Machine Learning,” 2023.
[4] B. Hu et al., “An Overview of Fake News Detection,” 2024.
[5] X. Wang et al., “Explainable Fake News Detection Using LLMs,” 2024.
[6] S. Alnabhan et al., “Fake News Detection Using Deep Learning,” 2023.
[7] S. Akter et al., “Advanced Detection and Forecasting of Fake News,” 2025.
[8] H. F. Villela et al., “Fake News Detection: Systematic Review,” 2023.