AI-Powered Fake News Detector System

The document presents a project on an AI-powered fake news detection and summarization system developed by students for their Bachelor of Technology degree. The system utilizes real-time data from trusted sources and employs a Large Language Model to evaluate news statements, providing verdicts, reasoning, and summaries. It aims to address the limitations of traditional fake news detection methods by offering a modular, scalable, and explainable solution.


Synopsis

On

AI POWERED FAKE NEWS DETECTOR
AND
SUMMARIZER
Submitted in partial fulfillment of the requirements
for the award of the degree of

Bachelor of Technology
In

Computer Science & Engineering


Submitted By
ANMOL KUMAR 2200680100059
ASIT UPADHYAY 2200680100086
MUSKAN JAIN 2200680100217
YASH GUPTA 2200680100379
Under the guidance of
Dr. Satendra Kumar

Department of Computer Science & Engineering


Meerut Institute of Engineering & Technology, Meerut
TABLE OF CONTENTS

TOPIC

1. Introduction
2. Literature Review/Survey
3. Identification of Research Problem/Objectives
4. Expected Impact on Academics/Industry
5. Research Methodology
6. References
1. Introduction

The unchecked spread of fake news across digital platforms poses a significant threat to public
awareness, safety, and trust in media. With the increasing use of social media, fake news travels
faster than ever before, often leading to confusion, misinformation, and in some cases, real-
world harm. Manual fact-checking is slow, limited, and insufficient in handling the volume
and speed of today’s information flow.

Limitations of traditional models: Conventional fake news detection systems rely on pre-
trained machine learning models that must be retrained periodically to stay relevant. This
approach is not only time-consuming and resource-intensive but also fails to respond to newly
emerging news in real time. Even if models are retrained at fixed intervals, there remains a
significant lag in detecting fresh misinformation.

Our solution: This project introduces a dynamic, AI-powered fake news detection system that
eliminates the need for traditional model training. Instead of relying on static datasets, it gathers
real-time, factual content from trusted sources on the internet using the Tavily API. The system
then leverages a Large Language Model (LLM) like OpenAI’s GPT-4 to compare the user’s
news statement against the gathered evidence. Based on this intelligent evaluation, the system
returns a final decision.

The output includes a clear verdict (Fake or Valid), a natural language explanation, a 60-
word summary, and a list of reliable source links used in the verification process. This
enables users not only to receive accurate results but also to verify the reasoning and references
behind them.
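The structured output described above can be sketched as a simple data type. The names below are illustrative, not taken from the project's actual code:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationResult:
    """Structured output described above: a verdict, human-readable
    reasoning, a short summary, and the source links used as evidence."""
    verdict: str    # "Fake" or "Valid"
    reasoning: str  # natural-language explanation of the decision
    summary: str    # ~60-word summary of the checked statement
    sources: List[str] = field(default_factory=list)  # reference URLs

# Example instance (hypothetical values)
result = VerificationResult(
    verdict="Valid",
    reasoning="The claim matches reports from multiple trusted outlets.",
    summary="A roughly 60-word recap of the statement would go here.",
    sources=["https://example.com/article"],
)
```

Returning all four fields together is what lets the frontend render the verdict alongside its justification and references.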

The system uses a modular, agent-based architecture powered by UAgent, where intelligent
agents manage input handling, query validation, and response generation. The frontend is built
in React, providing an intuitive interface, while the backend is entirely agent-driven, ensuring
scalability, modularity, and ease of deployment. The complete system is deployed using
AgentVerse, a cloud platform designed to host and run autonomous agents in real-time.
2. Literature Review/Survey

1. Shu et al. (2017)


Introduced a data mining perspective on fake news detection using linguistic patterns
and user engagement behaviour. Transformer models such as RoBERTa later achieved
significant accuracy in identifying misinformation from textual features.
Result: Language- and context-aware patterns are crucial for reliable detection.

2. Ahmed et al. (2017)


Developed a hybrid classifier to support multilingual fake news detection. It
combined content-based and metadata-based classifiers to improve accuracy across
different languages.
Result: Highlighted the importance of language flexibility and regional context in
fake news identification.

3. Chen et al. (2020)


Demonstrated the use of large language models such as GPT for interpretability and
reasoning in classification systems. It used prompt-based queries to extract
explanation-style results from LLMs.
Result: GPT models can enhance user trust by justifying their predictions in natural
language.

4. Zhou et al. (2020)


Proposed a hierarchical attention network to detect fake news by capturing both
sentence-level and document-level semantics. The model achieved strong
performance on multi-class fake news datasets by leveraging contextual embeddings.
Result: Hierarchical language understanding improves deep classification
performance for complex narratives.

5. Monti et al. (2019)


Introduced a graph-based approach where news articles, users, and user comments
were modeled as nodes in a relational graph. GCN (Graph Convolutional Networks)
were applied to classify news authenticity based on propagation patterns.
Result: Social engagement structure and user interaction data significantly enhance
fake news detection.

6. Kaliyar et al. (2021)


Developed a fake news detection model named FNDNet using deep convolutional
networks trained solely on textual features. The model was designed for low-latency
predictions on online news content.
Result: Lightweight CNNs can deliver fast and reliable detection performance with
minimal computational resources.
3. Identification of Research Problem/Objectives

Identification of Research Problem

With increasing reliance on online platforms for news and updates, misinformation has
become a prominent issue. Fake news spreads rapidly and has the potential to mislead
the public, damage reputations, and incite social unrest. Traditional manual verification
systems are not scalable and often lack transparency.
Furthermore, most existing fake news detection models are either limited to
classification only or rely on complex backend architectures that are difficult to scale
or modularize. These systems also rarely explain their verdict or summarize the content
for better user understanding.

This project seeks to address these challenges by building a system that:

• Detects fake or valid news statements automatically
• Justifies its decision through human-readable reasoning
• Generates a short, informative summary
• Uses a fully agent-based architecture for modularity and scalability
• Runs entirely through UAgent agents

Objectives of the Study

• To understand agent-based AI architecture using the UAgent framework.
• To build a fake news detection model capable of returning a verdict, reasoning, and summary.
• To design an intuitive web interface using React JS.
• To integrate modular agents such as REST agents and logic agents.
• To deploy the entire system on AgentVerse for accessibility.
• To explore future support for multilingual inputs and voice commands.
4. Expected Impact on Academics/Industry

Academic Impact

This project provides a deep insight into modern approaches to misinformation detection using
agent-based systems. It highlights how AI can be made modular and explainable through
intelligent agents.

The academic significance includes:

• Demonstrates the use of UAgent-based intelligent agent architecture.
• Offers hands-on application of Natural Language Processing (NLP) techniques.
• Introduces Explainable AI (XAI) through reasoning-based outputs.
• Encourages modular and scalable design thinking in AI systems.
• Enhances understanding of agent communication and coordination.
• Reinforces frontend-backend integration skills using modern frameworks.
• Serves as a real-world example for misinformation detection research.
• Promotes interdisciplinary learning across AI, UX, and social computing.
• Demonstrates integration of real-time web search and LLM validation in agent-based systems.

This project can also be extended into advanced academic research areas such as deep
learning-based misinformation detection, context-aware content analysis, and
deployment of intelligent agents in mobile or low-resource environments for real-time fake
news filtering.

Industry Impact

From an industry perspective, this project contributes to the growing demand for content
moderation tools. Media houses, government fact-checkers, and tech platforms can benefit
from:

• Automated content verification at scale
• Transparent decisions with explainable reasoning
• Modular deployment without maintaining full-scale server backends
• Easy extensibility into real-time systems like messaging apps and browser plugins

By eliminating the dependency on heavy infrastructure, this agent-based solution opens doors
to lightweight deployment, especially in resource-constrained environments.
5. Research Methodology

1. Frontend Design (React JS)


A clean and responsive web interface is developed using React JS. It includes components for
user input, a “Check News” button, and sections to display the verdict (Fake/Valid), reasoning,
and a 60-word summary. The UI is designed to be user-friendly and accessible.

2. Input Handling via REST Agent


A REST Agent is implemented using the UAgent framework. It acts as the communication
bridge between the frontend and internal logic. It receives the user’s input from the interface
and passes it to the processing agent for analysis.
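The bridging role described above can be illustrated with a plain-Python stand-in. This is not the actual UAgent REST handler API; the function name and payload shape are assumptions made for the sketch:

```python
from typing import Callable, Dict

def rest_agent(payload: Dict, process: Callable[[str], Dict]) -> Dict:
    """Stand-in for the REST Agent: validate the frontend's input,
    delegate the statement to the processing agent, and relay its reply."""
    statement = (payload.get("statement") or "").strip()
    if not statement:
        # Reject empty input before it reaches the processing agent
        return {"error": "empty statement"}
    return process(statement)

# Demonstration with a stubbed processing agent
reply = rest_agent(
    {"statement": "The moon landing happened in 1969."},
    process=lambda s: {"verdict": "Valid", "input": s},
)
```

Keeping the REST agent free of detection logic is what makes the bridge reusable: it only validates and forwards.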

3. Fake News Detection Agent


The core of the project lies in the FakeNewsDetector agent, which manages all core processing
tasks. Upon receiving the user’s query, it first uses the Tavily API to perform a live web search
and fetch trusted, relevant content. Then, using the OpenAI LLM (GPT-4), it validates the
news statement against the retrieved information. Based on this comparison, the agent
generates:
• A classification verdict (Fake or Valid)
• A short reasoning explaining the decision
• A 60-word summary of the input
• A list of the sources used during verification
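A minimal sketch of this search-then-evaluate flow, with the Tavily and GPT-4 calls replaced by injected stand-in functions so the orchestration logic is visible on its own (all names here are illustrative):

```python
from typing import Callable, Dict, List

def detect_fake_news(statement: str,
                     search: Callable[[str], List[Dict]],
                     llm: Callable[[str], str]) -> Dict:
    """Sketch of the FakeNewsDetector flow: (1) live web search for
    evidence, (2) LLM comparison of the statement against that evidence,
    (3) structured result. `search` and `llm` stand in for the Tavily
    and GPT-4 calls."""
    evidence = search(statement)                          # step 1: gather evidence
    context = "\n".join(e["content"] for e in evidence)
    prompt = (f"Statement: {statement}\nEvidence:\n{context}\n"
              "Answer 'Fake' or 'Valid' with a one-line reason.")
    answer = llm(prompt)                                  # step 2: LLM evaluation
    verdict = "Valid" if answer.startswith("Valid") else "Fake"
    return {"verdict": verdict,                           # step 3: structured output
            "reasoning": answer,
            "sources": [e["url"] for e in evidence]}

# Demonstration with stubbed search and LLM
fake_search = lambda q: [{"url": "https://example.com/a",
                          "content": "Report confirms the claim."}]
fake_llm = lambda p: "Valid: evidence from trusted reports supports the statement."
out = detect_fake_news("Example claim", fake_search, fake_llm)
```

Injecting the search and LLM clients also makes the agent testable offline, since both external services can be mocked.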

4. Agent-Based Architecture

The system is designed using UAgent’s modular, asynchronous agent-based architecture. Each
agent functions independently and communicates using defined protocols. This structure
ensures scalability and flexibility for future enhancements like multilingual support or plugin
integration.
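As an illustration of "defined protocols", inter-agent messages can be modelled as typed request/response objects. The real UAgent framework uses its own Model classes, so the dataclasses and names below are only a sketch of the idea:

```python
from dataclasses import dataclass

@dataclass
class CheckRequest:
    """Message the REST agent sends to the detector agent."""
    statement: str  # the user's claim to verify

@dataclass
class CheckResponse:
    """Message the detector agent sends back."""
    verdict: str    # "Fake" or "Valid"
    reasoning: str
    summary: str

def handle(msg: CheckRequest) -> CheckResponse:
    """Each agent registers a handler for the message types it accepts;
    typed messages are what keep the agents independently replaceable."""
    return CheckResponse(verdict="Valid",
                         reasoning="stubbed reasoning",
                         summary="stubbed summary")

resp = handle(CheckRequest(statement="example claim"))
```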

5. Response Handling and Output Display


The output from the FakeNewsDetector agent (verdict, reasoning, summary, sources) is passed
back to the REST agent, which forwards it to the frontend. The results are displayed in four
structured blocks: verdict, reasoning, summary, and source URLs, allowing users to cross-check
the information against trusted references.
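The four display blocks correspond to a single JSON payload. A plausible (assumed, not the project's actual) shape, serialized as it would travel over HTTP:

```python
import json

# Assumed response shape for the four frontend blocks
response = {
    "verdict": "Fake",
    "reasoning": "Retrieved coverage from trusted outlets contradicts the claim.",
    "summary": "A roughly 60-word recap of the checked statement would go here.",
    "sources": ["https://example.com/factcheck", "https://example.com/news"],
}

payload = json.dumps(response)   # what the REST agent sends to the frontend
decoded = json.loads(payload)    # what the React app parses and renders
```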

6. Tools & Technologies Used


• Frontend: React JS, JavaScript, HTML, CSS
• Backend/AI Logic: UAgent Framework, REST Agent, FakeNewsDetector Agent
• Communication: JSON over HTTP (REST architecture)
• Deployment Platform: AgentVerse for scalable agent hosting
• Version Control: Git and GitHub for team collaboration
• Search: Tavily API for contextual web search
• LLM: OpenAI GPT-4 API for reasoning and summarization
• Testing: Postman for local API testing and debugging
Figure: Agent-Based Processing Flow of the Proposed System
6. References
[1]. Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, “Fake News
Detection on Social Media: A Data Mining Perspective,” ACM SIGKDD Explorations
Newsletter, Vol. 19, Issue 1, 2017.
[Link]/10.1145/3137597.3137600
(Keywords: “Fake news, Social media, Text classification, Data mining, NLP,
Disinformation, Trustworthiness.”)

[2]. Hossain Ahmed, Mohammad Tahmid Hossain, and Md. Samiul Haque, “Detecting
Fake News Using Machine Learning Techniques,” Proceedings of the 2017 International
Conference on Innovations in Science, Engineering and Technology (ICISET),
Chittagong, Bangladesh, 2017.
[Link]/10.1109/ICISET.2017.8264682
(Keywords: “Fake news detection, Machine learning, News classification, Natural language
processing, TF-IDF, Naive Bayes.”)

[3]. Yizhong Chen, Jian Guan, Zihan Liu, and Minlie Huang, “Generating Reasonable
and Diverse Explanations for Fake News Detection,” 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP), Online, 2020.
[Link]/10.18653/v1/[Link]-main.584
(Keywords: “Explainable AI, Reason generation, Fake news, Neural networks, Natural
language understanding, Text generation.”)

[4]. UAgent Development Team, “UAgent Framework: Building Autonomous Intelligent
Agents,” Signals Dev, 2024.
[Link]/signals-dev/UAgent (accessed July 8, 2025)
(Keywords: “UAgent, Intelligent agents, Modular backend, REST agents, Agent
architecture, Python, AI frameworks.”)

[5]. OpenAI Communications Team, “GPT-4 API Documentation,” OpenAI Developer
Docs, 2024.
[Link]/docs (accessed July 8, 2025)
(Keywords: “GPT-4, Prompt engineering, Natural language processing, Reasoning,
Summarization, Language models.”)

[6]. ReactJS Core Team, “React: A JavaScript Library for Building User Interfaces,” Meta
Platforms Inc., 2024.
[Link] (accessed July 8, 2025)
(Keywords: “React JS, Component-based UI, JavaScript, Frontend development, Web
apps.”)

[7]. Tavily Technologies Documentation Team, “Tavily API for Web Search,” Tavily AI
Docs, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “Tavily, Real-time web search, API integration, AI data retrieval, Search
indexing.”)
[8]. Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, and V. Stoyanov, “RoBERTa: A
Robustly Optimized BERT Pretraining Approach,” Facebook AI Research (FAIR), 2019.
[Link]/abs/1907.11692
(Keywords: “RoBERTa, NLP, Pretraining, Transformers, Text classification.”)

[9]. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, “BERT: Pre-
training of Deep Bidirectional Transformers for Language Understanding,” Google AI
Language, 2019.
[Link]/abs/1810.04805
(Keywords: “BERT, Transformers, Text understanding, NLP, Transfer learning.”)

[10]. Hugging Face Team, “Transformers Library for NLP,” Hugging Face
Documentation, 2024.
[Link]/docs/transformers (accessed July 8, 2025)
(Keywords: “Transformers, BERT, RoBERTa, T5, NLP models, HuggingFace.”)

[11]. Railway Engineering Team, “Railway: Infrastructure for Developers,” Railway App
Docs, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “Railway, Backend hosting, Environment deployment, Agent execution.”)

[12]. AgentVerse Development Team, “Deploying Agents using AgentVerse,”
[Link] Documentation, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “AgentVerse, Agent hosting, Agent deployment, UAgent cloud integration.”)

[13]. Poetry Development Team, “Dependency Management with Poetry,”
python-[Link] Documentation, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “Poetry, Virtual environments, Python dependencies, Package management.”)

[14]. Axios Contributors, “Axios: Promise-based HTTP Client for the Browser,” Axios
Documentation, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “Axios, HTTP client, REST communication, API requests, JavaScript.”)

[15]. Git Special Interest Group, “Version Control with Git,” [Link] and GitHub
guides, 2024.
[Link]/[Link] (accessed July 8, 2025)
(Keywords: “Git, GitHub, Version control, Code collaboration, Repositories.”)

[16]. Microsoft Corporation, “Visual Studio Code Editor,” [Link]
Documentation, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “VS Code, IDE, JavaScript editor, Python development, Extensions.”)

[17]. Natural Language Toolkit Development Team, “NLTK Python Library,” [Link]
Documentation, 2024.
[Link] (accessed July 8, 2025)
(Keywords: “Tokenization, Lemmatization, Stopword removal, Text preprocessing, NLP.”)
