
PRAGATI ENGINEERING COLLEGE

(Approved by AICTE & Permanently Affiliated to JNTUK & Accredited by NBA and NAAC)
1-378, ADB Road, Surampalem, Kakinada Dist., A.P, Pin-533437.

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


A Major Project Work Presentation on
NLP – DRIVEN VIRTUAL EDUCATOR FOR SMART TEACHING

BY

SUNKARA SATHISH (21A31A05J1)


REPAKA M V S D K ANJALI (21A31A05F3)
MUPPANA ANAND KUMAR (21A31A05H9)
TALASILA KOWSHIK RAM (21A31A05J2)
BANDARU LAKSHMI VENKATA SANDEEP (21A31A05G6)

Guided by: Mrs. D. Kanaka Mahalakshmi Devi

Assistant Professor, CSE Department

A.Y : 2024-2025
ABSTRACT

The NLP-based Teaching Assistant is a cutting-edge system that utilises natural language
processing and deep learning to deliver intelligent, context-aware responses to user queries.
Understanding and retrieving information from multiple document formats (PDF, DOCX, TXT)
presents a significant challenge in document-driven learning environments. As the demand for
AI-driven education tools grows, this assistant offers an interactive, real-time solution for
personalised learning. This project explores and experiments with diverse methods for
information retrieval and conversational interaction. The ConversationalRetrievalChain and
FAISS are employed for efficient document retrieval, while Replicate’s Llama 13B generates
accurate, human-like responses. The proposed system enhances user experience with
conversational memory and semantic search, ensuring seamless information flow and achieving
high precision in multi-document querying.
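
As one way such a pipeline can be wired together, a minimal LangChain sketch is shown below; the embedding model, sample texts and Replicate model identifier are placeholders rather than the project's exact configuration.

```python
# Minimal sketch (assumptions, not the project's exact code): FAISS semantic
# search + conversational memory + a Replicate-hosted Llama model via LangChain.
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import Replicate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Small in-memory index standing in for the parsed PDF/DOCX/TXT content.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_texts(
    ["Binary search runs in O(log n) time on a sorted array.",
     "Merge sort divides the array, sorts the halves and merges them."],
    embeddings,
)

llm = Replicate(model="meta/llama-2-13b-chat")  # placeholder model identifier
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

qa_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(search_kwargs={"k": 2}),
    memory=memory,
)

print(qa_chain({"question": "How fast is binary search?"})["answer"])
```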
MOTIVATION
The motivation for the project lies in the growing demand for intelligent, NLP-based virtual assistants
that can provide personalized, real-time learning support. As the volume of digital content continues
to grow, users face challenges in efficiently accessing relevant information from unstructured
documents.
• Enhancing Learning Efficiency: Delivering instant, accurate responses reduces time spent searching through lengthy documents and improves understanding.
• Facilitating Personalized Learning: Interactive, context-aware responses create a customised learning experience tailored to the user's queries.
• Bridging the Gap Between Documents and Users: Traditional search methods often fail to deliver precise information from complex documents. This assistant bridges that gap with semantic search capabilities.
EXISTING SYSTEM
1. Existing systems for document-based question-answering often utilize transformer-based language models like BERT and GPT. Pre-trained models are widely used for generating responses and improving accuracy.
2. Diverse datasets for training and testing are essential for developing reliable systems. Some advanced solutions incorporate semantic search, vector embeddings, and conversational memory to enhance response quality.
3. Open-source libraries and frameworks like Hugging Face, TensorFlow, and PyTorch are commonly used in these projects.
4. Real-time document retrieval can be achieved by deploying the model within a multi-document processing pipeline. Continuous research and development are crucial to handle evolving NLP challenges.
PROPOSED SYSTEM
In the proposed system for “NLP-based Teaching Assistant for Multi-Document Conversational
Interaction,” we aim to overcome the limitations of existing systems.
By combining advanced transformer models with semantic search and conversational memory, the
system ensures more accurate and context-aware responses to user queries.
Through multi-document support and extensive use of vector embeddings, the system can efficiently
retrieve and summarize information, improving response relevance and user experience.
The system prioritizes user personalization and response context, addressing challenges in
fragmented interactions and ensuring seamless knowledge retrieval.
Advantages:
1. Accurate and Contextual Responses
2. Multi-Document Support
3. Enhanced User Personalization
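
As a rough sketch of the multi-document ingestion this relies on (the loaders, file names and chunk sizes below are illustrative assumptions, not the project's exact settings):

```python
# Sketch of multi-format document ingestion into a FAISS index.
from langchain.document_loaders import PyPDFLoader, Docx2txtLoader, TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

loaders = [
    PyPDFLoader("notes/unit1.pdf"),      # hypothetical file paths
    Docx2txtLoader("notes/unit2.docx"),
    TextLoader("notes/unit3.txt"),
]
documents = []
for loader in loaders:
    documents.extend(loader.load())

# Split long documents into overlapping chunks so semantic search stays precise.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(documents)

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectorstore = FAISS.from_documents(chunks, embeddings)
vectorstore.save_local("faiss_index")
```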
TECHNICAL SPECIFICATIONS

HARDWARE REQUIREMENTS
System: AMD Ryzen 5 5500U with Radeon Graphics
Hard Disk: 512 GB SSD
RAM: 16 GB

SOFTWARE REQUIREMENTS
Operating System: Windows 11
Coding Language: Python 3.9.12
Software Tool: VS Code 22


NLP ARCHITECTURE
DESIGN MODEL
• Algorithm: GPT (Generative Pre-trained Transformer)
• Input: PDF files or documents
• Output: Natural language response (summarization)

• Collect and preprocess student queries using NLP techniques (a minimal sketch follows this list).
• Train the GPT-based model for generating accurate answers.
• Implement a multi-modal response system for text and PDF interaction.
• Evaluate and refine the assistant's performance for better accuracy and adaptability.
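
As a hedged illustration of the query-handling step, reusing the qa_chain built in the earlier sketch; the cleaning rules and example query are illustrative assumptions, not the project's actual preprocessing.

```python
# Illustrative sketch: light preprocessing of a student query before it is
# passed to the retrieval chain for a summarisation-style answer.
import re

def preprocess_query(raw_query: str) -> str:
    """Normalise whitespace in a student query."""
    query = raw_query.strip()
    query = re.sub(r"\s+", " ", query)  # collapse repeated whitespace
    return query

question = preprocess_query("  Summarise   unit 3 :  sorting algorithms  ")
result = qa_chain({"question": question})  # qa_chain from the earlier sketch
print(result["answer"])
```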
Use-Case Diagram
Class Diagram
Sequence Diagram
Activity Diagram
Relational Diagram
MODULES

Module 1:
• Executing the source code on a local server:

In the above screen, click the 'Submit' button to upload a document.

Module 2:

• User interface:
Browse the files to be uploaded.
Module 3:

• Uploading PDF or DOC files:

In this module, files are uploaded from local storage.
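
Below is a hypothetical sketch of the browse-and-upload flow in Modules 2 and 3, assuming a Streamlit front end; the widget labels and libraries are assumptions, not the project's actual UI code.

```python
# Hypothetical sketch of the upload step, assuming a Streamlit front end.
import streamlit as st
from PyPDF2 import PdfReader
from docx import Document

st.title("NLP-Driven Virtual Educator")
files = st.file_uploader(
    "Upload PDF, DOCX or TXT files",
    type=["pdf", "docx", "txt"],
    accept_multiple_files=True,
)

raw_text = ""
for f in files or []:
    name = f.name.lower()
    if name.endswith(".pdf"):
        raw_text += "".join(page.extract_text() or "" for page in PdfReader(f).pages)
    elif name.endswith(".docx"):
        raw_text += "\n".join(p.text for p in Document(f).paragraphs)
    else:  # plain text
        raw_text += f.read().decode("utf-8", errors="ignore")

if st.button("Submit") and raw_text:
    st.success("Documents processed and ready for questions.")
```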
Module 4:

• Prompting in natural language about the PDF:

In this module, a document is selected as input and a natural-language question about its contents is entered.
Module 5:
• Conversational chat and memorization:
Past responses are remembered so the assistant can carry on a logically connected conversation.
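
A small illustration of how this conversational memory lets follow-up questions build on earlier answers, reusing the qa_chain from the earlier sketch; the questions themselves are made up.

```python
# Illustrative: follow-up questions resolve against the stored chat history,
# so "its" in the second question refers to the topic of the first.
first = qa_chain({"question": "What is binary search?"})
print(first["answer"])

follow_up = qa_chain({"question": "What is its worst-case time complexity?"})
print(follow_up["answer"])
```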
Module 6:
• Real-time search on websites, dynamically:
The app can also fetch data from live websites in real time.
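
One plausible way to implement this, sketched under the assumption that pages are fetched with requests and parsed with BeautifulSoup; the project may use a different mechanism.

```python
# Hedged sketch: fetch a live web page, extract its visible text and add it
# to the FAISS index so it can be queried like an uploaded document.
import requests
from bs4 import BeautifulSoup
from langchain.text_splitter import RecursiveCharacterTextSplitter

def fetch_page_text(url: str) -> str:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return soup.get_text(separator=" ", strip=True)

page_text = fetch_page_text("https://example.com/article")  # placeholder URL
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
vectorstore.add_texts(splitter.split_text(page_text))  # vectorstore from the earlier sketch
```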
Deployment Options

Features
• Customization
• Screen recording
• Themes: dark theme and light theme

CONCLUSION

• In this project, an NLP-based Teaching Assistant was developed to improve the teaching-learning process
through natural language understanding and response generation. Various NLP techniques and deep
learning models were employed for tasks such as intent recognition, question answering, and
personalized recommendations.
• The system was evaluated using different test parameters like response accuracy, user query handling
capacity, and processing time. The proposed model achieved high performance with an accuracy of
98.74% in intent detection and response generation.
• This project significantly contributes to enhancing classroom experiences by offering real-time assistance to students and teachers. Future work can extend this system by incorporating more advanced transformers, multi-language support, and speech-to-text capabilities to create a fully interactive teaching assistant.
REFERENCES
• Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008.
• Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 4171–4186.
• Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI preprint.
• Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., et al. (2020). Language models are few-shot learners. arXiv preprint arXiv:2005.14165.
• Zhou, L., Gao, J., Li, D., & Shum, H. Y. (2020). The design and implementation of XiaoIce, an empathetic social chatbot. Computational Linguistics, 46(1), 53–93.
THANK YOU
