SecA Group18 Report GREENGUARDIAN
Department of Computer Science and Engineering
CERTIFICATE
The project titled "Green Guardian: An AI-Based Intelligent Environmental
Monitoring and Advisory System" is aimed at developing a smart solution to monitor
environmental parameters and provide actionable insights using Artificial Intelligence.
The system integrates sensor-based data collection, intelligent analysis, and real-time
recommendations to promote eco-friendly
practices and raise awareness about environmental sustainability. This project was
undertaken as a part of the partial fulfillment for the degree of Bachelor of Technology
(B.Tech.) in Computer Science and Engineering at Shri Ramdeobaba College of
Engineering and Management, Nagpur, during the academic year 2024–2025. The
project emphasizes the practical application of AI and IoT technologies in addressing
real-world environmental challenges, and reflects the team's commitment to creating
innovative and impactful solutions for a greener future.
Date:
Place: Nagpur
_______________ ________________
Prof. Sushmit Saantra Dr. Preeti Voditel
Project Guide H.O.D
Department of CSE & ET Department of CSE & ET
________________
Dr. M. B. Chandak
Principal
APPROVAL SHEET
This report entitled “An Intelligent System for Plant Disease Detection and Cure Assistance
using Artificial Intelligence" by Gunjan Rathi, Aditya Singh, Ayam Bharadwaj, Ayush
Ramteke, Sujal Kothari is approved for the degree of Bachelor of Technology (B.Tech.).
Date:
Place: Nagpur
_______________ ________________
Prof. Sushmit Saantra External Examiner
Project Guide
_______________
Dr. Preeti Voditel
H.O.D
Department of CSE & ET
ACKNOWLEDGEMENTS
We would like to express our deepest gratitude to Prof. Sushmit Saantra, our
project guide, for her invaluable guidance, constructive feedback, and unwavering
support throughout the development of our project, "An Intelligent System for
Plant Disease Detection and Cure Assistance using Artificial Intelligence." Her
expertise and mentorship have been essential in bringing this project to fruition.
We are also grateful to our team members — Gunjan Rathi, Aditya Singh, Ayam
Bharadwaj, Ayush Ramteke, and Sujal Kothari — for their dedication,
collaboration, and hard work. The success of this project is a reflection of our
shared vision, teamwork, and the countless hours spent brainstorming, designing,
and implementing solutions together. Each team member brought unique strengths
and perspectives that enriched the overall outcome.
We would also like to acknowledge the support and resources provided by our
institution, which created a conducive environment for learning and innovation.
The tools, facilities, and guidance from our faculty members were invaluable in
completing this project.
Date:
-Projectees
Gunjan Rathi
Aditya Singh
Ayam Bharadwaj
Ayush Ramteke
Sujal Kothari
ABSTRACT
Agriculture plays a vital role in sustaining the economy and feeding the growing
global population. However, one of the major challenges faced by farmers is the
timely detection and management of plant diseases, which can significantly
reduce crop yield and quality. To address this issue, our project, “Green
Guardian: An AI-Based Intelligent System for Plant Disease Detection and Cure”,
presents a smart solution that leverages the power of Artificial Intelligence and
Machine Learning for accurate diagnosis and effective treatment
recommendations.
This project aims to assist farmers and agricultural professionals in early disease
detection, thereby reducing crop losses, minimizing pesticide use, and promoting
sustainable farming practices. The solution is designed to be user-friendly,
accessible via mobile or web platforms, and adaptable to various types of crops.
By integrating technology with agriculture, Green Guardian represents a step
forward in smart farming and precision agriculture.
TABLE OF CONTENTS
Certificate
Declaration
Approval Sheet
Acknowledgment
Abstract
List of Figures
Chapter 1: Introduction
1.2 Motivation
Chapter 2: Literature Survey
Chapter 3: Methodology
3.2 CNN Model Architecture
3.7 Summary
Chapter 4: Implementation
Chapter 5: Discussions
Chapter 6: Conclusion and Future Scope
References
CHAPTER 1
INTRODUCTION
Plant diseases have always been a major concern in agriculture, directly impacting crop
production and quality. The early identification and treatment of these diseases are
critical to ensure healthy plant growth and optimal yield. However, in many regions,
especially rural and remote areas, farmers face challenges in recognizing diseases early
due to limited access to agricultural experts and lack of awareness. This often results in
the unchecked spread of infections, heavy use of pesticides, and significant crop losses.
Recent advances in Artificial Intelligence (AI), Machine Learning (ML), and Computer
Vision have opened new avenues for solving agricultural problems. Particularly, image
classification techniques using Convolutional Neural Networks (CNNs) have shown
great promise in the field of plant disease recognition. By analyzing visual symptoms on
plant leaves, such systems can identify diseases with a high degree of accuracy.
The project Green Guardian is built upon this technological foundation. It aims to create
an intelligent system that not only detects plant diseases from images but also provides
useful recommendations for treatment and prevention. This system is designed to be
easily accessible, helping farmers make informed decisions and reduce dependency on
manual methods or costly expert consultations.
The integration of AI into agriculture marks a shift toward smart and precision farming,
where technology assists in enhancing productivity, reducing waste, and ensuring the
sustainable use of resources. Green Guardian is a step in this direction, empowering
farmers with a digital assistant that bridges the gap between traditional practices and
modern technological solutions.
1.2 Motivation
In recent years, the advancement of artificial intelligence (AI) and deep learning has opened
up new possibilities for tackling real-world problems. Computer vision, a subset of AI, has
shown remarkable success in image classification and pattern recognition tasks, making it a
promising tool for plant disease detection. With the proliferation of smartphones and
affordable cameras, capturing images of diseased crops is now easier than ever. By
integrating AI with agriculture, it is possible to develop intelligent systems that can identify
plant diseases from images, diagnose them accurately, and provide actionable insights for
treatment — all within seconds.
The motivation behind this project lies in the urgent need for scalable, accurate, and
accessible solutions to support plant health monitoring. Our aim is to build a deep
learning-based system that can automatically detect plant diseases from leaf images and
suggest appropriate cures or preventive measures. Such a tool not only helps farmers act
swiftly and effectively but also reduces the reliance on chemical treatments by promoting
targeted and timely interventions.
Furthermore, early and precise disease detection contributes to reducing crop loss, improving
food security, and minimizing environmental damage. It empowers even those with minimal
agricultural training to manage plant diseases with confidence. By leveraging technology in
this way, we can bridge the gap between modern agricultural knowledge and those who need
it the most — creating a more sustainable, resilient, and informed farming ecosystem.
While the primary focus of this project is on detecting and treating plant diseases, it is important
to consider the broader implications of artificial intelligence (AI) in agriculture and food systems
— particularly its growing relevance in culinary applications. AI is increasingly transforming
how food is produced, processed, and even prepared, with significant intersections emerging
between agricultural technology and the culinary world.
In the context of this project, which involves identifying diseases in plants and suggesting
appropriate cures, the role of AI can extend beyond the farm. Once healthy crops are ensured
through early disease detection, the produce makes its way through the food chain — ultimately
landing in kitchens, restaurants, and food manufacturing units. Here, AI has the potential to
enhance culinary decision-making by optimizing ingredient use, ensuring food safety, and even
recommending recipes based on the freshness and health status of the crops.
For example, AI systems can be integrated with agricultural data to predict crop yield and
availability, which in turn influences culinary planning and food supply management.
Additionally, AI models trained on plant disease data can also assist in identifying signs of
spoilage or contamination in post-harvest crops used in food preparation, thereby ensuring
only healthy, safe ingredients make it to the consumer’s plate.
In essence, AI forms a bridge between agriculture and gastronomy. The same technologies
used to detect diseases in the field can inform decisions in the kitchen — enhancing food
quality, safety, and sustainability. This project contributes to that ecosystem by ensuring the
first and most critical step: growing healthy plants. By doing so, it supports a pipeline of AI
applications that extend from farm to fork, reinforcing the role of technology in every stage
of the food journey.
While the core of this project revolves around plant disease detection using image-based
deep learning, Natural Language Processing (NLP) adds a valuable layer of user interaction
and engagement. For
example, after the AI model identifies a disease from a plant image, NLP can be used to
interpret the results and generate clear, actionable descriptions in the user’s preferred
language. This allows users to understand the diagnosis and follow treatment
recommendations without needing technical knowledge or agricultural expertise.
Such interactions can be supported in multiple languages and dialects, making the system
more inclusive and regionally adaptable. This is particularly valuable in rural areas where
literacy levels or language barriers might otherwise limit access to technology.
NLP can also facilitate the integration of the system with agricultural knowledge bases,
allowing it to search and summarize relevant information from large datasets, manuals, or
online sources. This provides users with curated and up-to-date information without
needing to sift through complex documents.
CHAPTER 2
LITERATURE SURVEY
2.1 Introduction
Agriculture is one of the cornerstones of the Indian economy and food security. A significant
threat to crop yield and quality is plant diseases, which, if not identified early and accurately, can
lead to devastating economic losses and food shortages. Traditionally, disease identification is
performed manually by experts through visual inspection of leaf patterns and other plant parts.
This process is time-consuming, requires domain expertise, and is often subjective—leading to
inconsistent results.
Recent developments in Artificial Intelligence (AI), specifically in computer vision and machine
learning, have paved the way for automating this detection process. Leveraging the power of
Deep Convolutional Neural Networks (DCNNs), researchers can now classify diseases in plant
leaves with high accuracy, even outperforming traditional image processing methods. This
chapter explores the evolution of plant disease detection methods, focusing on traditional
techniques, the rise of deep learning, dataset challenges, and current state-of-the-art research
aligned with our model’s architecture.
Case Study 1: Plant Disease Detection Using Image Processing and Machine Learning [3]
● Limitation: handcrafted-feature pipelines are sensitive to variations in lighting and orientation.
● Relation to Our Project: Unlike handcrafted features and multi-stage pipelines in [3],
our DCNN learns hierarchical features end-to-end. This yields better generalization on
noisy, real-world images and simplifies scaling across multiple crops.
Case Study 2: On the Image-Based Detection of Tomato and Corn Leaves Diseases [1]
Case Study 3: Plant Disease Detection and Classification: A Systematic Literature Review
[4]
Case Study 4: An Advanced Deep Learning Models-Based Plant Disease Detection [2]
● Key Result: optimized lightweight models deliver near-desktop accuracy on smartphones/IoT.
● Relation to Our Project: Guides our roadmap for optimizing the custom CNN for
mobile deployment, aligning with our future goal of accessible, edge-based disease
diagnosis.
The shift from handcrafted features to DCNNs addressed limitations of traditional pipelines. In
[1], deep models with residual or depthwise separable convolutions achieved >95% accuracy.
Zhou et al. [4] confirmed CNN superiority in precision and recall, enabling subtle symptom
detection. Khan et al. [2] and Simonyan & Zisserman [7] highlight deep architectures such as
VGG for robust feature hierarchies, while He et al. [8] introduced residual connections
(ResNet) to train ultra-deep networks.
Our custom DCNN aligns with these findings, balancing depth (five convolutional blocks) and
computational feasibility for potential on-device use.
While PlantVillage (50,000+ images) remains a benchmark, realistic datasets like Kaggle’s New
Plant Diseases (38,000+ images) [5] offer diverse backgrounds and lighting. Data augmentation
(flips, rotations, zooms) from [1] enhances generalization. We further validate against Unsplash
Random Images [6] to measure false positives and refine sensitivity.
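The flip and rotation augmentations mentioned above can be illustrated on a tiny 2×2 "image" in plain Python. This is a minimal sketch of the underlying idea only; real pipelines use framework utilities (e.g., Keras image augmentation layers), and the function names here are our own.

```python
def hflip(img):
    """Horizontal flip: reverse each pixel row."""
    return [row[::-1] for row in img]

def rot90_ccw(img):
    """Rotate 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

img = [[1, 2],
       [3, 4]]
flipped = hflip(img)      # [[2, 1], [4, 3]]
rotated = rot90_ccw(img)  # [[2, 4], [1, 3]]
```

Applied with small random parameters during training, such transforms expose the network to pose and orientation variation it would otherwise never see.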
Key challenges identified across the surveyed literature include:
1. Dataset Bias & Imbalance: Lab-collected images lack environmental noise, and rare
diseases are underrepresented.
2. Environmental Noise: Wind damage or water droplets can mimic disease symptoms.
3. Model Interpretability: Black-box CNNs hinder trust; tools like Grad-CAM and LIME
are proposed [4].
4. Resource Constraints: Edge deployment requires pruning/quantization [2].
2.6 Summary and Insights
Plant disease detection has evolved from feature engineering to end-to-end DCNNs. The
surveyed works validate deep architectures, data augmentation, and the need for interpretability
and edge optimization. Our project builds on these principles with a custom CNN trained on a
diverse dataset, incorporating unknown-class handling and preparing for mobile-ready
deployment.
CHAPTER 3
METHODOLOGY
This chapter describes the end-to-end methodology of the GreenGuard Classifier, covering data
acquisition and preprocessing, CNN model design, training procedures, unknown class handling,
and integration into a Streamlit application with multilingual support.
3.1 Data Sources
To ensure a robust and generalizable model, two primary data sources were utilized.
3.1.1 Kaggle New Plant Diseases Dataset
1. Scope & Diversity: Over 38,000 high-resolution images covering eight plant species
(Apple, Corn, Potato, Tomato, Grape, Peach, Pepper, Strawberry) and their most common
diseases, plus healthy samples.
2. Label Quality: Expert‑annotated disease labels following PlantVillage taxonomy,
ensuring consistency across classes.
3. Environmental Variability: Includes images captured under diverse lighting conditions,
backgrounds, and orientations to simulate real-world scenarios.
4. Train/Validation Split: Stratified sampling maintained per-class balance with an 80/20
split, preserving representative distributions in both sets.
5. Licensing & Provenance: Publicly available under permissive Kaggle license; metadata
retained to track image origins and ensure reproducibility.
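The stratified 80/20 split described in point 4 can be sketched in plain Python. This is a minimal illustration, not the report's actual pipeline; the function name and toy data are our own assumptions.

```python
import random
from collections import defaultdict

def stratified_split(samples, train_frac=0.8, seed=42):
    """Split (path, label) pairs so every class keeps the same
    train/validation ratio, mirroring a stratified 80/20 split."""
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)

    rng = random.Random(seed)
    train, val = [], []
    for label, paths in by_class.items():
        rng.shuffle(paths)
        cut = int(len(paths) * train_frac)
        train += [(p, label) for p in paths[:cut]]
        val += [(p, label) for p in paths[cut:]]
    return train, val

# Toy example: 10 images per class -> 8 train / 2 validation each.
samples = [(f"img_{c}_{i}.jpg", c)
           for c in ("Apple_scab", "healthy") for i in range(10)]
train, val = stratified_split(samples)
```

Because the split is done per class, rare diseases keep the same 80/20 proportion as common ones, which is exactly what preserves "representative distributions in both sets."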
3.1.2 Custom Unknown Samples
1. Purpose: Teach the model to recognize out‑of‑distribution inputs by including non‑leaf
and irrelevant imagery.
2. Collection Strategy: 2,000 images sourced from public‑domain repositories (e.g.,
Unsplash, Pixabay) representing random objects, animals, urban scenes, and abstract
textures.
3. Preprocessing Consistency: Unknown samples resized, normalized, and augmented
identically to plant images to avoid statistical discrepancies.
4. Class Balance: Composed to represent ~5% of the training set, preventing the unknown
class from dominating or being underrepresented.
5. Quality Control: Manual review removed ambiguous or low‑quality images (e.g.,
blurred, extreme color casts) to maintain clear non‑plant characteristics.
Directory Organization:
dataset/
├── Apple/
│ ├── Apple_scab/
│ ├── Black_rot/
│ ├── Cedar_apple_rust/
│ └── healthy/
├── Corn/
│ ├── Cercospora_leaf_spot/
│ ├── Common_rust/
│ ├── Northern_leaf_blight/
│ └── healthy/
└── unknown/
1. Image Resizing & Normalization:
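The normalization half of this step is typically a division by 255, sketched below in NumPy. The resizing half, which the report pairs with it, would normally use an image library (e.g., PIL or Keras utilities) and is omitted here; the dummy image is our own illustration.

```python
import numpy as np

def normalize_image(img_uint8):
    """Scale 8-bit pixel values into [0, 1] floats, the usual
    normalization applied before feeding images to a CNN."""
    return img_uint8.astype(np.float32) / 255.0

# Dummy 4x4 RGB "image" standing in for a resized leaf photo.
img = np.arange(48, dtype=np.uint8).reshape(4, 4, 3)
out = normalize_image(img)
```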
3.2 CNN Model Architecture
Inspired by VGG-style deep networks, the custom CNN comprises five convolutional blocks:

Block  Filters  Conv Layers  Pooling
1      32       2            2×2 Max
2      64       2            2×2 Max

● Dropout: 25% after final pooling; 40% after dense layer to combat overfitting.
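Since each 2×2 max-pool halves the spatial resolution, the depth of the five-block design can be sanity-checked with a few lines of Python. The 224×224 input size and 'same' convolution padding are assumptions on our part; the report does not state them.

```python
def spatial_sizes(input_size, num_blocks):
    """Feature-map side length after each 2x2 max-pool in a
    VGG-style stack ('same' convolution padding assumed, so
    only the pooling layers shrink the maps)."""
    sizes = [input_size]
    for _ in range(num_blocks):
        sizes.append(sizes[-1] // 2)
    return sizes

sizes = spatial_sizes(224, 5)
print(sizes)  # [224, 112, 56, 28, 14, 7]
```

Five blocks thus compress a 224×224 input down to a 7×7 grid of high-level features, which is the scale at which the dense classification layers operate.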
3.3 Training Procedure
2. Callbacks:
3. Validation:
○ Achieved 94% train accuracy and 92% validation accuracy on held-out set.
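One callback typically used in such a training procedure is early stopping; its core logic can be sketched independently of any framework. This is a simplified stand-in for, e.g., Keras's EarlyStopping, and the patience value and toy history below are illustrative assumptions.

```python
class EarlyStopper:
    """Bare-bones early-stopping callback: stop when validation
    accuracy has not improved for `patience` consecutive epochs."""
    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("-inf")
        self.wait = 0

    def should_stop(self, val_acc):
        if val_acc > self.best:
            self.best, self.wait = val_acc, 0
        else:
            self.wait += 1
        return self.wait >= self.patience

stopper = EarlyStopper(patience=2)
history = [0.80, 0.85, 0.84, 0.83]  # toy per-epoch validation accuracies
stopped_at = next(i for i, acc in enumerate(history)
                  if stopper.should_stop(acc))
```

Here training halts at epoch index 3, two epochs after the 0.85 peak, which prevents the model from drifting into overfitting territory.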
3.4 Unknown Class Handling
Two-tier strategy:
1. Training:
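The two-tier idea — an explicit unknown class at training time, plus a confidence check at inference — can be sketched as follows. The 0.6 threshold and the class names are illustrative assumptions, not values taken from the report.

```python
import math

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, class_names, threshold=0.6):
    """Tier two: reject low-confidence predictions as 'unknown'
    instead of forcing one of the trained classes."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best] if probs[best] >= threshold else "unknown"

names = ["Apple_scab", "Common_rust", "healthy"]
confident = classify([4.0, 0.1, 0.2], names)   # sharply peaked scores
uncertain = classify([0.5, 0.4, 0.45], names)  # nearly flat scores
```

A sharply peaked output maps to a disease label, while a flat distribution — typical for out-of-distribution images — falls back to "unknown".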
○ st.button (Analyze / Translate): Triggers conditional callbacks for inference and
translation.
3. User Feedback Elements:
○ st.write, st.info, st.success, st.warning, st.error: Convey status updates (upload
confirmation, classification progress, error messages, and confidence warnings).
4. Session State Management:
○ Leveraged st.session_state to persist description and treatment text across widget
interactions, ensuring a seamless UX when toggling between original and
translated views.
5. Multilingual Support:
○ Positioned translation controls adjacent to results, enabling on-demand conversion
without page reload.
○ Imported structured class_info mapping to populate multi-point descriptions and
treatment bullet lists corresponding to the predicted class.
5. Translation Integration:
○ Instantiated googletrans.Translator to perform per-line translation of
descriptions/treatments on demand, handling network calls asynchronously under
the hood.
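The per-line translation described above can be isolated into a small helper. In this sketch the translator is injected as a callable so the code runs without network access; in the app it would presumably wrap googletrans's Translator().translate(...).text. The helper name and stand-in translator are our own.

```python
def translate_lines(text, translate_one):
    """Translate a block of text line by line, preserving blank
    lines, mirroring the per-line use of the translator."""
    return "\n".join(translate_one(line) if line.strip() else line
                     for line in text.splitlines())

# Stand-in "translator" that just tags each line.
fake = lambda line: "[hi] " + line
result = translate_lines("Apply fungicide.\n\nRemove infected leaves.", fake)
```

Translating line by line keeps bullet structure intact in the rendered output and limits the damage if any single network call fails.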
File Structure:
app.py
class_info.py
god_plant_disease_model_final.keras
dataset/licenses/...
● Execution: streamlit run app.py exposes a local web interface at localhost:8501.
3.7 Summary
CHAPTER 4
IMPLEMENTATION
In this section, we present the results obtained from the implementation and
evaluation of the intelligent system. This includes insights from user testing,
performance metrics, and data analysis that illustrate how well the system meets its
objectives.
Note: The limited size of the dataset may restrict the diversity of diseases covered.
4.1.2 Accuracy
From the tests conducted, overall classification accuracy exceeds 95%. Disease prediction
accuracy ranges from 90–95% depending on the quality of the input provided.
Fig 1.2 Model Analyzing the Image
Fig 1.3 Giving Details After Analyzing
Fig 1.4 Google Translation on the Recommended Treatment
Fig 1.5 Friendly UI for phone use (app)
CHAPTER 5
DISCUSSIONS
This section interprets the results of the plant disease detection and cure system with
respect to its performance, limitations, and potential improvements. Key findings of the
development and implementation processes are discussed below:
The use of Convolutional Neural Networks (CNNs) and transfer learning techniques has
proven effective in classifying plant diseases from leaf images. The model demonstrates
high accuracy when evaluated on the test set, especially for well-represented diseases.
The integration of pre-trained models improved both performance and training
efficiency. However, accuracy varies slightly across classes depending on the visual
distinctiveness and quantity of training data available for each disease.
3. Limitations
Despite the promising results, the system has certain limitations. The accuracy of disease
classification may drop if images are of poor quality, under bad lighting, or contain
background noise (e.g., soil, other plants). Additionally, the NLP component currently
offers static recommendations and lacks adaptive learning — it does not yet evolve based
on user feedback or outcomes of suggested treatments. Also, the dataset used is limited in
terms of variety of crops and disease stages, which may hinder generalizability to new
regions or plant types.
CHAPTER 6
CONCLUSION AND FUTURE SCOPE
6.1 Conclusion
This project presents an intelligent, AI-powered system for the detection of plant diseases
and recommendation of appropriate treatments, aiming to assist farmers and agricultural
workers in maintaining healthy crops. By leveraging deep learning techniques such as
Convolutional Neural Networks (CNNs) for image-based classification, the system is
capable of identifying plant diseases with high accuracy, offering a reliable tool for early
detection and response.
The integration of Natural Language Processing (NLP) further enhances the user
experience by translating technical results into understandable, actionable advice. This
not only supports informed decision-making but also makes the technology accessible to
a broader audience, including those with limited technical knowledge. The system’s
potential is further extended through features like personalized suggestions, intuitive
interfaces, and the ability to incorporate user preferences in future iterations.
While the current system demonstrates strong performance, it is not without limitations.
These include dependency on high-quality images, limited personalization based on
ongoing user behavior, and restricted language support. However, the foundational
architecture is robust and scalable, providing a strong base for future enhancements such
as adaptive learning, multilingual support, and mobile deployment for offline use.
In summary, this project bridges the gap between AI research and practical agricultural
needs, offering a scalable solution to a real-world problem. It contributes to the ongoing
transformation of agriculture through intelligent automation and sets the stage for more
comprehensive, user-centered agri-tech innovations.
6.2 Future Scope
5. Integration with Agricultural Databases and Weather APIs
The system could be enhanced by integrating with external agricultural knowledge bases
or real-time weather data APIs. This would allow for context-aware recommendations,
such as adjusting treatments based on upcoming weather conditions or region-specific
disease outbreaks.
REFERENCES
[1] Affan, Y., & Fatima, R. (2023). On the Image-Based Detection of Tomato
and Corn Leaves Diseases. arXiv. https://arxiv.org/pdf/2312.08659
[2] Khan, A., Atif, M., et al. (2023). An advanced deep learning
models-based plant disease detection. Frontiers in Plant Science, 14,
1158933.
https://www.frontiersin.org/articles/10.3389/fpls.2023.1158933/full
[3] Kulkarni, P., Karwande, A., Kolhe, T., et al. (2021). Plant Disease
Detection Using Image Processing and Machine Learning. Department of
Electronics and Telecommunication, Vishwakarma Institute of Technology,
Pune, India. https://arxiv.org/pdf/2106.10698
[4] Zhou, X., Zhang, Y., et al. (2023). Plant Disease Detection and
Classification: A Systematic Literature Review. Sensors, 23(10), 4769.
https://www.mdpi.com/1424-8220/23/10/4769
[5] Kaggle. (n.d.). New Plant Diseases Dataset.
https://www.kaggle.com/datasets/vipoooool/new-plant-diseases-dataset
[6] Kaggle. (n.d.). Unsplash Random Images Collection.
https://www.kaggle.com/datasets/lprdosmil/unsplash-random-images-collection
[7] Simonyan, K., & Zisserman, A. (2014). Very Deep Convolutional
Networks for Large-Scale Image Recognition. arXiv.
https://arxiv.org/abs/1409.1556
[8] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for
Image Recognition. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition (pp. 770–778).
https://doi.org/10.1109/CVPR.2016.90
[9] Howard, A. G., et al. (2017). MobileNets: Efficient Convolutional Neural
Networks for Mobile Vision Applications. arXiv.
https://arxiv.org/abs/1704.04861
[10] Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking Model Scaling for
Convolutional Neural Networks. In Proceedings of the 36th International
Conference on Machine Learning (pp. 6105–6114).
https://arxiv.org/abs/1905.11946
[11] Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using Deep
Learning for Image-Based Plant Disease Detection. Frontiers in Plant
Science, 7, 1419. https://doi.org/10.3389/fpls.2016.01419