Pimpri Chinchwad Education Trust’s
Pimpri Chinchwad College of Engineering
Seminar Synopsis
Department: Computer Engineering Academic Year: 2025-26 Semester: I
Class: B.Tech Date: 28/07/2025
Problem Statement: Satellite Imagery-Based Object Detection and Retrieval Using Visual Search
Identified Thrust Area of topic: Security and Public Safety, E-Governance
Sustainable Development Goal (SDG): SDG 9 (Industry, Innovation and Infrastructure), SDG 11 (Sustainable Cities and Communities)
Team Members: Group ID: GB6
Student PRN Student Name Division Signature
122B1B081 Arpit Gaikwad B
122B1B085 Badal Gaurkhade B
122B1B090 Pranjal Godse B
122B1B094 Nishantkumar Gupta B
SIG Name: Artificial Intelligence/Machine Learning, Computer Vision, Remote Sensing
Guided By: Mrs. Shraddha Ovale
Abstract:
This project addresses the critical need for automated analysis of vast volumes of satellite imagery to
detect and monitor man-made changes over time. Since manual analysis is infeasible at this scale, we propose an AI-driven visual search engine that integrates object detection and
content-based image retrieval (CBIR). CNN-based object detectors, such as YOLO, will be trained on
large-scale satellite datasets (e.g., xView, DOTA) to identify key man-made features like vehicles,
roads, and buildings. Concurrently, the CBIR component will extract deep features from satellite
image regions, enabling similarity-based search and retrieval. Utilizing frameworks like PyTorch,
OpenCV, and Detectron2, the system will provide an automated solution for change detection,
significantly reducing false alarms prevalent in traditional methods. Performance will be rigorously
evaluated using metrics such as mean Average Precision (mAP) for object detection and retrieval
accuracy for CBIR. This prototype will offer substantial operational value for the continuous monitoring
of landmasses, aiding critical applications in urban planning, national security, and emergency
response by efficiently identifying new developments and human-induced alterations.
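To make the CBIR component described above concrete, the following is a minimal illustrative sketch (not the final implementation) of deep-feature extraction and similarity search in Python. It assumes a pre-trained ResNet-50 from torchvision (0.13 or later) as the feature extractor and a flat Faiss index; the chip file names are placeholders, and the deployed system may instead reuse features from the detector backbone.

# Minimal CBIR sketch: deep features from a pre-trained CNN + Faiss similarity search.
# Assumptions (not specified in the synopsis): torchvision's ResNet-50 as the feature
# extractor and an exact inner-product index; all file names below are placeholders.
import numpy as np
import torch
import faiss
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Drop the classification head so the network outputs a 2048-d global feature vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_feature(image_path: str) -> np.ndarray:
    """Return an L2-normalized deep feature for one satellite image chip."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    feat = backbone(img).squeeze(0).cpu().numpy().astype("float32")
    return feat / (np.linalg.norm(feat) + 1e-12)

# Index a gallery of satellite image chips (placeholder file names).
gallery_paths = ["chip_001.png", "chip_002.png"]
gallery = np.stack([extract_feature(p) for p in gallery_paths])
index = faiss.IndexFlatIP(int(gallery.shape[1]))  # inner product == cosine after normalization
index.add(gallery)

# Retrieve the top-k chips most visually similar to a query region.
scores, ids = index.search(extract_feature("query_chip.png")[None, :], 2)
print([(gallery_paths[i], float(s)) for i, s in zip(ids[0], scores[0])])

Because the features are L2-normalized, the inner-product index behaves as cosine similarity, a common choice for deep-feature CBIR; retrieval accuracy on held-out queries would then be measured as described in the evaluation plan.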
Related work
Prior work includes large-scale overhead-imagery detection datasets such as xView (Lam et al., 2018), DOTA (Xia et al., 2018), and SpaceNet; satellite-oriented detectors such as YOLO-SAT (IEEE Access, 2021); deep feature-based CBIR methods; and visual search systems over satellite imagery (Keisler et al., 2020).
Innovative Concept & Relevance
The innovative concept lies in combining two complementary capabilities into a single, operational visual search platform for overhead imagery: a CNN-based object detector (e.g., YOLO) trained on large-scale satellite datasets such as xView and DOTA, and a content-based image retrieval (CBIR) component that indexes deep features extracted from satellite image regions. Rather than treating detection and retrieval as separate tools, the system lets an analyst both locate known classes of man-made objects (vehicles, roads, buildings) and search an archive for regions visually similar to a query image.
The relevance of this integration is significant:
● Scalable monitoring: Manual review of continuously acquired satellite imagery is infeasible at scale; an automated detection-plus-retrieval pipeline enables continuous monitoring of landmasses for new developments and human-induced alterations.
● Reduced false alarms: Comparing deep features of image regions, rather than raw pixels, is intended to reduce the false alarms prevalent in traditional change-detection methods.
● Operational applications: Reliable identification of man-made changes supports urban planning, national security, and emergency response, in line with the identified thrust areas of security, public safety, and e-governance.
Market Potential & Competitive Advantage
The solution addresses a growing need for automated geospatial analytics among government agencies, urban planners, and emergency-response organizations that acquire far more satellite imagery than human analysts can review. By combining object detection with similarity-based retrieval, the system offers a competitive advantage over conventional change-detection tools: fewer false alarms, interactive visual search over large image archives, and continuous monitoring of areas of interest relevant to urban planning, national security, and disaster response.
Objective of the Project:
1. Develop a CNN-based object detection model to identify man-made objects within satellite imagery (an illustrative sketch follows this list).
2. Build a content-based image retrieval (CBIR) system that returns satellite image regions visually similar to a query image.
3. Evaluate detection (mAP) and retrieval accuracy on the xView and DOTA datasets.
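As an illustration of Objective 1, the following is a hedged sketch of running a YOLO-family detector on a satellite image chip. It assumes the Ultralytics YOLO package as one possible realization of the YOLO detectors named in the abstract; the weight file, image path, and dataset configuration are placeholders that would be replaced by a model fine-tuned on xView/DOTA.

# Detection sketch (illustrative only): run a YOLO-family detector on a satellite chip.
# Assumes the `ultralytics` package; "yolov8n.pt" and the file paths are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder; would be replaced by weights fine-tuned on xView/DOTA
results = model("satellite_chip.png", conf=0.25)  # confidence threshold is tunable

for r in results:
    for box in r.boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{cls_name}: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f}) conf={float(box.conf):.2f}")

# Validation against a dataset split converted to YOLO format (hypothetical config file)
# reports mAP@0.5 and mAP@0.5:0.95, matching the metrics named in Objective 3:
# metrics = model.val(data="dota_subset.yaml")

Detected regions (or crops around them) could then feed the CBIR index described in the abstract, linking detection and retrieval into a single visual search pipeline.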
Technical details:
Languages: Python for the machine learning models, JavaScript for the web interface
Frameworks: PyTorch and Detectron2 for deep learning, OpenCV for image processing
Platforms: Large-scale satellite imagery datasets (xView, DOTA), web-based user interface
Tools: Faiss for similarity search; data preprocessing libraries (Pandas, NumPy); visualization tools (Matplotlib, Plotly)
Technical Keywords: Content-Based Image Retrieval, Object Detection, Satellite Imagery,
Convolutional Neural Networks (CNNs), Geospatial Analysis.
Plan for Conference/Journal Publication:
We plan to publish our research in reputed conferences or journals such as:
● CCCS2025 International Conference
● ICEI2026
● IEEECBDS2025
References / Bibliography:
1. [CNNs] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11), 2278-2324. (Early influential work on
Convolutional Neural Networks)
2. [R-CNN] Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate
object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision
and pattern recognition (pp. 580-587).
3. [Faster R-CNN] Ren, S., He, K., Girshick, R., & Sun, J. (2015). Faster R-CNN: Towards real-time
object detection with region proposal networks. In Advances in neural information processing systems
(pp. 91-99).
4. [YOLO] Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified,
real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern
recognition (pp. 779-788). (Foundational YOLO paper; more recent variants such as YOLOv5 and YOLOv8 build on this framework.)
5. [CBIR Survey] Smeulders, A. W., Worring, M., Santini, S., Gupta, A., & Jain, R. (2000). Content-based
image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 22(12), 1349-1380. (A classic survey paper on CBIR).
6. [Deep Features for CBIR] Razavian, A. S., Azizpour, H., Sullivan, J., & Carlsson, S. (2014). CNN Features Off-the-Shelf: An Astounding Baseline for Recognition. arXiv preprint arXiv:1403.6382.
(Illustrates the power of pre-trained CNN features for various tasks including retrieval).
7. [Faiss] Johnson, J., Douze, M., & Jégou, H. (2019). Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3), 535-547. (The foundational paper for the
Faiss library).
8. [xView Dataset] Lam, D., Kuzma, R., McGee, K., Dooley, S., Laielli, M., Klaric, M., ... & McCord, B.
(2018). xView: Objects in Context in Overhead Imagery. arXiv preprint arXiv:1802.07856. (Essential for
citing the xView dataset).
9. [DOTA Dataset] Xia, G. S., Bai, X., Ding, J., Zhu, Z., Belongie, S., ... & Zhang, L. (2018). DOTA: A large-scale dataset for object detection in aerial images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3974-3983). (Essential for citing the DOTA dataset).
10. [Remote Sensing OD Survey] Kang, J., Tariq, S., Oh, H., & Woo, S. S. (2022). A Survey of Deep
Learning-Based Object Detection Methods and Datasets for Overhead Imagery. IEEE Access, 10,
20118-20134. (Provides context on the field of object detection in remote sensing).
Seminar Outcome:
● Paid Consultancy project (Yes/No): No
● Sponsored Project (Yes/No): No
● Multidisciplinary project (Yes/No): No
● Scopus / SCI Paper Publication (Yes/No): Yes
● File a Patent (Yes/No): No
● Participate in competitions and awards (Yes/No): Yes
Name & Signature of Guide