FINAL.RESEARCH
Overview
Who We Are
TEAM MEMBERS
- Vishal Singh (E22CSEU1500)
- Prabhjeet Singh (E22CSEU0650)
- Kanishk (E22CSEU0670)
- Yuvraj Rathi (E22CSEU0695)
INTRODUCTION
Autonomous vehicles (AVs) and intelligent edge computing are developing at a rapid pace, promising unprecedented levels of economy, safety, and convenience while drastically altering the automobile industry. This development, however, also creates significant challenges for vehicle security. Because autonomous driving requires real-time decision-making, sensor fusion, and connectivity between vehicles and infrastructure, traditional security solutions frequently fall behind. Robust defense against cyberattacks, data breaches, and hostile interference becomes crucial as AVs integrate more and more with edge computing, which processes data closer to the vehicle's sensors and actuators. Generative AI is especially promising here because it allows systems to anticipate, identify, and fix potential defects.
KEYWORDS: Autonomous Vehicles, Security and Privacy, Generative AI, Edge Computing.
LITERATURE SURVEY
ADVANTAGES
Improved Threat Recognition: Generative AI models, such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), can identify deviations from normal operational patterns, enabling real-time detection of potential intrusions or attacks.

Defense Against Adversarial Inputs: Generative AI can block hostile inputs such as fake images or spoofed sensor data by identifying and filtering out manipulated data.

Low-Latency Response: Generative AI models deployed at the edge can process and respond to threats with minimal latency, as real-time applications require.

Efficient Utilisation of Resources: Well-optimised generative models balance computing demands against the need for robust security in resource-scarce edge scenarios.

Overall, generative AI is a crucial technique for improving the security of intelligent edge autonomous vehicles because of its ability to predict, model, and respond to threats adaptively.
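The anomaly-detection idea above can be sketched with a minimal example. In a real system a trained VAE or GAN would supply the reconstruction or likelihood score; here a simple per-feature statistical profile stands in for the learned model, and the sensor window format (speed, steering angle) is a hypothetical choice for illustration.

```python
import math
import random

def fit_normal_profile(windows):
    """Learn a per-feature mean/std profile from windows of normal sensor data.

    Stand-in for a trained VAE/GAN: any generative model that yields a
    reconstruction or likelihood score plugs into the same thresholding.
    """
    n, dim = len(windows), len(windows[0])
    means = [sum(w[i] for w in windows) / n for i in range(dim)]
    stds = [math.sqrt(sum((w[i] - means[i]) ** 2 for w in windows) / n) or 1.0
            for i in range(dim)]
    return means, stds

def anomaly_score(window, means, stds):
    """Mean squared z-score: large when the window deviates from the profile."""
    return sum(((x - m) / s) ** 2
               for x, m, s in zip(window, means, stds)) / len(window)

def calibrate_threshold(windows, means, stds, quantile=0.99):
    """Pick the threshold so only ~1% of known-normal windows are flagged."""
    scores = sorted(anomaly_score(w, means, stds) for w in windows)
    return scores[int(quantile * (len(scores) - 1))]

# Hypothetical [speed km/h, steering angle rad] windows from normal driving.
random.seed(0)
normal = [[30 + random.gauss(0, 1), 0.5 + random.gauss(0, 0.05)]
          for _ in range(500)]
means, stds = fit_normal_profile(normal)
threshold = calibrate_threshold(normal, means, stds)
spoofed = [90.0, 3.0]  # injected values far outside the learned pattern
print(anomaly_score(spoofed, means, stds) > threshold)  # flags the intrusion
```

The same calibrate-then-threshold logic applies unchanged when the score comes from a deep generative model running at the edge.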
DISADVANTAGES
Resource and Computation Limitations: The high processing, memory, and energy needs of generative AI models, particularly GANs and diffusion models, may strain the limited resources of edge devices.

Latency Problems: Complex generative models might introduce processing delays that hinder real-time security applications.

Errors and Bias: Generative AI trained on incomplete or biased datasets may offer defenses that are insufficient or erroneous, thereby increasing risk.
- Availability problems: mitigated through processing and power optimization.
- Confidentiality issues (potential data or security breaches): mitigated through edge-based learning and privacy-preserving data techniques.
DATA PRIVACY
Generative AI has the potential to address data privacy concerns in significant ways. One method is data anonymization, which safeguards personal information while retaining the data's usefulness for AI models. Another approach is federated learning, a privacy-focused machine learning technique that eliminates the need to centralize sensitive data. With federated learning, the AI model is trained directly on the device, such as in a vehicle, and only model updates, not the raw data, are sent to centralized servers. This method enhances privacy while still enabling the system to improve and adapt. Additionally, generative AI can simulate potential privacy threats and generate synthetic data for training, helping AV security systems better protect users' privacy in complex and ever-changing scenarios.
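The federated-learning flow described above can be sketched as follows. This is a toy illustration assuming a one-parameter linear model and synthetic per-vehicle data; production systems would use a federated-learning framework with secure aggregation, but the core pattern (local training, weight averaging, no raw-data upload) is the same.

```python
def local_step(w, samples, lr=0.05):
    """One on-device gradient step for a 1-D linear model y = w * x.
    Only the updated weight leaves the vehicle, never `samples`."""
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    return w - lr * grad

def federated_round(global_w, client_datasets, lr=0.05):
    """Each vehicle trains locally; the server averages the returned weights."""
    local_ws = [local_step(global_w, data, lr) for data in client_datasets]
    return sum(local_ws) / len(local_ws)

# Three vehicles, each holding private (x, y) telemetry with true slope 2.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
    [(0.5, 1.0), (1.5, 3.0)],
]
w = 0.0
for _ in range(60):
    w = federated_round(w, clients)
print(round(w, 3))  # converges toward the shared slope 2.0
```

Note that the server only ever sees the scalar weights returned by `local_step`; the `(x, y)` pairs stay on each vehicle.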
As autonomous vehicles (AVs) increasingly depend on artificial intelligence (AI) and edge computing, ensuring data privacy has become a major concern. AVs collect large amounts of data through sensors, cameras, and communication systems, much of which may involve personal or sensitive details, including driving habits, current location, and interactions with other vehicles or infrastructure. Handling and processing this data presents significant privacy risks, since misuse or unauthorized access could compromise personal security. Additionally, AVs' dependence on communication networks and cloud services for real-time decision-making raises the possibility of data breaches or unauthorized interception.
For example, through transfer learning, a model trained on a large volume of cybersecurity data for one type of autonomous vehicle can be transferred to another, boosting security without costly retraining. Such strategies let AI-driven security systems in autonomous vehicles operate flexibly and in real time, learning and responding to emerging threats efficiently. These models and techniques are foundational in building intelligent, adaptive security systems capable of managing the dynamic and evolving threats faced by autonomous vehicles.
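The transfer-learning idea can be sketched minimally: keep a feature extractor (hypothetically learned on vehicle type A) frozen, and fine-tune only a small classification head on the new vehicle type's limited data. The sensor features and attack labels below are invented for illustration.

```python
import math

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def pretrained_features(reading):
    """Frozen feature extractor, hypothetically learned on vehicle type A.
    In practice these would be the early layers of a trained network."""
    speed, packet_rate = reading
    return [speed / 50.0, packet_rate / 100.0, 1.0]  # scaled inputs + bias

def train_head(data, epochs=200, lr=0.5):
    """Fine-tune only a logistic-regression head on type B's small dataset."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for reading, label in data:
            f = pretrained_features(reading)
            p = _sigmoid(sum(wi * fi for wi, fi in zip(w, f)))
            w = [wi + lr * (label - p) * fi for wi, fi in zip(w, f)]
    return w

def is_attack(w, reading):
    return _sigmoid(sum(wi * fi
                        for wi, fi in zip(w, pretrained_features(reading)))) > 0.5

# A handful of labelled readings from the new vehicle type (hypothetical):
# (speed km/h, network packets/s) -> label 1 means a flooding attack.
data = [((30, 40), 0), ((25, 50), 0), ((35, 45), 0),
        ((30, 400), 1), ((28, 500), 1), ((32, 450), 1)]
head = train_head(data)
print(is_attack(head, (30, 480)))  # high packet rate -> flagged
```

Because only the three head weights are retrained, adapting to the new vehicle type needs far less data and compute than training from scratch.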
Malicious Use of Generative AI
Generative AI has great potential to increase AV safety, but when misused it can be highly harmful. One of the most concerning aspects of generative AI is its ability to create frighteningly realistic synthetic data, including images, videos, and even sensor outputs, which may be used to manipulate AV systems. For example, attackers can use generative AI techniques, such as GANs, to create fake sensor data, such as LIDAR point clouds or manipulated photos, to trick the vehicle's perception system. If the vehicle then makes a poor decision and misses a barrier or road hazard, the result could be an accident.
Because these changes are so subtle, it is difficult to distinguish malicious from legitimate data modifications, which raises the possibility that such attacks would go undetected. This highlights the need for robust security measures and continuous monitoring to ensure that the positive effects of AI on vehicle safety are not compromised by the malicious use of generative AI.
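One baseline security measure against injected sensor frames is a physical-plausibility filter: readings outside the sensor's possible range, or changing faster than physics allows between frames, are dropped before they reach the perception stack. The range and rate limits below are hypothetical per-sensor constants, and this is a complement to, not a replacement for, learned anomaly detection.

```python
def plausible(prev, reading, max_range=120.0, max_delta=5.0):
    """Reject values that are physically impossible for the sensor:
    outside its measurable range, or jumping between consecutive frames
    faster than the scene could change. Limits are hypothetical."""
    if not 0.0 <= reading <= max_range:
        return False
    if prev is not None and abs(reading - prev) > max_delta:
        return False
    return True

def filter_stream(readings, **limits):
    """Drop implausible readings, tracking the last accepted value."""
    accepted, last = [], None
    for r in readings:
        if plausible(last, r, **limits):
            accepted.append(r)
            last = r
    return accepted

# Distances (m) to an obstacle; 200.0 and 10.0 are injected/spoofed frames.
print(filter_stream([50.0, 49.5, 200.0, 48.8, 10.0]))
# -> [50.0, 49.5, 48.8]
```

A filter this simple only catches crude injections; subtle GAN-crafted perturbations stay within plausible bounds, which is why the statistical anomaly detection discussed earlier remains necessary.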
Privacy Concerns
Training generative models requires huge datasets, and if sensitive or personal data isn't handled properly, it could be exposed or misused.

Over-Reliance on Automation: Automating security with AI can reduce human involvement, which might be a problem in situations where human judgment is essential to resolve complex issues.

System Failures: Errors in the AI's predictions or responses could cause unexpected interruptions in the vehicle's operations, potentially putting passengers, pedestrians, and other road users at risk.

Dual-Use Risks: Unfortunately, the same technology that improves security can be misused by bad actors to develop new and more advanced attack methods.