GA1 RR Aditya
Introduction
Artificial Intelligence has been the craze of the crowd, especially over the last few
years - and understandably so. The human race has always taken steps toward
discoveries and inventions that make our work and lives easier, and Artificial
Intelligence (AI) is the epitome of convenience. Machinery prior to artificial
intelligence was limited in that it relied heavily on human coordination and input:
machines could perform only the actions pre-coded by humans, and nothing beyond
that. Artificial Intelligence, as the name suggests, is intelligent - meaning it can learn
from experience. It can adapt to changes in its environment, and it can make decisions
and take actions in response to changing circumstances.
Of course, with all the convenience of technology come some unfortunate and
dangerous risks. Most of these risks, if not all, involve cybersecurity. Cybersecurity is
an umbrella term for the practice of safeguarding computers, networks, and digital
data against cyber attacks. It protects users not only from attempts to access their
assets but also from theft and destruction. Cybersecurity rests on one key principle:
securing data in the digital environment from the perspectives of confidentiality (no
unauthorized access), integrity (no unauthorized changes), and availability (no undue
limits on access). In practice, encryption and access controls are applied to keep data
confidential, while monitoring methods are deployed to ensure its integrity.
The cybersecurity landscape grows more dynamic by the day, and the application of
Artificial Intelligence sparks vivid discussion, along with a series of incidents that
force us to reconsider the prospects of cybersecurity. Machine learning systems can
now dodge even sophisticated cyber threats, and they do it automatically. It would be
dangerous to let machines learn to do everything in the cybersecurity field, given the
many potential risks with catastrophic consequences. Meanwhile, the rise of
AI-driven malware and sophisticated attacks raises questions about the ethics and
unspoken risks of releasing intelligent programs into this field. As each new stage of
cybersecurity meets AI, it is important to understand the role this technology plays in
the cat-and-mouse game between defenders and attackers. Is it time for AI to take
control of the cybersecurity environment, or do human capabilities remain vital in
filtering and sorting cyber threats?
Threat Detection
Threat detection is the proactive practice of spotting hazardous intruders or unusual
patterns in a system that are not part of its normal flow. AI algorithms perform the
core of this intelligent monitoring, analyzing large volumes of data to recognize
anomalies and possible security breaches in real time and tightening an organization's
digital immunity like no other method.
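As a concrete illustration of the anomaly detection described above, the sketch below flags values that deviate sharply from a baseline. It is a minimal statistical stand-in for the far richer models real products use; the login metric and the 3-sigma threshold are assumptions made for the example.

```python
# Minimal sketch of statistical anomaly detection for threat monitoring.
# The login metric and the 3-sigma threshold are illustrative assumptions.
from statistics import mean, stdev

def find_anomalies(samples, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) > threshold * sigma]

# Hourly login counts; the final spike could indicate a brute-force attempt.
logins_per_hour = [42, 38, 45, 40, 39, 44, 41, 43, 37, 40, 46, 400]
print(find_anomalies(logins_per_hour))  # the 400-login spike is flagged
```

Real systems apply the same idea to far more features at once (traffic volume, process behavior, geography of logins), usually with learned rather than hand-set thresholds.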
Incident Response
Incident response in cybersecurity means taking swift, organized action to deal with
an attack on computer and network security. AI is becoming a routine feature of
incident handling: the technology is used to rapidly evaluate the situation and apply
mitigation measures, so that damage is minimized in record time.
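A toy sketch of what rapid evaluation and mitigation can look like in code: incoming alerts are mapped to a pre-approved containment step. The alert types, actions, and severities here are invented for illustration, not drawn from any real incident-response product.

```python
# Hypothetical incident-triage sketch: each known alert type maps to a
# pre-approved mitigation action and a severity. All names are invented.
PLAYBOOK = {
    "malware":     ("isolate_host", "high"),
    "brute_force": ("lock_account", "medium"),
    "port_scan":   ("log_and_watch", "low"),
}

def triage(alert_type):
    """Return the mitigation step and severity; unknown alerts go to a human."""
    return PLAYBOOK.get(alert_type, ("escalate_to_analyst", "unknown"))

print(triage("malware"))       # ('isolate_host', 'high')
print(triage("novel_attack"))  # ('escalate_to_analyst', 'unknown')
```

The fallback case matters: automation handles the known patterns quickly, while anything unrecognized is escalated to a human analyst rather than guessed at.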
Zero-Day Vulnerabilities
A zero-day vulnerability is a software flaw that is unknown to the vendor, meaning
defenders have had "zero days" to fix it. Software is most exposed during such
windows - typically right after a release, prior to public testing, or right after a major
update, before a patch is issued.
Biometric Authentication
Biometric authentication, enhanced by AI, relies on the distinctive physiological or
behavioral attributes of a user to verify identity. Biometric technology strengthens
security because it is more sophisticated and user-friendly than conventional
passwords: it grants access based on features specific to a person, so it can detect
unauthorized users easily. In the long run, it minimizes the risks of traditional
password-based authentication.
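The matching step at the heart of biometric authentication can be sketched as comparing feature vectors. Real systems derive these vectors from fingerprints or face scans with deep models; the numbers and the similarity threshold below are made up for illustration.

```python
# Illustrative sketch of biometric matching: a stored template and a fresh
# reading are compared as feature vectors. Vectors and threshold are invented.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def authenticate(template, reading, threshold=0.95):
    """Accept the user only if the new reading closely matches the template."""
    return cosine_similarity(template, reading) >= threshold

enrolled = [0.62, 0.80, 0.41, 0.77]   # stored biometric template
genuine  = [0.60, 0.82, 0.40, 0.78]   # same user, slightly noisy reading
impostor = [0.10, 0.95, 0.90, 0.05]   # a different user

print(authenticate(enrolled, genuine))   # True
print(authenticate(enrolled, impostor))  # False
```

The threshold embodies the trade-off the essay returns to later: set it too low and impostors slip through; too high and genuine users are locked out.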
Blockchain Security
AI helps make blockchains more secure against cyber assaults by identifying and
terminating suspicious actions within decentralized networks. Through continuous
observation and processing, AI raises the protection level of blockchain systems,
where the encryption of transactions and their integrity in the distributed ledger are of
the highest interest.
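The integrity property that blockchain security protects can be shown in a minimal sketch: each block stores the hash of its predecessor, so any tampering breaks the chain and is detectable. The "transactions" below are placeholder strings.

```python
# Minimal sketch of hash-chained ledger integrity. Transactions are invented.
import hashlib

def block_hash(prev_hash, data):
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; an edited block breaks the links after it."""
    return all(b["hash"] == block_hash(b["prev"], b["data"]) for b in chain)

ledger = build_chain(["alice->bob:5", "bob->carol:2"])
print(verify(ledger))                  # True
ledger[0]["data"] = "alice->bob:500"   # tamper with a recorded transaction
print(verify(ledger))                  # False
```

An AI monitoring layer, as described above, would sit on top of such a ledger, watching for anomalous transaction patterns rather than replacing the cryptographic check.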
Key Issues
Adversarial Attacks and Exploitation of Algorithms
Real-world examples in this dynamic environment already demonstrate adversarial
attacks against AI integrated into cybersecurity. For instance, email systems have
been compromised through creative abuse of the advanced filters designed to identify
malicious messages. Hackers searched for weak spots in the natural language
processing algorithms underlying the filter, so that malicious senders were
misidentified as regular correspondents and the other way around. This deliberate
distortion matters a great deal: many phishing scams make it through AI-based
defenses because they do not look suspicious to the AI, yet they remain damaging to
organizations and individuals.
To shed further light on the issue, consider visual attacks against an image
recognition system. In one line of research, investigators showed that by changing
just a small set of pixels in an image, an AI algorithm can be made to mistake a stop
sign for a yield sign. This practical demonstration shows the potential impact of
adversarial attacks on self-driving cars, whose AI depends on image recognition, and
signals the need for stronger countermeasures to protect critical systems against
deliberate attacks. (Refer to the discussion of false negatives for further development
of these ideas.)
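The pixel-perturbation attack described above can be sketched on a toy linear classifier, in the spirit of the fast gradient sign method. The weights, the "image", and the step size are all invented; real attacks target deep networks, but the mechanism - nudging each input against the model's gradient - is the same.

```python
# Toy adversarial perturbation on a linear classifier (FGSM-style sketch).
# Weights, bias, and the 4-pixel "image" are invented for illustration.
def predict(weights, bias, x):
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return "stop" if score > 0 else "yield"

def perturb(weights, x, eps):
    """Nudge each pixel one step against the gradient of the 'stop' score."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights, bias = [0.9, -0.4, 0.7, 0.2], -0.1
image = [0.5, 0.1, 0.4, 0.3]            # classified as "stop"
adversarial = perturb(weights, image, eps=0.3)

print(predict(weights, bias, image))        # stop
print(predict(weights, bias, adversarial))  # yield - small change, flipped label
```

Because the perturbation follows the model's own gradient, a modest change to the input is enough to cross the decision boundary, which is exactly why such attacks are hard to spot by eye.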
Furthermore, the financial sector is not spared, despite the advanced AI and
machine-learning fraud detection systems deployed there. Adversaries take advantage
of existing system vulnerabilities by cloning two-party protocols and injecting
dummy data crafted to mimic legitimate transactions. The very purpose of these
manipulations is to trick the AI systems into false results, so that fraud creeps past
them undetected. Such attacks underscore the importance of continuously tracking,
adapting, and reinforcing AI-based cybersecurity programs to keep pace with the
growth of non-standard attack methods.
These real-life examples show the need to deepen AI knowledge and mitigate the
resulting security threats. As more and more organisations leverage AI to build their
security barriers, such attacks are the clearest proof that the very technologies put
forward to strengthen security can become part of the adversary's arsenal against it.
Meeting these challenges demands an all-embracing and flexible policy that combines
ongoing research, collaboration, and innovation. The scariest part is the major role AI
itself plays in these adversarial attacks.
One of the most relevant cases concerns the use of AI to identify faces in public
spaces. State bodies and corporations deploy AI-powered verification systems in the
name of security, but in doing so a challenge emerges around the indiscriminate
collection of biometric data. In Hong Kong, for instance, citizens have reacted
strongly against the facial recognition technology administered by the authorities,
fearing it may violate their privacy. AI-based authentication is beneficial for data
security, but ethical questions arise over the balance between that purpose and respect
for privacy rights.
False positives, where legitimate activities are erroneously taken as signs of a
potential threat, can cause an organization operational inefficiencies or rude
awakenings. Banking customers may find their experience disrupted by AI-driven
fraud detection when honest transactions are unintentionally marked as illegitimate,
disturbing both customers and operations. Credit card transactions being declined
with no grounds for refusal whatsoever are a clear signal of over-tightening against
false positives, illustrating the difficulty of achieving high security while minimizing
these situations.
On the other side, defenders must also worry about false negatives, where real risks
stay undetected. This failure to catch critical cyber threats leads organizations to
another problem: data breaches. A case in point is malware deliberately crafted to
confuse computer-based antivirus programs. Cybercriminals cleverly manipulate the
placement of their program code to make it invisible to AI software that hunts for
malicious programs. In healthcare, an AI malware detection system that misses such
hazards could allow intrusion into critical systems, jeopardizing patient data and
ultimately bringing services down.
The aviation industry presents an additional striking example of how detrimental
false negatives can be. AI systems used for anomaly detection in aircraft engines may
fail to identify minute discrepancies in the data, missing the early signs of a
mechanical fault. The challenge is to develop AI algorithms that consistently reduce
false negatives without overloading operators with unwanted false positives - a fine
line between timely threat detection and operational productivity.
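The false-positive/false-negative trade-off discussed in the last few paragraphs can be made concrete by sweeping an alert threshold over anomaly scores. The scores and labels below are fabricated for illustration; the point is that moving the threshold trades one error type for the other.

```python
# Sketch of the FP/FN trade-off: one threshold, two kinds of mistakes.
# Scores and ground-truth labels are invented for illustration.
def error_rates(scores, labels, threshold):
    """labels: True = genuine threat. Returns (false_positives, false_negatives)."""
    fp = sum(1 for s, bad in zip(scores, labels) if s >= threshold and not bad)
    fn = sum(1 for s, bad in zip(scores, labels) if s < threshold and bad)
    return fp, fn

scores = [0.1, 0.3, 0.45, 0.6, 0.8, 0.9]
labels = [False, False, True, False, True, True]

print(error_rates(scores, labels, 0.4))  # (1, 0): loose threshold, a false alarm
print(error_rates(scores, labels, 0.7))  # (0, 1): strict threshold, a missed threat
```

No single threshold eliminates both columns at once, which is why tuning depends on the domain: aviation and healthcare weight a missed threat far more heavily than a spurious alert.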
Powerful real-world examples also give a glimpse of the shortage of professionals in
this discipline. AI-based tools have been adopted increasingly after major cyber
incidents such as the WannaCry ransomware attack of 2017 and the SolarWinds
supply-chain breach of 2020, and heavy investment has gone into finding experts
skilled in AI-focused threat detection and response. On the other side, few anticipated
where the demand for security professionals would come from, and the supply of
qualified professionals has proved quite insufficient given the constantly evolving
cyber threats.
The US
The US hosts the world's cutting-edge research centers and universities, which are
among the main forces in AI development. Institutions such as MIT, Stanford, and
Carnegie Mellon, which explore the engineering of new AI algorithms, machine
learning methods, and security tactics, are among those driving the AI revolution.
Collaboration between these academic institutions and the private sector increases the
nation's ability to harness AI in combating cyber attacks.
EU
China
China has been quick to capture a prominent role among players in the AI sphere, and
the development and use of AI in cybersecurity has been a particular focus of its
approach. The impressive levels of investment in AI research and development by the
Chinese government illustrate a strategic priority: harnessing the fast pace of
innovation to boost national security. In cybersecurity, China uses its advanced AI to
reinforce continuously growing defense systems, facilitating the fight against many
kinds of cyber attacks. This is evident in the development of AI-integrated security
tools for threat detection, incident response, and the protection of cyber initiatives.
The country's extensive AI investment is an integral part of its mission toward global
technological leadership, which in turn has led the public to question the security of
AI technologies from China as they become influential worldwide.
Russia
Russia has demonstrated cyber combat skills in realistic cases, most notoriously the
NotPetya ransomware of 2017, which damaged many points of the global digital
infrastructure. This cyberattack, attributed to Russian agents, showed how the nation
can employ AI-driven methods to harm cybersecurity. Such stunts have drawn
nationwide discussions on AI militarization and triggered international scrutiny of AI
usage in the cybersecurity field.
The UK
The UK has actively supported the use of AI technology in cyberspace through solid
projects and operational programmes. Considering the changing nature of the threat
environment, the UK government has heavily funded projects like the NCSC's Active
Cyber Defence programme, which intends to make UK cyberspace more secure from
external threats. The use of AI-driven technologies for identifying and eliminating
cyber threats highlights AI's real potential as an effective tool for cyber defenders.
Additionally, collaboration with international partners, such as joint cybersecurity
exercises and information sharing, represents the UK's responsible and
forward-looking approach to combating global cyber threats through this
technological advancement.
Institutions like the University of Oxford and Imperial College London have been
academic leaders in AI for cybersecurity. Through their contributions, they not only
support national initiatives but also set examples and build international best
practices. Additionally, the UK's participation in forums such as the Global
Partnership on Artificial Intelligence (GPAI) manifests its commitment to shaping
responsible AI use at a global level. By establishing ethical principles and regulatory
frameworks in cooperation with its partners, the UK ensures that AI-based
cybersecurity progress reflects worldwide standards, promoting collaboration and
security amid emerging cyber threats.
India
India's niche at the nexus of AI and cybersecurity is only gradually gaining
recognition, built on technological innovation, state-supported initiatives, and a
mushrooming tech ecosystem. India stands at the edge of a fast-emerging digital
landscape in which AI-assisted technology must be developed to keep its
cybersecurity assets safe. The Digital India programme spearheaded by the
government underscores the importance of technology in India's development plans;
a key objective is to capitalize on AI-driven solutions to strengthen the country's
cybersecurity. The most evident trends are the use of AI for threat intelligence,
anomaly detection, and incident response, ultimately resulting in a stronger cyber
infrastructure.
1988 - The first computer virus: The first computer virus, called the 'Morris Worm',
graced the internet. This first cyber threat sent ripples across the world, reminding
humanity that this newfound utopia also required vigilance and intelligence in its use,
and further highlighted the need for cybersecurity.

1997 - Deep Blue vs Garry Kasparov: Deep Blue, a chess AI, reigned victorious over
the world chess champion Garry Kasparov, further showcasing the potential for
success that AI boasts.

2017 - Equifax Data Breach: 147 million people had their personal data leaked, in
one of the largest public data breaches to date.

2020 - Present - The question of transparency: After breaches like the Yahoo breach,
and Google being exposed for not respecting user data privacy, people have started to
raise eyebrows at trusting artificial intelligence. The general public values its privacy,
and much of the consensus believes that privacy is not being respected when so much
of daily life must rely on the internet. Governments counter that monitoring certain
bits of data is crucial to managing populations today.
Previous Attempts to Solve the Issue
Utilising Machine Learning for Threat Detection
Machine learning has grown into something of a double-edged sword. While intended
to minimize the work and effort behind manipulating large amounts of data, it is also
being used to detect vulnerabilities in defense systems. One attempted solution for
detecting threats is, of course, machine learning itself: with its ability to detect
changes in patterns, it is a viable option. But this raises the question - which side uses
it better, attackers or cybersecurity software?
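As a sketch of the pattern-detection capability just described, the toy classifier below labels network activity by comparing it to the centroids of known benign and malicious examples. The features (requests per second, failed logins) and training points are invented; real systems learn from far richer telemetry.

```python
# Toy nearest-centroid classifier for network activity. Features and
# training examples are invented for illustration.
def centroid(points):
    return [sum(col) / len(col) for col in zip(*points)]

def classify(benign, malicious, sample):
    """Label a sample by whichever class centroid it sits closer to."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, sample))
    if dist(centroid(malicious)) < dist(centroid(benign)):
        return "malicious"
    return "benign"

benign    = [[10, 0], [12, 1], [9, 0]]     # low traffic, few failed logins
malicious = [[90, 8], [120, 12], [80, 9]]  # floods of traffic and failures

print(classify(benign, malicious, [100, 10]))  # malicious
print(classify(benign, malicious, [11, 0]))    # benign
```

The double-edged-sword point applies directly: an attacker who can probe such a model learns where its decision boundary lies and can shape traffic to stay just on the benign side of it.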
Additionally, military cyber commands are not only defenders but also combatants in
the cyber world, actively involved in the strategic use of cyberspace for offensive and
defensive cyber warfare. This includes articulating the ways and means of conducting
cyber warfare, such as disrupting networks, disabling adversary communications, and
exploiting cyber tools for military purposes. Entrusting these commands with the
authority to conduct offensive acts of war subtly indicates the need to address modern
wars and conflicts in which cyber operations act on a par with the more traditional
domains of warfare.
Possible Solutions
Red Team Exercises in the Military
Red team simulation is a strategic process used by military organizations as an initial
response to the need to test and improve robust, reliable cyber defense systems.
These simulated cyber-attacks are meticulously crafted to mirror real-life conditions,
giving military forces the opportunity to identify existing problems in the network
and to detect the points of entry an adversary might exploit. The objective is to
determine the actual capability of existing cybersecurity regulations, the incident
response matrix, and incident management mechanisms in a computerized
environment.
The insights acquired from red team activities are precisely what cybersecurity
strategies need in order to be modified and made efficient. Identifying the
vulnerabilities in their systems helps military organizations determine which
upgrades, updates, or enhancements are most critical. Beyond that, these tests
measure the efficiency of intrusion detection systems, incident response protocols,
and other factors that characterize the readiness of military cyberspace for similar
real-world events.
Cyberthreat Intelligence Sharing
Cyber threat intelligence sharing by military and intelligence agencies is a central
pillar in the general battle against the growing diversity of cyber threats. Given the
intricacy of cyber threats and the main weaknesses states experience, collective
action and information sharing are viewed as key to improving cybersecurity
defenses. Military bodies, intelligence agencies, and wider international military
alliances appear sufficiently strong to share cyber threat intelligence and unite against
cyber adversaries. A powerful collaborative method involves continuously extending
knowledge of the tactics, techniques, and procedures used by cyber threat actors.
This knowledge is distilled into indicators of compromise (IOCs), which facilitate the
identification of malware signatures and provide insights into emerging cyber
threats. By consolidating resources and experience, armed forces can grow their
collective observational ability into a holistic vision of the danger picture.
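In code, the IOC matching described above can be as simple as comparing observed file hashes against a shared feed. The "feed" here is fabricated from placeholder strings; real IOCs come from intelligence-sharing platforms and cover hashes, domains, and IP addresses.

```python
# Sketch of IOC (indicator of compromise) matching: locally observed file
# hashes are checked against a shared feed. All hashes here are fabricated
# placeholders, not real malware signatures.
import hashlib

shared_iocs = {
    hashlib.sha256(b"known-bad-payload").hexdigest(),
    hashlib.sha256(b"another-bad-payload").hexdigest(),
}

def scan(files):
    """Return names of files whose contents hash to a shared IOC."""
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() in shared_iocs]

observed = {
    "report.pdf": b"quarterly figures",
    "update.exe": b"known-bad-payload",
}
print(scan(observed))  # ['update.exe']
```

The value of sharing is visible in the structure: a hash contributed by one participant immediately protects every other participant who consumes the feed.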
It must be acknowledged that, in reality, this idea is still far from being an ordinary
and universally accepted practice. The practical side of total data ownership is not
simple: terms of service agreements and policies are complicated to read and
understand, while loose data-sharing practices are commonplace on the internet.
Users are certainly being told to reckon with their privacy and exercise control over
their data, ideologically speaking. Nevertheless, the challenges of applying this
concept, such as the "data avalanche", the data-sharing network, and the evolution of
technology, remain. In addition, realizing individual data ownership cannot rest on
user awareness alone; tech companies, policymakers, and regulators must work in
concert to anchor data ownership in the principles of data sovereignty.
In essence, this notion urges the general public to take charge of their own data,
reducing reliance on internet services they do not even understand.
Cybersecurity Education
The heightened attention on cybersecurity training is a critical response to a dynamic
digital landscape that is dangerous in diverse ways. Such training courses are
designed to give people the understanding and techniques required to deal with cyber
hazards, from common public threats to those associated with AI-driven modern
technology.
Furthermore, cybersecurity training extends to the major and rising threats of
phishing attacks and social engineering techniques, which occur everywhere. People
are coached to detect the biggest warning signs of hacking attempts in emails,
messages, and deceptive websites, among others. Awareness of social engineering
manipulation, which uses psychological approaches to lead users astray, stands as an
integral part of this training. Attention to all these tricks not only creates a shield
against manipulation but also protects personal information from falling into the
wrong hands.
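The warning signs such training teaches can be expressed as simple heuristics. The sketch below checks an email for a few classic phishing signals; the signal list is illustrative, not a complete or production-grade detector.

```python
# Illustrative phishing-signal checker: a few classic warning signs
# expressed as regular expressions. The list is deliberately incomplete.
import re

SIGNALS = [
    ("urgent language", re.compile(r"urgent|immediately|account suspended", re.I)),
    ("credential lure", re.compile(r"verify your password|confirm your login", re.I)),
    ("raw IP link",     re.compile(r"https?://\d+\.\d+\.\d+\.\d+")),
]

def phishing_signals(email_text):
    """Return the names of the warning signs present in the message."""
    return [name for name, pattern in SIGNALS if pattern.search(email_text)]

msg = ("URGENT: your account suspended. "
       "Verify your password at http://192.168.4.7/login now.")
print(phishing_signals(msg))
# ['urgent language', 'credential lure', 'raw IP link']
```

Each rule mirrors a lesson from the training: pressure to act immediately, requests for credentials, and links that do not point at a recognizable domain.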