
Forum: First General Assembly (GA1)

Issue: Evaluating the Applications of Artificial Intelligence in Cyber Security
Student Officer: Aditya Banerjee
Position: Head Chair
_________________________________________________________________

Introduction
Artificial Intelligence has been the craze of the crowd, especially for the last few years
- and understandably so. The human race has always taken steps towards discoveries
and inventions that make our work and lives easier, and Artificial Intelligence (AI) is
the epitome of convenience. Machinery prior to artificial intelligence was limited in
that it relied heavily on human coordination and input: it could perform only the
actions pre-coded by humans, and nothing beyond that. Artificial Intelligence, as the
name suggests, is intelligent - meaning it can learn from experience, adapt to changes
in its environment, and make decisions and take actions in response to changing
circumstances.

Of course, with all the convenience of technology come some unfortunate and
dangerous risks. Most of these risks, if not all, concern cybersecurity. Cybersecurity is
an umbrella term for the practice of safeguarding computers, networks, and digital
data against cyber attacks. It protects users not only from attempts to access their
assets but also from cases of theft and destruction. Cybersecurity rests on one key
principle - data in a digital environment must be secured with respect to its
confidentiality (no unauthorized access), integrity (no unauthorized changes), and
availability (no undue limitation on access). In practice, encryption and access
controls are applied to keep data confidential, while monitoring methods are deployed
to ensure its integrity.

The cybersecurity landscape grows more dynamic by the day, and the application of
Artificial Intelligence has sparked vivid discussion, along with a series of real
incidents that force us to reconsider the prospects of cybersecurity. There are now
situations in which machine learning systems dodge even the smartest cyber threats,
and do so automatically. It would be dangerous to let machines learn how to do
everything in the cybersecurity field, especially when many of the potential risks
carry catastrophic consequences. Meanwhile, the rise of AI-driven malware and
sophisticated attacks raises questions about the ethics and unspoken risks of
releasing intelligent programs into this field. As each new stage of cybersecurity
confronts AI, it is important to understand what role this technology plays in the
cat-and-mouse game between defenders and attackers. Is it time for AI to take control
of the cybersecurity environment, or do human capacities remain vital in filtering
and sorting cyber threats?

Definition of Key Terms


Artificial Intelligence (AI) and Machine Learning
AI simulates human brain-like abilities, processing information with a rigour similar
to our own, which allows machines to execute tasks such as problem-solving and
decision-making. In cybersecurity, AI provides the adaptability needed for threat
detection, incident response, and resilience against constantly evolving digital
dangers. Machine learning, a subset of AI, is a way of making systems learn,
understand, and improve automatically from the data they are trained on, without
direct instruction. These characteristics of machine learning - pattern recognition
and data analysis, for instance - are essential for cybersecurity: through continuous
adaptation to new and emerging threats, machine learning helps build more reliable
threat detection and defense systems.
Cybersecurity
Cybersecurity is the safeguarding of digital spheres, its most important tasks being
the blocking of unlawful entry and malicious attacks. In an ever-changing threat
landscape, the role of AI cannot be overemphasized: features such as proactive and
predictive analysis, automation, and robust defense are becoming crucial in ensuring
the confidentiality and integrity of digital data.

Threat Detection
Threat detection is the proactive practice of spotting hazardous intruders or unusual
patterns in a system that are not part of its general flow. AI algorithms perform the
most important function of this intelligent monitoring: analyzing big data to
recognize anomalies and possible security breaches in real time, tightening digital
immunity like no other method.
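As a minimal sketch of the anomaly-detection idea, assuming a simple statistical model (a z-score over invented hourly failed-login counts; real AI-driven detectors learn far richer baselines, but the principle of comparing live activity against an expected pattern is the same):

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return values lying more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    # The `sigma and` guard skips division when the baseline is perfectly flat.
    return [x for x in counts if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical hourly failed-login counts; the spike at 95 breaks the baseline.
logins = [4, 6, 5, 7, 5, 6, 4, 95]
print(flag_anomalies(logins))  # → [95]
```

The key property is that nothing here is hard-coded about attacks: anything sufficiently far from normal behaviour is surfaced for review.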

Incident Response
Incident response in cybersecurity means taking swift and organized action in
dealing with an attack on computer and network security. AI is becoming a routine
feature of incident handling schemes: the technology is used for rapid evaluation of
the situation and application of mitigation measures, so that damage is minimized in
record time.

Zero-Day Vulnerabilities
A zero-day vulnerability is a software flaw unknown to the vendor and therefore
unpatched, leaving attackers free to exploit it before a fix exists. Software is most
exposed to such flaws right after release - prior to widespread testing - or right after
a major update, before a patch arrives.

Biometric Authentication
Biometric authentication, enhanced by AI, relies on the peculiar physiological or
behavioral attributes of a user to verify their identity. Biometric technology
strengthens security because it is more sophisticated and user-friendly than the
conventional use of passwords: it grants access based on features specific to a
person and can therefore detect an unauthorized user easily, minimizing in the long
run the risks of traditional password-based authentication.

Blockchain Security
AI helps make blockchains more secure against cyber assaults by identifying and
terminating suspicious actions within decentralized networks. Through continuous
observation and processing, AI raises the protection level of blockchain systems,
where the encryption and integrity of the transactions recorded in the distributed
ledger are of the highest interest.

Key Issues
Adversarial Attacks and Exploitation of Algorithms
The risks of integrating AI into cybersecurity have been demonstrated by adversarial
attacks observed in the wild. For instance, email systems have been compromised
through creative abuse of the very filters meant to identify malicious messages:
hackers probed for weak spots in the natural language processing algorithms behind
the filter until malicious senders were misclassified as regular correspondents, and
vice versa. This deliberate distortion matters greatly, because many phishing scams
make it through AI-based defenses precisely because they do not look suspicious to
the AI, yet remain damaging to organizations and individuals.

To shed further light on the issue, consider visual attacks against image recognition
systems. In one line of research, investigators showed that by changing just a small
set of pixels in an image, they could make an AI algorithm mistake a stop sign for a
yield sign. This demonstration shows the potential impact of adversarial attacks on
self-driving cars, whose AI depends on image recognition, and signals the need for
stronger countermeasures to protect critical systems against deliberate attacks.
(refer to False Positives & False Negatives for further development of these ideas)
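The pixel-perturbation result can be reproduced in miniature. The sketch below is a hypothetical toy, not any real road-sign system: a linear classifier whose decision flips after an FGSM-style step (a small nudge of every input against the score), even though no individual "pixel" changes by more than 0.1:

```python
import random

random.seed(0)
w = [random.gauss(0, 1) for _ in range(64)]   # weights of a toy linear "sign classifier"

def sign(v):
    return 1.0 if v > 0 else -1.0

x = [0.05 * sign(wi) for wi in w]             # clean input, weakly classified as "stop"

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

def classify(v):
    return "stop" if score(v) > 0 else "yield"

eps = 0.1                                     # per-pixel perturbation budget
# Step each pixel against the score, in the worst-case direction for the classifier.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x))      # → stop
print(classify(x_adv))  # → yield, yet every pixel moved by at most 0.1
```

Real attacks on deep image classifiers follow the same logic with gradients instead of explicit weights, which is why imperceptible changes can flip a prediction.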

The financial sector has not escaped these negative impacts either, despite the
advanced AI and machine learning fraud detection systems deployed there.
Adversaries take advantage of system vulnerabilities by studying the detection model
and then injecting carefully masked dummy data that mimics legitimate transactions.
The very purpose of these manipulations is to trick the AI system so that fraud
creeps past it unnoticed. Such attacks underline the importance of continuously
tracking, adapting, and reinforcing AI-based cybersecurity programs to keep pace
with the growth of non-standard attack approaches.

These real-life examples show the need to deepen our understanding of AI and
mitigate the security threats that follow from it. As more and more organisations
leverage AI to build their security barriers, such attacks illustrate that the very
technologies put forward for strengthening security can become part of the
adversary's arsenal against it. Meeting these challenges demands an all-embracing
and flexible policy that combines ongoing research, collaboration, and innovation.
The most unsettling part is the major role AI itself plays in these adversarial attacks.

Data Privacy and Ethical Concerns


Data privacy becomes a key element of strong cybersecurity strategies just as AI
becomes part of them. The nexus between AI and data privacy raises several ethical
dilemmas, and practical experience forces us to consider the difficulties and dangers
of current AI-enabled cyber protection measures whose very purpose is to safeguard
individual privacy.

One of the most relevant cases concerns the use of AI to identify faces in public
spaces. State bodies and corporations deploy AI-powered recognition systems in the
name of security, but in doing so a challenge emerges around the indiscriminate
collection of biometric data. In Hong Kong, for instance, citizens have reacted
strongly against the facial recognition technology administered by the authorities,
fearing that it may violate their privacy. AI-based authentication is beneficial for
data security, but ethical issues arise over the balance between that purpose and
respect for privacy rights.

AI often bypasses human-centred threat intelligence and analysis, which may in turn
inadvertently lead to the creation of biased algorithms. The historical data used in AI
training may reflect biases which the algorithms then learn and proliferate,
compounding the unfairness. As a real-life illustration, self-learning cybersecurity
tools have been shown to profile individuals based on characteristics such as
ethnicity or socio-economic background. This raises ethical questions about the
possibility of AI discriminating against people in the field of cybersecurity, and
underlines the need to pay attention to bias so that different cases can reach fair and
just outcomes.

These two scenarios give readers a basic understanding of the ethical issues involved
in using AI in cybersecurity. On the one hand, AI should be embraced in order to
maintain security and unlock better security options. At the same time, however, AI
functions should be designed so that personal privacy is not violated, which would
rebuild public confidence in AI practices. As the field continues its growth, industry
players and decision makers, including cybersecurity experts, should coordinate to
examine the ethical issues and develop standards that value both the security of
information and the preservation of privacy in a cyber world driven by artificial
intelligence. (further research: read about AI gender discrimination within large
databases, hospitals in the UK for example)

False Positives & False Negatives


The integration of Artificial Intelligence (AI) into cybersecurity is becoming popular
practice, but it has brought disadvantages along with its advantages. Among these,
the occurrence of false positives and false negatives is a critical matter. Practical
cases show the degree to which these problems undermine the capability of AI
systems in cybersecurity, and how AI systems remain constrained by them.

With false positives, where reputable activities are erroneously taken for signs of a
potential threat, an organization can suffer operational inefficiency or rude
awakenings. Banking built on AI-driven fraud detection can be disrupted by wrong
alarms when honest transactions are unintentionally flagged as illegitimate,
disturbing both customers and operations. Instances of credit card transactions being
declined with no grounds whatsoever for the refusal are one of the clearest signals of
over-tightening against false positives, illustrating the difficulty of achieving high
security while minimizing such situations.

On the other side, defenders must worry about false negatives, which mean real risks
stay undetected. When critical cyber threats facing an organization go unnoticed,
another problem follows: the data breach. A case in point is malware deliberately
crafted to confuse AI-based antivirus programs. Cybercriminals manipulate their
program code cleverly enough to make it invisible to the AI software searching for
malicious programs. In healthcare, for example, AI malware detection systems that
miss these hazards can end up allowing intrusions into critical systems, jeopardizing
patient data and bringing down services.

The aviation industry presents an additional striking example of how detrimental the
consequences of false negatives can be. AI systems used for anomaly detection in
aircraft engines may fail to identify minute discrepancies in the data, missing the
early signs of a mechanical fault. The challenge, then, is to develop AI algorithms
that consistently reduce false negatives without overloading operators with unwanted
false positives - finding the fine line between timely threat detection and operational
productivity.
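The trade-off described above can be made concrete with a small sketch. The risk scores, labels, and thresholds below are invented for illustration: raising the alert threshold trades false positives for false negatives, and no single setting removes both.

```python
def confusion(scores, labels, threshold):
    """Count false positives and false negatives at a given alert threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical risk scores from a fraud model; label 1 marks actual fraud.
scores = [0.10, 0.20, 0.35, 0.40, 0.60, 0.70, 0.80, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    1]

for t in (0.3, 0.5, 0.9):
    fp, fn = confusion(scores, labels, t)
    print(f"threshold {t}: {fp} false positives, {fn} false negatives")
```

At a low threshold the system cries wolf (2 false positives, 0 missed frauds); at a high one it stays quiet but misses 3 real frauds - exactly the balancing act the section describes.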

Lack of Skilled Workforce


AI has become the main driving factor in the development of cybersecurity,
underlining the urgent need for a professional workforce with the skill set required
to overcome the challenges of this digital domain. The lack of professionals with
expertise in both AI and cybersecurity at the same time is one of the most important
problems companies face today when trying to use AI-powered cybersecurity tools to
the fullest.

Real-world examples give a glimpse of the shortage of professionals in this
discipline. After major cyber incidents such as the WannaCry ransomware attack in
2017 and the SolarWinds supply-chain breach in 2020, AI-based tools were adopted
increasingly and heavy investment went into finding experts in AI-focused threat
detection and response. Yet few had anticipated where this demand for security
professionals would come from, and the supply of qualified specialists proved quite
insufficient given the constantly evolving cyber threats.

Furthermore, the introduction of AI in cybersecurity involves an overlapping skillset
that spans data science, machine learning, ethical hacking, and situational awareness
of cybersecurity standards. The absence of personnel with this particular combination
of skills is the main challenge preventing companies from fully realizing the security
benefits of their AI.

Major Parties Involved and Their Views


USA
The US boasts not only one of the largest markets in the world but also some of the
most state-of-the-art technology companies, for example Google, Microsoft and
Amazon. These companies not only take on challenges in the research and
development of AI but also expand the sphere of its application in cybersecurity. By
means of huge funds and joint work with academia, these tech leaders build the
AI-based applications used in different security operations, such as threat detection
and incident response.

The US also hosts the world's cutting-edge research centers and universities, which
are main forces in AI development. Institutes such as MIT, Stanford, and Carnegie
Mellon explore the engineering of new AI algorithms, machine learning methods,
and security tactics, driving the AI revolution. The mixing of these academic
institutions and the private sector increases the national ability to deploy AI in
combating cyber attacks.

In the governmental sphere, the U.S. government has taken a leading role in shaping
everything to do with AI in relation to cybersecurity policies and strategies. National
security agencies, especially the Department of Defense and the Department of
Homeland Security, debate the cybersecurity consequences and work toward
regulatory frameworks for the appropriate use of AI in applications that strengthen
cybersecurity. One of the key missions of US Cyber Command is the defense of US
military networks, a task it carries out by using AI to spot and respond to incoming
cyber threats in real time.

EU

The European Union (EU)'s approach to AI can be framed as proactive, especially if
we take the example of the GDPR as its flagship regulation. The GDPR not only
affects the use of AI in securing cyberspace; its real-world consequences are also
visible in well-known cases. For example, cyber attacks on big tech companies that
later uncovered data leaks led to fines under GDPR provisions, forcing a
re-examination of cybersecurity priorities. The EU's strict data protection framework
not only safeguards personal privacy but also prompts organizations to adopt AI
systems that harmonize with GDPR principles, framing in a concrete way the effect
of European regimes on the cybersecurity domain.

Beyond that, the EU's cohesiveness in regulating AI is highly noticeable in
movements like the European AI Alliance, in which different stakeholders work
together to shape regulations. This is not limited to local operations but extends to
the international level, where the Global Partnership on Artificial Intelligence (GPAI)
becomes part of the effort. Through its commitment to these programs, the EU
assists in setting out internationally accepted norms and standards for AI used in
cybersecurity. The EU's involvement in GPAI amounts to a pledge of good ethics,
underlining the transparency and accountability of AI in cybersecurity at a global
level. These cases demonstrate the EU's pursuit of an approach that is practical in
both the domestic and global spheres of AI cybersecurity - a mixture that creates the
ethical and regulatory environment for AI in the field.

China
China has been very quick to capture a prominent role among players in the AI
sphere, and the development and use of AI in cybersecurity has been a particular
focus of its approach. The impressive levels of investment in AI research and
development by the Chinese government illustrate a strategic priority: harnessing the
fast pace of innovation to boost national security. As far as cybersecurity is
concerned, China makes use of advanced AI to reinforce defense systems that are
continuously growing, facilitating the fight against different kinds of cyber attacks.
This is evident in the development of AI-integrated security tools for threat
detection, incident response and the protection of cyber initiatives. The country's
extensive AI investment is integral to its mission of global technological leadership,
which in turn has made the public question the security of AI technologies from
China as they become prominent worldwide.

However, China's AI technology in cybersecurity not only offers an opportunity to
monitor internet use and combat cyber crime; it also raises issues concerning
citizens' rights to data security and privacy. AI systems deployed for cybersecurity
may be tied to trends of surveillance that pose complicated questions about the
inviolability and extent of the individual's rights. With the widespread
implementation of AI-enabled surveillance systems, both nationally and globally, the
ethics of such technology and the probability of rights infringements have been the
topic of many ongoing discussions. As the global deployment of Chinese AI grows,
these considerations only escalate, and it is time for the world to begin discussing
international collaboration and standards clear enough to ensure the responsible use
of AI in cybersecurity matters.

Russia
Russia has manifested formidable cyber combat skills in real cases, most notoriously
the NotPetya malware of 2017 that damaged many points of the global digital
infrastructure. This cyberattack, widely attributed to Russian agents, showed how a
nation can employ AI-driven methods to harm cybersecurity. Such operations have
drawn worldwide discussion of AI militarization and triggered international scrutiny
of AI usage in the cybersecurity field.

Russia's AI-based cybersecurity strategy is not only a question of technology; it also
ties in political issues. According to the country's strategic plans, AI is being
integrated into national security planning, accentuating the use of technology to
make cyber defenses more efficient. This military use of AI capabilities feeds a wider
global debate on the ethical principles and risks of armament and the automation of
warfare. Russia's conduct and investment in AI for cybersecurity shapes the question
of which nation will master the advantage in global security technology - a concern
that will persist throughout the process of balancing technological development,
security requirements, and ethical issues.

The UK
The UK has actively supported the use of AI technology in cyberspace through solid
projects and operational measures. Considering the changing nature of the threat
environment, the UK government has heavily funded projects like the NCSC's Active
Cyber Defence programme, which intends to make UK cyberspace more secure from
external interference. The use of AI-driven technologies for identifying and
eliminating cyber threats has highlighted the real potential of AI as an effective tool
for cyber defenders. Additionally, collaboration with international partners, such as
joint cybersecurity exercises and information sharing, represents the UK's
responsible and forward-looking approach to combating global cyber threats through
this technological advancement.

Institutions like the University of Oxford and Imperial College London have been
academic leaders in AI for cybersecurity. Through their contributions they not only
support the development of national initiatives but also set examples and build
international best practices. Additionally, the UK's participation in forums such as
the Global Partnership on Artificial Intelligence (GPAI) manifests its commitment to
shaping responsible AI use at a global level. By establishing ethical principles and
regulatory frameworks in cooperation with its partners, the UK makes sure that
AI-based cybersecurity progress reflects worldwide standards, promoting
collaboration and security amid emerging cyber threats.
India
India's niche at the nexus of AI and cybersecurity is only gradually getting
recognized, built on the integration of technological innovation, state-supported
initiatives and a mushrooming tech ecosystem. India stands at the edge of a rapidly
emerging digital landscape in which AI-aided technology must be developed to keep
its cybersecurity assets safe. The Digital India programme spearheaded by the
government underlines the importance of technology in India's development plans,
with the key objective of capitalizing on AI-driven solutions to strengthen the
country's cybersecurity. The most evident trends are the use of AI for threat
intelligence, anomaly detection, and incident response, ultimately resulting in a
stronger cyber infrastructure.

Several Indian startups and research institutions are working hard at developing and
implementing AI technologies for cybersecurity. For example, firms like Sequretek
and Lucideus have centered their business solutions on AI-based threat detection
and vulnerability assessment, demonstrating the indigenous effort put into ensuring
the cyber resilience of the nation. Moreover, the collaborative efforts of government,
academia and other stakeholders will facilitate innovation in cyberspace and assure
AI-enhanced security. India's active engagement in international consortiums like
the Global Cybersecurity Capacity Centre hints at its intention to join like-minded
groups in developing responsible AI use in the light of cybersecurity. As India walks
with the changes in cyberspace, a country with AI technology assimilated into its
cybersecurity is bound to be a significant player in international cyber threat
response.
Development of Issue/Timeline

Date Event Outcome

1956 - The Birth of "AI": Around one hundred researchers and scholars met at
Dartmouth for a summer program, during which they coined the term 'Artificial
Intelligence' for the first time. This is widely accepted as the birth of AI, even
though it was nothing but hypothetical at first.

1969 - The first AI program: SHRDLU, an early language model, is developed by
Terry Winograd. It is widely believed that this model, even though not complete
enough to be properly called AI, was a source of inspiration for future developments.
It can be called the first step through the doors of invention.

1988 - The first major internet worm: The Morris Worm spreads across the internet.
This early cyber threat sent ripples across the world, reminding humanity that this
newfound utopia also required vigilance and intelligence in its use, and further
highlighted the need for cybersecurity.

1997 - Deep Blue vs Garry Kasparov: Deep Blue, a chess AI, reigns victorious over
world chess champion Garry Kasparov, further showcasing the potential for success
that AI boasts.

2010 - Use of AI in warfare: Stuxnet, a computer worm, targets Iran's nuclear
facilities. This revealed the potential for intelligent software to be used in warfare.

2016 - AlphaGo beats Lee Sedol: AlphaGo, a model developed by DeepMind, defeats
world champion Go player Lee Sedol, once again asserting itself over humanity.

2017 - Equifax data breach: 147 million people have their personal data leaked,
marking one of the largest publicly disclosed data breaches to date. (read more on
this)

2018 - GDPR comes into play: The General Data Protection Regulation (GDPR)
comes into effect in the European Union, influencing global data protection
standards and impacting AI applications in cybersecurity.

2019 - AI deepfakes: AI deepfakes become sophisticated enough for people to start
protesting for laws and regulations on the ethical use of this technology. To the
present day, a proper legal structure has not been implemented for it.

2020 - SolarWinds cyberattack: One of the first major publicly announced
supply-chain attacks, compromising software used across multiple organisations and
stealing data. This sent ripples across society, as it highlighted the lack of sufficient
cybersecurity.

2021 - Colonial Pipeline ransomware attack: The Colonial Pipeline ransomware
attack disrupts fuel supplies on the U.S. East Coast, highlighting the real-world
impact of cyber threats and the need for resilient AI-driven cybersecurity measures.

2022 - Quantum computing threats: Quantum computing is a step forward in
efficient computation, using quantum mechanics to cut down drastically on the work
required for certain problems. Its relevance to cybersecurity comes with the
question: will it be used more effectively for cybercrime attacks or for reinforcing
defenses?

2020-Present - The question of transparency: After breaches like the Yahoo breach,
and Google being exposed for not respecting user data privacy, people have started
to raise eyebrows at trusting artificial intelligence. The general public values its
privacy, and much of the consensus believes that privacy is not being respected
when so much of daily life must rely on the internet. Governments counter that
monitoring certain data is crucial to managing populations today.
Previous Attempts to Solve the Issue
Utilising Machine Learning for Threat Detection
Machine learning has grown into something of a double-edged sword. While intended
to minimise the work and effort behind manipulating large amounts of data, it is
also being used to find vulnerabilities in defense systems. One attempted solution
for detecting threats to those vulnerabilities is, of course, machine learning itself:
with its capability to detect changes in patterns, it is a viable option. But this
raises the question of which side uses it better - attackers, or cybersecurity
software?
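As a simple illustration of the pattern-change idea, the sketch below flags a spike in activity against a historical baseline. It is a minimal stand-in using only Python's standard library and hypothetical failed-login counts; real threat-detection systems train machine learning models on far richer features.

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation that deviates sharply from the baseline.

    A z-score check standing in for the 'change in patterns'
    detection a trained ML model would perform.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Hypothetical hourly failed-login counts for one server
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]
print(is_anomalous(baseline, 6))    # a normal hour -> False
print(is_anomalous(baseline, 48))   # possible brute force -> True
```

The same logic serves either side of the question above: a defender uses it to spot an attack in progress, while an attacker can probe for the thresholds at which it fires.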

Automated Incident Response Systems


With anything and everything being automated these days, response systems are no
exception - especially responses to threats. When a threat is detected, it is dealt
with automatically, if it falls within the capabilities of the AI. This creates potential
for false positives and blind spots: cyberattacks can be carried out strategically,
targeting the weaknesses of the AI and creeping past the false sense of security the
owner of the data may be under. This can leave the owner of the data poorly prepared,
causing mass damage and harm - the Yahoo attacks are one piece of evidence. Nevertheless,
automated incident response systems have grown into an integral part of response
pipelines and continue to be used today, as they help prevent the more minor
cybercrime attempts.
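The trade-off described above - acting automatically while leaving room for human judgment - can be sketched as a simple triage rule. The thresholds and alert fields here are illustrative assumptions, not taken from any real product:

```python
def respond(alert):
    """Auto-respond to an alert; hand uncertain cases to a human.

    The thresholds encode the false-positive / blind-spot trade-off:
    act only when confident, escalate when unsure.
    """
    if alert["confidence"] >= 0.9:
        return f"blocked {alert['source']}"    # act automatically
    if alert["confidence"] >= 0.5:
        return f"escalated {alert['source']}"  # human review
    return "logged only"                       # likely false positive

print(respond({"source": "10.0.0.7", "confidence": 0.95}))  # -> blocked 10.0.0.7
print(respond({"source": "10.0.0.8", "confidence": 0.6}))   # -> escalated 10.0.0.8
```

A strategic attacker, as the paragraph notes, aims to keep each step of an intrusion below the top threshold so that no automatic block ever fires.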

User and Entity Behavior Analytics (UEBA)


User and Entity Behavior Analytics (UEBA) is a potent cybersecurity approach that
studies the behavior of users and groups of users on a network to reinforce an
organisation's defenses. It rests on the idea that any activity deviating from the
standard pattern of operation may indicate an intrusion into critical systems.
UEBA systems apply advanced analytics and machine learning to set baseline
behaviors for each individual user and device - authentication times, data-access
patterns, regular activities, and so on. This baseline represents normal activity
and is used to flag abnormal actions that may indicate malicious intent.

One of the considerable features of UEBA is anomaly detection. Built on machine
learning algorithms, UEBA systems learn continuously and automatically adapt to
recurring changes in normal behavior. When an anomaly is discovered - for example,
unusual data access, or activity outside regular hours - the system triggers alerts,
which immediately kick off a process of investigation and response. By attaching
risk scores to detected threat indicators, the algorithm produces a quantitative
assessment of each threat it has found. Higher risk scores mean a security threat
is more likely, letting security analysts prioritize their response efforts
towards the greatest threats first.
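The baseline-plus-risk-score pipeline described above can be sketched in a few lines. The user names, hosts, and score weights below are hypothetical, chosen only to show the mechanism; a production UEBA system would learn its baselines from historical telemetry rather than hard-code them:

```python
from datetime import datetime

# Hypothetical per-user baseline, as if learned from past activity
BASELINES = {
    "alice": {"usual_hours": range(8, 18), "usual_hosts": {"hr-db", "mail"}},
}

def risk_score(user, host, when):
    """Score an access event against the user's baseline (0 = normal)."""
    base = BASELINES.get(user)
    if base is None:
        return 100  # unknown entity: maximum risk
    score = 0
    if when.hour not in base["usual_hours"]:
        score += 40  # out-of-hours access
    if host not in base["usual_hosts"]:
        score += 50  # never-before-seen resource
    return score

# Routine working-hours access vs. a 3 a.m. pull from an unfamiliar server
print(risk_score("alice", "hr-db", datetime(2024, 5, 2, 10)))  # -> 0
print(risk_score("alice", "fin-db", datetime(2024, 5, 2, 3)))  # -> 90
```

Sorting alerts by such a score is what lets analysts work the queue from the most likely intrusion downwards.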

Establishment of Military Cyber Commands


Military cyber commands have become a fixture of the modern era of warfare, which
clearly shows that cyberspace is now a vital domain of military operations. A good
number of countries have confronted the multi-dimensionality of security threats by
appointing special organs responsible for cybersecurity
operations within the military. These commands hold a range of responsibilities,
the main one being to keep digital warfare away from military networks
through stronger cybersecurity. In practice, this comes
down to implementing tough, comprehensive protective measures such as securing data
and communication systems and safeguarding critical infrastructure against
intruders.

Additionally, military cyber commands are not only defenders but also
combatants in the cyber world. They are actively involved in the strategic use of
cyberspace for both offensive and defensive cyber warfare. This includes developing
the ways and means of conducting cyber warfare, such as disrupting networks,
disabling adversary communications and exploiting cyber tools for
military purposes. The fact that the authority to command offensive acts of war is
entrusted to these commands indicates the need to address modern wars
and conflicts in which cyber operations act on a par with the more traditional
domains of warfare.

Possible Solutions
Red Team Exercises in the Military
Red team simulation is a strategic process used by military organizations as
an initial answer to the need to test and improve their cyber defense systems
so that those systems are robust and reliable. These imitated cyber-attacks are
crafted meticulously to mirror real-life conditions, giving military forces the
opportunity to identify existing problems in the network and to detect
the points of entry an adversary might exploit. The objective is to determine
the actual capability of the existing cybersecurity regulations, incident response matrix
and incident management mechanisms in a computerized environment.

These exercises rely on ethical hacking and penetration-testing approaches


carried out by a so-called "red team": a group of focused security
experts specializing in the opponent's tricks. The exercises are intended to
be wide-reaching, covering the various elements of the military's digital
infrastructure, such as the networks, the systems, the applications and the
communication channels. By mimicking the ways rivals gain access and
advantage, red teams can offer the best possible assessment of the defense's
strengths and shortcomings in the cybersecurity department.

The insights acquired from red team activities are exactly what
cybersecurity strategies need in order to be modified and made efficient.
Identifying the vulnerabilities in their systems helps military organizations
determine which upgrades, updates or enhancements are the most critical.
Beyond that, these tests measure the efficiency of intrusion
detection systems, incident response protocols and the other factors that
characterize the readiness of military cyberspace for such events.
Cyberthreat Intelligence Sharing
Cyber Threat Intelligence (CTI) sharing is a central pillar of the fight by military
and intelligence agencies against the growing diversity of cyber threats. Given the
intricacy of cyber threats and the main weaknesses that states experience, collective
action and information sharing are viewed as the path to improved cybersecurity
defenses. Military bodies, intelligence agencies, and wider international military
alliances are well placed to share cyber threat intelligence, and
can unite to face cyber adversaries. A powerful collaborative method is the
continual deepening of knowledge about the tactics, techniques, and procedures
used by cyber threat actors. This knowledge is manifested in indicators of
compromise (IOCs), which facilitate the identification of malware signatures and
provide insight into emerging cyber threats. Through the consolidation of
resources and experience, armed forces can grow their observational ability as a whole,
gaining a holistic vision of the threat landscape.

Collectively, effective intelligence sharing also improves cyberattack attribution,


as it enables locating the source of malicious activities and the motive behind them.
This is important for diplomatic and strategic decision-making, as it
enables nations to take the right actions concerning cyber threats within the
bounds of international law. Community participation and the sharing of best
practice in CTI reinforce a sense of common responsibility as well as
the resilience of military organizations. Established sharing platforms and
protocols allow participating entities to exchange real-time insights while giving
proper consideration to the handling of
classified information. It is this collaborative action in particular that benefits
individual countries while also making a positive contribution to the overall security of
the international cyberspace ecosystem.

However, it is not to be overlooked that a lot of cybercrime is actually state-sponsored,


carried out strategically to boost military efficiency.
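At its simplest, sharing IOCs amounts to exchanging fingerprints of known-malicious artifacts that every participant can then check locally. The sketch below illustrates the mechanism with made-up file contents standing in for a real shared feed of SHA-256 hashes:

```python
import hashlib

# Hypothetical shared IOC feed: SHA-256 digests of known-malicious files
SHARED_IOCS = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
    hashlib.sha256(b"malicious payload v2").hexdigest(),
}

def check_file(contents: bytes) -> bool:
    """Return True if the file matches a shared indicator of compromise."""
    return hashlib.sha256(contents).hexdigest() in SHARED_IOCS

print(check_file(b"quarterly report"))      # benign file -> False
print(check_file(b"malicious payload v1"))  # known IOC -> True
```

Because only hashes are exchanged, participants can flag the same malware without revealing the classified context in which they first observed it.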
The Notion of ‘Individual Data Ownership’
Ownership of personal data by individuals is arguably the biggest disruption in the
discussion of digital privacy. It argues that it is about time for users to step up,
learn the implications of their data, and decide how to use it if there is ever a
need to let it go. With the pervasiveness of AI tools, the people generating data are
continuously prompted to understand their data's life cycle, from collection
through to storage and utilization by various AI applications.

Underlining the principle of data ownership, this philosophy concentrates on the


increased personal accountability of the user: the opportunity each person has to
scan, oversee and manage the way AI algorithms utilize his or her personal data.
This calls for transparency in data practices, the elevation of
informed consent, and the responsibility of users to make informed decisions
about sharing their data, based on in-depth knowledge of the positive and
negative impacts of doing so.

It must be acknowledged that, in reality, this idea is still far from an ordinary
and universally accepted practice. The practical side of total data
ownership is not so simple, owing to the complicated process of reading and
understanding terms-of-service agreements and policies, and the frictionless
data-sharing practices that are commonplace on the Internet. Ideologically
speaking, users are certainly being told to reckon with their privacy and exercise
control over their data. Nevertheless, the challenges of applying this concept, such
as the "data avalanche", the data-sharing network and the pace of technological
evolution, remain. In addition, realizing individual data ownership cannot rest on
user awareness alone; tech companies, policymakers and regulators must act in
concert to ground data ownership in the principles of data sovereignty.

In essence, this notion urges the general public to take charge of their own data,
reducing their reliance on things on the internet that they do not even understand.
Cybersecurity Education
The heightened attention on cybersecurity training is a critical response to a
dynamic digital landscape that is dangerous in diverse ways. Such training aims
to give people the understanding and techniques required to deal
with cyber hazards of every kind, from common public threats to those associated
with AI-driven modern technology.

As technology advances, educating people about the risks that AI creates becomes


crucial to the entire process. Users' attention is directed towards
recognizing the potential dangers that come with AI, namely the development
of AI-based scams and the creation of AI-generated malware. By learning how
cyber threat tactics change and are updated, people can get the
necessary tools into their own hands, with the added advantage of making
their digital communication safer.

Furthermore, cybersecurity education extends its attention to the major and rising
threats of phishing attacks and the social engineering techniques now occurring
everywhere. People are coached to detect the biggest warning signs of hacking
attempts, such as deceptive emails, messages or websites. Awareness of
the social engineering manipulation that uses psychological approaches to
lead users astray stands as an integral part of this training. Attention to all these
tricks not only creates a shield against manipulation, but also
protects personal information from falling into the wrong hands.
Bibliography

“AI and Cybersecurity: A New Era | Morgan Stanley.” Morgan Stanley,

www.morganstanley.com/articles/ai-cybersecurity-new-era.

Artificial Intelligence (AI) Cybersecurity | IBM.

www.ibm.com/ai-cybersecurity#:~:text=AI-powered%20risk%20analysis%20can,by%20an%20average%20of%2055%25.&text=The%20AI%20technology%20also%20helps,against%20cybercriminals%20and%20cyber%20crime.

“Increase OT Cybersecurity Resilience.” Honeywell Forge,

www.honeywellforge.ai/us/en/solutions/ot-cybersecurity/increase-ot-cybersecurity-resilience?utm_source=google&utm_medium=cpc&utm_campaign=23-q4-hce-cyber-ww-outcome_increase_ot_resiliency-7016S000002KWLeQAO-meta&utm_adgroup=&utm_term=&gad_source=1&gclid=Cj0KCQiAw6yuBhDrARIsACf94RUj_hjoSPWltQNTmhuCCXXcA-IelfbglSWNGpU3VstmzYxeP0VTaWYaAi2EEALw_wcB.

Lawton, George. “AI Transparency: What Is It and Why Do We Need It?” CIO, 2 June 2023,

www.techtarget.com/searchcio/tip/AI-transparency-What-is-it-and-why-do-we-need-it.

Oliveira, Willian. Military Cybersecurity: Challenges With AI. 12 Jan. 2024,

www.linkedin.com/pulse/military-cybersecurity-challenges-ai-willian-oliveira-ctsaf.

The Economist. “Artificial Intelligence Is Changing Every Aspect of War.” The Economist,

13 Feb. 2021,

www.economist.com/science-and-technology/2019/09/07/artificial-intelligence-is-changing-every-aspect-of-war?utm_medium=cpc.adword.pd&utm_source=google&ppccampaignID=18151738051&ppcadID=&utm_campaign=a.22brand_pmax&utm_content=conversion.direct-response.anonymous&gad_source=1&gclid=Cj0KCQiAw6yuBhDrARIsACf94RVQEEJ751BBW6uPNNqYW6TnG-TJpWKjuL5_GIZQIHTtgWK8LZDqF_8aAqoDEALw_wcB&gclsrc=aw.ds.

What Is Transparency? - Ethics of AI. ethics-of-ai.mooc.fi/chapter-4/2-what-is-transparency.
