

Generative AI for Cybersecurity: An Optimistic but Uncertain Future

Jon Oltsik | Distinguished Analyst and Fellow
Enterprise Strategy Group

January 2024


Research Objectives

This study sought to:
• Identify current usage of and plans for generative AI.
• Establish how generative AI influences the balance of power between cyber-adversaries and cyber-defenders.
• Determine how organizations are approaching generative AI governance, policies, and policy enforcement.
• Monitor how organizations will apply generative AI for cybersecurity use cases.

Since the introduction of ChatGPT in November 2022, generative AI (GenAI) has been described as everything from a novelty and an economic boon to a threat to humanity. As this debate continued, GenAI took center stage at the RSA Conference 2023 with the introduction of and subsequent hoopla around Microsoft Security Copilot. Many other vendors have introduced similar capabilities since. Few would argue against the idea that GenAI (and AI in general) will have a profound impact on society and global economics, but in the near term, it introduces new risks as employees connect to GenAI applications, share data, and build homegrown large language models (LLMs) of their own. These actions will inevitably expand the attack surface, open new threat vectors, introduce software vulnerabilities, and lead to data leakage.

Despite these risks, generative AI holds great cybersecurity potential. Generative AI could help improve security team productivity, accelerate threat detection, automate remediation actions, and guide incident response. These prospective benefits are so compelling that many CISOs are already experimenting with GenAI or building their own security LLMs. At the same time, security professionals remain anxious about how cybercriminals may use GenAI as part of attack campaigns, and how they can defend against these advances.

Have organizations embraced GenAI for cybersecurity today, and what will they do in the future? To gain further insight into these trends, TechTarget's Enterprise Strategy Group (ESG) surveyed 370 IT and cybersecurity professionals at organizations in North America (US and Canada) responsible for cyber-risk management, threat intelligence analysis, and security operations, with visibility into current GenAI usage and strategic plans.

Key Findings
• Generative AI Has a Foothold Today and Will Be Pervasive by the End of 2024
• Organizations Anticipate Generative AI Risks
• Security Professionals Are Cautiously Optimistic About Generative AI's Potential for Cybersecurity
• Generative AI Will Become a Purchasing Consideration, Though Plans Are Still Developing



Generative AI Has a Foothold Today and Will Be Pervasive by the End of 2024
Many Are Already Developing Proprietary LLMs to Support a Variety of GenAI Use Cases

The research indicates that 85% of organizations have a proprietary large language model in place, while another 13% report that they are in the process of developing one. It's likely that many of these are early-stage projects based on open models. Nevertheless, this data indicates robust activity, forecasting rapid development of production applications in 2024.

Which business functions are ripe for GenAI applications? Organizations are well underway toward ubiquitous use of GenAI in areas like IT operations, software development, sales, research, and others. Like "shadow IT" in the past, growing use of GenAI will introduce cyber-risks as multiple departments and individual employees interact with open GenAI applications, experiment with open source, and create their own LLMs. CISOs must assess and communicate these risks while working with executives to create the right governance models and policies for risk mitigation. Additionally, security professionals must implement compensating controls and monitor users and networks for anomalous, suspicious, and malicious behavior.

Development plans for proprietary large language models:
• LLMs are already a part of production generative AI applications: 41%
• LLMs are already a part of an ongoing development project: 44%
• We are just getting started developing our own LLMs: 13%
• We plan to develop our own LLMs in the future: 1%
• We are interested in developing our own LLMs in the future: 1%

Top five current or planned GenAI use cases:
• IT operations: 66%
• Software development: 36%
• Sales: 29%
• Research: 26%
• Product development: 25%

Pervasive Security Concerns for Generative AI

Mitigating GenAI risk is not some future requirement. Rather, organizations are already dealing with, or struggling to address, generative AI risks proactively. Respondents claim that their organizations are already blocking access to generative AI sites. Security professionals are also concerned about GenAI-driven data leakage and remain unsure as to whether employees are already accessing and using GenAI applications today. CISOs must address these risks before losing control of who does what with GenAI across the enterprise.

Cybersecurity professionals' GenAI opinions:
• My organization expects generative AI to help accelerate the software development cycle: 54% strongly agree, 40% agree, 6% neither agree nor disagree.
• My organization has blocked/is blocking access to one or several generative AI sites: 44% strongly agree, 42% agree, 6% neither agree nor disagree, 5% disagree, 2% strongly disagree.
• We are concerned about data leakage as employees increasingly use generative AI: 43% strongly agree, 39% agree, 10% neither agree nor disagree, 5% disagree, 3% strongly disagree.
• We aren't sure if any employees are currently accessing generative AI sites today or what they are doing on these sites: 42% strongly agree, 40% agree, 7% neither agree nor disagree, 5% disagree, 5% strongly disagree.


Many Are Getting Ahead of GenAI Governance, Especially Larger Organizations

Of course, most organizations have acceptable use policies and governance models in place today that may provide needed guardrails for GenAI usage. Recognizing additional risks associated with GenAI, many firms are creating specific governance models. Larger enterprise organizations, with the most at risk, are taking the GenAI governance lead, while their smaller counterparts lag. CISOs at these organizations must act as a catalyst to accelerate the creation of governance models, policies, and policy enforcement controls. Given the pace of GenAI innovation, delays will only lead to rapidly increasing risk.

Plans for a specific GenAI governance structure:
• We already have a specific GenAI governance structure: 10%
• We are developing a specific GenAI governance structure: 18%
• We are interested in developing a specific GenAI governance structure: 36%
• We believe our existing governance and acceptable use policies apply to generative AI: 2%

Percentage of organizations that already have a specific governance structure for the use of generative AI, by company size:
• 1,000 to 2,499 employees: 34%
• 2,500 to 4,999 employees: 41%
• 5,000 or more employees: 73%

Weakest Areas of GenAI Governance

While many organizations have created or are creating GenAI governance models, security professionals admit that these models have abundant weaknesses, including third-party risk management (29%), data leakage protection/rights management (27%), regulatory compliance (26%), and enforcement of GenAI access policies (26%). Based on the data, it is safe to assume that most organizations have many of these weaknesses today. Closing these gaps should be a 2024 priority.

Weakest areas of generative AI governance and policy enforcement:
• Third-party risk management: 29%
• Data leakage protection/rights management: 27%
• Regulatory compliance: 26%
• Enforcement of approved/disapproved generative AI application access: 26%
• User awareness training specific to generative AI usage: 26%
• List of approved/disapproved generative AI applications supported: 26%
• Identity governance: 24%
• Risk modeling/management: 24%
• Data governance: 23%
• User behavior monitoring: 19%
• Acceptable use policies: 17%



Organizations Anticipate Generative AI Risks

Generative AI Risks Associated With LLMs

As organizations experiment with GenAI applications and build their own LLMs, they face several application security challenges, such as trust boundary risks (i.e., where data comes from or when it moves to an untrusted source), data management risks (e.g., data leakage or corruption), inherent underlying model risks, and general security risks. Security professionals are especially apprehensive about data management risks, while IT professionals' concerns focus on trust boundary risks. Organizations must reinforce their application security efforts with oversight, tooling, and testing that address GenAI-specific application security requirements.

Biggest risk to LLM and generative AI application development, by role:
• Trust boundary risks: 47% overall (53% of IT professionals with cybersecurity responsibilities, 26% of pure cybersecurity professionals)
• Data management risks: 44% overall (36% of IT professionals with cybersecurity responsibilities, 63% of pure cybersecurity professionals)
• Inherent underlying model risks: 8% overall
• General security risks: 2% overall

GenAI Balance of Power Skews Toward Cyber-adversary Advantage

Of course, cyber-adversaries also have access to open GenAI applications and have the technical capabilities to develop their own LLMs. WormGPT and FraudGPT are early examples of LLMs designed for use by cybercriminals and hackers. Will cyber-adversaries use and benefit from LLMs? More than three-quarters of survey respondents (76%) not only believe they will, but also feel that cyber-adversaries will gain the biggest advantage (over cyber-defenders) from generative AI innovation.

Alarmingly, most security professionals believe that cyber-adversaries are already using GenAI and that adversaries always gain an advantage with new technologies. Respondents also believe that GenAI could lead to an increase in threat volume, as it makes it easier for unskilled cyber-adversaries to develop more sophisticated attacks. Security and IT pros are also concerned about deep fakes and automated attacks.

Group expected to gain the biggest advantage from generative AI innovation:
• Cyber-adversaries: 76%
• Security defenders: 24%

Top five reasons cyber-adversaries will likely gain the biggest advantage from GenAI technology:
• Cyber-adversaries are already using generative AI technology to their advantage: 57%
• Cyber-adversaries always gain an early advantage with new technologies: 55%
• Generative AI makes it easier for unskilled cyber-adversaries to develop more advanced cyberattacks, so we anticipate greater volumes of attacks: 51%
• Generative AI can help cyber-adversaries develop realistic deep fakes based on someone's voice or writing style, making it harder to detect malicious behavior: 25%
• Generative AI can help cyber-adversaries automate attacks: 23%


GenAI Broadens the Attack Surface and Expands Social Engineering Tactics

Security professionals already have lots of ideas about how adversaries will use GenAI as a component of cyberattacks. Proprietary and partner-based GenAI systems will expand their attack surface, leading to unavoidable vulnerabilities and potential exploitation. Adversaries will use GenAI in phishing, business email compromise (BEC), and other deep fake-driven schemes. There is also a concern about GenAI creating new types of malware (note: researchers have already created polymorphic malware using ChatGPT).

Given the unknown tactics, techniques, and procedures (TTPs), unclear balance of power, and potential GenAI threat vectors, cybersecurity professionals must stay vigilant by monitoring threat intelligence for signs of GenAI-based adversary TTPs and implementing the right controls for policy enforcement.

Most concerning use cases for cyber-adversaries using GenAI:
• Targeting our expanded attack surface caused by the generative AI solutions we use: 31%
• Targeting our expanded attack surface caused by the generative AI solutions our partners use: 29%
• Developing realistic voice and image impersonations that can be used for social engineering attacks and fraud: 29%
• Adapting social engineering tactics based on the responses and behaviors of individual targets: 28%
• Creating new malware strains that can evade traditional security measures: 27%
• Conducting brute force attacks that can crack passwords much faster than traditional methods: 25%
• Creating fake news and disinformation campaigns: 24%
• Creating highly realistic phishing emails and social engineering scams: 23%
• Automating and optimizing cyberattacks, making them faster, more efficient, and harder to detect: 23%
• Corrupting a generative AI data set: 19%

Important Security Tools for GenAI Policy Enforcement

While GenAI introduces new types of threats and vulnerabilities, organizations aren't completely defenseless. In fact, existing tools like CASB, SIEM, application security technologies, and identity and access management systems can block threats and enforce policies. For example, CASB systems can allow access to designated web-based GenAI applications for credentialed users while denying access to others. IAM systems can be used to create these credentials based on roles and specific use cases. Application security technologies can scan source code and create threat models that look for trust boundary or data usage violations. SIEM systems can log activity and implement detection rules to alert on any anomalous behavior.

These tools can provide a foundation for GenAI security, but they will need to be configured and customized appropriately to adhere to organizational policies and regulatory requirements and to address threats targeting specific industries, regions, and individual firms. (A simple illustrative sketch of this kind of policy logic follows the list below.)

Most important security controls for GenAI policy enforcement:
• Cloud access security broker (CASB): 35%
• Security information and event management (SIEM): 33%
• Application security: 33%
• Identity and access management (IAM): 31%
• User behavior monitoring: 30%
• Email security: 30%
• Data loss prevention (DLP): 27%
• Secure browser technology: 25%
• Firewalls: 21%
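To make the policy enforcement idea concrete, the following is a minimal, hypothetical Python sketch (not drawn from the research) of the kind of allow/deny logic a CASB policy or SIEM detection rule might encode: outbound requests to sanctioned GenAI applications are permitted only for entitled roles. The domain names, roles, and request format are illustrative assumptions.

```python
# Hypothetical sketch of CASB/SIEM-style GenAI access policy logic.
# Approved applications, role entitlements, and the request format are
# illustrative assumptions, not findings from this research.
from dataclasses import dataclass

# Map of sanctioned GenAI application domains to the roles entitled to use them.
APPROVED_GENAI_APPS = {
    "chat.example-genai.com": {"engineering", "security"},
    "copilot.example-vendor.com": {"security"},
}

@dataclass
class WebRequest:
    user: str
    role: str
    domain: str

def evaluate(request: WebRequest) -> str:
    """Return 'allow' or 'block' for a single outbound request to a GenAI app."""
    allowed_roles = APPROVED_GENAI_APPS.get(request.domain)
    if allowed_roles is None:
        # Not a sanctioned GenAI domain. A real CASB would also match the domain
        # against a category feed of known GenAI sites and block or alert on it.
        return "allow"
    # Sanctioned application: permit only the roles entitled to it.
    return "allow" if request.role in allowed_roles else "block"

if __name__ == "__main__":
    for req in [
        WebRequest("alice", "security", "copilot.example-vendor.com"),
        WebRequest("bob", "sales", "chat.example-genai.com"),
    ]:
        print(f"{req.user} -> {req.domain}: {evaluate(req)}")
```

In practice, the same decision would be expressed as CASB policy rules or SIEM correlation rules rather than application code, but the allowlist-plus-role-check structure is the common thread.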



Security Professionals Are Cautiously Optimistic About Generative AI's Potential for Cybersecurity

Sentiments Toward GenAI and Cybersecurity

GenAI for cybersecurity elicits an emotional response from security and IT professionals, ranging from optimistic and excited to reserved and skeptical. Beyond these emotions, survey respondents had some concrete opinions: 93% believe generative AI could help them improve their knowledge, skills, and abilities, while at the other end of the spectrum, 80% remain skeptical about the value GenAI can bring to cybersecurity. Many (92%) believe their organization would be willing to replace existing technologies based on the GenAI capabilities of another similar product.

Level of agreement with statements related to using GenAI to fortify existing cybersecurity tools and processes:
• I believe generative AI could help the cybersecurity team improve its knowledge, skills, and abilities: 53% strongly agree, 40% agree, 6% neither agree nor disagree, 1% disagree.
• We would be willing to replace existing security technologies based on another vendor's generative AI capabilities: 44% strongly agree, 48% agree, 4% neither agree nor disagree, 3% disagree.
• We are skeptical about the value generative AI can deliver for cybersecurity: 39% strongly agree, 41% agree, 9% neither agree nor disagree, 6% disagree, 6% strongly disagree.


Sentiments of cybersecurity defenders regarding the potential impact of GenAI (overall / pure cybersecurity professionals / IT professionals with cybersecurity responsibilities):
• Optimistic: 42% / 42% / 43%
• Excited: 41% / 39% / 43%
• Curious: 35% / 37% / 33%
• Reserved: 24% / 28% / 19%
• Skeptical: 23% / 24% / 21%
• Neutral: 23% / 20% / 26%
• Fearful: 19% / 23% / 15%
• Pessimistic: 16% / 20% / 12%
• Dismissive: 13% / 13% / 13%
• Disillusioned: 12% / 12% / 11%

Clearly, opinions vary widely, even between IT and security respondents. For example, security professionals tended to be more reserved (28% versus 19% for IT pros), fearful (23% versus 15% for IT pros), and pessimistic (20% versus 12% for IT pros). Alternatively, IT pros were more neutral than their security colleagues (26% versus 20% for security pros). Given these misgivings, CISOs should champion GenAI for cybersecurity efforts while remaining judicious in its adoption.


Security Professionals' Opinions on Machine Learning

While generative AI may be a shiny new cybersecurity object, other AI technologies, like machine learning, have been around for nearly a decade. With all these years of experience, 92% of respondents agree that machine learning has improved the efficacy and efficiency of cybersecurity technologies. Still, this benefit didn't come without baggage. It's apparent from this research that machine learning-based cybersecurity technologies created false positive alerts and needed tuning and customization. And while they've become commonplace today, it took many years for vendors to deliver valuable machine learning (ML) for cybersecurity. It's worth noting that 83% of respondents agree that, looking back, the hype around machine learning for cybersecurity was excessive. Little wonder, then, that cybersecurity pros are reserved, pessimistic, and fearful of machine learning technology. With this experience, CISOs should take a prudent approach to GenAI implementation, looking for quick wins and incremental value rather than force-multiplying results.

Level of agreement with statements related to machine learning and cybersecurity technologies (strongly agree / agree / neither agree nor disagree / disagree / strongly disagree):
• ML has improved the efficacy and efficiency of cybersecurity technologies: 50% / 42% / 8% / 1%
• Security technologies including ML often require customization for our industry, region, and/or IT environment: 50% / 43% / 6% / 1%
• My organization's experiences with ML will influence how we approach generative AI for cybersecurity: 47% / 46% / 6% / 1%
• Machine learning algorithms generate excessive false positive alerts: 47% / 33% / 9% / 7% / 4%
• Security technologies including ML can be buggy, requiring lots of hands-on support from vendors: 44% / 40% / 13% / 2% / 1%
• Security technologies including ML often require tuning to improve alert accuracy: 43% / 49% / 6% / 1%
• It took many years for security technology vendors to deliver ML that truly delivered incremental value: 42% / 49% / 8% / 1% / 1%
• My organization had to remove/replace one or several security technologies including ML because they never lived up to their promise: 42% / 42% / 5% / 8% / 2%
• Looking back, the hype around machine learning for cybersecurity was excessive: 42% / 41% / 6% / 6% / 5%
• My organization had to remove/replace one or several security technologies including ML because the vendor was acquired, impacting the quality of the product: 38% / 43% / 10% / 6% / 3%


Promising GenAI Cybersecurity Use Cases

Enterprise organizations will likely upgrade existing tools like EDR, SIEM, and threat intelligence platforms (TIPs) with new versions containing GenAI functionality. Some of the more advanced organizations will create their own security LLMs or customize existing models to meet organizational, industry, and regional needs. As they experiment with open AI engines, their goals must include improving security efficacy, efficiency, and staff productivity.

To that end, survey respondents see a number of promising use cases, including improving security hygiene and posture management, guiding the cybersecurity staff with recommended actions, consolidating security technologies, and training junior employees. Along with these, ESG has seen practical use and strong results in two other areas cited: Security professionals are saving time by using GenAI to help them create summary reports (i.e., threat intelligence reports, incident response reports, cyber-risk status reports, etc.). Additionally, security teams are starting to rely on GenAI to help them better analyze threat intelligence germane to their organization, industry, and region. Security pros will also use GenAI for natural language queries, improving homegrown software, and creating detection rules in the near future. (An illustrative sketch of the report-drafting use case follows the list below.)

Areas in which GenAI is believed to hold the most promise for cybersecurity:
• Improving security hygiene and posture management: 26%
• Guiding the cybersecurity staff with recommended actions: 26%
• Accelerating threat detection and response: 25%
• Consolidating security technologies: 24%
• Training entry-level cybersecurity personnel: 23%
• Improving the quality of our homegrown software: 23%
• Creating summary security reports (i.e., security incidents, threats, executive reports, etc.): 22%
• Creating detection rules: 21%
• Analyzing threat intelligence: 20%
• Using a natural language interface: 20%
• Automating processes: 19%
• Penetration testing and/or red teaming: 16%
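As an illustration of the report-drafting use case, here is a hypothetical sketch of how a team might prompt an approved model to draft an incident summary from structured, already-vetted data. The call_llm placeholder and the incident fields are assumptions, not a specific vendor's API.

```python
# Illustrative sketch only: drafting an incident summary with a GenAI model.
# `call_llm` is a placeholder for whatever internally approved model or API an
# organization uses; it is not a real library function.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an organization's approved LLM."""
    raise NotImplementedError("Wire this to your sanctioned model or API.")

def draft_incident_summary(incident: dict) -> str:
    # Ground the prompt in structured, vetted incident data (rather than raw
    # analyst notes) to limit hallucination and accidental data leakage.
    prompt = (
        "Write a concise executive summary of the following security incident. "
        "Cover the timeline, affected assets, and recommended next steps.\n\n"
        + json.dumps(incident, indent=2)
    )
    return call_llm(prompt)

example_incident = {
    "id": "IR-0001",  # hypothetical ticket identifier
    "detected": "2024-01-15T03:12:00Z",
    "category": "business email compromise",
    "affected_assets": ["mail-gateway-01"],
    "status": "contained",
}
# report_text = draft_incident_summary(example_incident)  # requires a real model
```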


Response Actions for Low-risk GenAI Recommendations

Generative AI also has the potential to automate security processes like patching vulnerable software, quarantining a system, or disabling a user account. These actions could scale staff productivity, but skeptical cybersecurity teams aren't quite ready for autonomic processes just yet. If their organization were given a low-risk recommendation from a GenAI application, nearly three-quarters (71%) of respondents say that a staff member would review the recommendation before taking the remediation action manually, while 25% claim a staff member would review the recommendation and, if they approve it, allow the generative AI application to execute the remediation action. Only 4% would let the application execute the remediation action on its own without human intervention. (A simple illustrative sketch of such a review gate follows the list below.)

Approach to handling low-risk recommendations from a GenAI application:
• A staff member would review the recommendation before taking the remediation action manually: 71%
• A staff member would review the recommendation and, if they approve it, allow the generative AI application to execute the remediation action: 25%
• The generative AI application would execute the remediation action on its own, without any human intervention: 4%
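Below is a small, hypothetical sketch of the review gate most respondents prefer: a GenAI recommendation only executes after explicit human approval, and anything that is not low risk is escalated. The action names and the approval callback are illustrative assumptions, not part of the research.

```python
# Hypothetical sketch of a human-in-the-loop gate for GenAI-recommended
# remediation actions. Action names and the approval mechanism are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str   # e.g., "disable_account"
    target: str   # e.g., "user: jdoe"
    risk: str     # "low", "medium", or "high"

def execute(rec: Recommendation) -> None:
    # Stand-in for the actual remediation (patch, quarantine, disable, etc.).
    print(f"Executing {rec.action} on {rec.target}")

def handle(rec: Recommendation, analyst_approves: Callable[[Recommendation], bool]) -> str:
    """Route a recommendation through human review before anything executes."""
    if rec.risk != "low":
        return "escalated"   # non-trivial actions go to a separate human workflow
    if not analyst_approves(rec):
        return "rejected"    # reviewer declined; nothing runs
    execute(rec)             # runs only after explicit human approval
    return "executed"

# Usage: the approval callback could be backed by a ticketing or chat-ops flow.
status = handle(
    Recommendation("disable_account", "user: jdoe", "low"),
    analyst_approves=lambda rec: True,  # stand-in for a real reviewer decision
)
print(status)
```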


Generative AI and Security Process Automation

While security teams remain cautious, they do pinpoint where GenAI-based process automation could be helpful in areas like security investigations, orchestration across heterogeneous security controls, case management, and alert enrichment. Anecdotally, ESG has heard similar themes from CISOs. They'd like to automate alert triage processes to bolster the productivity of tier-one/junior analysts. They believe GenAI could increase security analysts' throughput if it could string together individual alerts into attack chains, enrich these attack chains with threat intelligence, and put them in a MITRE ATT&CK context. End-to-end GenAI-based process automation may be a scary thought for security teams. Perhaps they will get there over the next few years, but in the meantime, GenAI should be applied sensibly to low-risk, time-consuming processes that act as bottlenecks for security teams. (A small illustrative sketch of alert chaining and ATT&CK enrichment follows the list below.)

Areas in which security operations process automation supported by GenAI would be most helpful:
• Security investigations: 29%
• Orchestration across heterogeneous security controls: 26%
• General process automation: 26%
• Security operations case management: 24%
• Alert enrichment: 24%
• Security operations ticketing system management: 24%
• SSL certificate management: 22%
• Vulnerability management: 22%
• Detection rule creation/generation: 18%
• Incident response/automated remediation: 18%
• Patch management: 17%
• Threat hunting: 15%
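As a simple illustration of the alert-chaining idea, the sketch below groups related alerts by host, orders them in time, and tags each with a MITRE ATT&CK technique ID. The alert fields and the detection-to-technique mapping are assumptions; in practice, a GenAI-assisted workflow would perform this correlation and enrichment at much larger scale and with richer context.

```python
# Illustrative sketch: grouping alerts into simple per-host "attack chains" and
# enriching them with MITRE ATT&CK technique IDs. The alert fields and the
# detection-to-technique mapping below are assumptions for demonstration only.
from collections import defaultdict

ATTACK_LOOKUP = {
    "phishing_link_click": "T1566",       # Phishing
    "credential_dump": "T1003",           # OS Credential Dumping
    "lateral_movement_smb": "T1021.002",  # Remote Services: SMB/Windows Admin Shares
}

alerts = [
    {"host": "ws-042", "detection": "phishing_link_click", "time": "09:01"},
    {"host": "ws-042", "detection": "credential_dump", "time": "09:17"},
    {"host": "srv-db1", "detection": "lateral_movement_smb", "time": "09:40"},
]

def build_chains(raw_alerts):
    """Group alerts by host, sort them by time, and attach ATT&CK technique IDs."""
    chains = defaultdict(list)
    for alert in raw_alerts:
        enriched = dict(alert, technique=ATTACK_LOOKUP.get(alert["detection"], "unknown"))
        chains[enriched["host"]].append(enriched)
    for host in chains:
        chains[host].sort(key=lambda a: a["time"])
    return chains

for host, chain in build_chains(alerts).items():
    print(host, "->", " -> ".join(a["technique"] for a in chain))
```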


Generative AI Will Become a Purchasing Consideration, Though Plans Are Still Developing

GenAI Solutions Drive Cybersecurity Spending

It appears that GenAI cybersecurity technologies are viewed as a generational change. This conclusion is based on the fact that more than three-quarters (78%) of organizations plan on increasing spending for security solutions with generative AI capabilities. And while the bulk of new spending (73%) will focus on adding new generative AI products or features, CISOs will also dedicate budget dollars to security technologies that mitigate risks and enforce security policies specific to GenAI initiatives. This spending balance will likely vary widely based on industry, regulations, and organizations' GenAI application usage. The more GenAI in use, the more organizations will need appropriate compensating controls for risk mitigation.

Impact of security solutions with GenAI capabilities on the security budget:
• We will increase spending for security solutions with generative AI capabilities: 78%
• We will reallocate spending from somewhere else for security solutions with generative AI capabilities: 21%

Priorities for spending on GenAI over the next 12-24 months:
• Adding generative AI products and/or features to security technologies: 73%
• Adding security controls as guardrails to mitigate the risks associated with generative AI usage by my organization and/or cyber-adversaries: 27%


Generative AI and Security Vendor Selection

At this point, more than half (54%) of organizations are leaning toward choosing a single vendor and technology for generative AI security tools, but this strategy may change with time and rapid technology innovation. In fact, 31% of organizations will choose a primary GenAI vendor/technology but selectively use others, and 15% will use GenAI capabilities across numerous or a few select vendors/technologies.

Since GenAI-based security technologies are in their genesis phase, organizations will have to learn how to evaluate products and capabilities as part of their RFI/RFP processes. As of now, the top purchasing considerations include vendors having a formal documented process for data security/privacy as part of development and operations, capabilities built on top of multiple LLMs (including open source versions), an open architecture, and lots of experience with AI/ML-based products. As previously stated, CISOs will likely use the lessons learned from their previous experiences with machine learning to evaluate tools and set the right expectations within the organization.

Likely strategy for augmenting security tools with generative AI products and/or features:
• Choose a single vendor/technology for generative AI: 54%
• Choose a primary vendor/technology for generative AI but selectively use other vendors/technologies: 31%
• Use generative AI capabilities across numerous vendors/technologies: 8%
• Use generative AI capabilities equally across a few select vendors/technologies: 7%

Most important considerations for GenAI capabilities offered by security vendors:
• A formal documented process for data security/privacy as part of development and operations: 38%
• Capabilities built on top of multiple LLMs, including open source versions: 36%
• An open architecture that can accommodate data from other security vendors as part of the LLM: 34%
• A long history of offering products based on AI/ML: 33%
• Capabilities built on top of existing LLMs like GPT-4: 29%
• Capabilities offered by one of the vendors my organization currently works with: 29%
• An LLM based on security content (i.e., threat intelligence, CVEs, malware analysis, etc.): 28%
• All LLM data being stored within my country's borders: 28%


Thoughts on GenAI Pricing for Cybersecurity

Generative AI capabilities will likely come as additional features in new cybersecurity product revisions. Vendors are expected to charge a premium for GenAI capabilities, so ESG asked security professionals which pricing models they preferred in this scenario. The data suggests a bit of uncertainty, which is expected for this type of new technology. Nevertheless, survey respondents' preferences range from pricing based on the number of security personnel using GenAI, to pricing baked into general product pricing, to pricing based on the total number of users within an organization.

Despite these preferences, GenAI pricing is likely to be a quagmire for the foreseeable future as the market develops. Security and purchasing managers should monitor different pricing models and compare notes with other organizations in their industry as they negotiate with vendors. (A simple illustrative cost comparison follows the list below.)

Most appropriate pricing models for security technologies that include GenAI capabilities:
• Pricing based on the number of security personnel using a particular generative AI engine: 25%
• Pricing included as part of the general pricing of security technologies without some type of premium for generative AI capabilities: 16%
• Pricing based on the total number of users within an organization: 15%
• A standard incremental fee in addition to the product purchase price: 12%
• Pricing based on the number of generative AI queries generated by an organization: 12%
• Pricing based on the total number of assets within an organization: 9%
• Pricing based on the amount of data in an LLM: 9%
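As a simple illustration of how purchasing teams might compare models, the sketch below contrasts per-analyst pricing with per-query pricing under made-up prices and usage figures; none of the numbers are vendor list prices or survey data.

```python
# Hypothetical cost comparison of two GenAI pricing models discussed above.
# All prices and usage figures are made-up assumptions for illustration.
analysts = 12                     # security personnel who would use the GenAI engine
per_analyst_monthly = 60.0        # assumed premium, $/analyst/month
queries_per_analyst_month = 400   # assumed average usage per analyst
per_query_price = 0.02            # assumed $/query

per_seat_cost = analysts * per_analyst_monthly
per_query_cost = analysts * queries_per_analyst_month * per_query_price
break_even_queries = per_analyst_monthly / per_query_price  # per analyst per month

print(f"Per-analyst pricing: ${per_seat_cost:,.2f}/month")
print(f"Per-query pricing:   ${per_query_cost:,.2f}/month")
print(f"Per-query is cheaper below {break_even_queries:,.0f} queries/analyst/month")
```

Under these assumptions, light or occasional use favors per-query pricing, while heavy interactive use tips the balance toward per-seat pricing; the break-even point shifts with whatever rates a vendor actually quotes.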


ABOUT

Check Point Software Technologies Ltd. is a leading AI-powered, cloud-delivered cyber security platform provider
protecting over 100,000 organizations worldwide. Check Point leverages the power of AI everywhere to enhance cyber
security efficiency and accuracy through its Infinity Platform, with industry-leading catch rates enabling proactive threat
anticipation and smarter, faster response times. The comprehensive platform includes cloud-delivered technologies
consisting of Check Point Harmony to secure the workspace, Check Point CloudGuard to secure the cloud, Check Point
Quantum to secure the network, and Check Point Infinity Core Services for collaborative security operations and services.



RESEARCH METHODOLOGY AND DEMOGRAPHICS

To gather data for this report, TechTarget’s Enterprise Strategy Group conducted a comprehensive online survey of IT and cybersecurity professionals from private- and public-
sector organizations in North America between November 6, 2023 and November 21, 2023. To qualify for this survey, respondents were required to be involved with supporting and
securing, as well as using, generative AI technologies. All respondents were provided an incentive to complete the survey in the form of cash awards and/or cash equivalents.

After filtering out unqualified respondents, removing duplicate responses, and screening the remaining completed responses (on a number of criteria) for data integrity, we were left
with a final total sample of 370 IT and cybersecurity professionals.

Respondents by number of employees:
• 1,000 to 2,499: 49%
• 2,500 to 4,999: 23%
• 5,000 to 9,999: 11%
• 10,000 to 19,999: 17%
• 20,000 or more: 1%

Respondents by age of organization:
• 5 to 10 years: 67%
• 11 to 20 years: 18%
• 21 to 50 years: 14%
• More than 50 years: 1%

Respondents by industry:
• Business services: 18%
• Manufacturing: 16%
• Retail/wholesale: 14%
• Financial: 14%
• Communications and media: 13%
• Utilities: 10%
• Technology: 7%
• Healthcare: 3%
• Other: 5%



All product names, logos, brands, and trademarks are the property of their respective owners. Information contained in this publication has been obtained by sources TechTarget, Inc. considers to be reliable but is not warranted by TechTarget, Inc.
This publication may contain opinions of TechTarget, Inc., which are subject to change. This publication may include forecasts, projections, and other predictive statements that represent TechTarget, Inc.’s assumptions and expectations in light of
currently available information. These forecasts are based on industry trends and involve variables and uncertainties. Consequently, TechTarget, Inc. makes no warranty as to the accuracy of specific forecasts, projections or predictive statements
contained herein.

This publication is copyrighted by TechTarget, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express
consent of TechTarget, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact Client Relations at [email protected].

Enterprise Strategy Group is an integrated technology analysis, research, and strategy firm providing market intelligence,
actionable insight, and go-to-market content services to the global technology community.
© 2024 TechTarget, Inc. All Rights Reserved.
