Forrester Securing Generative AI

The report discusses the rapid adoption of generative AI in enterprises following the release of technologies like Stable Diffusion and ChatGPT, highlighting the need for security and risk teams to adapt to this emerging technology. It outlines key concerns such as the impact on security workflows, the necessity for new skills in prompt engineering, and the complexities of third-party risk management. Additionally, it emphasizes the importance of deploying modern security practices and understanding adoption timelines to effectively manage the risks associated with generative AI.

REPORT

Securing Generative AI
Use Cases, Threats, Risks, And Skills
Jun 30, 2023 • 6 min read

Jeff Pollard
VP, Principal Analyst

Rowan Curran
Senior Analyst

Jeremy Vale
Researcher

With contributors:
Merritt Maxim, Zachary Dallas, and Michael Belden


Generative AI exploded into consumer awareness with the release of Stable Diffusion and ChatGPT, driving enterprise interest, integration, and adoption. This report details the departments most likely to adopt generative AI, their primary use cases, threats, and what security and risk teams will need to defend against as this emerging technology goes mainstream.

Interest, Anxiety, And Confusion Dominate Discussions About Generative AI
The release of Stable Diffusion and ChatGPT went viral almost
immediately, grabbing wide attention and speculation ... along with
plenty of hijinks from security researchers. Security and risk (S&R)
teams need to adapt to how their enterprise plans to use generative
AI, or they will find themselves unprepared to defend it (see Figure 1).
Today’s security leaders:

1. Worry about impact on their security team first. Yes, generative AI will change how security programs operate, but well before that happens it will change workflows for other enterprise functions. Unfortunately, many CISOs tune out news about new technologies, considering it a distraction. That reasonable, but entirely mistaken, reaction becomes tomorrow's emergency when the security program learns the marketing team plans to use a large language model (LLM) to produce marketing copy and expects it to do so securely. Worse yet, security leaders follow up with an even more devastating decision: implementing a draconian policy that bans LLM adoption, which only drives employees underground, costing the security team visibility into, and understanding of, how the tech is used and increasing risk.
2. Think in terms of code, not natural language. One of the more interesting ways to subvert or make unauthorized use of generative AI is to find creative ways to structure questions or commands. While bypassing safety controls online is fun for hobbyists, those same bypasses could allow generative AI to leak sensitive data such as trade secrets, intellectual property, or protected data. Expect to add "prompt security engineering" skills to your team via hiring or partners. Coursera, Udemy, and Google have already added courses on generative AI that emphasize prompt engineering, and practitioners will need to apply their security skills to these techniques.
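The natural-language bypasses described above can be made concrete with a minimal sketch: a heuristic screen for prompts that try to override a model's instructions. The phrase list and function name below are illustrative assumptions, not a production control and not anything the report prescribes.

```python
import re

# Illustrative phrases seen in prompt-injection attempts. A static list like
# this is an assumption for demonstration; real deployments would pair it
# with model-based classifiers and output filtering.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system prompt|hidden instructions)",
]

def looks_like_injection(user_prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrase."""
    text = user_prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A static pattern list is trivially bypassed by rephrasing, which is exactly why the report argues for dedicated prompt security engineering skills rather than one-off filters.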
3. Lack the right third-party risk management questions to ask
generative AI vendors. Sure, advanced organizations with heavy
research and development budgets will build their own AI
systems — but most companies will buy generative AI solutions
from a vendor or receive them bundled in an offering they
already subscribe to. Every S&R pro knows the danger and
complexities inherent in managing suppliers. This emerging
technology creates new supply chain security and third-party risk
management problems for security teams and introduces
additional complexity given that the foundational models are so
large that detailed auditing of them is impossible.
4. Need to deploy modern security practices for AI success. Many security technologies that will secure your firm's adoption of generative AI already exist within the cybersecurity domain. Two examples are API security and privacy-preserving technologies, both of which are introducing new controls to secure generative AI. This will force your team to work with new vendors and technologies and to acquire and train on new skills. While processes also serve as useful security controls, generative AI will uncover procedural gaps in domains involving data leakage, data lineage and observability, and privacy.
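One data-leakage control of the kind item 4 alludes to can be sketched as a redaction pass applied before a prompt leaves the enterprise boundary. The patterns and placeholder format below are assumptions for illustration; real deployments rely on dedicated privacy-preserving tooling, not two regexes.

```python
import re

# Minimal sketch of a pre-send redaction step: replace obvious identifiers
# with typed placeholders before the prompt reaches an external LLM API.
# Pattern coverage here is deliberately tiny and is an assumption for
# illustration only.
REDACTIONS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with placeholders like [EMAIL]."""
    for label, pattern in REDACTIONS.items():
        prompt = re.sub(pattern, f"[{label}]", prompt)
    return prompt
```
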

Figure 1 - Skills, Requirements, And Controls Necessary To Secure Generative AI

Departments, Use Cases, Threats, And Risks


Most organizations will buy, not build, generative AI. Many may not even buy generative AI directly but will receive it via bundled integrations such as Microsoft's plans for Copilot, CrowdStrike's introduction of Charlotte AI, and Google Cloud Security AI Workbench. This forcing function mandates that S&R leaders understand the relevant departments, use cases, threats, and risks based on their current vendors and bundling strategies. To achieve this, combine "Generative AI Prompts Productivity, Imagination, And Innovation In The Enterprise" with the framework from "The CISO's Guide To Securing Emerging Technology" and create a table that lists each item (see Figure 2). This will help provide context on the challenges the adoption of this emerging technology will introduce.
Figure 2 - Departments, Use Cases, Threats, And Risks: Example

Both Large-Scale And Smaller Fine-Tuned Language Models Pose Cybersecurity Risks
Security leaders need to focus time, attention, and budget on the large-scale foundational AI models offered by OpenAI, Microsoft, Google, and others. However, as demonstrated by the leaked document sourced from Google entitled "We Have No Moat, And Neither Does OpenAI," many smaller models will emerge that need protection. It is much more likely that your corporate data will be used to train one of these functional models. Generally, finding out that security needs to protect more of something is not great news, but there is a silver lining: If your organization trains these smaller functional models using corporate data, it is less likely to use corporate data in one of the large-scale foundational AI models. Expect large-scale models to feature prominently in more generalized use cases while your data science and development teams use corporate data to train and fine-tune smaller models based on organization-specific use cases. Securing these niche models won't be easy, but it is much more manageable than securing the large-scale models, and vendors like HiddenLayer, CalypsoAI, and Robust Intelligence already exist to assist here.

Adoption Timelines, Supplier Questions, And Skill Sets
The sudden surge in generative AI in 2023 will become the de facto case study in how emerging technology adoption radically shifts and how hype can cause you to ignore something that brings disruptive change. To avoid scrambling for answers, use the following methodology to understand protections for generative AI implementation before it becomes your next urgent problem to solve.

Understand How Catalysts And Dependencies Dictate Adoption Timelines

By combining business use cases by department and mapping catalysts and dependencies, S&R pros can predict when their organization will understand the relevance of an emerging technology. Generative AI's dependencies include: 1) massive amounts of data to train the models, 2) skilled practitioners to build the technology, and 3) substantial compute resources. OpenAI built the first two and leveraged cloud for the third. Then it found its catalysts in a partnership with Microsoft that provided: 4) significant funding, 5) beta tests at scale, and 6) a path to enterprise customers. Together these elements created an inflection point (see Figure 3). For most organizations, the adoption timeline for generative AI went from "soon" to "yesterday" after Microsoft and other vendors announced plans to bundle these capabilities into widely adopted and deployed enterprise tools.

Figure 3 - Generative AI Catalysts And Dependencies

Compile Generative AI Third-Party Risk Management Questions


Given that most organizations will find generative AI integrated into
already deployed products and services, one immediate priority for
security leaders is third-party risk management. When you buy a
product or service that includes generative AI, you depend on your
suppliers to secure the solution. Microsoft and Google are taking that
responsibility as they bundle and integrate generative AI into services
like Copilot and Workspace, but other providers will source AI
solutions from their own supplier ecosystem. You will need to compile
your set of supplier security and risk management questions based on
the use cases outlined above (see Figure 4).
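Compiling supplier questions works best when the questionnaire is tracked as structured data rather than a document, so gaps per vendor are visible. The categories and sample questions below are illustrative assumptions and do not reproduce the report's Figure 4.

```python
# Illustrative sketch: a supplier questionnaire as structured data so that
# unanswered items can be reported per vendor. The questions themselves are
# assumptions for illustration, not the report's actual question set.
SUPPLIER_QUESTIONS = {
    "data_handling": [
        "Is our data used to train or fine-tune your models?",
        "How long are prompts and outputs retained?",
    ],
    "model_supply_chain": [
        "Which foundation model providers do you depend on?",
    ],
}

def open_items(answers: dict) -> list:
    """Return the questions a supplier has not yet answered."""
    return [
        question
        for section in SUPPLIER_QUESTIONS.values()
        for question in section
        if question not in answers
    ]
```
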

Figure 4 - Third-Party Risk Management Questions For Suppliers Of Generative AI Solutions

Identify Your Skill Gaps And How To Solve Them


With adoption happening now, your security team needs to get up to
speed quickly on generative AI. Security practitioners need training to
understand prompt engineering and prompt injection attacks. Your
third-party risk management program needs help developing
frameworks and questionnaires that help assess generative AI
partners and suppliers. Your privacy team needs help navigating the
legal landscape based on your intended use cases. Your infrastructure
team needs help understanding how to secure the technical
implementation and integration of systems (see Figure 5). Note that
there is still at least one question mark for competencies because this
area is developing so quickly.

Figure 5 - Skills Inventory Example

About Forrester Reprints https://go.forrester.com/research/reprints/

© 2025. Forrester Research, Inc. and/or its subsidiaries. All rights reserved.
