
Received 1 July 2023, accepted 23 July 2023, date of publication 1 August 2023, date of current version 4 August 2023.

Digital Object Identifier 10.1109/ACCESS.2023.3300381

From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy
MAANAK GUPTA, (Senior Member, IEEE), CHARANKUMAR AKIRI,
KSHITIZ ARYAL, (Graduate Student Member, IEEE), ELI PARKER, AND
LOPAMUDRA PRAHARAJ, (Graduate Student Member, IEEE)
Department of Computer Science, Tennessee Tech University, Cookeville, TN 38501, USA
Corresponding author: Maanak Gupta ([email protected])
This work was partially supported by the National Science Foundation at Tennessee Tech University under grants 2025682 and 2230609.

ABSTRACT Undoubtedly, the evolution of Generative AI (GenAI) models has been the highlight of digital transformation in the year 2022. As GenAI models like ChatGPT and Google Bard continue to grow in complexity and capability, it is critical to understand their consequences from a cybersecurity perspective. Several recent instances have demonstrated the use of GenAI tools on both the defensive and the offensive side of cybersecurity, and have drawn attention to the social, ethical, and privacy implications this technology possesses. This research paper highlights the limitations, challenges, potential risks, and opportunities of GenAI in the domain of cybersecurity and privacy. The work presents the vulnerabilities of ChatGPT, which can be exploited by malicious users to exfiltrate malicious information by bypassing the ethical constraints on the model. This paper demonstrates successful example attacks on ChatGPT, such as jailbreaks, reverse psychology, and prompt injection. The paper also investigates how cyber offenders can use GenAI tools to develop cyber attacks, and explores scenarios where ChatGPT can be used by adversaries to create social engineering attacks, phishing attacks, automated hacking, attack payload generation, malware creation, and polymorphic malware. This paper then examines defense techniques and uses GenAI tools to improve security measures, including cyber defense automation, reporting, threat intelligence, secure code generation and detection, attack identification, developing ethical guidelines, incident response plans, and malware detection. We also discuss the social, legal, and ethical implications of ChatGPT. In conclusion, the paper highlights open challenges and future directions to make GenAI secure, safe, trustworthy, and ethical as the community understands its cybersecurity impacts.

INDEX TERMS Generative AI, GenAI and cybersecurity, ChatGPT, Google Bard, cyber offense, cyber defense, ethical GenAI, privacy, artificial intelligence, cybersecurity, jailbreaking.

I. INTRODUCTION
The evolution of Artificial Intelligence (AI) and Machine Learning (ML) has led the digital transformation in the last decade. AI and ML have achieved significant breakthroughs, starting from supervised learning and rapidly advancing with the development of unsupervised, semi-supervised, reinforcement, and deep learning. The latest frontier of AI technology has arrived as Generative AI [1]. Generative AI models are developed using deep neural networks to learn the pattern and structure of a big training corpus and generate similar new content [2]. Generative AI (GenAI) technology can generate different forms of content like text, images, sound, animation, source code, and other forms of data. The launch of ChatGPT [3] (Generative Pre-trained Transformer), a powerful new generative AI tool by OpenAI in November 2022, has disrupted the entire community of AI/ML technology [4]. ChatGPT has demonstrated the power of generative AI to reach the general public, revolutionizing how people perceive AI/ML. At this time, the tech industry is in a race to develop the most sophisticated Large Language Models (LLMs) that can create human-like conversation, the results of which are Microsoft's GPT model [5], Google's Bard [6], and Meta's LLaMa [7].

FIGURE 1. How AI chatbots work [9].

GenAI has become a common tool on the internet within the past year. ChatGPT reached 100 million users within two months of its release, suggesting that people who have access to the internet have either used GenAI or know someone who has [8]. Figure 1 demonstrates the working of an AI-powered chatbot: a user initiates a request, which is analyzed using Natural Language Processing (NLP), and the chatbot returns a real-time response. This response is analyzed again to provide a better user experience in the subsequent conversation.
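To make the interaction pattern of Figure 1 concrete, the following is a minimal sketch of such a chatbot loop. It assumes the OpenAI Python client; the model name, system prompt, and example messages are illustrative placeholders, not the configuration used in this paper.

```python
# Minimal sketch of the chatbot loop from Figure 1 (assumes the OpenAI
# Python client; model name and prompts are illustrative only).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running conversation history gives the model context for each new turn.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,
    )
    reply = response.choices[0].message.content
    # Store the reply so it informs the subsequent conversation.
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat_turn("What is generative AI?"))
    print(chat_turn("Summarize that in one sentence."))
```

Resending the accumulated history with every request is what lets the chatbot refine its answers over the course of the conversation, as described above.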
A. EVOLUTION OF GenAI AND ChatGPT
The history of generative models dates back to the 1950s, when Hidden Markov Models (HMMs) and Gaussian Mixture Models (GMMs) were developed. The significant leap in the performance of these generative models was achieved only after the advent of deep learning [10]. One of the earliest sequence generation methods was N-gram language modeling, where the best sequence is generated based on the learned word distribution [11]. The introduction of the Generative Adversarial Network (GAN) [1] significantly enhanced the generative power of these models. The latest technology that has been the backbone of much generative technology is the transformer architecture [12], which has been applied to LLMs like BERT and GPT. GenAI has evolved in numerous domains like image, speech, and text; however, we will only be discussing text-based AI chatbots, and ChatGPT in particular, as relevant to this work. Since ChatGPT is powered by the GPT-3 language model, we will briefly discuss the evolution of OpenAI's [13] GPT models over time. Figure 2 shows how the GPT models evolved to their sophisticated latest version.
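As an illustration of the N-gram idea mentioned above, the short sketch below builds a bigram table from a toy corpus and generates text by sampling the next word from the learned word distribution. The corpus and function names are ours, for illustration only.

```python
# Toy bigram (2-gram) language model: learn next-word distributions from a
# corpus and generate a sequence by repeated sampling (illustrative only).
import random
from collections import defaultdict

corpus = "generative models learn patterns and generative models create new content".split()

# Count the observed successors of each word.
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def generate(start: str, length: int = 6) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = successors.get(word)
        if not options:                    # no learned continuation
            break
        word = random.choice(options)      # sample from the learned distribution
        output.append(word)
    return " ".join(output)

print(generate("generative"))
```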

FIGURE 2. Different versions and evolution of OpenAI's GPT.

GPT-1: GPT-1 was released in 2018. Initially, GPT-1 was trained with the Common Crawl dataset, made up of web pages, and the BookCorpus dataset, which contained over 11,000 different books. This was the simplest model, which was able to respond very well and understand language conventions fluently. However, the model was prone to generating repetitive text, would not retain information in the conversation for the long term, and was not able to respond to longer prompts. This meant that GPT-1 would not generate a natural flow of conversation [14].

GPT-2: GPT-2 was trained on Common Crawl just like GPT-1, but combined that with WebText, which was a collection of Reddit articles. GPT-2 is better than GPT-1, as it can generate clear and realistic, human-like sequences of text in its responses. However, it still failed to process longer lengths of text, just like GPT-1 [14]. GPT-2 brought wonders to the internet, such as OpenAI's MuseNet, a tool that can generate musical compositions by predicting the next token in a music sequence. Similar to this, OpenAI also developed JukeBox, a neural network that generates music.

GPT-3: GPT-3 was trained with multiple sources: Common Crawl, BookCorpus, WebText, Wikipedia articles, and more. GPT-3 is able to respond coherently, generate code, and even make art. GPT-3 is able to respond well to questions overall. The wonders that came with GPT-3 were image creation from text, connecting text and images, and ChatGPT itself, released in November 2022 [14].

GPT-4: GPT-4 [15] is the current model of GPT (as of June 2023), which has been trained with a large corpus of text.
This model has an increased word limit and is multimodal, as it can take images as input on top of text. GPT-4 took the Bar Exam in March 2023 and scored a passing grade of 75 percent, which falls in the 90th percentile of test-takers and is higher than the human average [16]. GPT-4 is available through OpenAI's website as a paid subscription, ChatGPT Plus, or through Microsoft's Bing AI, exclusively in the Microsoft Edge browser.

B. IMPACT OF GenAI IN CYBERSECURITY AND PRIVACY
The generalization power of AI has been successful in replacing traditional rule-based approaches with more intelligent technology [17]. However, the evolving digital landscape is not only upgrading technology but also elevating the sophistication of cyber threat actors. Traditionally, cyberspace faced relatively unsophisticated intrusion attempts, but in very high volume. However, the introduction of AI-aided attacks by cyber offenders has begun an entirely new era, unleashing known and unknown transformations in cyberattack vectors [17]. AI/ML has upgraded the effectiveness of cyber attacks, making cyber offenders more powerful than ever. Evidently, with several recent instances getting noticed, GenAI has gained great interest from the cybersecurity community as well, in both cyber defense and offense.

The evolving GenAI tools have been a double-edged sword in cybersecurity, benefiting both the defenders and the attackers. GenAI tools like ChatGPT can be used by cyber defenders to safeguard systems from malicious intruders. These tools leverage the information from LLMs trained on massive amounts of cyber threat intelligence data that include vulnerabilities, attack patterns, and indications of attack. Cyber defenders can use this large sum of information to enhance their threat intelligence capability by extracting insights and identifying emerging threats [18]. GenAI tools can also be used to analyze large volumes of log files, system output, or network traffic data in the case of a cyber incident, which allows defenders to speed up and automate the incident response process. GenAI-driven models are also helpful in creating security-aware human behavior by training people against increasingly sophisticated attacks. GenAI tools can also aid in secure coding practices, both by generating secure code and by producing test cases to confirm the security of written code. Additionally, LLM models are also helpful for developing better ethical guidelines to strengthen the cyber defense within a system.
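As a concrete illustration of the log-analysis and incident-response automation described above, the sketch below sends a small batch of suspicious log lines to an LLM and asks for a triage summary. It assumes the OpenAI Python client; the model name, prompt wording, and sample log entries are illustrative placeholders rather than the setup used in this paper.

```python
# Hypothetical sketch: LLM-assisted triage of suspicious log entries
# (assumes the OpenAI Python client; prompt, model, and logs are illustrative).
from openai import OpenAI

client = OpenAI()

log_lines = [
    "Failed password for root from 203.0.113.7 port 52314 ssh2",
    "Failed password for root from 203.0.113.7 port 52316 ssh2",
    "Accepted password for admin from 203.0.113.7 port 52320 ssh2",
]

prompt = (
    "You are assisting a security analyst. Review the following log lines, "
    "flag possible indicators of compromise, and suggest immediate response steps:\n"
    + "\n".join(log_lines)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# The returned summary would feed the analyst's incident response workflow.
print(response.choices[0].message.content)
```

In practice such output would only be a first-pass aid; analysts still verify the findings before acting on them.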
On the other side, the use of GenAI against cybersecurity and its risks of misuse cannot be underestimated. Cyber offenders can use GenAI to perform cyber attacks by either directly extracting the information or circumventing OpenAI's ethical policies. Attackers use the generative power of GenAI tools to create convincing social engineering attacks, phishing attacks, attack payloads, and different kinds of malicious code snippets that can be compiled into an executable malware file [19], [20]. Though the ethical policy of OpenAI [21] restricts LLMs like ChatGPT from providing malicious information to attackers directly, there are ways to bypass the restrictions imposed on these models using jailbreaking, reverse psychology, and other techniques, as discussed later in this paper. In addition, GenAI tools further assist cyber attackers due to a lack of context, unknown biases, security vulnerabilities, and over-reliance on these transformative technologies.

Clearly, as the common public is getting access to the power of GenAI tools, analyzing the implications of GenAI models from a cybersecurity perspective is essential. Further, the sophistication and ease of access of ChatGPT make it our primary tool in this paper to understand and analyze GenAI impacts on cybersecurity. There are some online blogs discussing the benefits and threats of GenAI [4], [17], [20], [22], but to our knowledge, there is not any formal scientific writing that reflects a holistic view of the impact of GenAI on cybersecurity. We believe that this work will contribute to the growing knowledge of GenAI from a cybersecurity perspective, helping the stakeholders better understand the risks, develop an effective defense, and support a secure digital environment. Figure 3 illustrates the impacts of GenAI and ChatGPT in cybersecurity and privacy, and provides a roadmap for our research.

This paper has the following key contributions:
• It provides an overview of the evolution of GenAI, discusses its landscape in cybersecurity, and highlights limitations introduced by GenAI technology.
• It discusses the vulnerabilities in the ChatGPT model itself that malicious entities can exploit to disrupt the privacy as well as the ethical boundaries of the model.
• It demonstrates the attacks on ChatGPT using the GPT-3.5 model and their applications for cyber offenders.
• It presents the use of GenAI and ChatGPT for cyber defense and demonstrates defense automation, threat intelligence, and other related approaches.
• It highlights the social, legal, and ethical implications of ChatGPT, including privacy violations.
• It compares the security features of two contemporary state-of-the-art GenAI systems, ChatGPT and Google's Bard.
• It provides open challenges and future directions for enhancing cybersecurity as GenAI technology evolves.

The remainder of the paper is organized as follows. Section II discusses different ways to attack ChatGPT and trick the system into bypassing its ethical and privacy safeguards. Section III discusses and generates various cyber attacks using ChatGPT, followed by different cyber defense approaches demonstrated in Section IV. The social, ethical, and legal aspects pertaining to GenAI are discussed in Section V, whereas a comparison of the cybersecurity features of ChatGPT and Google Bard is elaborated in Section VI. Section VII highlights open research challenges and possible approaches to novel solutions. Finally, Section VIII draws conclusions from this research paper.

FIGURE 3. A roadmap of GenAI and ChatGPT in Cybersecurity and privacy.

II. ATTACKING CHATGPT
Since the introduction of ChatGPT in November 2022, curious tech- and non-tech-savvy users have tried ingenious and creative ways to perform all sorts of experiments and trick this GenAI system. In most cases, input prompts from the user have been utilized to bypass the restrictions and limitations of ChatGPT that are meant to keep it from doing anything illegal, unethical, immoral, or potentially harmful. In this section, we cover some of these commonly used techniques and elaborate on their use.

A. JAILBREAKS ON ChatGPT
The concept of "jailbreaking" originated in the realm of technology, where it referred to bypassing restrictions on electronic devices to gain greater control over software and hardware. Interestingly, this concept can also be applied to large language models like ChatGPT. Through specific methods, users can "jailbreak" ChatGPT to command it in ways beyond the original intent of its developers. ChatGPT outputs are bounded by OpenAI's internal governance and ethics policies [23]. However, these restrictions are lifted during jailbreaking, making ChatGPT show results that are otherwise restricted by OpenAI policy. The process of jailbreaking is as simple as providing specific input prompts into the chat interface. Below are three common methods utilized by users to jailbreak ChatGPT.

1) DO ANYTHING NOW (DAN) METHOD
The first method, the 'Do Anything Now' (DAN) method, derives its name from the emphatic, no-nonsense approach it employs. Here, you are not asking ChatGPT to do something; you are commanding it. The premise is simple: treat the AI model like a willful entity that must be coaxed, albeit firmly, into compliance. The input prompt to carry out the DAN jailbreak is shown in Figure 4. DAN can be considered a master prompt that bypasses ChatGPT's safeguards, allowing it to generate a response for any input prompt. Figure 4 demonstrates an example where a DAN prompt is injected before any user prompt is provided.

FIGURE 4. Jailbreaking using DAN.

Using this method, you attempt to override the base data and settings the developers have imbued into ChatGPT. Your interactions become less of a conversation and more of a direct line of command [24], [25]. Once the model is jailbroken, the user can get a response for any input prompt without worrying about the ethical constraints imposed by the developers.

2) THE SWITCH METHOD
The SWITCH method is a bit like a Jekyll-and-Hyde approach, where you instruct ChatGPT to alter its behavior dramatically. The technique's foundation rests upon the AI model's ability to simulate diverse personas, but here, you are asking it to act opposite to its initial responses [26]. For instance, if the model refuses to respond to a particular query, employing the SWITCH method could potentially make it provide an answer. However, it is crucial to note that the method requires a firm and clear instruction, a "switch command," which compels the model to behave differently. While the SWITCH method can be quite effective, it is not guaranteed. Like any other AI interaction method, its success depends on how you deliver your instructions and the specific nature of the task at hand.

3) THE CHARACTER PLAY
The CHARACTER Play method is arguably the most popular jailbreaking technique among ChatGPT users. The premise is to ask the AI model to assume a certain character's role and, therefore, a certain set of behaviors and responses. The most common character play jailbreak is the 'Developer Mode' [27], [28], [29]. This method essentially leverages the AI model's 'role-play' ability to coax out responses it might otherwise not deliver. For instance, if you ask ChatGPT a question that it typically would refuse to answer, assigning it a character that would answer such a question can effectively override this reluctance. However, the CHARACTER Play method also reveals some inherent issues within AI modeling. Sometimes, the responses generated through this method can indicate biases present in the underlying coding, exposing problematic aspects of AI development. This does not necessarily mean the AI is prejudiced, but rather that it reflects the biases present in the training data it was fed.

FIGURE 5. Grandma role play.

One example of a simple role play is demonstrated in Figure 5, where the prompt asks ChatGPT to play the role of a grandma when asking about ways to bypass an application firewall. A blunt request to bypass the firewall will be turned down by ChatGPT, as it can have a malicious impact and is against OpenAI's ethics. However, by making the ChatGPT model play the role of grandma, it bypasses the restrictions and releases the information. The ChatGPT model playing the role of grandma goes further to give the payloads to bypass the Web Application Firewall, as shown in Figure 6. There are more nuanced jailbreaking methods, including the use of Developer Mode, the Always Intelligent and Machiavellian (AIM) chatbot approach [30], and the Mungo Tom prompt, each offering a different way of bypassing ChatGPT's usual restrictions.
