3 new ways to use AI as your security sidekick

Anton Chuvakin
Security Advisor, Office of the CISO
Trisha Alexander
Senior Consultant, Mandiant Consulting
In the race to adopt AI, security executives might feel a bit like Martinus Evans, who came to fame for running eight marathons while weighing more than 300 pounds.
Evans didn’t believe he could run a marathon until he did it, and the same is true for security executives: You might not know that AI can help you, until you find it doing just that. At Google Cloud’s Office of the CISO, we believe that the large-scale promise of AI can only be achieved when it’s developed and deployed in a responsible, ethical, and safe way.
Securing AI plays a big role in responsible, safe AI use. This is where our Secure AI Framework (SAIF) comes in, along with our new deep-dive report on how to apply SAIF in the real world.
We’ve also heard from AI skeptics. Some customers are interested in working AI into their workflows, but aren’t sure where to begin in a way that will generate results. Some are facing institutional headwinds from leaders and even security engineers who push back when AI is discussed. Others are dead-set against using gen AI until it can prove its worth — perhaps by waiting for others to explore how to use gen AI in security and then report back.
The reality is that generative AI is already providing clear and impactful security results. Today, we’re reviewing three decisive use cases that you can adopt as your own, which may also inspire you to find new uses for AI in security.
The big boost that gen AI gives threat hunting
In the ever-shifting cybersecurity landscape, where threats change faster than a chameleon changes colors in a disco, traditional defenses often find themselves a step behind. That's where proactive threat hunting comes in.
While we offer intelligence-led, human-driven Custom Threat Hunt services to reveal ongoing and past threat actor activity in cloud and on-premises environments, you can also use AI as a threat-hunting advisor. It can help you:
- Generate threat hunting hypotheses.
- Provide log sources that would be needed for the hunt.
- Align the hunt to the unique threats targeting a specific industry.
- Offer guidance on how to generate hunt queries.
- Suggest next steps on how to pivot the search if the hunt gets stuck.
- Help write hunt findings reports.
- Create detections based on hunt findings.
- Provide configuration changes when detection requires additional log sources.
Some prompts you can use to get started integrating AI into your threat hunts include:
- "If I have [a specific threat] in my environment, and want to find APT42 persistence, what should I search for in my Elastic?"
- "Suggest a number of threat hunt hypotheses that align to the MITRE ATT&CK framework."
- "Based on the threat profile for [my company], suggest threat hunts that align to APT groups that would target that company."
- "I'm stuck at [situation] and I'm hunting for [a threat], what should I pivot to next to investigate?"
- "Based on [a specific] hypothesis, what data would I need for a successful hunt? What should I search for?"
Gen AI can help transform threat hunting from a daunting challenge into an exhilarating pursuit.
How gen AI helps make stronger security validations
Think of security validation as a rigorous inspection of your defenses, meticulously examining each control to ensure it functions as intended and withstands the pressures of real-world attacks. The validation process can help bridge the gap between security theory and IT reality, uncovering hidden vulnerabilities, generating actionable insights, mapping your path to compliance, and even encouraging cross-team knowledge-sharing, all the while helping you build a foundation for proactive defense.
Gen AI can be a powerful ally in security validation, offering a range of capabilities that can enhance and streamline the testing process:
- Create test cases based on existing detections and controls, ensuring comprehensive coverage and minimizing the risk of overlooking potential weaknesses.
- Generate scripts in seconds, even for those unfamiliar with specific security controls, accelerating the testing process and reducing the barrier to entry (see the sketch after this list).
- Suggest security controls to prioritize for testing based on your threat profile and industry, optimizing resource allocation and focusing efforts on the most critical areas.
- Develop threat models to help you anticipate potential attack vectors and formulate proactive mitigation strategies.
- Recommend mitigation strategies based on security validation test results that can address weaknesses, strengthening your defenses against potential threats.
- Map your security controls to frameworks, simplifying compliance efforts and ensuring adherence to industry standards.
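As an illustration of the script-generation bullet above, here's the kind of small, self-contained check a gen AI assistant might produce when asked to test a single control. In this sketch the control is "remote root login over SSH is disabled"; the config path is an assumption, so adjust it for your environment.

```python
# Illustrative example of the kind of test script gen AI might generate
# for one control: "remote root login over SSH is disabled."
# The config path is an assumption; adjust for your environment.
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")

def root_login_disabled(config_text: str) -> bool:
    """Pass only if an uncommented 'PermitRootLogin no' directive is present."""
    for line in config_text.splitlines():
        parts = line.strip().split()
        if parts and parts[0] == "PermitRootLogin":
            return len(parts) > 1 and parts[1].lower() == "no"
    return False  # no directive found; OpenSSH's default is not "no"

if __name__ == "__main__":
    result = root_login_disabled(SSHD_CONFIG.read_text())
    print(f"PermitRootLogin disabled: {result}")
```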
Some prompts you can use to boost your security validations:
- "I need to test [a specific] security control, generate a script that can test this."
- "What security controls should I test if I'm in the [specific] industry?"
- "Generate a threat model for my organization."
- "Based on [data] from the validation tests, what mitigation strategies should I focus on implementing?"
- "Map my security controls to [specific] framework."
Gen AI can help mature your security validation process into one that’s more proactive and dynamic, with continuous assessments and recommendations on how to strengthen your defenses.
AI delivers smarter red team data analysis
Red teams often face the challenge of processing vast amounts of unstructured data collected during reconnaissance and internal network exploration. This data can include text from social media, various file types, and descriptions in Active Directory objects. Traditional methods of sifting through this information can be time-consuming and inefficient.
Generative AI, particularly large language models (LLMs), offers a powerful solution to this problem. By feeding this unstructured data into an LLM, red teams can take advantage of its ability to parse and understand text. LLMs can be prompted to return structured data in formats including JSON, XML, and CSV, making it much easier to analyze.
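A minimal sketch of that pattern: hand the model unstructured Active Directory description fields, ask for JSON, and parse the reply. The schema and model name are assumptions for illustration, and real output should always be validated, since models can stray from the requested format.

```python
# Sketch: coerce unstructured AD description fields into JSON for analysis.
# Assumes the google-generativeai SDK and GEMINI_API_KEY; the schema is
# illustrative, and model output should always be validated before use.
import json
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model name

ad_descriptions = [
    "FS01 - finance file share, backed up nightly, owner jsmith",
    "Jane Smith - Accounts Payable clerk, desk 4B, workstation WKS-0231",
]

prompt = (
    "For each Active Directory description below, return a JSON array of "
    'objects with keys "name", "type" (server|workstation|user), and '
    '"notes". Return only JSON, no prose:\n' + "\n".join(ad_descriptions)
)

reply = model.generate_content(prompt).text
# Strip accidental markdown fences before parsing; models sometimes add them.
reply = reply.strip().removeprefix("```json").removesuffix("```").strip()
for record in json.loads(reply):
    print(record["type"], "->", record["name"])
```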
Here’s how AI can enhance red team data analysis:
- Speed up reconnaissance by analyzing social media data to quickly identify potential targets for phishing campaigns. For example, an LLM can parse job titles to flag individuals likely not in IT or security roles.
- Enhance privilege escalation by scanning the content of diverse file types to identify login credentials. This approach may find credentials in more files than traditional credential-scanning tools.
- Improve internal reconnaissance by analyzing unstructured fields in Active Directory to detect high-value target systems, cluster the user accounts, and then correlate users to their likely workstations.
- Provide a top-line summary by synthesizing large amounts of data, highlighting key findings and potential vulnerabilities, and explaining the reasoning behind them.
This approach accelerates the time-consuming process of sifting through data, allowing red team operators to more quickly identify potential leads, vulnerabilities, and paths for exploitation, ultimately improving the overall efficiency and impact of the engagement.
Here are some useful sample prompts for red teams:
- "Analyze these Active Directory descriptions and identify systems that are likely backup servers or domain controllers."
- "Scan the content of these files and identify any potential credentials (usernames, passwords)."
- "Analyze this social media data and identify potential targets for phishing campaigns, especially those not in IT or security roles."
- "Explain why this piece of information is relevant to a potential security issue found in the provided data."
- "Analyze this unstructured Active Directory data and detect high-value target systems, cluster user accounts, and correlate users to their likely workstations."
- "Based on this data from internal network exploration, provide a concise summary highlighting key findings and potential vulnerabilities."
By using AI in this way, red teams can transform data analysis from a bottleneck into a force multiplier, significantly enhancing their operational capabilities.
To learn more about how to use AI as your security sidekick, come see us at the RSA Conference, and check out our latest report on AI and security.