The company post is linked; it seems like an update on where we are with automated cybersec.
So far in 2025, only four security vulnerabilities received CVE identifiers in OpenSSL, the cryptographic library that secures the majority of internet traffic. AISLE’s autonomous system discovered three of them (CVE-2025-9230, CVE-2025-9231, and CVE-2025-9232).
Some quick thoughts:
OpenSSL is one of the most heavily security-audited pieces of open-source code ever, at least by humans, so discovering three new vulnerabilities sounds impressive. How impressive exactly? I’m curious about people’s opinions.
Obviously, vulnerability discovery is a somewhat symmetric capability, so this also gives us some estimate of the offense side.
This provides concrete evidence for the huge pool of bugs that are findable and exploitable even by current-level AI; in my impression, this is something everyone sane already believed existed.
On the other hand, it does not neatly support the story where it’s easy for rogue AIs to hack anything. Automated systems can also fix the bugs, systems like this will hopefully be deployed, and it seems likely the defense side will start with a large compute advantage.
It’s plausible that the “programs are proofs” limit is defense-dominated. On the other hand, actual programs are leaky abstractions of the physical world, and it’s less clear what the limit is in that case.
I would love for someone to tell me how big a deal these vulnerabilities are, and how hard people had previously been trying to catch them. The blog post says that two were severity “Moderate”, and one was “Low”, but I don’t really know how to interpret this.
Two of the bugs AISLE highlighted are memory corruption primitives. They could be used in certain situations to crash a program that was running OpenSSL (like a web server), which is a denial of service risk. Because of modern compiler safety techniques, they can’t on their own be used to access data or run code, but they’re still concerning because it sometimes turns out to be possible to chain primitives like these into more dangerous exploits.
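To make the “crash, but not data access or code execution” distinction concrete, here is a toy C sketch (my own illustration, not the actual OpenSSL bugs): an unchecked copy overwrites the compiler-inserted stack canary, and the stack protector aborts the process on return. That abort is a denial-of-service primitive, nothing more on its own.

```c
/* Toy illustration of a memory corruption primitive that modern
 * mitigations turn into a controlled crash. NOT the actual OpenSSL
 * bugs. Build with the stack protector enabled, e.g.:
 *   gcc -fstack-protector-strong demo.c -o demo
 */
#include <string.h>
#include <stdio.h>

static void parse_field(const unsigned char *input, size_t len) {
    char buf[16];
    /* Bug: no bounds check on the attacker-controlled length. */
    memcpy(buf, input, len);        /* overflows buf when len > 16 */
    printf("parsed %zu bytes\n", len);
}

int main(void) {
    unsigned char attacker_data[64];
    memset(attacker_data, 'A', sizeof(attacker_data));
    /* The overwritten canary is detected when parse_field returns:
     * glibc prints "*** stack smashing detected ***" and aborts.
     * The process dies (DoS), but the attacker reads no data and
     * runs no code from this primitive alone. */
    parse_field(attacker_data, sizeof(attacker_data));
    return 0;
}
```

Chaining comes in when a primitive like this is combined with something else, say an info leak that reveals the canary or memory layout, which is why “crash-only” bugs still get patched promptly.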
The third bug is a “timing side-channel bug” in a particular opt-in certificate algorithm that OpenSSL provides, when used on ARM architectures. It’s a pretty niche circumstance, but it does look legitimate to me. The only way to know whether it’s exploitable would be to try to build some kind of PoC.
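For intuition on what a timing side channel looks like at the code level, here is a generic C sketch (not the specific OpenSSL/ARM issue): an early-exit comparison whose running time depends on secret data, next to the constant-time version that fixes it.

```c
/* Generic timing side-channel sketch, not the specific OpenSSL/ARM bug. */
#include <stddef.h>

/* Leaky: returns as soon as a byte differs, so the running time
 * reveals how many leading bytes of the guess matched the secret.
 * An attacker who can measure response times can recover the
 * secret one byte at a time. */
int compare_variable_time(const unsigned char *secret,
                          const unsigned char *guess, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (secret[i] != guess[i])
            return 0;
    }
    return 1;
}

/* Constant-time: always touches every byte and uses no
 * secret-dependent branches, so the running time carries no
 * information about the secret. */
int compare_constant_time(const unsigned char *secret,
                          const unsigned char *guess, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= secret[i] ^ guess[i];
    return diff == 0;
}
```

Fixes for side channels in cryptographic code generally take this shape: rewrite the hot path so that its branches and memory accesses do not depend on secret values.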
OpenSSL is a very hardened target, and lots of security researchers look at it. Any security-relevant bug found in OpenSSL is pretty impressive.
Short answer: these aren’t Heartbleed-class, but they’re absolutely worth patching.
Two signals: (i) OpenSSL itself minted CVEs for them, which is non-trivial given its conservative posture; and (ii) fixes were backported across supported branches (3.5.4 / 3.4.3 / 3.3.5 / 3.2.6, with distro backports).
For context, per OpenSSL’s own vulnerability index as of today (3 Nov 2025), there were 4 CVEs in 2025 year-to-date, 9 in 2024, 18 in 2023, and 15 in 2022. Getting any CVE there is hard. “Low/Moderate” here mostly reflects narrow preconditions and prevalence within typical OpenSSL usage, not that the primitives themselves are trivial. The score (called CVSS) compresses likelihood and impact into one scalar.
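To unpack the “one scalar” point, here is a minimal sketch of the CVSS v3.1 base-score formula (Scope: Unchanged case only; weights from the v3.1 specification). The inputs below are hypothetical, not the actual vectors of these CVEs; they show how narrow preconditions (local access, high attack complexity) pull the score down even when the impact term is identical.

```c
/* Sketch of the CVSS v3.1 base score, Scope: Unchanged case only.
 * Weights are from the CVSS v3.1 spec. Compile with -lm. */
#include <math.h>
#include <stdio.h>

/* The spec's Roundup(): smallest one-decimal value >= x. */
static double roundup1(double x) { return ceil(x * 10.0) / 10.0; }

static double cvss31_base(double av, double ac, double pr, double ui,
                          double c, double i, double a) {
    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;
    double exploitability = 8.22 * av * ac * pr * ui;
    if (impact <= 0.0)
        return 0.0;
    return roundup1(fmin(impact + exploitability, 10.0));
}

int main(void) {
    /* Hypothetical vectors with identical impact (Availability: High),
     * differing only in preconditions. */
    printf("local + high complexity:  %.1f\n",   /* -> 5.1 (Medium) */
           cvss31_base(0.55, 0.44, 0.85, 0.85, 0.0, 0.0, 0.56));
    printf("network + low complexity: %.1f\n",   /* -> 7.5 (High) */
           cvss31_base(0.85, 0.77, 0.85, 0.85, 0.0, 0.0, 0.56));
    return 0;
}
```

Same availability impact in both cases; the preconditions alone move the scalar from High down to Medium. Note that OpenSSL’s Low/Moderate/High/Critical ratings are its own policy labels, informed by but not identical to CVSS.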
AISLE has a bunch of new findings, including this somewhat easy-to-exploit and severe problem in Samba, rated maximum severity (if the affected feature were turned on in the default installation, it would have been Heartbleed-level). Samba is also extremely widely used.
My impression is that people reading lc’s comments updated downward on the news, based on speculation like: “so this may sound more impressive than it is … my company has reported several similar memory corruption primitives, found by our scanner, to OpenSSL in the last month, and I’m not sure if we ever got any CVEs for them … AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when they actually involved a bunch of human effort …”.
My current impression is that it is basically as impressive as it sounds; lc’s competing company’s product is likely somewhat worse; and roughly zero human effort went into AISLE’s discoveries (though human effort obviously went into verification and contacting developers).
I don’t know if OpenSSL actually goes through the process of minting CVEs for a lot of the security problems they patch, so this may sound more impressive than it is. My company has reported several similar memory corruption primitives, found by our scanner, to OpenSSL in the last month, and I’m not sure if we ever got any CVEs for them.
Because AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when they actually involved a bunch of human effort, especially when their tools are not publicly available. You should be healthily skeptical of anything startups report on their own. For a practitioner’s perspective on the state of security scanning, a blog post from last month provided a good independent overview at the time: https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters
Full disclosure: we’ve since hired this guy, but we only reached out to him after he posted this blog.
Appreciate the pushback and your perspective. Two anchoring facts:
OpenSSL minted and published these CVEs (not us). They’re very conservative. Getting any CVE through their process is non-trivial. In 2025 we reported several issues. Some received CVEs, others were fixed without CVEs, which is normal under OpenSSL’s security posture.
On your “AI vs human experts” point: the findings came from a fully autonomous analysis pipeline. We then manually verified and coordinated disclosure with maintainers. The takeaway: our stack surfaced previously unknown, CVE-worthy bugs in OpenSSL’s hardened codebase. That’s hard to do by hand at scale.
(I’m not a security professional.)
All seem low real-world severity; two of the three are bugs in places where I think people wouldn’t have been looking as much; one of the three is a controlled crash with no impact beyond potential DoS.
See this comment.
The timing side-channel bug is impressive to see discovered with AI. You need to notice that operations take different amounts of time, and then figure out that it’s bad in this specific case.
Unsure how much of this is due to scaffolding around LLMs vs. more traditional systems.