AISLE discovered three new OpenSSL vulnerabilities
Link post
The company post is linked; it seems like an update on where we are with automated cybersecurity.
So far in 2025, only four security vulnerabilities received CVE identifiers in OpenSSL, the cryptographic library that secures the majority of internet traffic. AISLE’s autonomous system discovered three of them (CVE-2025-9230, CVE-2025-9231, and CVE-2025-9232).
Some quick thoughts:
OpenSSL is one of the most heavily human-security-audited pieces of open-source code ever, so discovering three new vulnerabilities sounds impressive. How impressive exactly, I’m not sure; I’m curious about people’s opinions.
Obviously, vulnerability discovery is a somewhat symmetric capability, so this also gives us some estimate of the offense side.
This provides concrete evidence for the huge pool of bugs that are findable and exploitable even by current-level AI; in my impression, this is something everyone sane already believed existed.
On the other hand, it does not neatly support the story where it’s easy for rogue AIs to hack anything. Automated systems can also fix the bugs, hopefully systems like this will be deployed for defense, and it seems likely the defense side will start with a large compute advantage.
It’s plausible that the “programs are proofs” limit is defense-dominated. On the other hand, actual programs are leaky abstractions of the physical world, and it’s less clear what the limit is in that case.