I don’t know whether OpenSSL actually goes through the process of minting CVEs for many of the security problems it patches, so this may sound more impressive than it is. In the last month, my company has reported several similar memory-corruption primitives to OpenSSL, found by our scanner, and I’m not sure we ever got CVEs for any of them.
Because AI security startups are trying to attract media attention, they have a habit of crediting findings to an AI when those findings actually involved a lot of human effort—especially when their tools are not publicly available. You should be healthily skeptical of anything startups report about themselves. For a practitioner’s perspective on the state of security scanning, a blog post from last month gives a good independent overview: https://joshua.hu/llm-engineer-review-sast-security-ai-tools-pentesters
Appreciate the pushback and your perspective. Two anchoring facts:
OpenSSL minted and published these CVEs (not us). They’re very conservative. Getting any CVE through their process is non-trivial. In 2025 we reported several issues. Some received CVEs, others were fixed without CVEs, which is normal under OpenSSL’s security posture.
On your “AI vs human experts” point: the findings came from a fully autonomous analysis pipeline. We then manually verified and coordinated disclosure with maintainers. The takeaway: our stack surfaced previously unknown, CVE-worthy bugs in OpenSSL’s hardened codebase. That’s hard to do by hand at scale.
Full disclosure: we’ve since hired the author of that post, but we only reached out to him after he published it.