OpenAI is rolling out Trusted Access for Cyber, a program that gives trusted users greater access to dual-use cyber capabilities. Seems like a great idea, but hard to execute well at scale.
Nope. It’s the sort of bad idea that seems good to people who either don’t really understand the landscape, or are flailing and self-deluding because they so badly need to feel like they’re Doing Something about an actually intractable problem.
There are two main issues:
“Defenders” are basically everybody. Most of “everybody” won’t jump through hoops to get extra access (and definitely won’t try to get around restrictions). They have other things to do. Attackers, on the other hand, will jump through hoops (and may also find ways around restrictions). They’re not just trying to get some help to secure their project; this is their project.
And no, having people’s identity won’t help (not that the identities you get are necessarily valid anyway, but even if they were). At best it lets you assign blame after the fact, and in practice it usually won’t even do that. There’s no reliable way to connect “ChatGPT identified this bug to user X” with “unknown actors started exploiting this bug”. Very few bugs are actually that exclusive or hard to find. There’s even less chance of definitively saying “user X is not the one who started exploiting this bug”. Even user X reporting the bug through “normal channels” doesn’t prove much; that’s an obvious diversionary tactic, and X still gets to exploit the bug during the disclosure lag.
“Respected security researchers”, “members of well-known security teams”, and “employees of responsible(TM) companies” are in many cases the same people as “illicit hackers”, “open-market sellers of vulnerabilities”, and “APT operators”. Both individuals and organizations routinely lead “double lives”. And deciding who’s “legitimate” has huge political and subjective components. That’s all assuming you can authenticate people to begin with; you’re dealing with actors who specialize in circumventing exactly that.
Oh, and by the way, if you tether yourself to the entrenched “responsible disclosure” system, as OpenAI seems to suggest they may be doing, you’re tethering yourself to a deeply corrupt system that, on net, probably reduces the security of actually deployed systems.
Really the only answer is to provide exactly the same capabilities to all users. And since motivated users will seek out paths to the highest available capabilities, there’s a completely rational race-to-the-bottom effect that ends with everybody getting a lot of capability.
It’s really popular right now to play silly games with access to capabilities… and that’s not necessarily irrational, in the sense that OpenAI may get some “well, we did our best” blame-deflection cover out of this kind of thing. But it’s not going to actually fix the problem, which is that models are suddenly going to vastly increase access simultaneously to knowledge of vulnerabilities and to the capability to exploit them, while not helping nearly as much with the barriers to agile defense. It has a really good chance of further disadvantaging defense. We’re just all going to have to buckle up.