Thanks for the response, one quick clarification in case this isn’t clear.
On:
For instance, I think that well implemented RSPs required by a regulatory agency can reduce risk to <5% (partially by stopping in worlds where this appears needed).
I assume this would be a crux with Connor/Gabe (and I think I’m at least much less confident in this than you appear to be).
It’s worth noting here that I’m responding to this passage from the text:
In a saner world, all AGI progress should have already stopped. If we don’t, there’s more than a 10% chance we all die.
Many people in the AI safety community believe this, but they have not stated it publicly. Worse, they have stated different beliefs more saliently, which misdirect everyone else about what should be done, and what the AI safety community believes.
I’m responding to the “many people believe this” which I think implies that the groups they are critiquing believe this. I want to contest what these people believe, not what is actually true.
Like, many of these people think policy interventions other than a pause reduce X-risk below 10%.
Maybe I think something like (numbers not well considered):
P(doom) = 35%
P(doom | scaling pause by executive order in 2024) = 25%
P(doom | good version of regulatory agency doing something like RSP and safety arguments passed into law in 2024) = 5% (depends a ton on details and political buy in!!!)
P(doom | full and strong international coordination around pausing all AI related progress for 10+ years which starts by pausing hardware progress and current manufacturing) = 3%
Note that these numbers take into account evidential updates (e.g., probably other good stuff is happening if we have super strong international coordination around pausing AI).
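To make the comparison above concrete, a little arithmetic turns these conditional estimates into relative risk reductions versus the no-intervention baseline. A minimal sketch, using exactly the numbers as given (the scenario labels are my shorthand, not quotes):

```python
# Illustrative arithmetic only: rough P(doom) estimates from the comment,
# and the relative risk reduction each intervention implies vs. baseline.
baseline = 0.35  # P(doom) with no intervention

interventions = {
    "scaling pause by executive order (2024)": 0.25,
    "regulatory agency + RSP-like safety arguments in law (2024)": 0.05,
    "strong international pause (10+ yrs, incl. hardware)": 0.03,
}

for name, p in interventions.items():
    # Fraction of the baseline risk that the intervention removes.
    reduction = 1 - p / baseline
    print(f"{name}: P(doom) = {p:.0%}, relative risk reduction = {reduction:.0%}")
```

On these numbers, the regulatory-agency scenario removes most of the baseline risk, which is why the 5% figure is doing so much work in the disagreement.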
Ah okay, thanks. That's clarifying.
Agreed that the post is at the very least not clear.
In particular, it’s obviously not true that [if we don’t stop today, there’s more than a 10% chance we all die], and I don’t think [if we never stop, under any circumstances...] is a case many people would be considering at all.
It’d make sense to be much clearer on the ‘this’ that “many people believe”.
(and I hope you’re correct on P(doom)!)