E.g., Ryan Greenblatt thinks that spending 5% more resources than is myopically commercially expedient would be enough. AI 2027 also assumes something like this.
TBC, my view isn’t that this is sufficient for avoiding takeover risk, it is that this suffices for “you [to] have a reasonable chance of avoiding AI takeover (maybe 50% chance of misaligned AI takeover?)”.
(You seem to understand that this is my perspective and I think this is also mostly clear from the context in the box, but I wanted to clarify this given the footnote might be read in isolation or misinterpreted.)
Edited for clarity.
I’m curious: what’s your estimate for how many resources it’d take to drive the risk down to 25%? 10%? 1%?