That intuition is there for a reason. We’re spoiled having grown up in a liberal order within which this risk is mostly overblown. However, ASI is clearly powerful enough to unilaterally overturn any such liberal order (or whatever’s left of it), and it puts us into a realm which is even worse than the ancestral environment in terms of how changeable power hierarchies are, and in how bad things can get if you’re at the bottom.
Corrigibility and CEV are trying to solve separate problems? Not sure what your point is here; agreed on that being one of the major points of CEV.
Persuading people about x-risk enough to stop AI capability gains seems like the current best lever to me too.
I think where we disagree is that I do not think that we should immediately jump into alignment when/if that succeeds, but need to focus on good governance and institutions first (and probably worth spending some effort trying to lay the groundwork now, especially since this seems like an especially high-leverage moment in history for making such changes). I have some thoughts on this too if you want to move to DMs.
If every country/person were building CEV, it wouldn’t be particularly scary (from a misuse standpoint). Whereas if every country is focused on corrigibility, there will be a phase where unilateral actors can do bad stuff you need to worry about.