Isn’t the “scaling AI kills you” view the conjunction of the “scaling-pilled” and “alignment is extremely difficult” views, rather than identical to the scaling-pilled view?
One could reason roughly as follows:
If alignment is as hard as people make it out to be, we’re in all likelihood dead anyway, since Westerners will develop AI even if we don’t.
If alignment isn’t as hard as people make it out to be, then whichever country controls the most powerful AI will become globally dominant.
So if alignment is hard, it doesn’t matter what we do; and if alignment is less hard, we should invest in AI. Either way, we should invest in AI.
(There’s obvious nuance this argument is missing, e.g. the chance that arms races increase the difficulty of alignment, but some form of it still seems reasonable to me; a toy version of the case analysis is sketched below.)
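To make the structure explicit, here is a minimal sketch of the argument as an expected-value case analysis. All probabilities and payoffs are hypothetical placeholders chosen purely for illustration, not estimates from the original argument:

```python
# Toy dominance argument: two worlds (alignment hard / easy),
# two actions (invest in AI / abstain). All numbers are made up.

P_ALIGNMENT_HARD = 0.5  # hypothetical prior that alignment is very hard

# Payoffs in arbitrary units. Per the argument, if alignment is hard the
# outcome is equally bad whether or not "we" invest, because someone
# else builds AI anyway.
payoffs = {
    ("hard", "invest"):  -100,
    ("hard", "abstain"): -100,  # dead either way, per the argument
    ("easy", "invest"):   +50,  # dominant position from controlling AI
    ("easy", "abstain"):  -50,  # dominated by whoever does invest
}

def expected_value(action: str) -> float:
    return (P_ALIGNMENT_HARD * payoffs[("hard", action)]
            + (1 - P_ALIGNMENT_HARD) * payoffs[("easy", action)])

for action in ("invest", "abstain"):
    print(action, expected_value(action))

# Under these assumptions "invest" weakly dominates "abstain": no worse
# in the hard world, strictly better in the easy world, for any prior.
# The arms-race nuance flagged above breaks the dominance: if investing
# itself raises P_ALIGNMENT_HARD, or worsens the ("hard", "invest")
# payoff, the cases are no longer independent of the action.
```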