From the scaling-pilled perspective, or even just a centrist AI perspective, this is an insane position: it is taking an L on one of the most important future technological capabilities, perhaps the most important, which in the long run may win or lose wars. If China wants to dominate Asia, much less surpass the obsolete American empire, or create AGI, or lead in aerospace, or create ‘5G’ or whatever, it’s hard to see how it’s going to do that while paying more for chips that are half a decade or more out of date.
The scaling-pilled AI view ought to be that scaling AI kills you. Why pretend that there’s a strategic advantage here, as opposed to a loaded gun you can point at your own head if you’re stupid enough?
It’s one thing to say “given China’s actual beliefs, they ought to do X” or “if China were rationally acting on a correct understanding of the world, they would do Y”. But why criticize China for avoiding a self-destructive action that would make sense to do if they had a specific combination of definitely-true, maybe-true, and definitely-false beliefs—a specific combination they don’t in fact have?
Isn’t the “scaling AI kills you” view the conjunction of “scaling-pilled” and “alignment is extremely difficult” views, rather than being identical with the scaling-pilled view?
One could reason something like this:
If alignment is as hard as people make it out to be, we’re in all likelihood dead anyway since Westerners are going to develop AI even if we don’t.
If alignment isn’t as hard as people make it out to be, then the country that controls the most powerful AI will be the one that becomes dominant in the world.
So if alignment is hard, it doesn’t matter what we do; and if it’s less hard, we should invest in AI. Either way, we should invest in AI.
(There’s some obvious nuance that this argument is missing, e.g. the chance of arms races increasing the difficulty of alignment, but some form of it still seems reasonable to me.)
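The case analysis above is a weak-dominance argument: investing is no worse in every state of the world and strictly better in at least one. A minimal sketch of that structure, using hypothetical payoff numbers purely for illustration (and deliberately ignoring the arms-race caveat just noted, which would make the "alignment hard" payoffs depend on the choice):

```python
# Toy payoff table for the dominance argument. All numbers are
# hypothetical illustrations, not claims from the discussion.
# Keys are (alignment_is_hard, we_invest).
payoff = {
    (True,  True):  0,   # alignment hard, invest  -> doom either way
    (True,  False): 0,   # alignment hard, abstain -> doom either way
    (False, True):  1,   # alignment easy, invest  -> dominance
    (False, False): -1,  # alignment easy, abstain -> rival dominance
}

def invest_weakly_dominates() -> bool:
    """Investing is at least as good in every state, strictly better in one."""
    at_least_as_good = all(
        payoff[(hard, True)] >= payoff[(hard, False)] for hard in (True, False)
    )
    strictly_better_somewhere = any(
        payoff[(hard, True)] > payoff[(hard, False)] for hard in (True, False)
    )
    return at_least_as_good and strictly_better_somewhere

print(invest_weakly_dominates())  # → True
```

Note that the argument's force rests entirely on the "alignment hard" row being identical across choices; if an arms race raises the probability of doom conditional on investing, the rows differ and the dominance claim collapses.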