What it will shift to is important, but assuming this rosy model of alignment is correct, I'd argue a significant part of the field of AI Alignment can and should repurpose itself toward something else.
Even if your forecast is correct, AI alignment is still so pivotal that even the difference between a 1% and a 0.1% risk matters. At most, your post implies that alignment researchers should favor accelerating AI rather than decelerating it.