Mm, I agree with the detailed scenarios outlined, but I think there are some anomalous worlds in which the correct policy is to switch away from working on AI Alignment: worlds in which there’s a shorter avenue to faithful cognitive augmentation, creating a utopia, and suchlike. E.g., if we discover a very flexible magic system, then abandoning all your agency-research work and re-specializing towards figuring out how to design a Becomus Goddus spell in it may be the correct move. Or if the Matrix Lord comes down and explicitly sends you off on a heroic quest whose completion is rewarded with control over the simulation.
Which also means you’d need to investigate the anomalous occurrences first, so that you may do your due diligence and check that they don’t contain said shorter avenues (or existential threats more pressing than AGI).
Overall, I agree that there’s a thing you need to single-mindedly pursue across all possible worlds: your CEV. Which, for most people, likely means a eutopia. And in this world, the shortest and most robust route there seems to be solving AGI Alignment via agency-foundations research. Sure. But that’s not a universal constant.
(I mean, I suppose it does all ultimately bottom out in you causing a humanity-aligned superintelligence to exist (so that it can do acausal trade for us). But the pathways there can look very different, and aren’t necessarily most accurately described as “working on solving alignment”.)
Oh, yeah, you’re right.