Many people think that AI alignment is intractable (<50% chance of success) and also believe that a universe optimized towards elephant CEV, or the CEV of aliens that had a similar evolutionary environment to humans, would be at least 50% as good as a universe optimized towards human CEV. Doesn’t this mean we should be spending significant effort (say, at least 1% of the effort spent on alignment) finding tractable plans to create a successor species in case alignment fails?
If alignment fails, I don’t think it’s possible to safely prepare a successor species. We could maybe try to destroy the Earth slightly before the AI turns on rather than slightly after, in the hope that the aliens don’t screw up the chance we give them?