I think it’s valuable to study rationality and AI alignment (with a touch of programming) in order to prepare to take advantage of post-AGI personal growth opportunities without destroying your own extrapolated volition. This is relevant in case we survive, which I think is not unlikely (the unlikely good outcome is that we keep the cosmic endowment; the more likely alternative is being allowed to live on relatively tiny welfare while the rest is taken away).