Well, it does feel like you’re betraying yourself if you ignore the experiences of your future self, unless you don’t believe in continuity of consciousness at all. So if your future self would do anything to stop a situation, I think anything should be done now to prevent it.
I guess this post may have come off as selfish, since it focuses only on saving yourself. However, I would argue that preventing unfriendly ASI is one of the most altruistic things you could do, because ASI could affect an astronomical number of sentient beings, as Bostrom wrote.
The usual case in which one would be unwilling to do literally anything to prevent a very negative outcome for oneself is when “literally anything” includes highly unethical actions.
The possible methods of preventing the outcome don’t really affect other people, though, so I don’t see how they would be unethical toward others. If anything, working on AI safety would benefit many people.
Which outcome in which scenario?
I was referring to the scenarios I listed in the post.