As the observer would prefer to die in a worse-than-death scenario, one can assume that they would be willing to do anything to escape it. Thus, it follows that we should do anything to prevent worse-than-death scenarios from occurring in the first place.
There seems to be a leap of logic here. One can strongly prefer an outcome without being “willing to do anything” to ensure it. Furthermore, just because someone in an extreme situation has an extreme reaction to it does not mean that we need to take that extreme reaction as our own—it could be that they are simply being irrational.
In addition, the very low probability of the scenarios is balanced by their extreme disutility. This inevitably leads to Pascal's Mugging.
I am confused—being a Pascal’s Mugging is usually treated as a negative feature of an argument?
I do think that it is worthwhile to work to fight S-risks. It's not clear to me that they are the only thing that matters. The self-interestedness frame also seems a little off to me; to be honest, if you're selfish, I think the best thing to do is probably to ignore the far future and just live a comfortable life.
Solving AI alignment doesn’t seem like the easiest way for humanity to do a controlled shutdown, if we decide that that’s what we need to do. Of course, it may be more feasible for political reasons.
Well, it does feel like you're betraying yourself if you ignore the experiences of your future self, unless you don't believe in continuity of consciousness at all. So if your future self would do anything to stop a situation, I think anything should be done to prevent it.
I guess this post may have come across as selfish because it focuses only on saving yourself. However, I would argue that preventing unfriendly ASI is one of the most altruistic things you could do, because ASI could create an astronomical number of sentient beings, as Bostrom wrote.
The usual case where one would be unwilling to do literally anything to prevent a very negative outcome for oneself is when "literally anything" includes highly unethical actions.
The possible methods of preventing the outcome don't really affect other people, though, so I don't see how they would be unethical towards others. If anything, working on AI safety would benefit many people.
Which outcome in which scenario?
I was referring to the scenarios I listed in the post.