The usual case where one would be unwilling to do literally anything to prevent a very negative outcome for oneself is when "literally anything" includes highly unethical actions.
The possible methods of preventing the outcome don’t really affect other people, though, so I don’t see how they would be unethical towards others. Actually, working on AI safety would benefit many people.
Which outcome in which scenario?
I was referring to the scenarios I listed in the post.