Death and AI successionism (or AI doom) are similar in that both feel difficult to avoid. It is therefore insightful to analyze how people currently cope with death as a model for how they might later cope with AI takeover or AI successionism.
Regarding death, and similar to what you describe in the post, I think people often begin in a state of confused, uncomfortable dissonance. Then they usually converge on one of a few predictable narratives:
1. Acceptance: “Death is inevitable, so trying to fight it is pointless.” Since death is unavoidable, worrying about it or putting effort into avoiding it is futile; just swallow the bitter truth and go on living.
2. Denial: Avoiding the topic or distracting oneself from the implications.
3. Positive reframing: Turning death into something desirable or meaningful. As Eliezer Yudkowsky has pointed out, if you were hit on the head with a baseball bat every week, you’d eventually start saying it built character. Many people rationalize death as “natural” or essential to meaning.
Your post seems mostly about mindset #3: AI successionism framed as good or even noble. I’d expect the other two to be strong psychological attractors as well; based on personal experience, #1 seems the most likely.
I see all three as cognitive distortions: comforting stories designed to reduce dissonance rather than to build an accurate model of reality.
A more honest, truth-seeking mindset is to acknowledge unpleasant realities (death, AI risk), recognize that these outcomes may be likely but are not guaranteed, and then ask which actions increase the probability of good outcomes and decrease the probability of bad ones. This is the kind of mindset described in IABIED.
I also think a good heuristic is to be skeptical of narratives that minimize human agency or suppress the moral obligation to act (e.g., “it’s inevitable, so why try?”).