I don’t claim that most memes derive their fitness from resolving cognitive dissonance. There are many reasons why something may be memetically fit, and I gesture at some of the more common ones. For example, most common religions and ideologies have some memes which encourage proselytizing; the mechanics of why this increases memetic fitness are not particularly subtle or mysterious. Also, many ideas are fit simply because they are straightforwardly predictive or helpful. For example, the idea that you should stop at a red light is a fairly prevalent, helpful coordination norm, and it is transmitted vertically from parents, by the state, by dedicated traffic safety signs, etc.
In my view, successionism is an interesting case study because:
- it is not directly useful for predicting observations, manipulating physical reality, or solving coordination problems;
- the common memes it remixes are clearly insufficient to explain the spread: many big ideologies claim to understand the arc of history, expanding the moral circle toward AIs is not yet a powerful meme, misanthropy is unattractive, and so on.

So the question of why it spreads is interesting.
You may argue it spreads because it is straightforwardly true or object-level compelling, but I basically don’t buy that. Metaethics is hard, axiology is hard, and macro-futurism is hard, and all of these domains share the feature that you can come up with profound-sounding object-level reasons for basically arbitrary positions. This means that without some amount of philosophical competence and discipline, I’d expect people to arrive at axiologies and metaethical ideas which fit beliefs they adopted for other reasons. The forms of successionism I mention share the feature that close to zero philosophers endorse them, and when people with some competence in philosophy look at the reasons given, they see clear mistakes, ignored arguments, etc. Yes, “part will also track genuine and often rigorous attempts to reason about the future”, but my guess is it’s not a large part; my impression is that if you genuinely and rigorously reason about the future, you usually arrive at some combination of transhumanist ideas, the view that metaethics is important and we don’t have a clear solution, and something about AI being a big deal.
I do agree the AI x-risk memeplex is also a somewhat strange and interesting case.