I appreciate the memetic-evolution framing, but I’m somewhat skeptical of the strong emphasis on tension-reduction as the primary (or even a major) explanatory driver of successionist beliefs. Given that you take successionism to be “false and dangerous,” it seems natural that your preferred explanation foregrounds memetics; but that sits a bit uneasily with the goal you state at the beginning: analyzing why people hold these views irrespective of their truth value.
Even if we bracket the object level, a purely memetic or cognitive-dissonance-based explanation risks drifting into an overly broad epistemic relativism/skepticism. On many accounts of epistemic justification—process reliabilism being one—what makes a belief justified is precisely that it is formed by a reliable process. If we exclude the possibility that people arrive at their views through such processes and instead explain them almost entirely via dissonance-reduction pressures, we risk undermining (almost) all belief formation, not just things like successionism.
There’s a related danger: sociological/memetic explanations of belief formation can easily shade into ad hominem-esque critiques if not handled carefully (of course, some forms of ad hominem—e.g., pointing to someone’s likelihood of arriving at a true belief—are evidentially relevant, but leaning on them is bad for epistemic hygiene and discourse). One could tell a similar story about why people believe in, say, AI x-risk—Tyler Cowen has suggested that part of the appeal is the feeling of possessing secret, high-stakes insight. While this may capture a fragment of the causal picture for some individuals, to me it’s clearly not the dominant explanation for most thoughtful, epistemically serious people. And if it were the main cause, we would be right to distrust the resulting beliefs—yet the dissonance-reduction story doesn’t seem any more convincing as an explanation in one case than in the other (unless you already think one view is false and the other is true).
So while memetic fitness and tension-resolution offer part of an explanation, I’m not convinced they do most of the work for most people. For most, object-level reasoning—about value theory, metaethics, consciousness, agency, and long-run trajectories—plays a substantial role in why they end up where they do. To the extent that successionist ideologies spread, part of that spread will track memetic dynamics, but part will also track genuine and often rigorous attempts to reason about the future of value and the structure of possible worlds.
Curious what people think about this, though, and very open to constructive criticism; I don’t feel very confident about this.
Screencapping this reply so I can read it every day to try to be less insane.
I don’t claim that most memes derive their fitness from resolving cognitive dissonance. There are many reasons why something may be memetically fit, and I gesture at some of the more common ones. For example, most common religions and ideologies have some memes which encourage proselytizing—the mechanics of why this increases memetic fitness are not particularly subtle or mysterious. Also, many ideas are fit just because they are straightforwardly predictive or helpful. For example, the idea that you should stop at a red light at crossings is a fairly prevalent, helpful coordination norm, transmitted vertically from parents, by the state, by dedicated traffic-safety signs, etc.
In my view, successionism is an interesting case study because:
- it is not directly useful for predicting observations, manipulating physical reality, or solving coordination problems;
- many of the common memes it remixes are clearly insufficient to explain the spread—many big ideologies claim to understand the arc of history, expanding the moral circle toward AIs is not yet a powerful meme, misanthropy is unattractive, …

So the question of why it spreads is interesting.
You may argue it’s because successionism is straightforwardly true or object-level compelling, but I basically don’t buy that. Metaethics is hard, axiology is hard, and macro-futurism is hard, and all of these domains share the feature that you can come up with profound-sounding object-level reasons for basically arbitrary positions. This means that without some amount of philosophical competence and discipline, I’d expect people to arrive at axiologies and metaethical ideas which fit beliefs they adopted for other reasons. The forms of successionism I mention share the feature that close to zero philosophers endorse them, and when people with some competence in philosophy look at the reasons given, they see clear mistakes, arguments ignored, etc. Yes, “part will also track genuine and often rigorous attempts to reason about the future”, but my guess is it’s not a large part—my impression is that if you genuinely and rigorously reason about the future, you usually arrive at some combination of transhumanist ideas, the view that metaethics is important and we don’t have a clear solution, and something about AI being a big deal.
I do agree the AI x-risk memeplex is also a somewhat strange and interesting case.
I think the process reliabilism argument rules out friction reduction as a fully general explanation, but doesn’t rule out friction reduction in specific cases where reducing friction had equal or greater survival and reproductive utility than understanding the world. So total paranoia and abandonment of rational epistemics is unjustified, but also, there may be needles hiding in haystacks that evolution itself both decided were infohazards and converted into ostensibly intensely realist but objectively anti-realist political positions. This is my updated position after thinking about this comment a lot. It is still a very bad position to be in. I am still too convinced the phenomenon is real, but also, the number of things which have convinced me is like, four. It was premature to convert that into a totalizing worldview.
Agreed. “This idea I disagree with is spreading because it’s convenient for my enemies to believe it” is a very old refrain, and using science-y words like “memetics” is a way to give authority to that argument without actually doing any work that might falsify it.
Overall, I think the field of memetics, how arguments spread, how specifically bad ideas spread, and how to encourage them / disrupt them is a fascinating one, but discourse about it is poisoned by the fact that almost everyone who shows interest in the subject is ultimately hoping to get a Scientific Reason Why My Opponents Are Wrong. Exploratory research, making falsifiable predictions, running actual experiments, these are all orthogonal or even detrimental to Proving My Opponents Are Wrong, and so people don’t care about them.