Further writings at https://thewiderangle.substack.com/
Jonah Wilberg
On the contrary, the advantage of evolutionary game theory is that you do not need to assume that individuals in the model are rational agents. As I said in the previous post, evolutionary game theory stems from biology, where you’re dealing with animals that are of course not acting rationally—instead, the payoffs are what is selected for. In the evolutionary prisoner’s dilemma (EPD), individuals don’t really make choices at all; they just have strategies (C or D).
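To make that concrete, here’s a minimal sketch of replicator dynamics for the EPD (the payoff values are purely illustrative, chosen so that T > R > P > S): nobody deliberates, strategies simply reproduce in proportion to their payoffs, and defection takes over.

```python
import numpy as np

# Illustrative PD payoffs (T > R > P > S): R = reward, S = sucker, T = temptation, P = punishment
R, S, T, P = 3.0, 0.0, 5.0, 1.0
payoff = np.array([[R, S],   # row 0: payoff to a cooperator vs (C, D)
                   [T, P]])  # row 1: payoff to a defector   vs (C, D)

x = 0.9  # start with 90% cooperators (arbitrary)
for _ in range(500):
    pop = np.array([x, 1 - x])
    fitness = payoff @ pop              # expected payoff of C and of D
    avg = pop @ fitness                 # population-average payoff
    x += 0.1 * x * (fitness[0] - avg)   # discrete-time replicator update for cooperators

print(f"final cooperator share: {x:.4f}")  # tends to 0: defection is what gets selected for
```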
My focus here was on solutions that work for generic collective action problems (the usual concept of Moloch) but don’t work for the evolutionary prisoner’s dilemma (my concept of Moloch). It’s true I didn’t go through all possible solutions to collective action problems—instead I focused on those that don’t involve making structural changes to the model (as opposed to just changing its parameters).
I don’t think your examples work as direct solutions to EPD:
Bottom-up control: any form of ‘control’ seems like ‘changing the payoffs’ - doesn’t matter whether it’s top-down or bottom-up.
Irrational agents: we’ve already covered this—the model assumes non-rational agents.
Illegibility: only helps if you assume rational agents who want legibility.
Decentralisation: you actually get basically the same overall results as EPD in a fairly small, finite population—this is called a ‘Moran process’ (a rough sketch follows at the end of this comment).
Though to be clear, I do think that bottom-up control, decentralisation and social preferences are partial solutions to Moloch, just not directly—they require a switch to a different model, e.g. the Goddess model.
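Here’s the Moran process sketch I mentioned (a toy version of my own, with arbitrary payoffs and population size): even in a small, finite population with purely local birth-death updating, cooperators are almost always driven out, matching the EPD result.

```python
import random

# Illustrative PD payoffs (T > R > P > S)
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def moran_run(N=20, cooperators=10, steps=5000):
    """One run of a frequency-dependent Moran process; returns the final number of cooperators."""
    c = cooperators
    for _ in range(steps):
        if c == 0 or c == N:
            break  # one strategy has fixated
        # Average payoff against the rest of the population (excluding self)
        f_c = (R * (c - 1) + S * (N - c)) / (N - 1)
        f_d = (T * c + P * (N - c - 1)) / (N - 1)
        total = c * f_c + (N - c) * f_d
        birth_is_c = random.random() < c * f_c / total  # birth chosen proportional to fitness
        death_is_c = random.random() < c / N            # death chosen uniformly at random
        c += int(birth_is_c) - int(death_is_c)
    return c

runs = [moran_run() for _ in range(200)]
print("runs where cooperation survives:", sum(r > 0 for r in runs), "/", len(runs))  # usually 0
```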
Defeating Moloch: The view from Evolutionary Game Theory
Thanks for the comment and for sharing your drafts—feel free to DM regarding possible collaboration. I agree: there are a number of ways in which the Molochian regime can be avoided in evolutionary models, including memory of past games (iterated prisoner’s dilemma) and introducing spatial structure. And one of the benefits of seeing Moloch in these terms is that these models of the ‘evolution of cooperation’ become applicable to issues associated with Moloch, like AI governance. I was planning to cover this in later posts in the sequence :)
Why Moloch is actually the God of Evolutionary Prisoner’s Dilemmas
Metacrisis as a Framework for AI Governance
Thanks for the comment—the reason I focus on cosmic unfairness here is that I addressed local unfairness in a previous post in the sequence. Apologies this wasn’t clear; I’ve now added a hyperlink to clarify.
I don’t agree that the challenges of Dostoevsky etc. are only about local unfairness though: as I say, I think it’s typically a mixture of local and cosmic unfairness, with the two not clearly distinguished.
The ‘problem from the lack of intervention’ that you mention is much discussed by people in this context, so presumably they think it is relevant to the challenges they are considering, even if there is no easy solution.
Many Worlds and the Problems of Evil
Yes, very much agree with those points. Virtue ethics is another angle to come at the same point that there’s a process whereby you internalise system 2 beliefs into system 1. Virtues need to be practised and learned, not just appreciated theoretically. That’s why stoicism has been thought of (e.g. by Pierre Hadot) as promoting ‘spiritual exercises’ rather than systematic philosophy—I draw some further connections to stoicism in the next post in the sequence.
Thanks, yes good to see people independently arriving at a similar conclusion.
Good Fortune and Many Worlds
Misfortune and Many Worlds
The ‘Road Not Taken’ in the Multiverse
OK, ‘impossible’ is too strong; I should have said ‘extremely difficult’. That was my point in footnote 3 of the post. Most people would take the fact that it has implications like needing to “maximize splits of good experiences” (I assume you mean maximise the number of splits) as a reductio ad absurdum, because this is massively different from our normal intuitions about what we should do. But some people have tried to take that approach, as in the article I mentioned in the footnote. If you or someone else can come up with a consistent and convincing decision approach that involves branch counting I would genuinely love to see it!
I’m not at all saying the experiences of a person in a low-weight world are less valuable than those of a person in a high-weight world. Just that when you are considering possible futures in a decision-theoretic framework you need to apply the weights (because weight is equivalent to probability).
Wallace’s useful achievement in this context is to show that there exists a set of axioms that makes this work, one of which is branching indifference.
This is useful because it makes clear the way in which the branch-counting approach you’re suggesting is in conflict with decision theory. So I don’t disagree that you can care about the number of your thin instances, but what I’m saying is that in that case you need to accept that this makes decision theory, and probably consequentialist ethics, impossible in your framework.
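A toy numerical illustration of that conflict (my own example, not Wallace’s): value the branches by their weights and you get one ranking of two actions; count every branch equally and you can get the opposite ranking.

```python
# Each action leads to a list of (branch weight, utility) pairs; weights sum to 1.
action_a = [(0.999, 10.0), (0.001, 0.0)]   # one dominant, high-utility branch
action_b = [(0.01, 1.0)] * 100             # a hundred thin branches, each mildly good

def measure_weighted(branches):
    """Decision-theoretic value: weight each branch's utility by its measure."""
    return sum(w * u for w, u in branches)

def branch_counted(branches):
    """Naive branch counting: add up utility once per branch, ignoring weights."""
    return sum(u for _, u in branches)

print(measure_weighted(action_a), measure_weighted(action_b))  # ~9.99 vs 1.0  -> prefer A
print(branch_counted(action_a), branch_counted(action_b))      # 10.0 vs 100.0 -> prefer B
```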
First of all, macroscopic indistinguishability is not a fundamental physical property—branching indifference is an additional assumption, so I don’t see how it’s any less arbitrary than branch counting.
You’re right it’s not a fundamental physical property—the overall philosophical framework here is that things can be real—as emergent entities—without being fundamental physical properties. Things like lions, and chairs are other examples.
But more importantly, the branching indifference assumption is not the same as the informal “not caring about macroscopically indistinguishable differences”!
This is how Wallace defines it (he in turn defines macroscopically indistinguishable in terms of providing the same rewards). It’s his term in the axiomatic system he uses to get decision theory to work. There’s not much to argue about here?
As Wallace showed, branching indifference implies the Born rule, which implies you should care almost nothing about yourself in a branch with a measure of 0.000001, even though it may involve a drastic macroscopic difference for you in that branch. You being macroscopic doesn’t imply you shouldn’t care about your low-measure instances.
Yes this is true. Not caring about low-measure instances is a very different proposition from not caring about macroscopically indistinguishable differences. We should care about low-measure instances in proportion to the measure, just as in classical decision theory we care about low-probability instances in proportion to the probability.
OK but your original comment reads like you’re offering things not mattering cosmically as a reason for thinking MWI doesn’t change anything (if that’s not a reason, then you haven’t given any reason, you’ve just stated your view). And I think that’s a good argument—if you have general reasons that are independent of specific physics to think nothing matters (cosmically), then it will follow that nothing matters in MWI as well. I was responding to that argument.
I don’t get why you would say that the preferences are fine-grained; it kinda seems obvious to me that they are not. You don’t care about whether worlds that are macroscopically indistinguishable are distinguishable at the quantum level, because you are yourself macroscopic. That’s why branching indifference is not arbitrary. Quantum immortality is a whole other controversial story.
You’re right that you can just take whatever approximation you make at the macroscopic level (‘sunny’) and convert that into a metric for counting worlds. But the point is that everyone will acknowledge that the counting part is arbitrary from the perspective of fundamental physics, whereas you can remove the arbitrariness that derives from fine-graining by focusing on the weight. (That is kind of the whole point of a mathematical measure.)
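To illustrate what I mean (a toy example of my own): how many ‘sunny’ worlds there are depends entirely on how finely you carve them up, but the total weight of ‘sunny’ doesn’t.

```python
# Fine-grained branches as (weight, macroscopic label) pairs; weights sum to 1.
branches = [(0.30, "sunny"), (0.20, "sunny"), (0.10, "sunny"), (0.40, "rainy")]

coarse_count = 1                                             # lump all 'sunny' branches into one world
fine_count = sum(1 for _, m in branches if m == "sunny")     # keep them separate: 3 worlds
sunny_weight = sum(w for w, m in branches if m == "sunny")   # 0.6 under either graining

print(coarse_count, fine_count, sunny_weight)  # the count depends on the graining; the weight doesn't
```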
OK I think I see where you’re coming from—but I do think the unimaginable bigness of the universe has more ‘irrelevance’ implications for a consequentialist view which tries to consider valuable states of the universe than for a virtue approach which considers valuable states of yourself. Also, if you think the implication of physics is that everything is irrelevant, that seems like an important implication in its own right, and different from ‘normality’ (the normal way most people think about ethics, which assumes that some things actually are relevant).
Thanks for these thoughts. Yes this is broadly the direction I am headed, and planning to explore further in future posts. I agree that both repeated (IPD-style) interactions and kin-interactions are ways of developing the evolutionary prisoner’s dilemma so as to ‘integrate Moloch and the Goddess into a single model’ as I put it at the end of my post. Another approach is to introduce spatial structure.
There are a number of different specific models to explore here, and I prefer to look at the simplest and clearest versions. Your presentation is very dense, and I think it introduces some complications that are not necessary to make the point. For example, I’m not sure why you introduce the assumption about two descendants and their p_c ± ϵ probabilities of cooperating. You can just construct the model using a variant of Hamilton’s rule where r is the probability of interacting with kin rather than a random member of the population; this will result in cooperation growing if r is high enough relative to the payoffs.
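For concreteness, here is the sort of minimal version I have in mind (my own sketch, with arbitrary benefit and cost values, using the donation-game form of the PD): with probability r you interact with kin (someone playing your own strategy), otherwise with a random member of the population, and cooperation grows exactly when r is large enough relative to the payoffs (here, when r > c/b).

```python
# Donation-game PD: cooperating pays cost c and confers benefit b on the partner.
b, c = 3.0, 1.0   # illustrative values; the Hamilton-style threshold is r > c/b
r = 0.4           # probability of interacting with kin (same strategy) rather than at random
x = 0.05          # initial cooperator frequency

for _ in range(500):
    # A cooperator meets a cooperator with probability r + (1 - r) * x;
    # a defector meets a cooperator with probability (1 - r) * x.
    f_c = (r + (1 - r) * x) * (b - c) + (1 - r) * (1 - x) * (-c)
    f_d = (1 - r) * x * b
    avg = x * f_c + (1 - x) * f_d
    x += 0.1 * x * (f_c - avg)   # discrete-time replicator step

print(f"r = {r}, c/b = {c/b:.2f}: final cooperator share = {x:.3f}")
# With r > c/b (0.4 > 0.33) cooperation spreads; drop r below c/b and it collapses.
```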