Thanks, I had been hoping to see an evolutionary analysis of decision theories, so I’ll check out the paper sometime! Whichever decision theory turns out to be evolutionarily optimal, I imagine it still won’t engage in multiverse trade; does the paper disagree?
We can send a spaceship beyond an event horizon and still care about what happens to it after it crosses, despite this being utterly irrelevant to our genetic fitness in any causal sense. If we are capable of developing such preferences, I don't see any strong reason for us to develop a strongly mono-verse decision theory.
Multiversal acausal trading is just a logical consequence of LDT, and I expect the majority of powerful agents to end up with an LDT-style decision theory, not an LDT-but-without-multiverse decision theory.
Hm, I think LDT needs to be fleshed out in more detail before it's clear which consequences follow from it, or which generalizations of it are most natural. Arguing from selection seems like a powerful tool here, but it's a difficult project nonetheless. Suppose you live in a universe where you often get cloned with mutations and made to play prisoner's dilemmas against your imperfect copies: how much correlation does the most successful version of LDT assign between the two competing policies? The full theory must deal with even more general scenarios.
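As a toy illustration of what I mean (a minimal sketch of my own, not anything from the paper: the payoffs, the mutation model, and the decision rule are all stipulated assumptions), here each agent carries a single parameter `rho`, the correlation it assigns between its own policy and its mutated copy's, cooperates exactly when that belief makes cooperation look better, and then `rho` itself evolves under selection:

```python
# Toy model (all assumptions mine): agents assign correlation rho to
# their mutated copy's policy, and rho evolves under selection.
import numpy as np

rng = np.random.default_rng(0)

# Standard prisoner's dilemma payoffs, T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def cooperates(rho):
    # LDT-flavoured rule: believe the copy mirrors your action with
    # probability rho and defects otherwise, so EU(C) = rho*R + (1-rho)*S
    # while EU(D) = P either way; cooperate iff EU(C) > EU(D).
    return rho * R + (1 - rho) * S > P

def play(rho_a, rho_b):
    # Returns (payoff_a, payoff_b) for one prisoner's dilemma.
    a, b = cooperates(rho_a), cooperates(rho_b)
    if a and b:
        return R, R
    if a:
        return S, T
    if b:
        return T, S
    return P, P

POP, GENS, MUT = 200, 300, 0.05
pop = rng.uniform(0.0, 1.0, POP)  # each agent's assigned correlation rho

for _ in range(GENS):
    # Each agent plays one dilemma against a mutated clone of itself.
    clones = np.clip(pop + rng.normal(0.0, MUT, POP), 0.0, 1.0)
    fitness = np.array([play(r, c)[0] for r, c in zip(pop, clones)])
    # Fitness-proportional reproduction, again with mutation on rho.
    weights = fitness + 1e-9  # guard against an all-zero generation
    parents = rng.choice(POP, POP, p=weights / weights.sum())
    pop = np.clip(pop[parents] + rng.normal(0.0, MUT, POP), 0.0, 1.0)

print(f"mean rho after {GENS} generations: {pop.mean():.2f}")
print(f"fraction cooperating: {np.mean([cooperates(r) for r in pop]):.2f}")
```

Since each agent's fitness comes from playing its own near-copy, kin-selection-like dynamics should push toward mutual cooperation (R > P) under these assumptions; the open question I'm pointing at is how the selected `rho` relates to the true mutation-driven correlation, and whether anything like this survives in more general scenarios.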