Problem 1 is the wrong objection.
CDT agents are not capable of cooperating in the Prisoner's Dilemma, and are therefore selected out. EDT agents are not capable of refusing to pay in XOR blackmail (or, symmetrically, of paying in Parfit's hitchhiker), and are therefore selected out as well.
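As a concrete sketch of the selection argument in the Prisoner's Dilemma case, here is a toy model (entirely illustrative; the 3/0/5/1 payoff numbers are my own choice, not anything from the thread): a CDT agent treats its copy's move as causally fixed and so defects, while an LDT-style agent notices that an exact copy must output the same move and so cooperates, ending up with the higher score.

```python
# Twin Prisoner's Dilemma: each agent plays a one-shot PD against an exact copy.
# Row player's payoffs; the numbers are the usual illustrative 3/0/5/1 values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_choice(p_copy_cooperates=0.5):
    # CDT treats the copy's move as causally independent of its own choice,
    # so it maximises expected payoff against a fixed distribution over moves.
    # Defection dominates, so this returns "D" for every value of the prior.
    def ev(my):
        return (p_copy_cooperates * PAYOFF[(my, "C")]
                + (1 - p_copy_cooperates) * PAYOFF[(my, "D")])
    return max(("C", "D"), key=ev)

def ldt_choice():
    # LDT-style reasoning against an exact copy: the copy computes the same
    # function, so only the diagonal outcomes (C, C) and (D, D) are reachable.
    return max(("C", "D"), key=lambda my: PAYOFF[(my, my)])

for name, choose in (("CDT", cdt_choice), ("LDT", ldt_choice)):
    my = choose()
    print(f"{name} plays {my} and scores {PAYOFF[(my, my)]} against its twin")
# CDT plays D and scores 1; LDT plays C and scores 3 -- the fitness gap
# that the selection argument runs on.
```

Under any selection pressure proportional to score in this kind of matchup, the CDT policy loses ground generation after generation.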
I think you will be interested in this paper.
Yeah, agents incapable of acausal cooperation are already being selected out: most of the dominant nations and corporations are to some degree internally transparent, or bound by public rules or commitments, which is sufficient for engaging in acausal trade. This will only become more true over time: trustworthiness is profitable, a person who can't keep a promise is generally an undesirable trading partner, and artificial minds are much easier to make transparent and committed than individual humans, or even organisations of humans, are.
Also, technological (or post-biological) eras might just not have ongoing Darwinian selection. Civilisations that fail to seize control of their own design process won't be strong enough to have a seat at the table, and those at the table will be equipped with millions of years of advanced information technology, cryptography, and game theory; perfect indefinite coordination will be a solved problem. I can think of ways this could break down, but they don't seem like the likeliest outcomes.
Thanks, I had been hoping to see an evolutionary analysis of decision theories, so I’ll check out the paper sometime! Whichever decision theory turns out to be evolutionarily optimal, I imagine it still won’t engage in multiverse trade; does the paper disagree?
We can send a spaceship beyond an event horizon and still care about what happens to it after it crosses, despite that being utterly irrelevant to our genetic fitness in any causal sense. If we are capable of developing such preferences, I don't see any strong reason for agents to develop a strongly monoverse decision theory.
Multiversal acausal trading is just a logical consequence of LDT, and I expect the majority of powerful agents to have an LDT-style decision theory, not an LDT-but-without-multiverse decision theory.
Hm, I think LDT needs to be fleshed out in more detail, to clarify which consequences follow from it and which generalizations are most natural. Arguing from selection seems like a powerful tool here; nonetheless, this seems like a difficult project. Suppose you live in a universe where you often get cloned with mutations and made to play Prisoner's Dilemmas against your imperfect copies; how much correlation does the most successful version of LDT assign between the two competing policies? The full theory must deal with even more general scenarios.
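To make the clone-with-mutations question a bit more concrete, here is a rough toy model (entirely my own assumptions, not anything from the discussion): the agent assigns correlation q to its imperfect copy, cooperates exactly when that assumption makes cooperation the higher-expected-value move, and is then scored under the true copying fidelity, so you can see which assumed correlation actually gets selected for.

```python
# Toy model: an agent is cloned with mutations and plays a one-shot PD against
# the clone. The clone mirrors the agent's move with probability `fidelity`;
# otherwise it acts as an independent coin flip. The agent's decision theory
# assumes a correlation q between its move and the clone's move.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def policy(q):
    """Move chosen by an LDT-ish agent that assigns correlation q to its copy:
    with probability q the copy mirrors the move, otherwise it is 50/50."""
    def ev(my):
        mirror = PAYOFF[(my, my)]
        noise = 0.5 * (PAYOFF[(my, "C")] + PAYOFF[(my, "D")])
        return q * mirror + (1 - q) * noise
    return max(("C", "D"), key=ev)

def true_fitness(my, fidelity):
    """Expected payoff under the actual copying process."""
    mirror = PAYOFF[(my, my)]
    noise = 0.5 * (PAYOFF[(my, "C")] + PAYOFF[(my, "D")])
    return fidelity * mirror + (1 - fidelity) * noise

fidelity = 0.8  # how often the mutated clone actually mirrors the agent
for q in (0.0, 0.3, 0.6, 0.9):
    mv = policy(q)
    print(f"assumed correlation q={q}: plays {mv}, "
          f"true expected payoff {true_fitness(mv, fidelity):.2f}")
# With these payoffs, cooperation becomes the agent's choice once the assumed
# correlation crosses the threshold q*3 + (1-q)*1.5 > q*1 + (1-q)*3, i.e. q > 3/7.
```

In this particular setup every assumed correlation above the 3/7 threshold plays the same move and so scores the same true fitness; what matters evolutionarily is whether the assumed correlation lands on the right side of the threshold implied by the actual copying fidelity, which is one way of framing how underdetermined the "right" correlation assignment still is.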