Yeah, agents incapable of acausal cooperation are already being selected out: most of the dominant nations and corporations are to some degree internally transparent, or bound by public rules or commitments, which is sufficient for engaging in acausal trade. This will only become more true over time: trustworthiness is profitable (a person who can't keep a promise is generally an undesirable trading partner), and artificial minds are much easier to make transparent and committed than individual humans, or even organisations of humans, are.
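The transparency point can be made concrete with a toy "program equilibrium" sketch (all names here are illustrative, not anything from the comment): an agent whose code is inspectable can condition its move on its counterpart's code, cooperating only with agents running an identical program, which makes its commitment credible in a way an opaque agent's can't be.

```python
def clique_bot(opponent):
    # Transparent agent: cooperate ("C") iff the opponent is running
    # byte-identical code to mine, otherwise defect ("D").
    same = opponent.__code__.co_code == clique_bot.__code__.co_code
    return "C" if same else "D"

def defect_bot(opponent):
    # Opaque-commitment-free agent: always defects.
    return "D"

print(clique_bot(clique_bot))  # copies verify each other and cooperate: C
print(clique_bot(defect_bot))  # the defector is identified and excluded: D
```

This is the crudest possible version (it only recognises exact copies); the comment's claim is that with serious cryptography and game theory, far more flexible verified commitments become routine.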
Also, technological (or post-biological) eras might just not have ongoing Darwinian selection. Civilisations that fail to seize control of their own design process won't be strong enough to have a seat at the table, and those at the table will be equipped with millions of years of advanced information technology, cryptography, and game theory, so perfect indefinite coordination will be a solved problem. I can think of ways this could break down, but they don't seem like the likeliest outcomes.