Assuming a civilization that expands at close to the speed of light, your only chance to influence the behavior of colonies in the most distant galaxies must be encoded in what you send toward those galaxies to begin with (i.e. what is in the colonizing probes themselves, plus any updates to instructions you send early on, while they’re still en route). This is because the home galaxy (the Milky Way) will never hear so much as a “hello, we’ve arrived” back from approximately 7⁄8 of the galaxies that it eventually colonizes (due to a causal horizon).
You’ll have some degree of two-way communication with the closest 1⁄8 of colonized galaxies, though the conversation will become increasingly delayed and truncated with distance.
To see just how truncated, suppose a colony is established in a galaxy, and they send the following message back towards the Milky Way: “Hello, we’ve arrived and made the most wonderful discovery about our colony’s social organization. Yes, it involves ritualistically eating human children, but we think the results are wonderful and speak for themselves. Aren’t you proud of us?”
As I mentioned, for only 1⁄8 of colonized galaxies would that message even make it back to the Milky Way. And only for the closest 1⁄27 of galaxies would the Milky Way be able to send a reply saying “What you are doing is WRONG, don’t you see? Stop it at once!” And you can expect that reply to arrive at the colony only after a hundred billion years, in most cases. Only in the case of the closest 1⁄64 of colonies could the Milky Way also expect to hear back “Sorry about that. We stopped.” in reply.
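To spell out the arithmetic behind those fractions (a minimal sketch under the stated assumptions: light-speed travel, the standard accelerating cosmology in which any signal can cross only some finite comoving distance D before the horizon closes, and colonized galaxies spread roughly uniformly in comoving volume; the function and figures below are mine, purely illustrative): every leg of the conversation spends part of that same comoving budget D, so a colony at comoving distance d can be reached if d ≤ D, can report back if 2d ≤ D, can receive a reply if 3d ≤ D, and can acknowledge that reply if 4d ≤ D. Counting galaxies by comoving volume then gives a fraction of (1/n)³ for n legs:

```python
# Sketch: fraction of colonized galaxies that admit n legs of communication,
# assuming light-speed travel, a finite comoving horizon D, and a uniform
# comoving density of galaxies (so counts scale with volume, i.e. with d^3).
def reachable_fraction(n_legs: int) -> float:
    """Fraction of colonized galaxies lying within comoving distance D / n_legs."""
    return (1.0 / n_legs) ** 3

legs = {
    1: "probe arrives (all colonized galaxies, by definition)",
    2: "the colony's 'hello' makes it back to the Milky Way",
    3: "the Milky Way's reply reaches the colony",
    4: "the colony's acknowledgement makes it back again",
}

for n, meaning in legs.items():
    print(f"{n} leg(s): {reachable_fraction(n):.4f}  ({meaning})")
# -> 1.0000, 0.1250 (= 1/8), 0.0370 (= 1/27), 0.0156 (= 1/64)
```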
That is, unless the current favored cosmology is completely wrong, which is always in the cards.
So, yeah—if you want to initiate an expanding cosmological civilization, you’ll have to live with the prospect that almost all of it is going to evolve independently of your wishes, in any respect that isn’t locked down for all time on day 1.
FWIW, that’s why I disagree with one of your minor conclusions: that there is an inherent myopia to superintelligences which renders everything past a certain distance “exactly zero”. There is quite a bit of possibility, still in the cards, that one of the many assumptions is wrong, which creates both risk and reward for not being myopic. So the myopia there would not lead to an exactly-zero valuation; it might lead to something that is quite substantially larger than zero.
And since the cost of spitting out colonization starwisps seems to be so low in an absolute sense, per Anders, it wouldn’t take much above zero value to motivate tons of colonization anyway.
Indeed, the fundamental epistemological & ontological uncertainties might lead you to the opposite problem: the total valuation being too large. Any possibility of being able to break lightspeed, or change the expansion, or exploit any of the other loopholes means both that you are now massively threatened by any other entity which cracks those loopholes, and that you can do the same to the universe (which might then be vastly larger). Now you are in infinite-fanaticism territory, dealing with issues like Pascal’s mugging, where the mere possibility that any of the colonized resources might solve the problem leads to investing all resources in colonization in the hopes of one of them getting lucky. (This is analogous to other possible infinite-fanaticism traps: ‘what if you can break out of the Matrix into a literally infinite universe? Surely the expected value of even the tiniest possibility of that justifies spending all resources on it?’)
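A toy expected-value calculation shows the shape of that trap (every number below is a made-up assumption, purely for illustration): once any nonzero credence is assigned to a loophole that would multiply the accessible value by an enormous factor, the loophole branch dominates the expectation no matter how small the credence, and a naive maximizer pours everything into buying more lottery tickets, i.e. more colonies.

```python
# Toy illustration of the infinite-fanaticism trap described above.
# All numbers are invented assumptions for illustration only.
baseline_value = 1.0        # value obtainable under the standard cosmology
loophole_multiplier = 1e30  # value unlocked if some loophole (FTL, etc.) works
p_loophole = 1e-12          # tiny credence that colonization cracks such a loophole

ev_colonize = (1 - p_loophole) * baseline_value + p_loophole * loophole_multiplier
ev_stay_home = baseline_value

print(f"EV(colonize) = {ev_colonize:.3e}, EV(stay home) = {ev_stay_home:.3e}")
# The loophole term (1e-12 * 1e30 = 1e18) swamps everything else, so the naive
# maximizer concludes that all resources should go into colonization.
```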
(There is also a modest effect from evolution/selection: if there is any variance among superintelligences in how much they value blind one-way colonization, then there will be some degree of universe-wide selection for the superintelligences which happen to choose to colonize more blindly. Those colonies will presumably replicate that choice, and then go on to one-way colonize in their own local bubble, and so on, even as the bubbles become disconnected. Not immediately obvious to me how big this effect would be or what it converges to. Might be an interesting use of the Price equation.)
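One way the Price-equation framing might be set up (a hypothetical toy model; the propensity variable, the fitness assumption, and all numbers below are invented for illustration): let z be a lineage’s propensity for blind one-way colonization and w the number of daughter bubbles it founds. The Price equation, Δ(mean z) = Cov(w, z)/(mean w) + E(w·Δz)/(mean w), says that if w correlates with z at all, the covariance term drags the population mean toward heavier colonization with each “generation” of bubbles, even with no change inside any lineage:

```python
import random

# Toy Price-equation-style selection sketch (all parameters invented).
# z = a lineage's propensity for blind one-way colonization, in [0, 1].
# Fitness = number of daughter bubbles founded, assumed to increase with z
# (more blind colonization -> more descendants carrying the same propensity).
random.seed(0)
population = [random.random() for _ in range(2_000)]  # initial variance in z

def next_generation(pop):
    """Each lineage launches up to 4 probes; each succeeds with probability z.
    Daughters inherit the parent's propensity unchanged (no 'mutation' term)."""
    daughters = []
    for z in pop:
        n_daughters = sum(random.random() < z for _ in range(4))
        daughters.extend([z] * n_daughters)
    return daughters

for generation in range(5):
    mean_z = sum(population) / len(population)
    print(f"generation {generation}: mean colonization propensity = {mean_z:.3f}")
    population = next_generation(population)
# The mean climbs toward 1 purely through differential replication of the more
# colonization-prone lineages, i.e. the covariance term of the Price equation.
```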
Yes, I agree. As you point out, that’s a general kind of problem for decision-making in an environment where there is a low probability that something spectacularly good might happen if I throw resources at X. (At one point I actually wrote a feature-length screenplay about this, with an AI attempting to throw cosmic resources at religion in a low-probability attempt to unlock infinity. It got reasonably good scores in competition, but I was told that “a computer misunderstanding its programming” was old hat. Oh well.)
My pronouncement of “exactly zero” is just what would follow from taking the stated scientific assumptions at face value, and applying them to the specific argument I was addressing. But I definitely agree that a real-world AI might come up with other arguments for expansion.
I have described certain limits to communication in an expanding cosmological civilization here: https://arxiv.org/abs/2208.07871