I’m surprised at the lack of any ‘when the stars are right’ quips. But anyway, this has the same problems as jcannell’s ‘cold slow computers’ and most Fermi solutions:
To satisfy the non-modification criterion, it handwaves away all present-day losses. Yes, maybe the current losses from stars burning out are small relative to the ~10^30 benefit of waiting for the universe to cool à la Dyson’s eternal intelligence, but the losses are still astronomical in absolute terms. They should still produce waves of colonization and stellar engineering well beyond some modest anti-black-hole and collision engineering.
This doesn’t provide any special reason to expect universal non-defection, coordination, insensitivity to existential risk or model uncertainty, or universally shared near-zero interest rates. All of these would drive expansionism and stellar engineering. Appealing to coordination spurred by the goal of preventing long-term loss of resources provides no additional incentive above and beyond the existing ‘burning the cosmic commons’ incentives, and it actually contradicts the argument for non-modification: if it’s fine to let the stars burn out and everything proceed normally because the ultimate loss is trivial, then why would they be concerned about some more impatient civilization re-arranging them into Dyson spheres to do some computations earlier? After all, it’s so trivial compared to 10^30, right?
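For intuition, the ~10^30 figure can be sketched from Landauer’s principle: erasing a bit costs at least kT·ln 2, so computations per joule scale as 1/T. A rough illustration, assuming (my assumption, not from this thread) the universe eventually cools from today’s ~2.7 K CMB to roughly the ~10^-30 K de Sitter horizon temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_cost(temperature_k: float) -> float:
    """Minimum energy (J) to irreversibly erase one bit at temperature T."""
    return K_B * temperature_k * math.log(2)

T_NOW = 2.7          # current CMB temperature, K
T_DE_SITTER = 2.4e-30  # rough final de Sitter horizon temperature, K (assumption)

# Ratio of bit erasures affordable per joule: wait-and-cool vs. compute now.
gain = landauer_cost(T_NOW) / landauer_cost(T_DE_SITTER)
print(f"computations per joule gained by waiting: ~{gain:.1e}")  # ~1.1e+30
```

This is only the temperature-scaling term; the actual aestivation argument bundles in other factors, but it shows where a multiplier of order 10^30 comes from.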
More to the point, anything that DOES use matter and energy would rapidly dominate over things that do not, and would be selected for. Replicators spread until they can’t, and they evolve towards rapid rates of growth and use of resources (compromising between the two), not towards things orthogonal to doubling time like computational efficiency.
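The selection argument can be made concrete with a toy model (the numbers are illustrative assumptions, not from the source): two lineages expand into a fixed pool of star systems, one doubling every step, the other more ‘efficient’ per system but doubling every three steps.

```python
# Toy selection model: two replicator lineages expanding into a fixed
# pool of 10^9 star systems. Growth stops once the pool is exhausted.
POOL = 10**9
fast, efficient = 1.0, 1.0

while fast + efficient < POOL:
    fast *= 2.0                   # doubles every step
    efficient *= 2.0 ** (1 / 3)   # doubles every three steps

total = fast + efficient
print(f"fast lineage share:      {fast / total:.4f}")
print(f"efficient lineage share: {efficient / total:.6f}")
```

After roughly 30 steps the fast lineage holds essentially the whole pool (share > 0.999) while the slow-but-efficient lineage holds around one part in a million. Whatever the efficient lineage computes per system is irrelevant to who ends up owning the territory.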
Yes. I think this paper addresses it with the ‘defense of territory’ assumption (‘4. A civilization can retain control over its volume against other civilizations’). I think the idea is that the species quickly establishes a sleeping presence in as many solar systems as possible, then uses its invincible defensive abilities to maintain them.
But in real life, you could well be right. Plausibly there are scenarios in which a superintelligence can’t defend a solar system against an arbitrarily large quantity of hostile biomass.
A single defector is enough to produce visible astroengineering. And this is the problem with many suggested solutions to the Fermi paradox: they explain why some civilizations are not visible, but not how universal coordination is reached between civilizations that probably cannot communicate with each other.
Ćirković wrote another article with the same problem: “Geo-engineering Gone Awry: A New Partial Solution of Fermi’s Paradox” (https://arxiv.org/abs/physics/0308058). But partial solutions are not solutions to the Fermi paradox.