So You Want to Colonize The Universe

Epistemic Status: Mix of facts, far-future speculation with the inevitable biases from only considering techniques we know are physically possible, Fermi calculations, and an actual spacecraft design made during a one-week research bender.

(This is part 1 of a sequence; parts 2, 3, 4, and 5 follow.)

Part 1a: Gotta Go Fast, Astronomy is a-Wasting


Once a civilization or agent grows up enough to set its sights on colonizing realms beyond its host planet as its first priority (instead of averting existential risks), there is a very strong convergent instrumental goal that kicks in: namely, traveling as close to lightspeed (abbreviated as c) as possible.

This is because the universe is expanding, so there is a finite sphere of reachable galaxies, and more pass outside the horizon every year. IIRC (60% probability), about half of the galaxies in the Hubble Ultra-Deep Field are unreachable even if we traveled at lightspeed.

Arriving at a galaxy even one year faster nets you a marginal gain of (one galaxy of stars)*(average stellar luminosity)*(1 year) of energy, which for our Milky Way comes out to about 10^44 joules. Assuming energy production on Earth stays constant, that's enough energy for a billion years of Earth civilization, 130 trillion times over. And I'd expect a transhuman civilization to be quite a few orders of magnitude better at getting value from a joule of energy than our current civilization. And that's just for a single galaxy. There are a lot of galaxies, so a one-year speedup in reaching them has tremendous value.
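
For concreteness, here's that Fermi calculation spelled out (the galactic luminosity and world-energy-production numbers are round-number assumptions, so treat the output as an order-of-magnitude estimate, not a precise figure):

```python
# Fermi estimate: energy gained by tapping one Milky-Way-like galaxy one year earlier.
# All inputs are round-number assumptions, not precise measurements.

L_SUN = 3.8e26            # watts, solar luminosity
L_GALAXY = 2e10 * L_SUN   # watts, assumed total stellar luminosity of a Milky-Way-like galaxy
YEAR = 3.15e7             # seconds per year

marginal_energy = L_GALAXY * YEAR  # joules gained by arriving one year sooner
print(f"one-year marginal gain: {marginal_energy:.1e} J")  # ~2e44 J

WORLD_ENERGY_PER_YEAR = 6e20       # joules/year, rough current human energy production
billion_year_civs = marginal_energy / (WORLD_ENERGY_PER_YEAR * 1e9)
print(f"billion-year Earth civilizations: {billion_year_civs:.1e}")  # ~4e14, same ballpark as the 130 trillion above
```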

This is basically Bostrom’s astronomical waste argument, except Bostrom’s version then goes on to apply this loss in total form (which is far larger) instead of marginal form, to argue for the value of reducing existential risk.

Now, there are a few corrections to this to take into account. The first and most important is that, by the Landauer limit, the number of (irreversible) computations that can be done with a given amount of energy is inversely proportional to temperature, so waiting until the universe cools down from its current 2.7 K nets you several orders of magnitude more computational power from a given unit of energy than spending it now. Also, if you do reversible computation, this limit applies only to bit erasures, which nets you a whole lot more orders of magnitude of computation.
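
To put rough numbers on that (the "cold future" temperature below is a placeholder assumption for illustration, not a prediction of how cold things actually get):

```python
import math

# Landauer limit: erasing one bit costs at least k_B * T * ln(2) joules,
# so the number of irreversible bit-erasures a fixed energy budget buys scales as 1/T.
K_B = 1.38e-23  # J/K, Boltzmann constant

def bits_per_joule(temperature_kelvin):
    """Maximum irreversible bit erasures per joule at a given temperature."""
    return 1.0 / (K_B * temperature_kelvin * math.log(2))

T_NOW = 2.7     # K, current CMB temperature
T_COLD = 1e-3   # K, placeholder assumption for a much colder future universe

print(f"bits/J now:   {bits_per_joule(T_NOW):.2e}")
print(f"bits/J later: {bits_per_joule(T_COLD):.2e}")
print(f"improvement factor: {T_NOW / T_COLD:.0f}x")  # several orders of magnitude from waiting
```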

Another correction is that if there are aliens, they’ll be rushing to colonize the universe, so the total volume a civilization can grab is going to be much smaller than the entire universe. There’s still an incentive to go fast to capture more stars before they do, though.

There’s also a correction where the total energy available from colonizing at all is more like the mass-energy of a galaxy than the fusion power of a galaxy, for reasons I’ll get to in a bit. The marginal loss from tarrying for one year is about the same, though.

And finally, if we consider the case where there aren’t aliens and we’re going up to the cosmological horizon, the marginal loss is less than stated for very distant galaxies, because by the time you get to a distant galaxy, it will have burned down to red dwarfs only, which aren’t very luminous.

Putting all this together, we get the conclusion that, for any agent whose utility function scales with the amount of computations done, the convergent strategy is “go really really fast, capture as many galaxies as possible, store as much mass-energy as possible in a stable form and ensure competitors don’t arise, then wait until the universe is cold and dead to run ultra-low-temperature reversible computing nodes.”

Part 1b: Stars For the Black Hole God! Utils For the Util Throne!

Now, banking mass-energy for sextillions of years in a way that doesn't decay is a key part of this, and fortunately, there's something in nature that does it! Rapidly spinning (Kerr) black holes warp space around them in such a way that it's possible to recover some energy from them, at the cost of spinning the hole down slightly. For a maximally spinning black hole, 29% of the mass-energy can be recovered via either the Penrose Process (throwing something near the hole in a way that involves it coming back out with more energy than it went in with) or the Blandford-Znajek Process (threading the hole with a magnetic field so that its rotation induces a current; this is thought to be a major process powering quasars). I'm more partial to the second because it produces a current directly. Most astrophysical black holes have significant spin, and we've found quite a few (including supermassive ones) spinning at around 0.9x the maximum, so an awful lot of energy can be extracted from them. So, if we sacrificed the entire Milky Way galaxy to the black hole at the center by nudging stellar orbits until they all went in, we'd have about 5x10^57 joules of extractable energy to play around with. Take a minute to appreciate how big this is. And remember, this is per-galaxy. Another order of magnitude could be gotten if there's some way for a far-future civilization to interface with dark matter.
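
As a quick order-of-magnitude check on that number (the stellar mass is an assumed round figure for a Milky-Way-like galaxy, and 29% is the theoretical maximum for an extremal Kerr hole, so real extraction would come in somewhat under this):

```python
# Order-of-magnitude check: energy extractable by feeding a galaxy's stars
# to a maximally spinning (extremal Kerr) black hole.
C = 3e8                      # m/s, speed of light
M_SUN = 2e30                 # kg, solar mass
M_STARS = 1e11 * M_SUN       # kg, assumed total stellar mass of a Milky-Way-like galaxy
EXTRACTION_FRACTION = 0.29   # max fraction of mass-energy recoverable from an extremal Kerr hole

extractable = EXTRACTION_FRACTION * M_STARS * C**2
print(f"extractable energy: {extractable:.1e} J")  # ~5e57 J, matching the figure above
```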

So, the dominant strategy is something like "get to as much of the universe as fast as possible, sacrifice all the stars you encounter to the black hole gods, and then in the far far future you can get the party started, with an absolutely ridiculous amount of energy at your disposal, and also the ability to use a given unit of energy far, far more efficiently than we can today, via reversible computation and the universe being really cold."

(What follows is from a fuzzy memory; Eliezer is welcome to correct me if I've misrepresented his views, and I'll edit this section.)

Due to the expansion of the universe, these mega-black-holes will be permanently isolated from each other. I think Eliezer's proposal was to throw as much mass back to the Milky Way as possible and set up shop there, instead of cutting far-future civilization into a bunch of absolutely disconnected islands. I don't think this is as good: I'd prefer a larger civilization spread over the whole universe (since you don't have to throw mass back to the Milky Way, just to the nearest hole), even if it's cut into more disconnected islands, over a much smaller civilization that's all in one island.

Part 1c: The Triple Tradeoff

In unrelated news, there's also an unsettling argument that I came up with: there's a convergent incentive to reduce the computational resources a computing node consumes. If you switch a simulated world to be lower fidelity, and it's 80% as fun but now only takes a fifth of the computational resources, so 5x as much lifespan is available, I think I'd take that bargain. Taking this to its endpoint, I made the joke on a Dank EA Memes poll that the trillion trillion heavens of the far future all have shitty Minecraft graphics, but I'm actually quite uncertain how that ends up. There's also the argument that most of the computational power goes to running the minds themselves rather than the environment, in which case there's an incentive to simplify one's own thought processes so they take less resources.
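
Spelling out that bargain as a toy calculation (the 80%-as-fun and one-fifth-the-compute numbers are just the made-up figures from the example above):

```python
# Toy version of the fidelity-vs-lifespan bargain: under a fixed compute budget,
# total value = (value per subjective year) * (subjective years you can afford).
fun_per_year_high_fidelity = 1.0
fun_per_year_low_fidelity = 0.8      # 80% as fun (assumed figure from the example)
compute_fraction_low_fidelity = 0.2  # a fifth of the resources (assumed figure from the example)

lifespan_multiplier = 1.0 / compute_fraction_low_fidelity  # 5x as much lifespan
total_value_high = fun_per_year_high_fidelity * 1.0
total_value_low = fun_per_year_low_fidelity * lifespan_multiplier
print(total_value_high, total_value_low)  # 1.0 vs 4.0: the low-fidelity bargain wins 4x over
```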

Generalizing this, there seems to be a three-way tradeoff between population, lifespan, and computational resources consumed per member. Picking the population extreme, you get a gigantic population of short-lived simple agents. Picking the lifespan extreme, you get a small population of simple agents living for a really, really long time. Picking the computational-resources extreme, you get a small population of short-lived, really really posthuman agents. (Note that "short-lived" can still be quite long relative to human lifespans.)

I'm not sure what the best tradeoff point is here, and it may vary by person, so maybe the answer is something like "you get a finite but ridiculously large amount of computational resources, and if you want to be simpler and live longer, or go towards ever-greater heights of posthumanity with a shorter life, or have a bunch of babies and split your resources with them, you can do that". However, that approach would lead to most of the population being descended from people who valued reproduction over long life or being really transhuman, and those descendants would each get fewer resources for themselves, which seems intuitively bad. Also, maybe there could be merging of people, with associated pooling of resources? I'm not quite sure how to navigate this tradeoff, except to say that the population extreme seems bad, and that it's a really important far-future issue. I should probably also point out that if the approach above is the favored one, then in the long-time limit, most of those left will be those who favored lifespan over being really transhuman or reproducing, so I guess the last thing left living before heat death might actually be a minimally-resource-intensive conscious agent in a world with low-quality graphics.
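
One way to picture the tradeoff is as a fixed compute budget split three ways; here's a toy sketch (the budget and the example numbers are arbitrary illustrations, not predictions):

```python
# Toy model of the triple tradeoff: a fixed compute budget gets split as
#   budget = population * lifespan * compute_per_member_per_year
# so pushing any one factor up forces the product of the other two down.
TOTAL_BUDGET = 1e30  # arbitrary units of computation (illustrative number only)

def lifespan(population, compute_per_member_per_year):
    """Subjective years each member gets under the fixed budget."""
    return TOTAL_BUDGET / (population * compute_per_member_per_year)

# Three extremes, holding the budget fixed:
print(lifespan(population=1e12, compute_per_member_per_year=1e6))   # huge population of simple agents
print(lifespan(population=1e3,  compute_per_member_per_year=1e6))   # few simple agents, vast lifespans
print(lifespan(population=1e3,  compute_per_member_per_year=1e15))  # few highly posthuman agents, shorter lives
```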

Part 1d: An Exploitable Fiction Opportunity

Also, in unrelated news, I think I see an exploitable gap in fiction-writing. The elephant in the room for all space-travel stories is that space is incompatible with mammals, and due to advances in electronics, it just makes more sense to send up robotic probes.

However, Burnside’s Zeroth Law of Space Combat is:

Science fiction fans relate more to human beings than to silicon chips.

I’m not a writer, but this doesn’t strike me as entirely true, due to the tendency of humans to anthropomorphize. When talking about the goal of space travel being to hit up as many stars and galaxies as possible as fast as possible, and throw them into black holes, the very first thing that came to mind was “aww, the civilization is acting just like an obsessive speedrunner!”

I like watching speedruns; it's absolutely fascinating watching that much optimization power being directed at the task of going as fast as possible in defiance of the local rules and finding cool exploits. I'd totally read about the exploits of a civilization that's overjoyed to find a way to make their dust shields 5% more efficient because that means they can reach a few thousand more galaxies, and Vinny the Von Neumann probe struggling to be as useful as it can given that it was sent to a low-quality asteroid, and stuff like that. The stakes are massive; you just need to put in some work to make the marginal gain of accelerated colonization more vivid for the reader. It's the ultimate real-life tale of munchkinry for massive stakes, there's ample "I know you know I know..." reasoning introduced by light-speed communication delays, and everyone's on the same side vs. nature.

I think Burnside might have been referring to science fiction for a more conventional audience, given the gap between his advice and my own reaction. But hard-sci-fi fans are already a pretty self-selected group, and Less Wrong readers are even more so; besides, with the advent of the internet, really niche fiction is a lot easier to pull off. It feels like there's a hard-sci-fi x speedrunning niche out there available to be filled. A dash of anthropomorphization along with Sufficiently Intelligent Probes feels like it could go a long way towards making people relate to the silicon chips.

So I think there’s an exploitable niche here.

Putting all this to the side, though, I’m interested in the “go fast” part. Really, how close to light speed is attainable?