So You Want to Colonize the Universe Part 2: Deep Time Engineering

So, with “Gotta go Fast” as the highest goal, and aware that, given the stakes, the computational resources and thinking time that will eventually be devoted to building fast starships will exceed all human thought conducted so far by many orders of magnitude...

I set myself to designing a starship to reach the Virgo Supercluster (about 200 million light-years away) in minimum time, as a lower bound on how much of the universe could be colonized. I expect the future to beat whatever bar I set, whether or not humanity survives. (The answer turned out to be about 0.9 c.)

Now, most people focus on interstellar travel, but the intergalactic travel part is comparatively underexplored (see comments). We have one big advantage here, which is that we don’t need to keep mammals around, and this lets us have a much smaller payload. Instead of delivering a vessel that can support earth-based life for hundreds of millions of years, we just have to deliver about 100 kg of Von Neumann probes and stored people, which build more of themselves. (The true number is probably a lot less than this, but as it turns out, it isn’t harder to design for the 100 kg case than the 1 mg case because there’s a minimum viable mass for dust shielding, and we’ll be cheating the rocket equation.)
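
To see why “cheating the rocket equation” is worth it, here’s a quick sketch of the standard relativistic rocket equation applied to the 0.9 c figure from above. The exhaust velocities are my own illustrative picks, not numbers from the actual design:

```python
import math

def mass_ratio(beta, ve_over_c):
    """Relativistic rocket equation: initial/final mass needed to reach
    a speed of beta*c with an exhaust velocity of ve_over_c*c."""
    return ((1 + beta) / (1 - beta)) ** (1 / (2 * ve_over_c))

beta = 0.9                    # cruise speed from the post
for ve in (1.0, 0.5):         # perfect photon rocket vs. a less ideal exhaust
    accel = mass_ratio(beta, ve)
    print(f"ve = {ve}c: mass ratio {accel:.1f} to accelerate, "
          f"{accel**2:.0f} to accelerate and then stop")
```

Even a perfect photon rocket needs a mass ratio of about 19 if it has to both accelerate and stop, and anything less ideal blows up exponentially from there, hence the appeal of not carrying your fuel with you.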

Before we get into intergalactic starship design (part 5), I want to take a minute to point out the field of Deep Time Engineering, a concept that crystallized for me while working on this.

Note that whatever starship design you’re building, it has to last for 200 million years, getting bombarded by relativistic protons and dust the whole way. Even with relativistic time dilation speeding things up on board, you’re still talking about building machinery that lasts on the order of a hundred million years and works with extremely high reliability the whole time. This is incredibly far beyond what engineering normally does: it takes god-like levels of redundancy and reliability, and if you’ve got something with moving parts, there’s erosion by friction to consider, plus 200 million years’ worth of cosmic rays… I didn’t focus that much on actual solutions, but just the awareness that there exist tasks which require building machinery that works for hundreds of millions of years sparked something.
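
For concreteness, here’s the time-dilation arithmetic behind that, assuming a constant 0.9 c the whole way (a ballpark sketch; the real trip would include acceleration and deceleration phases):

```python
import math

d_ly = 200e6    # distance to target in light-years (from the post)
beta = 0.9      # cruise speed as a fraction of c (from the post)

t_earth = d_ly / beta                   # Earth-frame trip time, years
gamma = 1 / math.sqrt(1 - beta**2)      # Lorentz factor, ~2.29 at 0.9 c
t_ship = t_earth / gamma                # ship-frame (proper) time, years

print(f"Earth frame: {t_earth / 1e6:.0f} Myr, ship frame: {t_ship / 1e6:.0f} Myr")
# -> Earth frame: 222 Myr, ship frame: 97 Myr
```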

Engineers have shorter time horizons than you might expect. In environmental engineering (my major), we typically designed wastewater treatment systems for a 20-50 year design life, and they depend on the electrical grid to function. I think I could design a 500-year treatment plant that isn’t grid-dependent. It would take a while, bring in quite a few nonstandard considerations, and sit far outside the scope of normal design; a bunch of standard approaches (like using energy-hungry air pumps to aerate the water) wouldn’t work, and such a plant would have an enormously larger footprint than a standard one.

Several-hundred or several-thousand year solutions are in a very different design space than standard solutions.

I should also note that we’ve now figured out how Roman Concrete works. It’s far more erosion-resistant than standard concrete (it lasts for several thousand years and withstands saltwater far better than standard cement), which is why the Colosseum is still standing. The key is mixing volcanic ash (pozzolana) and lime with seawater rather than fresh water; the seawater reacts with the ash over the years to grow interlocking minerals that strengthen the concrete instead of eroding it. The steel rebar that gives regular concrete tensile strength on top of mere compressive strength also accelerates corrosion significantly. However, regular concrete takes a few hours to cure enough to bear weight and cures fully in about a month, while Roman Concrete takes two years to fully cure. That is why very few places use Roman Concrete, even though it lasts over an order of magnitude longer. (I did find an article about a Hindu temple under construction that uses Roman Concrete and is designed for a thousand years, though.)

Even in civil engineering, the land of roads and bridges and buildings, you tend to see 100-year design lives at most. There are tables giving the magnitude of a 100-year flood (a flood with a 1% chance of being equaled or exceeded in any given year), and these are used in design. Our teachers also mentioned that, due to climate change, extreme weather events are more likely to occur than the tables indicate. But they never explicitly connected those two things; that was left for the students to click together, along with the unstated implication that upsizing to higher-redundancy systems that could handle 100 years plus climate change would get you asked why you’re using 1,000-year flood numbers instead of 100-year flood numbers, and the design wouldn’t pass.
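
Those return periods are just annual exceedance probabilities, so it’s easy to check how exposed a finite design life really is. A quick sketch, assuming independent years and a stationary climate (the exact assumption climate change breaks):

```python
def p_at_least_one(return_period_yr, design_life_yr):
    """Probability that a flood of the given return period is equaled or
    exceeded at least once during the design life (independent years)."""
    p_annual = 1 / return_period_yr
    return 1 - (1 - p_annual) ** design_life_yr

for life_yr in (50, 100):
    print(f"100-year flood over a {life_yr}-year life: "
          f"{p_at_least_one(100, life_yr):.0%}")
# -> 39% over 50 years, 63% over 100 years
```

So even a plant that just reaches its 50-year design life has roughly a 39% chance of meeting its 100-year flood along the way.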

There are exceptions. The sea walls in the Netherlands are sized for 10,000-year flood numbers, and I got a pleasant chill up my back when I read that, because there’s something really nice about seeing a civilization build for thousands of years in the future.

There’s also the attempt to design nuclear waste storage that warns people away for tens of thousands of years, even if civilization falls in the meantime. This popular account is worth reading, as a glimpse into long-timescale engineering.

But in general, Deep Time Engineering is pretty underexplored, because it requires much higher costs, much higher reliability, and a larger footprint; almost none of the machinery you could buy is rated for hundreds or thousands of years; and there’s no supporting infrastructure for undertaking construction projects with that design life.

Its specific manifestations would vary widely by field and by what you’re building, but in general it seems to be a discrete Thing that hasn’t previously been named, and that our civilization neglects.

Building a 100-million-year (or even billion-year) starship is an especially extreme example of this. For my specific starship design, the only thing that actually has to run continuously the whole way is the system chilling the antimatter to 0.1 K against the 2.73 K cosmic microwave background (otherwise it heats up enough that you lose all your antimatter to evaporation against the walls of the craft by the time you get there). This takes less than a watt of power, but keeping an antimatter cooling system (and storage system, although superconducting coils help immensely) continuously running for geologic timescales is a very impressive feat. Also, all the machinery for deceleration has to still work after 100 million years of cosmic-ray damage and the like, and there’s a phase where you end up firing a multi-gigawatt nuclear engine for a few millennia to target a specific star, which is also going to be extremely hard to design for that level of reliability. (Imagine the radiation damage to the engine from that power level; it won’t be pretty.)
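
As a sanity check on the “less than a watt” figure, the Stefan-Boltzmann law gives the worst-case radiative load from the CMB onto a cold surface. The surface area below is my own guess rather than a number from the design, and this ignores conduction through supports and heating from cosmic rays:

```python
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W / (m^2 K^4)

def cmb_heat_load(area_m2, t_cold=0.1, t_cmb=2.73, emissivity=1.0):
    """Worst-case blackbody radiative power flowing from the CMB into a
    cold surface of the given area (no sunlight, no radiation shields)."""
    return SIGMA * emissivity * area_m2 * (t_cmb**4 - t_cold**4)

area_m2 = 10.0    # exposed tank surface in m^2 -- my guess, not from the post
print(f"{cmb_heat_load(area_m2) * 1e6:.0f} microwatts")    # -> ~31 microwatts
```

The steady-state CMB load is microwatts, so the hard part isn’t the power budget; it’s keeping a cryocooler running for a hundred million years.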

Repair nanobots help, but it’s still going to be an impressive feat.