There are two common models of space colonization people sometimes allude to, neither of which I think is particularly likely.
Model 1 (“normal colonization”) is that space colonization will look something like colonization on Earth, e.g. the way the first humans expanded across the Polynesian islands. Your boat (rover/ship/probe) hops to one island (planet), you build up a civilization, and then you send your probes onwards to the next couple of nearby planets, maybe saving up a bunch of resources once you’ve colonized the nearby star systems (e.g. much of your galaxy) and need to send a bigger ship to more distant stars. So it looks like either orderly civilizational growth or an evolutionary process.
I don’t think this model is really likely because von Neumann probes will be really cheap relative to the carrying capacity of star systems. So I don’t think the intuitive “slow waves of colonization” model makes a lot of sense on a galactic scale.
I don’t think my view here is particularly controversial. My impression is that while the first model is common in science fiction, nobody in the futurism/x-risk/etc fields really believes it.
Model 2 (“mad dash”) is that you race ahead as soon as you reach relativistic speeds. So as soon as your science and industry have advanced enough for your probes to reach appreciable fractions of c, you start blasting out von Neumann probes to the far reaches of the affectable universe.
I think this model is more plausible, but still unlikely. A small temporal delay is worth it to develop more advanced spacefaring technology.
My guess is that even if all you care about is maximizing space colonization, it still makes sense to delay some time before you launch your first “serious” interstellar space probe, rather than do it as soon as possible[1].
Whether you can reach the furthest galaxies is determined by something like[2]:
total time to reach a galaxy = delay + distance/speed
So you want to delay and keep researching until the marginal speed gain from additional R&D time is lower than the marginal cost of the delay.
I don’t have a sense of how long this is, but intuitively it feels more like decades or centuries, maybe even slightly longer, than months or years. The furthest parts of the theoretically reachable universe are 16-18 billion light-years away, so a 100-year delay is worth it if you can increase your probes’ speed by just one 100-millionth of c[4].
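As a quick sanity check of that figure, here’s the break-even calculation written out (a minimal sketch using the formula above; it ignores cosmic expansion and relativistic corrections, per footnote [2], and assumes you’re already travelling at close to c):

```python
def breakeven_speed_gain(delay_years: float, distance_ly: float) -> float:
    """Minimum speed gain (as a fraction of c) for a launch delay to pay for itself.

    From total_time = delay + distance / speed: if you're already moving at ~c,
    a delay of `delay_years` is worth it once the speed gain exceeds
    delay_years / distance_ly. (At lower baseline speeds the required gain is even smaller.)
    """
    return delay_years / distance_ly

# Farthest reachable galaxies, taken as ~16 billion light-years away:
print(breakeven_speed_gain(100, 16e9))   # ~6e-9 c, i.e. on the order of 1/100-millionth of c
# Andromeda, ~2.5 million light-years away (cf. footnote [5]):
print(breakeven_speed_gain(100, 2.5e6))  # ~4e-5 c, i.e. ~0.004% of c
```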
For energy/resource reasons you might want to expand to nearby star systems first in order to send the fastest possible probes[5]. But note that the delay before sending your first probe is always at worst a constant amount of time. The possible exception is being able to accelerate R&D in other star systems, e.g. because you need multiple star systems’ worth of compute to do the R&D well. But this is trickier than it looks! The lightspeed barrier means sending information between systems is slow, so you’re giving up a lot of latency to bring more compute to bear. A caveat is that you might want your supercomputer to be bigger than the home system’s resources allow, in which case you may want to capture a nearby star system and turn it into your core R&D department. Though that takes a while to build out, too.
Here are a few models of space colonization that I think are more likely:
Model 3 (Deliberate + build in the home system, then spam): Research and build up in the home system until reaching ~very advanced technological levels[3], then suddenly spam a ton of probes everywhere at very high fractions of c. I think this is the implicit model in Sandberg’s “Space Races” paper.
Assuming technological maturity, Sandberg had two different models for exploration:
one that races earlier to grab nearby interstellar resources
and one that waits longer and saves up to send faster probes (where the constraint on probe speed is primarily energy, not knowledge)
In the technological maturity case, Sandberg concludes that racing earlier is better.
Note however that Sandberg’s model presumes technological maturity, so in a sense his analysis starts a bit after where mine ends.
I think his model is roughly correct in worlds where reaching technological maturity is relatively quick. This assumption seems plausible enough to me, but not guaranteed.
Model 4 (Colonize nearby systems, deliberate in the home system, then spam): Colonize nearby systems first, while continuously researching at home. Turn the nearby systems into Dyson swarms etc. so they have massive (and flexible!) industrial capacity, while simultaneously researching in the home system the best ways to send fast ships.
In this model you’re first colonizing systems within, say, 50-500 years of travel time (not light-years, years) of your home system.
And then once your home system figures out the optimal way to send probes, it tells the colonies what to do next (at the speed of light), and the colonies (plus maybe the home system too at that point) spam probes at near the speed of light.
Model 5 (Waves: Deliberate, spam, deliberate, spam, deliberate…): The home system spends enough time thinking/researching/building until it has reached a plausible plateau. It sends probes out for a while, roughly until the frontier is far enough away that probes sent from home would reach distant shores later than probes sent from the colonies. Then it switches back to deliberation mode and keeps deliberating until/if it invents a new mode of transportation fast enough to make up for the colonies’ head start, at which point it starts sending probes to distant stars again, intending to overtake the front wave of colony-launched probes at sufficiently distant stars.
The colonies repeat the same strategy as home, first building up and sending a bunch of probes out, and then switching to “research” mode.
This keeps going until they are very confident that faster ships can’t be built, and then the expanding core switches from deliberation and spreading to spending its energy on more terminal moral goods.
Model 6 (Deliberate, spam, deliberate and signal): Like the previous model, but after the first wave of probes, the home system (and other systems in the expanding core) no longer sends more probes. Instead they switch to spending all their time on research and deliberation. If/when they discover a faster mode of transportation, they signal the new strategy (at exactly c) to the distant systems at the frontier, which then switch their expansion technology.
Compared to Model 5, this strategy has the advantage that the speed of light is always faster than whatever mode of transportation you have for physical ships. So if your colonies are “on the way” to distant stars, it’s always faster to tell the colonies what to do than to send your own probes.
This strategy might seem strictly superior to Model 5. But this isn’t necessarily the case! For example, galaxies tend to be ~2D, whereas the affectable universe is ~3D. So for different galaxies not on the same plane, it might often be more efficient to send probes directly than to wait for light-speed communication to hit a colony on the current frontier, and then send a probe from there.
The galactic disk is ~1000 light-years thick but intergalactic targets can be millions of light-years away in any direction, so for most target galaxies there’s no frontier colony meaningfully closer than the home system, or at least the “core”.
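To make the geometry concrete, here’s a toy comparison (the probe speed and distances are illustrative assumptions, not figures from the post):

```python
import math

def time_direct(d_home_target_ly: float, v_probe: float) -> float:
    """Years for a probe launched straight from the home system (v_probe in units of c)."""
    return d_home_target_ly / v_probe

def time_via_colony(d_home_colony_ly: float, d_colony_target_ly: float, v_probe: float) -> float:
    """Years to signal a frontier colony at lightspeed, which then launches the probe."""
    return d_home_colony_ly + d_colony_target_ly / v_probe

v = 0.9  # assumed probe speed, as a fraction of c

# Colony 50,000 ly away, roughly "on the way" to a galaxy 3 million ly out: relaying wins slightly.
print(time_direct(3e6, v), time_via_colony(5e4, 3e6 - 5e4, v))
# Same colony, but the target galaxy is 3 million ly away perpendicular to the galactic plane:
# the colony is now (slightly) *farther* from the target, so launching directly from home wins.
print(time_direct(3e6, v), time_via_colony(5e4, math.hypot(3e6, 5e4), v))
```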
Model 7: ??? Excited to hear other models I haven’t thought of!
I’m neither an astrophysicist nor in any other way a “real” space expert and I’ve spent less than a day thinking about the relevant dynamics, so let me know if you think I’m wrong or you have additional thoughts! Very happy to be corrected. :)
[1] Modulo other reasons for going faster, like worries about single-system x-risk, stagnation, meme wars, etc. There are also other reasons to go slower, for example worries about interstellar x-risks / a vulnerable universe, wanting more value certainty and fearing value drift, being scared of aliens, etc.
[2] + relativistic effects and other cosmological effects that I don’t understand. I never studied relativity but I’d be surprised if it changes the OOM calculus.
[3] where we predict additional research time yields diminishing returns relative to acting on current knowledge
[4] See also earlier work by Kennedy. Kennedy (2006)’s ‘wait calculation’ formalizes a version of this tradeoff for nearby stars and gets centuries-scale optimal delays, though his model doesn’t consider the intergalactic case and has additional assumptions about transportation speeds that I’m unsure about.
[5] In contrast, the nearest galaxy is 2.5 million light-years away, so a 100-year delay before reaching Andromeda is worth it for a 0.004%-of-c speedup. It’s easy to underestimate how mind-bogglingly big space is!
Hi Linch, a really nice comment.
I’ve reached similar conclusions on the back of some work by Toby and Anders. I think Model 3 is most likely, though there could be some surprises that make it more like Models 4-6.
It could be that even at tech maturity a maximum single-hop distance is limiting, such that one has to e.g. ‘crawl’ along galactic filaments, taking pit stops to replicate before continuing the journey to the next intermediate destination. Dust is a good candidate for the limiting factor. More likely, the probability of mission success decreases with e.g. the integral of the dust flux over the mission trajectory, so one can reduce the number of pit stops required via the redundancy of sending more probes. And the scaling there may be very good or very poor. The material cost of probe creation could also be a consideration. In the case of the spam-to-all-reachable-targets strategy, one may need anywhere from millions to billions of probes depending on the reachable radius. This could require the resources of multiple solar systems, but that will depend on the design of advanced probes (which could be extraordinarily light, though that’s not totally clear imo); one may be able to try various assumptions and do some napkin math.
Another intuition supporting Model 3, to my mind, is that the steepest scaling gradient for intelligence (probably of all time) runs from essentially today to the near future, e.g. within the next thousand years, though I’d expect to get most of the way there within 1-100 years.
For example, I think a Landauer-limited KII Dyson swarm is about +22 OOMs in ops/s vs. Earth’s (chips + humans) today, and you’re getting ~13 of those OOMs from Kardashev scaling alone. My expectation is that you saturate the tech ceiling, as it pertains to probes, quickly enough to overtake early launches to even nearby stars.
The densification of intelligence is perhaps underappreciated: I’ve just thrown out an ops/s figure here, but there are also algorithmic efficiency gains, and generally many OOMs to be had in answering ‘what is the max intelligence I can get per unit energy?’ In terms of energy scale-up, after the Dyson swarm you’ve then got another +10 OOMs in the galaxy, which will take you on the order of 100,000 years to reach, and then about +10 more for the rest of the reachable universe, but that will take many billions of years. This is why I think you get a steep ascension to tech maturity and then a prolonged expansion thereafter.
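A rough sanity check of those OOM figures (all inputs here are my own loose assumptions, chosen mainly to see whether the arithmetic hangs together):

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 300                          # K; assumed radiator temperature (the Landauer limit is temperature-dependent)
landauer_J_per_bit = k_B * T * math.log(2)    # ~2.9e-21 J per bit erasure

solar_luminosity_W = 3.8e26      # a full Dyson swarm around a sun-like star (Kardashev II)
earth_power_use_W = 2e13         # ~today's civilizational power use
earth_ops_today = 1e25           # assumed total ops/s of chips + human brains today (very uncertain)

kii_landauer_ops = solar_luminosity_W / landauer_J_per_bit    # ~1.3e47 bit-ops/s
print(math.log10(kii_landauer_ops / earth_ops_today))         # ~22 OOMs total
print(math.log10(solar_luminosity_W / earth_power_use_W))     # ~13 OOMs from the energy (Kardashev) scale-up alone
```

Under these assumptions the remaining ~9 OOMs come from closing the gap between today’s hardware and the Landauer limit, which is roughly the “densification of intelligence” point above.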
Hopefully this makes sense. Lmk if you’d like to chat more about this. I have more thoughts but I’m so sleepy.
I think that space colonization and exploration also depend on two other things.
The first one is risk assessment. If you assess that there is a small but non-negligible risk that there are other expanding or otherwise competing alien species, then for your long-term goals you need to have counter-measures. The worst-case scenario is that they have better technology and can sterilize a system quickly enough to remove you before you can send out stealthy probes once you see what’s going on. This means that a sensible strategy is to send probes early to other systems and leave some of them dormant and/or at a low-profile activity level (harder to spot). Also, you need to assess how quickly competition can arise. If that is fast relative to the time needed to go from system to system, you should focus on finding planets in habitable zones and monitoring them, maybe even sending automatic probes there as early as you can.
Basically, it is likely that you should expand your control as fast as possible, but this does not mean you need to explode in usage of the controlled resources asap.
The second one is the goal.
If your goal is parallelizable enough (like making as many paperclips as possible, or as many minds as possible), then you should expand resource usage across the areas you control (with some backup plan, like establishing low-key outposts further away). As for other galaxies: I agree it depends on the estimate of how much speed and probability of success you can gain by taking more time to research before sending probes. You don’t need to send out probes to other galaxies right away. An alien species sterilizing a system by surprise in an inescapable way is something one can envision and plan around. Sterilizing a whole galaxy, including its vast interstellar spaces, is basically impossible (except via vacuum decay, but that would destroy everything).
If your goal is partially parallelizable, then you should expand control but use only local resources in a few places. Example: research is partially parallelizable, but not totally, as new research builds on top of previous research, some of it very interconnected. When your goal is to have a relatively small number of minds that will live and experience things as long as possible (or some other long-term goal that does not envision grand usage of matter and energy), then you should focus on research and on risk mitigation through control.
Likely the best long-term method of producing energy is to live relatively near a small black hole and use it to turn matter into energy, which is much more efficient than fission or fusion. So a sensible strategy is to use the local system for some initial research, send some backup probes, and then, when sensible black-hole-harnessing tech is ready, find a few black holes to use and not care about stars, planets, etc., except as sources of danger that need to be mitigated (observation, the ability to intervene and partially evacuate, and sending some stealthy backup probes) and as long-term sources of matter (that can wait for later usage).
Parallelizing and using too much matter too quickly in this case is a waste, as you will surely duplicate the same lines of thought and research in many places, and you can’t synchronize well enough over vast distances. What if you need to test a million versions of the same experiment? Then, when the network is mature and it seems safe, you send information through your network of control nodes in different systems, and a million of them expand locally and run the experiment, but not more than that.
If your goal is not parallelizable at all, then you should expand by sending automated probes, but stay local (first near the original star, then near a small black hole). Operate in a rather stealthy way; research and create means for observation and escape. Example: you are one mind or hive-mind that does not have a goal to multiply, expand, or whatever, but wants to experience things and stay alive as long as possible. Likely, you would still create intelligent probes that use resources to expand control, but in a very limited and stealthy fashion.
I agree with much of what you say, and will think about it further. However, I don’t think this makes as much sense:
Also, you need to assess how quickly competition can arise. If that is fast relative to the time needed to go from system to system, you should focus on finding planets in habitable zones and monitoring them, maybe even sending automatic probes there as early as you can.
My guess is that the automatic monitoring probes aren’t going to be meaningfully faster or cheaper than the colonization probes. So might as well do the latter.
(I’m assuming colonization won’t be with biological humans or transhumans, for maybe obvious reasons)
The furthest parts of the theoretically reachable universe are 16-18 billion light-years away, so a 100-year delay is worth it if you can increase your probes’ speed by just one 100-millionth of c
Why is there a tradeoff? Why don’t you launch your early, comparatively unsophisticated probes as soon as you can, and then, if you develop faster probes, also launch those if you calculate that they could catch up to the ones that you already launched?
It’s not like the resources spent on early probes trade off appreciably with technological development.
There are a lot of stars and even galaxies in the affectable universe, so if you want to reach all of them quickly, being relatively judicious with your resources, especially your earliest resources, is key. There’s just massive opportunity cost in sending out early probes as opposed to spending relevant resources on compute/research, saving them up for future probes, etc.
I’m skeptical. My guess is that the cost of probes turns out to be negligible compared to the resources available (and possibly the cost of research will also turn out to be negligible; it remains unclear how fast an intelligence explosion shoots all the way up to technological maturity).
Checking with a BOTEC:
Probes will be nanotechnological, and so probably pretty tiny. The lower the mass the less energy it takes to accelerate them to near lightspeed. Let’s say each probe is the mass of a coke can. (This is probably a significant overestimate.)
Claude tells me that it takes 7.1e17 J to accelerate that mass to 99.9% of the speed of light, assuming unrealistically perfect efficiency. Allow an extra 3 or so orders of magnitude for thermodynamic inefficiency, so let’s go up to 7.1e20 J.
There are ~7 billion galaxies in the reachable universe.
Sending one probe to every galaxy would then take ~5e30 J.
Claude also tells me that, in one day, Earth’s sun outputs about 3.3e31 J.
So we could send a probe to every galaxy in the reachable universe using roughly 15% of one day’s solar output, after building a Dyson swarm.
...which is, I guess, not actually negligible, such that the resource allocation question isn’t literally overdetermined.
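For what it’s worth, here’s the same BOTEC written out (the ~0.37 kg probe mass, the 3-OOM inefficiency factor, and the ~7 billion galaxy count are the assumptions stated above):

```python
import math

c = 2.998e8                  # speed of light, m/s
m_probe = 0.37               # kg; roughly a full coke can (stated assumption, likely an overestimate)
v = 0.999 * c

gamma = 1 / math.sqrt(1 - (v / c) ** 2)
ke_per_probe = (gamma - 1) * m_probe * c**2     # relativistic kinetic energy, ~7e17 J
ke_with_losses = ke_per_probe * 1e3             # ~3 OOMs of thermodynamic inefficiency, ~7e20 J

n_galaxies = 7e9
total_energy = ke_with_losses * n_galaxies      # ~5e30 J

sun_output_per_day = 3.8e26 * 86400             # solar luminosity x seconds per day, ~3.3e31 J
print(total_energy / sun_output_per_day)        # ~0.15, i.e. roughly 15% of one day's output
```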
Hmm I haven’t thought carefully about the numbers but I think the big thing you’re forgetting is the importance of deceleration (or “decel” as the cool kids are calling it).
Anders’ paper is called “Eternity in 6 hours”; it assumes technological maturity, and even then it comes out at ~25% of a day’s output, despite assuming “only” 30 g probes that are also substantially slower (like 0.5c iirc).
Isn’t decel just a difference of a factor of 2?
No, because for a probe you don’t have a reverse launcher on the receiving end, which means that to decelerate:
you can’t use some of the “long-launcher” technologies Claude was referring to, like a particle accelerator or an E-M railgun.
If you’re planning to decelerate via onboard fuel, you need to ~square the single-burn mass ratio, thanks to the rocket equation (see the sketch after this list). iiuc it gets a bit worse with relativity.
You might be able to decel without carrying deceleration fuel (e.g. with magsails), but this also adds mass to your payload.
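To illustrate the “squaring” point, here’s a minimal non-relativistic sketch using the Tsiolkovsky rocket equation (the delta-v and exhaust velocity are arbitrary illustrative numbers):

```python
import math

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation: initial mass / final (dry) mass for a given delta-v."""
    return math.exp(delta_v / v_exhaust)

v_e = 1.0   # exhaust velocity (arbitrary units)
dv = 3.0    # cruise delta-v, here 3x the exhaust velocity

accelerate_only = mass_ratio(dv, v_e)                  # ~20x launch mass per unit payload
accelerate_and_decelerate = mass_ratio(2 * dv, v_e)    # ~400x: the single-burn mass ratio squared
print(accelerate_only, accelerate_and_decelerate, accelerate_only ** 2)
```

So every unit of deceleration delta-v supplied from onboard fuel multiplies the launch mass by the same exponential factor again, rather than just doubling it.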
I thought that there were mechanisms for using the same particle beam to decelerate as to accelerate?
Something like “You put a mirror that can be deployed at the front of your probe. When you want to start slowing down, you aim the beam at the mirror, and it bounces off and hits the probe, now adding thrust away from the direction of motion.”
Why would there be? Is the disagreement with Eternity in 6 hours that technological maturity would take a long time to reach? My best guess is it would take a few weeks at most when you are actually at the point of having nanomachines autonomously self-replicating into compute.
The material costs of marginal self-replicating probes assuming you are at the point of disassembling Mercury are extremely negligible. You can send hundreds of them to each solar system without breaking a sweat.
“You can send hundreds of them to each solar system without breaking a sweat.”
Can you demonstrate this with a BOTEC? I don’t think it’s correct. Mercury has a mass of ~3×10^23 kg, and there are ~10^22 stars in the reachable universe, so sending, say, a few hundred probes to each star already burns a significant fraction of your mass, not to mention the energy costs.
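A quick BOTEC of my own (the per-probe mass is an assumption, borrowing the ~30 g figure mentioned above; heavier probes make this worse):

```python
mercury_mass_kg = 3.3e23     # Mercury's mass
n_stars = 1e22               # ~stars in the reachable universe (figure from the comment above)
probes_per_star = 300        # "a few hundred"
probe_mass_kg = 0.03         # ~30 g per probe (assumption borrowed from the Eternity-in-6-hours discussion)

total_probe_mass = n_stars * probes_per_star * probe_mass_kg   # ~9e22 kg
print(total_probe_mass / mercury_mass_kg)                       # ~0.3: a sizeable fraction of Mercury
```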
The speed of reaching technological maturity is evidence in favor of my point, not against it. The faster you can reach technological maturity, the lower the EV of sending probes early is and the better it is to wait until maturity.