Superintelligence, pp. 122–3. 2014.
Consider a technologically mature civilization capable of building sophisticated von Neumann probes of the kind discussed in the text. If these can travel at 50% of the speed of light, they can reach some
If we assume that 10% of stars have a planet that is—or could by means of terraforming be rendered—suitable for habitation by human-like creatures, and that it could then be home to a population of a billion individuals for a billion years (with a human life lasting a century), this suggests that around
There are, however, reasons to think this greatly underestimates the true number. By disassembling non-habitable planets and collecting matter from the interstellar medium, and using this material to construct Earth-like planets, or by increasing population densities, the number could be increased by at least a couple of orders of magnitude. And if instead of using the surfaces of solid planets, the future civilization built O’Neill cylinders, then many further orders of magnitude could be added, yielding a total of perhaps
Many more orders of magnitude of human-like beings could exist if we countenance digital implementations of minds—as we should. To calculate how many such digital minds could be created, we must estimate the computational power attainable by a technologically mature civilization. This is hard to do with any precision, but we can get a lower bound from technological designs that have been outlined in the literature. One such design builds on the idea of a Dyson sphere, a hypothetical system (described by the physicist Freeman Dyson in 1960) that would capture most of the energy output of a star by surrounding it with a system of solar-collecting structures. For a star like our Sun, this would generate
Combining these estimates with our earlier estimate of the number of stars that could be colonized, we get a number of about
It might not be immediately obvious to some readers why the ability to perform
In other words, assuming that the observable universe is void of extraterrestrial civilizations, then what hangs in the balance is at least 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger). If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.
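To make the per-planet arithmetic concrete, here is a back-of-the-envelope restatement using only the figures stated in the excerpt:

$$
\frac{10^{9}\ \text{individuals}\times 10^{9}\ \text{years of habitability}}{10^{2}\ \text{years per human life}} = 10^{16}\ \text{human lives per habitable planet}.
$$

Multiplying this by the number of reachable habitable planets, and then adding the gains from disassembled planets, O'Neill cylinders, and digital minds, is what pushes the total up to figures like the 10^58 lower bound at the end of the excerpt.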
I read this in 2019; it helped me understand that the long-term future is astronomically more important than whatever happens on Earth this millennium. See also Astronomical Waste.
Edit: but as various commenters observe, how much you should care about the long-term future and space stuff isn't closely tied to the figure above (or whatever the true number is), because of acausal trade and ECL.
Have you seen my "Is the potential astronomical waste in our universe too small to care about?"
(These days I’ll often read some piece of bad news and think to myself, “at least we’re not in a much bigger/richer universe!” followed by “but what does it imply about what is happening in those universes?”)
Of course!
I agree things aren't as simple as "10^58 is a big number, therefore optimize the cosmic endowment." Maybe we should act based on trade/ECL reasons. But the stakes are still high in expectation, as you discuss in Beyond Astronomical Waste.
Annoyingly galaxy-brained consideration, but quite plausibly, considerations about how big the cosmic endowment is are dwarfed by considerations about how big our logical endowment is. E.g., for acausal trade reasons I would probably prefer to be in a much smaller but much simpler universe than in a bigger but more complicated one.
Ahhh, I see there was already a Wei Dai post about this referenced in the comments.
Between the cosmic endowment, the doomsday argument, and simulation theory, my sense of cosmic importance has been yo-yoing across ~60 orders of magnitude.
If there is weirder physics, such that FTL or relaxations of the laws of thermodynamics are possible, I assume the estimate increases. Then again, under those conditions there may not be a finite upper bound.
You already get arbitrarily high upper bounds with reversible computation, and waiting until the universe gets cooler can yield 10^30x additional computation; no need for weirder physics. Bostrom mentioned both above, and he was pretty explicit about the 10^85 ops cosmic endowment being a conservative lower bound. Krauss & Starkman's ~1.35×10^120 ops bound, derived from the observed acceleration of the universe, is the non-weird-physics upper bound AFAIK (h/t Wei Dai).
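For anyone wondering why a colder universe buys more computation, here is a quick sketch of the standard argument via Landauer's principle (textbook physics, not anything specific to Bostrom's estimate). Erasing one bit at temperature T costs at least

$$
E_{\min} = k_B T \ln 2,
$$

so a fixed energy budget E buys at most E / (k_B T ln 2) irreversible bit operations, a count that scales inversely with the temperature of the coldest available heat sink. Waiting for the background temperature to fall (or computing reversibly, which sidesteps the erasure cost entirely) is where large multipliers like the 10^30x figure above come from.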
Yes. FTL would be surprising given that we find ourselves in a 14 billion year old universe — you’d expect there to be aliens by now. But:
We will likely have better things to do than simulate humans
Mature technology may enable more computation than the conservative guess in this piece
Acausal trade and ECL may present opportunities to have effects outside the lightcone
There are probably more such considerations!
Although Robin Hanson's "grabby aliens" theory does cut in the other direction, suggesting it's much more likely than it naively appears that the universe is full of fast-expanding alien civilizations, and therefore that humanity's share of the cosmos might be much smaller than Bostrom guesses here. (I.e., instead of being bounded by light-speed and cosmological-expansion constraints, we much sooner butt up against the expanding borders of our neighboring alien civilizations on all sides.)
https://grabbyaliens.com
Sure, divide by 1000 or something
A different analogy I came up with: you have an Earth where every grain of sand is another Earth. Then you spend a simulated human year examining each grain of sand on every one of those Earths, and that would be about one billionth or so of the endowment. Well, it varies depending on which order-of-magnitude estimates you use, but it gets in the right ballpark.
I have slight discomfort with Bostrom's reasoning. I agree there is an enormous amount of resources potentially at stake in the future, but I struggle with putting a number on it, or even with thinking about how to think about it. The reason is that his analysis is almost entirely anchored on value as arising from human or biological-like things, i.e., relatively small, short-lived creatures that have a particular form of agency/identity, do things in communities, and have the types of valenced states we have. He of course explicitly allows for digital beings, but at least until he explores that topic in more detail with Carl Shulman (2022), it's not clear whether these digital beings are anything more than amped-up human-like experiences. In fact, when he writes about it with Shulman, it's clear that they could be very different (~immortal, copyable, much larger hedonic range, mind-transparent); applying human-like standards of value to them (especially drawing large quantitative conclusions) seems pretty risky/premature.

Now it might be that biological life like us is the only way advanced societies can come to be (i.e., attractors in evolution, habitability of planets, etc.). But it might alternatively be possible to have swarm-like societies where the individual isn't the primary bearer of value (by our lights anyway: it might not be autonomous, have a clear identity, have the capacity to suffer, etc.). If that is a realistic possibility, how do we make a quantitative evaluation of the value of a future filled with swarm beings relative to a future filled with human-like beings?
This isn't to disagree with his qualitative point that the future could be huge and that we should be careful what we do with it. But I think putting numbers on it gives a quantitative vibe to something we know very little about.
For those of us who internalized these ideas years ago, there’s not much new here. You mostly find yourself nodding along. But that’s not a criticism. It’s actually refreshing to see this kind of essay on LessWrong again. This is what made the site magnetic in the first place: staring at the actual scale of what’s at stake.
@Nick Bostrom’s line about our great common endowment of negentropy being irreversibly degraded into entropy on a cosmic scale still hits like nothing else. Once you see it, you can’t unsee it. Every second of delay has a cost measured in entire galaxies of potential flourishing slipping beyond our light cone forever. @Wei Dai pushed that picture even further.
The hardest part is always explaining this to people outside this corner of the world. Not because the argument is complex (Bostrom lays it out with brutal clarity), but because the conclusion feels too large to take seriously. People pattern-match it to sci-fi and move on. But 10^58 lives is not a rhetorical flourish. It's a conservative lower bound.
More essays like this, please. It’s easy to get lost in object-level debates and forget the sheer enormity of what we’re actually trying to protect.
i have basically no ability to reason about events so far in the future, to the point where it seems absurd to make decisions based on them. a fun setting for a fiction, though.
at risk of being boorish, may i humbly request voiced disagreement in this case? i’d like to understand where i am wrong.
to expand on my view, and offer footholds for disagreement:
human civilization—restricted to one planet, with a population that varied only by a few orders of magnitude—looks radically different than it did, say, ~1000 years ago, to the point where it can be hard to fully understand the past. we can imagine a hyperrational monk who is given divine prophecy of the year 2000. we can grant them plenty of spare time to reflect on this potential. and we assume they don't fall into any tropey mistakes, such as burning their reputation on rants and raves about the incomprehensible year-to-come. nonetheless, how are they supposed to care about those descendants, practically speaking?
and the situation outlined here is much worse. i can give the outlined future vastly lower credence than our inspired monk could give his. as well, i just don’t know how to picture a society of even 100 billion people, let alone one that spans a scientific notation number of stars, and let alone one that isn’t even physical.
in short, what is the normative claim? what am i asked to do, here, other than marvel at some large-ish numbers, and enjoy a bit of worldbuilding? invest in aerospace?
apologies for the rhetoric. stepping out of the discussion a bit: i honestly do not mean to pooh-pooh. i feel like a child in this discussion; the irreverence is the only way i know to participate.
You could think of these two essays as Will MacAskill’s answer to your question, sort of.