Thanks for putting this together! Lots of ideas I hadn’t seen before.
As for the meta-level problem, I agree with MSRayne that we should do the thing that maximises EU, which leads me to the ADT/UDT approach. This assumes we can have some non-anthropic prior, which seems reasonable to me.
Anecdata: I aim to never take caffeine on two consecutive days, and when I do, it’s normally <50 mg. This has worked well for me.
Wouldn’t the respective type of utilitarian already have the corresponding expectations on future GCs? If not, then they aren’t the type of utilitarian that they thought they were.
I’m not sure what you’re saying here. Are you saying that in general, a [total][average] utilitarian wagers for [large][small] populations?
So there’s a lower bound on the chance of meeting a GC 44e25 meters away.
Yep! (only if we become grabby though)
Lastly, the most interesting aspect is the symmetry between abiogenesis time and the remaining habitability time (only 500 million years left, not a billion like you mentioned).
What’s your reference for the 500 million year lifespan remaining? I followed Hanson et al. in using the end of the oxygenated atmosphere as the end of the lifespan.
Just because you can extend the habitability window doesn’t mean you should when doing anthropic calculations due to reference class restrictions.
Yep, I agree. I don’t do the SSA update with reference class of observers-on-planets-of-total-habitability-X-Gy, but agree that if I did, this 500 My difference would matter.
The habitability of planets around longer lived stars is a crux for those using SSA, but not SIA or decision theoretic approaches with total utilitarianism.
I show in this section that if one is certain that there are planets habitable for at least 20 Gy, then SSA with the reference class of observers in pre-grabby intelligent civilizations gives ~30% on us being alone in the observable universe. For 50 Gy this gives ~10% on being alone.
Great report. I found the high decision-worthiness vignette especially interesting.
Thanks! Glad to hear it
Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?
Yep, this is kinda what anthropic decision theory (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.
I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)
Yeah, this is a great point. Toby Ord mentions here the potential for dark energy to be harnessed, which would lead to a similar conclusion. Things like this may be Pascal’s muggings (i.e., we wager our decisions on being in a world where our decisions matter infinitely). Since our decisions might already matter ‘infinitely’ (evidential-like decision theory plus an infinite world) I’m not sure how this pans out.
SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers?
Exactly. SSA (with a sufficiently large reference class) always predicts Doom as a consequence of its structure, but SIA doomsday is contingent on the case we happen to be in (colonisers, as you mention).
Could your model also include a possibility of the SETI-attack: grabby aliens sending malicious radio signals with AI description ahead of their arrival?
I briefly discuss this in Chapter 4. My tentative conclusion is that we have little to worry about in the next hundred or thousand years, especially (a point I don’t mention there) if we expect malicious grabby aliens to try particularly hard to have their signals discovered.
I agree it seems plausible SIA favours panspermia, though my rough guess is that doesn’t change the model too much.
Conditioning on panspermia happening (and so on the majority of GCs arising through panspermia), the number of hard steps n in the model can just be seen as the number of post-panspermia steps.
I then think this doesn’t change the spatial distribution of ICs or GCs if (1) the post-panspermia steps are sufficiently hard and (2) a GC can quickly expand to contain the volume over which its panspermia of origin occurred. The hardness assumption implies that GC origin times will be sufficiently spread out for a single GC to prevent any planets with m<n step completions of life from becoming GCs.
Ah, I don’t think I was very clear either.
I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)
What I wanted to say was: keep the reference class the same, but restrict the types of observers we are saying we are contained in (the numerator in the SSA ratio) to be only those who (amongst other things) observe the invention of the computer 80 years ago.
And then I was trying to respond to that by saying “Well if we can do that, why can’t we equally well restrict our SSA reference class to only include observers for whom the universe is 13.8 billion years old? And then “humanity is early” stops being true.”
Yep, one can do this. We might still be atypical if we think longer-lived planets are habitable (since life has more time to appear there), but we could also restrict the reference class further. Eventually we end up at minimal reference class SSA.
Doesn’t sound snarky at all :-) Hanson et al. are conditioning on the observation that the universe is 13.8 billion years old. On page 18 they write:
Note that by assuming a uniform distribution over our origin rank r (i.e., that we are equally likely to be any percentile rank in the GC origin time distribution), we can convert distributions over model times τ (e.g., an F(τ) over GC model origin times) into distributions over clock times t. This in effect uses our current date of 13.8 Gyr to estimate a distribution over the model timescale constant k. If instead of the distribution F(τ) we use the distribution F0(τ), which considers only those GCs who do not see any aliens at their origin date, we can also apply the information that we humans do not now see aliens.
Formally (and I think spelling it out helps), with SSA with the above reference class, our likelihood ratio is the ratio of [number of observers in pre-grabby civilizations that observe Y] to [number of observers in pre-grabby civilizations], where Y is our observation that the universe is 13.8 billion years old, we are on a planet that has been habitable for ~4.5 Gy and has a total habitability of ~5.5 Gy, we don’t observe any grabby civilizations, etc.
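As a toy illustration of that ratio (all counts below are made up, purely to show the mechanics): under each hypothesis we just count observers in the reference class who see Y, divide by the total reference class, and compare across hypotheses.

```python
# Toy SSA likelihood computation (all counts are invented for illustration).
# Under a hypothesis H, SSA gives:
#   P(Y | H) = (# observers in pre-grabby civilizations who observe Y)
#            / (# observers in pre-grabby civilizations)

def ssa_likelihood(n_observing_Y, n_reference_class):
    return n_observing_Y / n_reference_class

# Hypothetical worlds: with many GCs, the grabby deadline cuts off the
# far-future pre-grabby observers, shrinking the reference class.
p_Y_given_many_GCs = ssa_likelihood(1_000, 1_000_000)
p_Y_given_few_GCs = ssa_likelihood(1_000, 1_000_000_000)  # huge far future

# Bayes factor favouring the many-GC world:
bayes_factor = p_Y_given_many_GCs / p_Y_given_few_GCs
```

The point of the sketch is just that observers like us make up a larger fraction of the reference class in worlds where a deadline prevents a vast far-future pre-grabby population.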
Yep, you’re exactly right.
We could further condition on something like “observing that computers were invented ~X years ago” (or something similar that distinguishes observers like us) such that the (eventual) population of civilizations doesn’t matter. This conditioning means we don’t have to consider that longer-lived planets will have greater populations.
I’ve been studying & replicating the argument in the paper [& hope to share results in the next few weeks].
The argument implicitly uses the self-sampling assumption (SSA) with reference class of observers in civilizations that are not yet grabby (and may or may not become grabby).
Their argument is similar in structure to the Doomsday argument:
If there are no grabby aliens (and longer lived planets are habitable) then there will be many civilizations that appear far in the future, making us highly atypical (in particular, ‘early’ in the distribution of arrival times).
If there are sufficiently many grabby aliens (but not too many) they set a deadline (after the current time) by when all civilizations must appear if they appear at all. This makes civilizations/observers like us/ours that appear at ~13.8Gy more typical in the reference class of all civilizations/observers that are not yet grabby.
Throughout we’re assuming the number of observers per pre-grabby civilization is roughly constant. This lets us be loose with the civilization-observer distinction.
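The typicality contrast above can be made concrete with a minimal sketch of the standard hard-steps model (the values of n and the habitability windows below are made up for illustration): with n hard steps, the arrival times of civilizations on planets habitable until t_max have CDF roughly (t/t_max)^n, so our percentile rank among arrival times is easy to compute under each scenario.

```python
# Hard-steps toy model (illustrative numbers only; n and t_max are assumptions).
# With n hard steps, civilization arrival times on planets habitable until
# t_max have CDF F(t) = (t / t_max)**n, so F(13.8) is our percentile rank
# in the distribution of not-yet-grabby civilizations' arrival times.

def arrival_rank(t_us, t_max, n):
    """Our percentile rank among arrival times, F(t_us)."""
    return (min(t_us, t_max) / t_max) ** n

t_us = 13.8  # Gyr, roughly the current age of the universe
n = 6        # assumed number of hard steps

# Case 1: no grabby aliens, and long-lived planets stay habitable for
# ~1000 Gyr. Almost all civilizations arrive after us: our rank is tiny
# and we are very 'early'.
rank_no_deadline = arrival_rank(t_us, 1000.0, n)

# Case 2: grabby aliens impose a deadline soon after now, say 20 Gyr.
# Our rank is then unremarkable: we are 'typical'.
rank_with_deadline = arrival_rank(t_us, 20.0, n)
```

Under these made-up numbers the first rank is around 10^-11 while the second is around 0.1, which is the whole force of the Doomsday-style argument.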
I don’t think the reference class is a great choice. A more natural choice would be the maximal reference class (which includes observers in grabby alien civilizations) or the minimal reference class (containing only observers subjectively indistinguishable from you).
It looks like you’ve rediscovered that SIA fears (expected) infinity.
Something about being watched makes us more responsible. If you can find people who aren’t going to distract you, working alongside them keeps you accountable. If it’s over Zoom you can mute them.
I like Focusmate for this. You book a 25- or 50-minute pomodoro session with another member of the site and stay on a video call for the duration. I’ve found sharing my screen also helps.
I’ve finally commented on LessWrong (after lurking for the last few years), which had been on the edge of my comfort zone. Thanks for the exercise!
Thanks for this great explainer! For the past few months I’ve been working on the Bayesian update from Hanson’s argument and hoping to share it in the next month or two.
I use Loop Habit Tracker [an Android app] for a similar purpose. It’s free and open source, and allows notifications to be set and then habits ticked off. The notifications can be made sticky too.