Great report. I found the high decision-worthiness vignette especially interesting.
Thanks! Glad to hear it
Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?
Yep, this is kinda what anthropic decision theory (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.
I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)
Yeah, this is a great point. Toby Ord mentions here the potential for dark energy to be harnessed, which would lead to a similar conclusion. Things like this may be Pascal’s muggings (i.e., we wager our decisions on being in a world where our decisions matter infinitely). Since our decisions might already matter ‘infinitely’ (evidential-like decision theory plus an infinite world), I’m not sure how this pans out.
SIA doomsday is a very different thing from the regular doomsday argument, despite the name, right? The former is about being unlikely to colonise the universe, the latter is about being unlikely to have a high number of observers?
Exactly. SSA (with a sufficiently large reference class) always predicts Doom as a consequence of its structure, but SIA doomsday is contingent on the case we happen to be in (colonisers, as you mention).