logical vs indexical dignity


MIRI’s Death with Dignity post puts forward the notion of “dignity points”:

the measuring units of dignity are over humanity’s log odds of survival—the graph on which the logistic success curve is a straight line. A project that doubles humanity’s chance of survival from 0% to 0% is helping humanity die with one additional information-theoretic bit of dignity.
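to make the units concrete, here's a minimal sketch (in python, with made-up numbers) of dignity as log odds of survival: doubling a small survival probability gains almost exactly one bit, which is the "one additional information-theoretic bit of dignity" in the quote above.

```python
import math

# dignity measured in bits: the log2 odds of survival.
# the probability below is illustrative, not from the post.
def dignity_bits(p_survival: float) -> float:
    return math.log2(p_survival / (1 - p_survival))

p = 1e-6  # some small chance of survival

# doubling a small survival probability adds ~1 bit of dignity
gained = dignity_bits(2 * p) - dignity_bits(p)
print(f"{gained:.6f} bits")  # ~1.000001
```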

but, as the logical and indexical uncertainty post puts it, there are two different kinds of uncertainty: uncertainty over our location within things that exist, called indexical uncertainty, and uncertainty over what gets to exist in the first place, called logical uncertainty.

the existence of many instances of us can arise not just from the many-worlds interpretation of quantum mechanics, but also from other multiverses such as tegmark level 1 and reasonable subsets of tegmark level 4, as well as from various simulation hypotheses.

i think that, given the logical and indexical uncertainty post's take on risk aversion ("You probably prefer the indexical coin flip"), we should generally aim to create logical dignity rather than indexical dignity, where logical uncertainty includes things like "what would tend to happen under the laws of physics as we believe them to be". if a plan involves both indexical uncertainty and logical uncertainty, we want to tackle the logical part by generating logical dignity, so that the uncertainty that remains is indexical, and therefore things go right somewhere.
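here's a toy sketch of why a risk-averse agent prefers the indexical coin flip. the concave utility over "the fraction of branches that turn out well" is my own framing, not something either post spells out; the numbers are illustrative.

```python
import math

# toy model: utility is concave in the fraction of branches/instances
# of us that turn out well, i.e. we are risk-averse across how much of
# reality goes right (an assumed framing, not from the posts).
def utility(fraction_of_good_branches: float) -> float:
    return math.sqrt(fraction_of_good_branches)

# indexical coin flip: the flip lands differently in different branches,
# so roughly half of all branches turn out well, with certainty.
u_indexical = utility(0.5)

# logical coin flip: the same logical fact holds everywhere, so with
# probability 1/2 every branch turns out well and with probability 1/2
# none of them do.
u_logical = 0.5 * utility(1.0) + 0.5 * utility(0.0)

print(f"indexical flip: {u_indexical:.3f}")  # ~0.707
print(f"logical flip:   {u_logical:.3f}")    # 0.500
```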

as a concrete example, if your two best strategies to save the world are:

  • one whose crux is a theorem being true, which you expect is about 70% likely to be true

  • one whose crux is a person figuring out a required clever idea, which you expect is about 70% likely to happen

and they have otherwise equal expected utility, then you'll want to favor the latter strategy, because someone figuring something out seems more quantum-determined and less set in stone than a theorem being true or not. by making logical facts what you're certain about and indexical facts what you're uncertain about, rather than the other way around, you ensure that some place in the future turns out well.
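a small numerical sketch of the comparison above, under the (assumed) many-branches framing: both strategies have the same expected fraction of branches that succeed, but they differ in whether success is guaranteed to exist somewhere.

```python
p_crux = 0.7       # subjective probability of each strategy's crux
n_branches = 1000  # stand-in for "many branches/instances of us"

# strategy 1: crux is a theorem being true. that's a logical fact shared
# by every branch, so either all branches can succeed or none can.
p_somewhere_theorem = p_crux                       # 0.7

# strategy 2: crux is a person figuring out a clever idea, treated here
# as (mostly) indexical: each branch independently gets lucky with
# probability ~0.7, so success almost surely happens in some branch.
p_somewhere_idea = 1 - (1 - p_crux) ** n_branches  # ~1.0

# expected fraction of successful branches is 0.7 for both, hence the
# equal expected utility assumed above; only the second strategy makes
# "it goes right somewhere" near-certain.
print(f"theorem crux: P(success somewhere) = {p_somewhere_theorem:.3f}")
print(f"idea crux:    P(success somewhere) = {p_somewhere_idea:.3f}")
```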

(note that if our impact on utopia is largely indexical, then it might feel like we should focus more on reducing S-risk, especially if we're e.g. negative utilitarians, because we want utopia somewhere but hell nowhere. but if god isn't watching to stop computing timelines that aren't in their interest, and if we are to believe that we should do the normal expected utility maximization thing across timelines, then it probably shouldn't actually change what we do, just how we feel about it)