My position is that anthropics is grounded in updateless decision theory, which AFAIK leads in practice to full non-indexical conditioning.
It doesn’t lead to that; what it leads to depends a lot on your utility function and how you value your copies: https://www.youtube.com/watch?v=aiGOGkBiWEo
It comes back to the same issue again: do you value exact duplicates having exactly the same experience as a sum, or is that the same as one copy having it (equivalent to an average)? Or something in between?
Your feeble attempts at munchkining are noted and scorned. The proper munchkin would pull the lever again and again, creating an army of yous...
It is, however, somewhat like probability, and I don’t see anything in this post that should change anyone’s opinion about that.
How “somewhat”? The kind of behaviour I’m talking about (pulling the lever and letting your copy deliver the message) would violate the independence axiom of expected utility if duplication were a probability.
As for why I’m less keen on MWI, it’s simply that I see that a) duplication is not a probability, b) duplication with a measure doesn’t seem any different from standard duplication, and c) my past experience leads me to see measure as (approximately) a probability.
Hence, MWI seems wrong. You probably disagree with a) or b), but do you see the deduction?
If you start with a prior that puts all its mass on the coin being biased 99:1, then no amount of observations will persuade you otherwise. If you start with a prior that is spread out across the possible biases of the coin—even if it’s 99:1 in expectation—then you can update from observations.
Decision theory proceeds in exactly the same way; decision theory will “update” towards 50:50 unless it starts with a broken prior.
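The contrast above can be sketched numerically. This is my own illustrative construction (the specific priors and data are assumptions, not from the comment): a degenerate prior stuck on one bias never moves, while a spread-out prior with the same 99:1 expectation updates towards the observed frequency.

```python
# A minimal sketch, assuming a Beta(99, 1) spread prior (mean 0.99) versus a
# point-mass prior on bias 0.99, both observing 1000 flips of a fair coin.
# Illustrative numbers only.

def beta_posterior_mean(alpha, beta, heads, tails):
    """Conjugate update: Beta(alpha, beta) prior plus binomial data."""
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 500, 500  # observations from what is in fact a fair coin

# Degenerate prior: all mass on bias 0.99. No data can move it.
point_prior_mean = 0.99  # posterior mean is still 0.99 whatever we observe

# Spread prior with the same expectation: Beta(99, 1), mean 99/100 = 0.99.
spread_posterior = beta_posterior_mean(99, 1, heads, tails)

print(point_prior_mean)            # 0.99 -- stuck
print(round(spread_posterior, 3))  # 0.545 -- moving towards 0.5
```

With more flips the spread prior’s posterior mean keeps converging to the true bias, while the point prior never budges.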
So essentially there are three things: decision theory, utility, and priors. Using those, you can solve all problems, without needing to define anthropic probabilities.
You can get probabilities from decisions by maximising a proper scoring rule applied to your estimate of the probability of an event. This works in every case that probability does. A broken prior will break both probabilities and decision theory.
In the case of anthropics, the probability breaks—as the expectation of an event isn’t well defined across duplicates—while decision theory doesn’t.
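To illustrate the “probabilities from decisions” move: under a proper scoring rule, the report that maximises your expected reward is your actual probability. The grid search below is my own sketch (the true probability 0.7 and the quadratic/Brier rule are assumptions for illustration).

```python
# A minimal sketch: when rewarded by the (negative) Brier score, the optimal
# report recovers the true probability. Illustrative numbers only.

def expected_brier_reward(report, p_true):
    """Expected reward for reporting `report` on an event that truly
    happens with probability `p_true`, scored by the negative Brier score."""
    return p_true * -((1 - report) ** 2) + (1 - p_true) * -(report ** 2)

p_true = 0.7
reports = [i / 1000 for i in range(1001)]
best = max(reports, key=lambda q: expected_brier_reward(q, p_true))
print(best)  # 0.7 -- the optimal report equals the true probability
```

Any proper scoring rule (log score, Brier, etc.) has this property; that is what lets decisions stand in for probabilities wherever probabilities are well defined.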
If probability makes sense at all, then “I believe that the odds are 2:1, but I *know* that in a minute I’ll believe that it’s 1:1” destroys it as a coherent formalisation of beliefs. Should the 2:1 version of you force their future self to stick with 2:1 rather than 1:1? If not, why do they think their current beliefs are right?
You are deciding whether or not to pull the lever. The probability of a past event, known to be in the past, depends on your actions now.
To use your analogy, it’s you deciding whether to label a scientific paper inaccurate or not—your choice of label, not anything else, makes it inaccurate or not.
I’m still uncertain about what happens in the many world scenarios, see https://www.lesswrong.com/posts/NiA59mFjFGx9h5eB6/duplication-versus-probability
For most versions of selfishness, if you’re duplicated, then the two copies will have divergent preferences. However, if one of the copies is destroyed during duplication, this just counts as teleportation. So the previous self values either future copy, if only one exists. It therefore seems incoherent for the previous self not to value both future copies when both exist, and hence for the two future copies not to value each other.
(btw, the logical conclusion is that the two copies have the same preferences, not that the two agents must value each other—it’s possible that copy A only cares about themselves, and copy B only cares about copy A).
New and better reason to ignore Boltzmann brains in (some) anthropic calculations: https://www.lesswrong.com/posts/M9sb3dJNXCngixWvy/anthropics-and-fermi
Ok, I’ve revised the idea entirely.
See here for why FNC doesn’t work as a probability theory (and neither do SIA or SSA): https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
See here for how you can use proper scoring functions to answer the probability of seeing alien life in the galaxy; depending on whether you average the scores or total them, you get SSA-style or SIA-style answers: https://www.lesswrong.com/posts/M9sb3dJNXCngixWvy/anthropics-and-fermi
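The average-versus-total distinction can be shown in a toy duplication setup (my own construction, not from the linked post): two equally likely worlds, one containing a single copy of you and one containing two copies, with each copy scored on its reported probability of being in the two-copy world.

```python
# A minimal sketch, assuming the negative Brier score as the proper scoring
# rule. Totalling scores across copies yields the SIA-style answer (2/3);
# averaging within each world yields the SSA-style answer (1/2).

def score(q, in_two):
    """Negative Brier score for reporting q on 'I am in the two-copy world'."""
    truth = 1.0 if in_two else 0.0
    return -((truth - q) ** 2)

def total_reward(q):
    # Sum scores over all copies: the two-copy world counts twice.
    return 0.5 * score(q, False) + 0.5 * 2 * score(q, True)

def average_reward(q):
    # Average scores within each world: the two-copy world counts once.
    return 0.5 * score(q, False) + 0.5 * score(q, True)

grid = [i / 3000 for i in range(3001)]
best_total = max(grid, key=total_reward)  # SIA-style: 2/3
best_avg = max(grid, key=average_reward)  # SSA-style: 1/2
print(best_total, best_avg)
```

The scoring rule itself never changes; only the aggregation across copies does, which is why the choice between SSA-style and SIA-style answers reduces to how you value your copies.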
It seems it was a general LessWrong problem, now fixed; update done.
I personally think decision theory is more important than probability theory. And anthropics does introduce some subtleties into the betting setup—you can’t bet or receive rewards if you’re dead.
But there are ways around it. For instance, if the cold war were still on, we could ask how large X has to be before you would prefer X units of consumption after the war (if you survive) to 1 unit of consumption now.
Obviously the you that survived the cold war, and knows they survived, cannot be offered a meaningful bet on that survival. But we can still offer bets like: “New evidence has just come to light showing that the Cuban missile crisis was far more dangerous/far safer than we thought. Before we tell you the evidence, care to bet on which direction the evidence will point?”
Then, since we can actually express these conditional probabilities as bets, the usual Dutch Book arguments show that they must update in the standard way.
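The Dutch Book argument for conditional probabilities can be made concrete. The sketch below is my own illustration (the state probabilities are assumed numbers): a conditional (“called-off”) bet on A given B can be replicated exactly by unconditional bets, which pins its fair price at P(A∧B)/P(B), and any other price would let a bookie make a sure profit.

```python
# A minimal sketch. A conditional bet on A given B pays 1 in state (A and B),
# 0 in (not-A and B), and refunds its price c in state (not B). Replicating
# it with unconditional bets forces c = P(A and B) / P(B). Illustrative numbers.

p = {"AB": 0.2, "notA_B": 0.3, "notB": 0.5}  # probabilities of the 3 states
p_B = p["AB"] + p["notA_B"]

c = p["AB"] / p_B  # candidate price for the conditional bet: P(A|B) = 0.4

# Net payoff of the conditional bet in each state (the refund cancels c):
conditional = {"AB": 1 - c, "notA_B": 0 - c, "notB": 0}

# Replicating portfolio: a bet paying 1 on (A and B), plus c bets paying 1
# on (not B), each bought at its fair price.
portfolio = {
    s: (1 if s == "AB" else 0) - p["AB"]
       + c * ((1 if s == "notB" else 0) - p["notB"])
    for s in p
}

# The two positions pay the same in every state, so any other price for the
# conditional bet opens a sure-profit book against you.
print(all(abs(conditional[s] - portfolio[s]) < 1e-12 for s in p))  # True
```

Since the replication holds state by state, an agent whose conditional betting prices differ from the ratio formula, or who fails to update to them, can be booked for a guaranteed loss.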
but instead extend the theory.
I’m not sure that can be done: https://www.lesswrong.com/posts/iNi8bSYexYGn9kiRh/paradoxes-in-all-anthropic-probabilities
I agree with that, but I don’t think the post shows it directly. My video https://www.youtube.com/watch?v=aiGOGkBiWEo does look at two possible versions of selfishness; my own position is that selfishness is incoherent, unless it’s extreme observer-moment selfishness, which is useless.
All the odds are about the outcome of a past coin flip, known to be in the past. This should not change in the ways described here.
I’m pointing out that the negation of S = “X observes A at time T” does not imply that X exists. S’ = “X observes ~A at time T” is a subset of ~S, but not the whole of it (X not existing at all at time T is also part of ~S, for example). Therefore, merely because S’ is impossible, it does not follow that S is certain.
The point about introducing differences in observers is that this is the kind of thing your theory has to track: checking when an observer is sufficiently divergent that they can be considered different (or the same). Since I take a more “god’s eye view” of these problems (extinctions can happen, even without observers to observe them), it doesn’t matter to me whether various observers are “the same” or not.