Disagree, but upvoted. Given that there’s a canonical measure on configurations (i.e., the one with certain key symmetries, such as the L² measure for solutions of the Schrödinger equation), it makes mathematical sense to talk about the measure of the various successor states of a person’s current experience.
It is true that we have an evolved sense of anticipated experience (coupled with our imaginations) that matches this concept, but it’s a nonmysterious identity: an agent whose subjective anticipation matches their conditional measure will make decisions that are closer to optimal with respect to that measure, and so the vast majority of evolved beings (counting by measure) will have the two match.
It may seem simpler to disregard any measure on the set of configurations, but it really is baked into the structure of the mathematical object.
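The canonical measure being appealed to can be made concrete with a toy sketch (my own illustration, not from the comment, with hypothetical amplitudes): over a finite set of successor configurations, the L²/Born measure is just the normalized squared modulus of each configuration’s amplitude.

```python
import numpy as np

# Hypothetical complex amplitudes for three successor configurations
# of a current experience (made-up numbers for illustration only).
amplitudes = np.array([3 + 4j, 1 + 0j, 0 + 2j])

# The L^2 (Born) measure: squared modulus of each amplitude,
# normalized so the weights sum to 1.
measure = np.abs(amplitudes) ** 2
measure /= measure.sum()

# `measure[k]` is then the relative weight of successor state k,
# which is what "conditional measure on successor states" refers to.
```

On these made-up numbers the three successor states get weights 25/30, 1/30, and 4/30; the point is only that the weighting falls out of the mathematical structure rather than being added on top of it.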
I think that the mathematical structure of the multiverse matters fundamentally to anthropic probabilities. I think it’s creative but wrong to think that an agent could achieve quantum-suicide-level anthropic superpowers by changing how much ve now cares about certain future versions of verself, instead of ensuring that only some of them will be actual successor states of ver patterns of thought.
However, my own thinking on anthropic probabilities (Bostromian, so far as I understand him) has issues†, so I’m pondering it and reading his thesis.
† In particular, what if someone simulates two identical copies of me simultaneously? Is that different from one copy? If so, how does that difference manifest itself in the gray area between running one and two simulations, e.g. by pulling apart two matching circuit boards running the pattern?
“I think it’s creative but wrong to think that an agent could achieve quantum-suicide-level anthropic superpowers by changing how much ve now cares about certain future versions of verself, instead of ensuring that only some of them will be actual successor states of ver patterns of thought.”
You can’t change your preference; the changed preference won’t be yours. What you care about is even more unchangeable than reality. So we don’t disagree here: I don’t think you can get anthropic superpowers, because you care about a specific thing.
If we lump together even a fraction of my life as “me” rather than just me-this-instant, we’d find that my preference is actually pretty malleable while preserving the sense of identity. I think it’s within the realm of possibility that my brain could be changed (by a superintelligence) to model a different preference (say, one giving much higher weight to versions of me that win each day’s lottery) without any changes more sudden or salient to me than the changes I’ve already gone through.
If I expected this to be done to me, though, I wouldn’t anticipate finding my new preference to be well-calibrated; I’d rather expect to find myself severely surprised/disappointed by the lottery draw each time.
Am I making sense in your framework, or misunderstanding it?
I am still puzzled about how preference corresponds to the physical state of the brain. Is preference only partially present in our universe (the intersection of the set of universes corresponding to your subjective experience and the set corresponding to mine)?
I don’t say that the nature of the match is particularly mysterious; indeed, measure might count as an independent component of the physical laws, one that explains the process of evolution (and this might explain Born’s rule). But decision-theoretically, it’s more rational to look at what your prior actually is than at what the measure in our world actually is, even if the two match very closely. It’s the same principle as with other components of evolutionary godshatter, but anticipation is baked in most fundamentally.
You don’t discard measure at the human level; it’s a natural concept that captures a lot of the structure of our preference, and so a useful heuristic in decision-making. But once you become able to work at a greater level of detail, physical laws, and measures over the structures that express them, cease to matter.
Do we still have a disagreement? If we do, what is it?