I think it’s a creative but mistaken idea that an agent could achieve quantum-suicide-level anthropic superpowers by changing how much ve now cares about certain future versions of verself, rather than by ensuring that only some of them will be actual successor states of ver patterns of thought.
You can’t change your preference. The changed preference won’t be yours. What you care about is even more unchangeable than reality. So we don’t disagree here: I don’t think you can get anthropic superpowers, because you care about a specific thing.
If we lump together even a fraction of my life as “me”, rather than just me-this-instant, we’d find that my preference is actually pretty malleable while still preserving my sense of identity. I think it’s within the realm of possibility that my brain could be changed (by a superintelligence) to encode a different preference (say, one giving much higher weight to versions of me that win each day’s lottery) without any change more sudden or salient to me than the changes I’ve already gone through.
If I expected this to be done to me, though, I wouldn’t anticipate finding my new preference to be well-calibrated; rather, I’d expect to find myself severely surprised and disappointed by the lottery draw each time.
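(A rough way to put that in symbols, in my own purely illustrative notation: let $b$ range over future branches, $\mu(b)$ be their quantum measure, and $u(b)$ my current utility over them. The imagined rewrite just multiplies in a caring factor:

$$U_{\text{new}} = \sum_b \mu(b)\, c(b)\, u(b), \qquad c(b) = \begin{cases} w & \text{if } b \text{ wins the lottery},\\ 1 & \text{otherwise}, \end{cases} \qquad w \gg 1.$$

Since $c$ changes only what I care about, not the measure $\mu$ I should anticipate with, the probability of finding myself in a winning branch stays $\sum_{b \,\text{wins}} \mu(b)$, which is why the reweighted me still predicts disappointment at almost every draw.)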
Am I making sense in your framework, or misunderstanding it?
I am still puzzled about how preference corresponds to the physical state of the brain. Is preference only partially represented in our universe (i.e., in the intersection of the set of universes corresponding to your subjective experience and the set corresponding to mine)?