I think this is weirder than most anthropics. Different levels of reality fluid in non-interacting worlds? Great. But if Alice and Bob are having a conversation, or Alice is stealing Bob’s pie, they’re both part of a joint, interactive computation. It’s a little weird for one part of a joint computation to have a different amount of anthropic measure than another part of the same computation.[1]
Like we can stipulate arguendo that it’s anthropically valid for Elon Musk to think “I’m Elon Musk. Much of the lightcone will depend on me. The matrix overlords will simulate me, Elon Musk, thousands of times more, and make me a thousand times more real than any of the plebs I talk to”. But it does not directly follow, I don’t think, that in any particular interaction Elon is realer than the pleb he is talking to. The matrix overlords just simulate Elon Musk talking to a thousand different possible-world plebs and stealing their pie a thousand times.
For this argument for egotism to work, I think you have to expect that, anthropically, you are often computed in a different way than the people you interact with.
I mean, it would be weird for Alice and Bob to have different measures if they have the same apparent biology. I can totally imagine human Alice talking to a reversible-computer LLM that has no anthropic measure.