I’ll spell out a concrete toy version:
There are two people in the world: Alice and Bob. They have unequal levels of anthropic measure/reality fluid (one has 95%, the other 5%). You are Alice. You can steal Bob’s pie. Should you?
Behind the veil of ignorance, it’s good to transfer utility from the person with less reality fluid to the person with more reality fluid. But who’s the one with more reality fluid, Alice or Bob? It’s probably Alice! How do you know? Because you’re Alice! Steal that pie, Alice!
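To make the arithmetic behind that argument explicit, here is a minimal sketch. The measure-weighted total, the pie’s utility value, and the deadweight loss are all illustrative assumptions of mine, not anything stipulated in the scenario:

```python
# Toy measure-weighted utility calculation for the pie-stealing scenario.
# Assumptions (mine): utilities are additive, weighted by reality fluid,
# and stealing moves the pie from Bob to Alice with a small deadweight loss.

measures = {"Alice": 0.95, "Bob": 0.05}   # stipulated reality fluid
PIE_UTILITY = 1.0                          # value of the pie to whoever eats it
DEADWEIGHT = 0.1                           # cost of the theft itself (illustrative)

def weighted_total(utilities: dict) -> float:
    """Sum each person's utility, weighted by their anthropic measure."""
    return sum(measures[p] * u for p, u in utilities.items())

# Baseline: Bob keeps his pie.
dont_steal = weighted_total({"Alice": 0.0, "Bob": PIE_UTILITY})

# Alice steals: she gains the pie minus the deadweight loss, Bob loses it.
steal = weighted_total({"Alice": PIE_UTILITY - DEADWEIGHT, "Bob": 0.0})

print(f"don't steal: {dont_steal:.3f}")   # 0.050
print(f"steal:       {steal:.3f}")        # 0.855
```

Under measure-weighted totals, even a lossy transfer toward the higher-measure person comes out ahead; the egotistical step is only the inference that the higher-measure person is probably you.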
I’m reminded of the egotistical Copernican principle: “I’m a typical observer in the universe” is equivalent to “Typically, observers in the universe are like me”.
I think this is weirder than most anthropics. Different levels of reality fluid in non-interacting worlds? Great. But if Alice and Bob are having a conversation, or Alice is stealing Bob’s pie, they’re both part of a joint, interactive computation. It’s a little weird for one part of a joint computation to have a different amount of anthropic measure than another part of the same computation.[1]
Like, we can stipulate arguendo that it’s anthropically valid for Elon Musk to think “I’m Elon Musk. Much of the lightcone will depend on me. The matrix overlords will simulate me, Elon Musk, a thousand times more, and make me a thousand times more real, than any of the plebs I talk to”. But it does not directly follow, I don’t think, that in any particular interaction Elon is realer than the pleb he is talking to. The matrix overlords just simulate Elon Musk talking to a thousand different possible-world plebs and stealing their pie a thousand times.
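A toy count of that last point, under my own assumption that measure is proportional to how many simulated instances there are of a person:

```python
# Toy count for the "thousand simulations" point. Assumption (mine): measure is
# proportional to instance count, and each simulation contains exactly one
# Elon-instance and one pleb-instance.

N_SIMS = 1000

elon_instances = N_SIMS    # the same Elon appears in every simulation
pleb_i_instances = 1       # any particular pleb appears in only one simulation

# Aggregated across all simulations, Elon dwarfs any single pleb in measure...
print(elon_instances / pleb_i_instances)   # 1000.0

# ...but within any one simulated interaction, both parties are instantiated
# exactly once, so neither is "realer" than the other in that conversation.
print(1 / 1)                               # 1.0
```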
For this argument for egotism to work, I think you have to expect that, anthropically, you are often computed in a different way than the people you interact with are.
I mean, it would be weird for Alice and Bob to have different measures if they have the same apparent biology. But I can totally imagine human Alice talking to a reversible-computer LLM that has no anthropic measure.