Positional (e.g. status) competition isn’t literally zero-sum if different people have different “measures” or different amounts of “magical reality fluid” (in Eliezer’s parlance), which seems pretty plausible (e.g. due to simulations and/or something like UDASSA). This can be another reason for one’s moral parliament to endorse behavior that would be conventionally viewed as zero-sum.
(The more common-sensical reason being that one’s positional drives/motivations/values ought to have some representatives in the moral parliament who get their way sometimes, i.e., when it doesn’t hurt the other values much.)
I’ve been meaning to make a post about this, and this discussion just reminded me to. Hopefully it’s immediately obvious once pointed out, so I’ll keep this short.
(After consulting a bunch of AIs, I found their consensus to be that the logic works, but that I may be way underestimating the inferential distance. So I’m giving an AI-written expansion below. Workflow: give the same prompt “can you try to write a longer, but still concise version, that explains the logic more?” to 3 SOTA chatbots, then pick the best response, which was from claude-opus-4-6-thinking, by far. The other two mangled some of the concepts pretty badly.)
Let me unpack the logic, because the post is compressed to the point of obscurity:
Step 1: The standard view. Status competition is considered zero-sum. If you and I compete for a promotion, my gain in status/resources roughly equals your loss. Total welfare stays flat. Many people therefore view status-seeking as morally dubious — you’re not creating value, just redistributing it.
Step 2: Introduce “measure.” Some theories in physics and philosophy suggest not all observers carry equal metaphysical weight. Under simulation hypotheses, some beings might run on more computational substrate than others. Under UDASSA (a framework for assigning probabilities to observer-moments), different observers get different “measure” based on algorithmic complexity. “Measure” here means something like: how much does this person’s experience count in the moral ledger of the universe?
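To make “measure” slightly more concrete, here is a cartoon of the UDASSA-style weighting (a sketch, not the formal theory; the function name and numbers are mine): observer-moments that take fewer bits to specify get exponentially more weight.

```python
# Cartoon of UDASSA-style weighting (illustrative only, not the formal definition).
# Roughly: an observer-moment specified by a shorter program gets exponentially more measure.

def udassa_weight(description_length_bits: float) -> float:
    """Weight ~ 2^-K, where K is the length in bits of the shortest
    program/description that picks out this observer-moment."""
    return 2.0 ** -description_length_bits

# An observer who is 10 bits "easier to locate" carries ~1024x the measure.
print(udassa_weight(100) / udassa_weight(110))  # ~1024.0
```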
Step 3: The punchline. If person A has measure 2 and person B has measure 1, and they’re in a zero-sum status competition where the winner gains +10 utility and the loser gets −10, then:
If A wins: weighted outcome = 2(+10) + 1(−10) = +10
If B wins: weighted outcome = 2(−10) + 1(+10) = −10
The “zero-sum” game now has a clear winner from a cosmic utility perspective: A’s winning is actually positive-sum once you weight by measure. So if you suspect you have high measure, your moral parliament — the internal coalition of values that guides your decisions — might reasonably let your status-seeking impulses win more often, because those competitions aren’t really zero-sum after all.
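To see the accounting in one place, a minimal sketch (illustrative names; the numbers are the measure-2-vs-1, ±10 example above):

```python
# Minimal sketch of the measure-weighted accounting above (illustrative names and numbers).

def weighted_outcome(winner_measure: float, loser_measure: float, stake: float = 10.0) -> float:
    """Measure-weighted total welfare when the winner gains +stake and the loser takes -stake."""
    return winner_measure * stake + loser_measure * (-stake)

print(weighted_outcome(winner_measure=2, loser_measure=1))  # +10.0: A (measure 2) wins
print(weighted_outcome(winner_measure=1, loser_measure=2))  # -10.0: B (measure 1) wins
```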
I’ll spell out a concrete toy version:
There are two people in the world: Alice and Bob. They have unequal levels of anthropic measure/reality fluid (one 95%, one 5%). You are Alice. You can steal Bob’s pie. Should you?
Behind the veil of ignorance it’s good to transfer utility from the person with less reality fluid to the person with more reality fluid. But who’s the one with more reality fluid, Alice or Bob? It’s probably Alice! How do you know? Because you’re Alice! Steal that pie, Alice!
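Spelling out the expected-value version of that reasoning (a toy sketch with made-up utilities; the anthropic update below, that finding yourself as Alice makes Alice 95% likely to be the high-measure one, is the step doing all the work):

```python
# Toy expected-value version of "steal that pie, Alice" (all numbers illustrative).
# Assumed anthropic update: P(Alice is the high-measure one | I am Alice) = 0.95.

HIGH, LOW = 0.95, 0.05   # the two measure levels
GAIN = 1.0               # Alice's utility gain from the pie; Bob loses the same

def weighted_gain(alice_measure: float, bob_measure: float) -> float:
    """Measure-weighted welfare change if Alice steals Bob's pie."""
    return alice_measure * GAIN + bob_measure * (-GAIN)

p_alice_is_high = 0.95   # the anthropic update in question
expected = (p_alice_is_high * weighted_gain(HIGH, LOW)
            + (1 - p_alice_is_high) * weighted_gain(LOW, HIGH))
print(expected)  # 0.81 > 0, so "steal" looks good under these assumptions
```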
I’m reminded of the egotistical Copernican principle: “I’m a typical observer in the universe” is equivalent to “Typically, observers in the universe are like me”.
I think this is weirder than most anthropics. Different levels of reality fluid in non-interacting worlds? Great. But if Alice and Bob are having a conversation, or Alice is stealing Bob’s pie, they’re both part of a joint, interactive computation. It’s a little weird for one part of a joint computation to have a different amount of anthropic measure than another part of the same computation.[1]
Like, we can stipulate arguendo that it’s anthropically valid for Elon Musk to think “I’m Elon Musk. Much of the lightcone will depend on me. The matrix overlords will simulate me, Elon Musk, thousands of times more, and make me a thousand times more real than any of the plebs I talk to”. But it does not directly follow, I don’t think, that in any particular interaction Elon is realer than the pleb he is talking to. The matrix overlords just simulate Elon Musk talking to a thousand different possible-world plebs and stealing their pie a thousand times.
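To put toy numbers on that (a sketch, assuming the “thousand simulations, one distinct pleb each” picture): Elon’s measure summed across worlds can be a thousand times any one pleb’s, while the ratio inside any single interaction is still 1:1.

```python
# Toy sketch of the "thousand different possible-world plebs" point.
# 1000 simulated worlds; each contains one copy of Elon and one distinct pleb.

N_WORLDS = 1000
worlds = [{"elon": 1.0, "pleb": 1.0} for _ in range(N_WORLDS)]  # per-world measure

total_elon = sum(w["elon"] for w in worlds)   # 1000.0: Elon summed across all worlds
total_one_pleb = worlds[0]["pleb"]            # 1.0: any single pleb exists in just one world

print(total_elon / total_one_pleb)            # 1000x across worlds...
print(worlds[0]["elon"] / worlds[0]["pleb"])  # ...but 1x inside any one interaction
```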
For this argument for egotism to work, I think you have to expect that, anthropically, you are often computed in a different way than the people you interact with are.
I mean, it would be weird for Alice and Bob to have different measures if they have the same apparent biology. I can totally imagine human Alice talking to a reversible-computer LLM that has no anthropic measure.
While I do fully support and experience differential individual weighting in my utility, I’m not sure I understand what would justify the idea of “cosmic utility”. I don’t believe there is any shared universal (or cross-universal) experience that really corresponds to a valuation or valence. Utility/preference is individual, all the way down (and all the way up).
I think there IS a different asymmetry that can make status (and most interactions that appear zero-sum in resources) not actually zero-sum for the participants: the mapping from shared/objective world-state to individually perceived status-value. If participants are weighting slightly different dimensions of what increases or reduces their status, then many changes can increase A’s (perceived) status more than they decrease B’s. I think this is the standard “private utility function” problem very often mentioned in decision theory. You don’t focus on this in your post, but I think it’s the stronger model.
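A toy sketch of what I mean (dimensions and weights entirely made up): if A and B score status along different dimensions of the same world-state, one change can raise A’s perceived status far more than it lowers B’s.

```python
# Sketch of the "private status function" asymmetry (made-up dimensions and weights).
# A and B read their status off different dimensions of the same shared world-state.

before = {"A_title": 1.0, "A_craft": 2.0, "B_craft": 3.0}
after  = {"A_title": 2.0, "A_craft": 2.0, "B_craft": 3.0}   # A gets the promotion

def status_A(w):  # A mostly cares about formal titles
    return 1.0 * w["A_title"] + 0.1 * w["A_craft"]

def status_B(w):  # B mostly cares about craft reputation, with a small relative-rank term
    return 1.0 * w["B_craft"] - 0.1 * w["A_title"]

delta_A = status_A(after) - status_A(before)   # +1.0 in A's own currency
delta_B = status_B(after) - status_B(before)   # -0.1 in B's own currency
print(delta_A + delta_B)  # +0.9: positive-sum in perceived status despite a "zero-sum" promotion
```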