in real life no intelligent being … can convert themselves into a rock
if they become a rock … the other players will not know it
Refusing in the ultimatum game punishes the prior decision to be unfair, not what remains after the decision is made. It doesn't matter whether what remains is capable of making further decisions; the negotiations backed by the ability to refuse an unfair offer are not with it, but with the prior decision maker that created it.
If you convert yourself into a rock (or a utility monster), it's the decision to convert yourself that is the opponent of the refusal to accept the rock's offer; the rock is not the refusal's opponent, even though the refusal is performed against a literal rock. Predictions about the other players turn anti-inductive when they get exploited: exploiting a prediction about behavior too much makes it increasingly incorrect, since the behavior adapts once the exploitation starts to show up in the prior. If most rocks that enter the ultimatum game are the remains of former unfair decision makers with their origins perfectly concealed (a ploy to make the other player suspect nothing and so refuse nothing), then this general fact makes the other player suspect all rocks and punish their possible origins, destroying the premise of not-knowing that the strategy of turning yourself into a rock needs in order to shield the prior unfair decision makers from negotiation.
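A minimal sketch of this anti-inductive dynamic, assuming responders refuse a rock's ultimatum with probability equal to their posterior that the rock is a disguised agent (all the numbers here are hypothetical, chosen only to illustrate the shape of the argument):

```python
# Illustrative sketch (hypothetical numbers): expected payoff of the
# "turn yourself into a rock demanding 90%" strategy, as a function of
# how many agents adopt it. Responders can't distinguish disguised
# rocks from natural rocks, so their posterior that any given rock is
# a disguised unfair agent rises with the adoption rate, and they
# refuse (punishing the rock's possible origin) with that probability.

UNFAIR_DEMAND = 0.9   # rock's share if the ultimatum is accepted
FAIR_SPLIT    = 0.5   # payoff from just offering the Schelling-fair split
NATURAL_ROCKS = 0.01  # base rate of rocks that were never agents

def rock_payoff(adoption_rate: float) -> float:
    """Expected payoff of the rock strategy given how common it is."""
    # Responder's posterior that a rock is a disguised unfair agent:
    p_disguised = adoption_rate / (adoption_rate + NATURAL_ROCKS)
    # Refusal leaves the rock with nothing:
    p_accept = 1.0 - p_disguised
    return p_accept * UNFAIR_DEMAND

for rate in (0.0, 0.001, 0.01, 0.1):
    print(f"adoption {rate:5.3f}: rock gets {rock_payoff(rate):.3f}"
          f" vs fair split {FAIR_SPLIT}")
```

The strategy beats the fair split only while almost nobody uses it; once exploitation shows up in the prior, the rock's expected take collapses below the fair split.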
After thinking about it more, it's possible that your model of why Commitment Races resolve fairly is more correct than mine, although I'm less certain that they do resolve fairly.
My model’s flaw
My model is that acausal influence does not happen until one side deliberately simulates the other and sees their commitment. Therefore it is advantageous for both sides to commit up to, but not exceeding, some Schelling point of fairness before simulating the other, so that the first acausal message maximizes their payoff without triggering a mutual disaster.
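To make the payoff logic concrete, here is a minimal sketch of the commit-then-simulate game under my model's assumptions (a pie of size 1, a disaster payoff of 0, and a fair Schelling point at 0.5 are all stipulated for illustration):

```python
# Minimal sketch of the commit-then-simulate game: each side
# irrevocably commits to a demand (share of a pie of size 1) *before*
# any acausal contact, then the commitments meet via simulation.
# Incompatible demands (summing past 1) trigger mutual disaster.

def settle(demand_a: float, demand_b: float) -> tuple[float, float]:
    """Payoffs once both commitments are revealed via simulation."""
    if demand_a + demand_b > 1.0:
        return (0.0, 0.0)       # mutual disaster: commitments clash
    return (demand_a, demand_b)  # compatible: each gets its demand

# Against an opponent reasoning the same way, demanding exactly the
# Schelling-fair 0.5 is the largest demand that can never clash:
print(settle(0.5, 0.5))  # (0.5, 0.5)  both at the Schelling point
print(settle(0.6, 0.5))  # (0.0, 0.0)  overreaching triggers disaster
print(settle(0.4, 0.5))  # (0.4, 0.5)  under-demanding just leaves value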
I think one possibly fatal flaw of my model is that it doesn't explain why one side shouldn't add the exception "but if the other side became a rock with an ultimatum, I'll still yield to them, conditional on their having become a rock with an ultimatum before learning that I would add this exception (by simulating me or receiving acausal influence from me)."
According to my model, adding this exception improves one's encounters with rocks with ultimatums (by yielding to them) and does not increase the rate of encountering such rocks, at least in the first round of acausal negotiation, which may be the only round, since the exception explicitly rules out yielding to agents affected by whether you make the exception.
This means that in my model, becoming a rock with an ultimatum may still be the winning strategy, conditional on the agent who becomes the rock not knowing that it is the winning strategy, and the Commitment Race problem may reemerge.
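A minimal sketch of why the exception looks free under my model, assuming the frequency of rocks is held fixed by the exception's conditional (hypothetical numbers throughout):

```python
# Sketch of the flaw: the exception only yields to rocks created
# *before* any acausal influence from me, so my model treats the rock
# frequency as independent of whether I adopt the exception.

ROCK_FREQ   = 0.01  # frequency of rocks with ultimatums (held fixed)
ROCK_DEMAND = 0.9   # the rock's take if I yield
FAIR_SPLIT  = 0.5   # Schelling-fair split with ordinary agents

def my_payoff(yield_to_rocks: bool) -> float:
    """Expected payoff with or without the exception."""
    vs_rock = (1 - ROCK_DEMAND) if yield_to_rocks else 0.0
    return ROCK_FREQ * vs_rock + (1 - ROCK_FREQ) * FAIR_SPLIT

print(my_payoff(True), my_payoff(False))  # 0.496 vs 0.495
```

With ROCK_FREQ held fixed, yielding is strictly better, which is exactly the problem: if everyone reasons this way, becoming a rock without knowing it wins is rewarded, so the rock frequency cannot actually be held fixed.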
Your model
My guess at your model is that acausal influence happens a lot, such that refusing in the ultimatum game can successfully punish the prior decision to be unfair (i.e. reduce the frequency of prior decisions to be unfair).
For your refusal to influence their frequency of being unfair, it has to have some kind of acausal influence on them, even if they are much simpler minds than you (and can't simulate you).
At first this seemed impossible to me, but after thinking about it more: even if you are a more complex mind than the other player, your decision-making may be built out of simpler algorithms, some of which they can imagine and thereby be influenced by.