My suspicion is that this just corresponds to some particular rule for normalizing preferences over strategies. The “amount of power” given to each faction is capped, so that even if some faction has an extreme opinion about one issue it can only express itself by being more and more willing to trade other things to get it.
If goodness numbers are normalized, and some moral theory wants to express a large relative preference for one thing over another, it can’t just crank up the number on the thing it likes—it must flatten the contrast of things it cares less about in order to express a more extreme preference for one thing.
I propose to work through a simple example to check whether it aligns with the methods which normalise preferences and sum even in a simple case.
Setup:
Theory I, with credence p, and Theory II, with credence 1-p.
We will face a decision either between A and B (with probability 50%), or between C and D (with probability 50%).
Theory I prefers A to B and prefers C to D, but cares twice as much about the difference between A and B as that between C and D.
Theory II prefers B to A and prefers D to C, but cares twice as much about the difference between D and C as that between B and A.
Questions: What will the bargaining outcome be? What will normalisation procedures do?
Normalisation procedures: if they are ‘structural’ (not caring about details like the names of the theories or outcomes), then the two theories are symmetric, so they must be normalised in the same way. WLOG, as follows:
T1(A) = 2, T1(B) = 0, T1(C) = 1, T1(D) = 0
T2(A) = 0, T2(B) = 1, T2(C) = 0, T2(D) = 2
Then letting q = (1-p) the aggregate preferences T are given by:
T(A) = 2p, T(B) = q, T(C) = p, T(D) = 2q
So:
if p > 2⁄3, the aggregate chooses A and C
if 1⁄3 < p < 2⁄3, the aggregate chooses A and D
if p < 1⁄3, the aggregate chooses B and D
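The three regimes are easy to check numerically. A minimal sketch (the utility tables are taken directly from the setup above; the function and sample values of p are mine):

```python
# Normalised utilities from the setup above.
T1 = {"A": 2, "B": 0, "C": 1, "D": 0}
T2 = {"A": 0, "B": 1, "C": 0, "D": 2}

def aggregate_choice(p):
    """Credence-weighted aggregate T(X) = p*T1(X) + (1-p)*T2(X),
    then pick the winner of each pairwise decision."""
    T = {x: p * T1[x] + (1 - p) * T2[x] for x in "ABCD"}
    first = "A" if T["A"] > T["B"] else "B"
    second = "C" if T["C"] > T["D"] else "D"
    return first, second

# One sample point in each regime: p > 2/3, 1/3 < p < 2/3, p < 1/3.
for p in (0.8, 0.5, 0.2):
    print(p, aggregate_choice(p))  # (A, C), (A, D), (B, D)
```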
The advantage of this simple set-up is that I didn’t have to make any assumptions about the normalisation procedure beyond that it is structural. If the bargaining outcome agrees with this we may need to look at more complicated cases; if it disagrees we have discovered something already.
For the bargaining outcome, I’ll assume we’re looking for a Nash Bargaining Solution (as suggested in another comment thread).
The defection point has expected utility 3p/2 for Theory I and expected utility 3q/2 for Theory II (using the same notation as I did in this comment).
I don’t see immediately how to calculate the NBS from this.
Assume p = 2⁄3.
Then Theory I has expected utility 1, and Theory II has expected utility 1⁄2.
Assume (x,y) is the solution point, where x represents the probability of voting for A (over B), and y represents the probability of voting for C (over D). I claim without proof that the NBS has x = 1; it seems hard for this not to be the case, but it would be good to check carefully.
Then the utility of Theory I at the point (1, y) is 1 + y/2, and the utility of Theory II is 1 - y. To maximise the product of the gains over the defection point we want to maximise (y/2)*(1⁄2 - y), or equivalently (doubling) y/2 - y^2. Setting the derivative 1⁄2 - 2y to zero gives y = 1⁄4.
Note that the normalisation procedure leads to being on the fence between C and D at p = 2⁄3.
If I’m correct in my ad-hoc calculation of the NBS at p = 2⁄3, then this is firmly in the territory which prefers D to C. Therefore the parliamentary model gives solutions that differ from those of any structural normalisation procedure.
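The knife-edge claim can be checked exactly with rational arithmetic (a one-line sketch, using the aggregate weights T(C) = p and T(D) = 2(1-p) from above):

```python
from fractions import Fraction

# At p = 2/3 the aggregate weights on C and D coincide exactly,
# so any structural normalisation is on the fence between them.
p = Fraction(2, 3)
agg_C = p * 1        # p * T1(C)
agg_D = (1 - p) * 2  # (1-p) * T2(D)
print(agg_C == agg_D)  # True
```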
Yes, assuming that the delegates always take any available Pareto improvements, it should work out to that [edit: nevermind; I didn’t notice that owencb already showed that that is false]. That doesn’t necessarily make the parliamentary model useless, though. Finding nice ways to normalize preferences is not easy, and if we end up deriving some such normalization rule with desirable properties from the parliamentary model, I would consider that a success.
Harsanyi’s theorem will tell us that it will after the fact be equivalent to some normalisation—but the way you normalise preferences may vary with the set of preferences in the parliament (and the credences they have). And from a calculation elsewhere in this comment thread I think it will have to vary with these things.
I don’t know if such a thing is still best thought of as a ‘rule for normalising preferences’. It still seems interesting to me.
Yes, that sounds right. Harsanyi’s theorem was what I was thinking of when I made the claim, and then I got confused for a while when I saw your counterexample.
This actually sounds plausible to me, but I’m not sure how to work it out formally. It might make for a surprising and interesting result.
I think there’s already been a Stuart Armstrong post containing the essential ideas, but I can’t find it. So asking him might be a good start.