# Gains from trade: Slug versus Galaxy—how much would I give up to control you?

Edit: Moved to main at ThrustVectoring’s suggestion.

*A suggestion as to how to split the gains from trade in some situations.*

## The problem of Power

A year or so ago, people at the FHI embarked on a grand project: to try to find out if there was a single way of resolving negotiations, or a single way of merging competing moral theories. This project made a lot of progress in finding out how hard this was, but very little in terms of solving it. It seemed evident that the correct solution was to weight the different utility functions, and then have everyone maximise the weighted sum, but all ways of weighting had their problems (the weighting with the best properties was a very silly one: use the “min-max” weighting that sets your maximal attainable utility to 1 and your minimal to 0).

One thing that we didn’t get close to addressing is the concept of power. If two partners in the negotiation have very different levels of power, then abstractly comparing their utilities seems the wrong solution (more to the point: it wouldn’t be accepted by the powerful party).

The New Republic spans the Galaxy, with Jedi knights, battle fleets, armies, general coolness, and the manufacturing and human resources of countless systems at its command. The dull slug, ARthUrpHilIpDenu, moves very slowly around a plant, and possibly owns one leaf (or not—he can’t produce the paperwork). Both these entities have preferences, but if they meet up, and their utilities are normalised abstractly, then ARthUrpHilIpDenu’s preferences will weigh in far too much: a sizeable fraction of the galaxy’s production will go towards satisfying the slug. Even if you think this is “fair”, consider that the New Republic is the merging of countless individual preferences, so it doesn’t make any sense that the two utilities get weighted equally.

## The default point

After looking at various blackmail situations, it seems to me that it’s the concept of default, or status quo, that most clearly differentiates between a threat and an offer. I wouldn’t want you to make a credible threat, because this worsens the status quo; I would want you to make a credible offer, because this improves it. How this default is established is another matter—there may be some super-UDT approach that solves it from first principles. Maybe there is some deep way of distinguishing between threats and promises in some other way, and the default is simply the point between them.

In any case, without going any further into its meaning or derivation, I’m going to assume that the problem we’re working on has a definitive default/disagreement/threat point. I’ll use the default point terminology, as that is closest to the concept I’m considering.

Simple trade problems often have a very clear default point. These are my goods, those are your goods, the default is we go home with what we started with. This is what I could build, that’s what you could build, the default is that we both build purely for ourselves.

If we imagine ARthUrpHilIpDenu and the New Republic were at opposite ends of a regulated wormhole, and they could only trade in safe and simple goods, then we’ve got a pretty clear default point.

Having a default point opens up a whole host of new bargaining equilibria, such as the Nash Bargaining Solution (NBS) and the Kalai-Smorodinsky Bargaining Solution (KSBS). But neither of these is really quite what we’d want: the KSBS is all about fairness (which generally reduces expected outcomes), while the NBS uses a *product* of utility values, something that makes no intrinsic sense at all (the NBS has some nice properties, like independence of irrelevant alternatives, but this only matters if the default point is reached through a process that has the same properties—and it can’t be).

## What am I *really* offering you in trade?

When two agents meet, especially if they are likely to meet more in the future (and most especially if they don’t know the number of times and the circumstances in which they will meet), they should merge their utility functions: fix a common scale for their utility functions, add them together, and then both proceed to maximise the sum.

This explains what’s really being offered in a trade. Not a few widgets or stars, but the possibility of copying your utility function into mine. But why would you want that? Because that will change my decisions, in a direction you find more pleasing. So what I’m actually offering you is access to my decision points.

*What is actually on offer in a trade, is access by one player’s utility function to the other player’s decision points.*

This gives a novel way of normalising utility functions. How much, precisely, is access to my decision points worth to you? If the default point gives a natural zero, then complete control over the other player’s decision points is a natural one. “Power” is a nebulous concept, and different players may disagree as to how much power they each have. But power can only be articulated through making decisions (if you can’t change any of your decisions, you have no power), and this normalisation allows each player to specify exactly how much they value the power/decision points of the other. Outcomes that involve one player controlling the other player’s decision points can be designated the “utopia” point for that first player. These are what would happen if everything went exactly according to what they wanted.
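As a toy sketch of this normalisation (all the numbers below are invented for illustration, and collapsing everything to three coarse outcomes is a deliberate oversimplification):

```python
def normalise(u, default, utopia):
    """Affinely rescale a utility dict so u[default] = 0, u[utopia] = 1."""
    scale = u[utopia] - u[default]
    return {o: (v - u[default]) / scale for o, v in u.items()}

# Invented raw utilities over three coarse outcomes: the status quo, the
# Republic's utopia (it controls the slug's decisions, gaining the leaf),
# and the slug's utopia (it controls the Republic's decisions).
republic = {"default": 1e9, "republic_utopia": 1e9 + 1, "slug_utopia": 0.0}
slug = {"default": 1.0, "republic_utopia": 0.0, "slug_utopia": 1e9}

rep_n = normalise(republic, "default", "republic_utopia")
slug_n = normalise(slug, "default", "slug_utopia")

# Maximise the sum of the normalised utilities.
joint = {o: rep_n[o] + slug_n[o] for o in rep_n}
print(max(joint, key=joint.get))  # republic_utopia
```

The Republic’s tiny default-to-utopia gap gets scaled up to one, so its preferences dominate the sum.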

What does this mean for ARthUrpHilIpDenu and the New Republic? Well, the New Republic stands to gain a leaf (maybe). From its perspective, the difference between default (all the resources of the galaxy and no leaf) and utopia (all the resources of the galaxy plus one leaf) is tiny. And yet that tiny difference will get normalised to one: the New Republic’s utility function will get multiplied by a huge amount. It will weigh heavily in any sum.

What about ARthUrpHilIpDenu? It stands to gain the resources of a galaxy. The difference between default (a leaf) and utopia (all the resources of a galaxy dedicated to making leaves) is unimaginably humongous. And yet that huge difference will get normalised to one: ARthUrpHilIpDenu’s utility function will get divided by a huge amount. It will weigh very little in any sum.

Thus if we add the two normalised utility functions, we get one that is nearly totally dominated by the New Republic. Which is what we’d expect, given the power differential between the two. So this bargaining system reflects the relative power of the players. Another way of thinking of this is that each player’s utility is normalised taking into account how much they would give up to control the other. I’m calling it the “Mutual Worth Bargaining Solution” (MWBS), as it’s the worth to each player of the other player’s decision points that is key. Also because I couldn’t think of a better title.

## Properties of the Mutual Worth Bargaining Solution

How does the MWBS compare with the NBS and the KSBS? The NBS is quite different, because it has no concept of relative power, normalising purely by the players’ preferences. Indeed, one player could have no control at all, no decision points, and the NBS would still be unchanged.

The KSBS is more similar to the MWBS: the utopia points of the KSBS are the same as those of the MWBS. If we set the default point to (0,0) and the utopia points to (1,-) and (-,1), then the KSBS is given by the highest h such that (h,h) is a possible outcome. Whereas the MWBS is given by the outcome (x,y) such that x+y is highest possible.
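On a finite set of feasible joint-utility points, the two rules are easy to compare; a quick sketch (the points are made up, and taking the best available (h,h) on a discrete set is a simplification of the usual continuous KSBS definition):

```python
# Default is (0, 0); each player's utopia has been normalised to 1.
feasible = [(0.0, 0.0), (0.9, 0.3), (0.2, 0.9), (0.5, 0.5), (1.0, -0.2)]

# MWBS: the feasible point maximising x + y.
mwbs = max(feasible, key=lambda p: p[0] + p[1])

# KSBS (discrete simplification): the highest h with (h, h) feasible.
ksbs = max((p for p in feasible if p[0] == p[1]), key=lambda p: p[0])

print(mwbs)  # (0.9, 0.3), summed utility 1.2
print(ksbs)  # (0.5, 0.5), summed utility 1.0
```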

Which is preferable? Obviously, if they knew exactly what the outcomes and utilities were on offer, then each player would have preferences as to which system to use (the one that gives them more). But if they didn’t, if they had uncertainties as to what players and what preferences they would face in the future, then MWBS generally comes out on top (in expectation).

How so? Well, if a player doesn’t know what other players they’ll meet, they don’t know in what way their decision points will be relevant to the other, and vice versa. They don’t know what pieces of their utility will be relevant to the other, and vice versa. So they can expect to face a wide variety of normalised situations. To a first approximation, it isn’t too bad an idea to assume that one is equally likely to face a given situation as its symmetric complement. Using the KSBS, you’d expect to get a utility of h (the same in both cases); under the MWBS, a utility of (x+y)/2 (x in one case, y in the other). Since x+y ≥ h+h = 2h by the definition of the MWBS, it comes out ahead in expectation.

Another important distinction is that while the KSBS and the NBS only allow Pareto improvements from the default point, the MWBS does allow for some situations where one player will lose from the deal. It is possible, for instance, that (1/2, −1/4) is a possible outcome (summed utility 1/4), and there are no better options possible. Doesn’t this go against the spirit of the default point? Why would someone go into a deal that leaves them poorer than before?

First of all, that situation will be rare. All MWBS outcomes must be in the triangle bounded by x<1, y<1 and x+y>0. The first two bounds are definitional: one cannot get more expected utility than one’s utopia point. The last bound comes from the fact that the default point is itself an option, with summed utility 0+0=0, so all summed utilities must be above zero. Sprinkle a few random outcome points into that triangle, and it is very likely that the one with the highest summed utility will be a Pareto improvement over (0,0).
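That likelihood can be checked with a quick Monte Carlo sketch (the ten outcome points per negotiation and the uniform sampling are arbitrary modelling assumptions):

```python
import random

# Sample outcome points from the triangle x < 1, y < 1, x + y > 0, and
# check how often the max-sum (MWBS) point is a Pareto improvement
# over the default (0, 0).

def sample_point(rng):
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x + y > 0:  # x < 1 and y < 1 hold automatically here
            return (x, y)

rng = random.Random(0)
trials, pareto = 2000, 0
for _ in range(trials):
    points = [sample_point(rng) for _ in range(10)]
    best = max(points, key=lambda p: p[0] + p[1])
    if best[0] >= 0 and best[1] >= 0:
        pareto += 1

print(pareto / trials)  # typically well above 0.9
```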

But the other reason to accept the risk of losing is the opportunity of gain. One could modify the MWBS to only allow Pareto improvements over the default: but in expectation, this would perform worse. The player would be immune from losing 1/4 utility from (1/2, −1/4), but unable to gain 1/2 from (−1/4, 1/2): the argument is the same as above. In ignorance as to the other player’s preferences, accepting the possibility of loss improves the expected outcome.

It should be noted that the maximum that a player could theoretically lose by using the MWBS is equal to the maximum they could theoretically win. So the New Republic could lose at most a leaf, meaning that even powerful players would not be reluctant to trade. For less powerful players, the potential losses are higher, but so are the potential rewards.

## Directions of research

The MWBS is somewhat underdeveloped, and the explanation here isn’t as clear as I’d have liked. However, Miriam and I are about to have a baby, so I’m not expecting to have any time at all soon, and I’m pushing out the idea unpolished.

Some possible routes for further research: what are the other properties of the MWBS? Are they properties that make the MWBS feel more or less likely or acceptable? The NBS is characterised by certain properties: what properties are necessary and sufficient for the MWBS (and can they suggest better bargaining solutions)? Can we replace the default point? Maybe we can get a zero by imagining what would happen if the second player’s decision nodes were under the control of an anti-agent (an agent that’s the opposite of the first player), or a randomly selected agent?

The most important research route is to analyse what happens if several players come together at different times, and repeatedly normalise their utilities using the MWBS: does the order in which they meet matter? I strongly feel that exploring this avenue is what will reach “the ultimate” bargaining solution, if such a thing is to be found. Some solution that is stable under large numbers of agents, who don’t know each other or how many they are, coming together in an order they can’t predict.


If I understand the proposal correctly, I think this bargaining solution can heavily favor someone with diminishing marginal utility vs someone with linear utility. For example suppose Alice and Bob each own 1 (infinitely divisible) unit of resource, and that’s the default point. Alice values all resources linearly, with utility function A(a,b)=a, where a is resources consumed by Alice, b is resources consumed by Bob. Bob’s utility function is B(a,b) = b if b<=1, else 1+(b-1)/9 if b<=1.9, else 1.1. Normalization causes Bob’s utility function to be multiplied by 10, and the bargaining solution ends up giving Bob 1.9 units of resources. Correct?
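A quick numerical check of this example (a sketch; the grid search over transfers is just a convenience):

```python
# Alice: A(a) = a (linear). Bob: piecewise-linear B(b), as above.
def B(b):
    if b <= 1:
        return b
    return 1 + (b - 1) / 9 if b <= 1.9 else 1.1

# Default: each owns 1 unit. Utopia: owning both units.
A_norm = lambda a: (a - 1) / (2 - 1)        # A(1) = 1, A(2) = 2
B_norm = lambda b: (B(b) - 1) / (1.1 - 1)   # B(1) = 1, B(2) = 1.1 -> x10

# Search over transfers t from Alice to Bob (negative t: Bob pays Alice).
best_t = max((t / 1000 for t in range(-1000, 1001)),
             key=lambda t: A_norm(1 - t) + B_norm(1 + t))
print(best_t)  # 0.9: Bob ends up with 1.9 units, as described
```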

Here’s another strange property of the bargaining solution. (Again I would appreciate confirmation that I’m understanding it correctly.) Suppose Alice and Carol both have linear utility, and Alice owns a tiny bit more resources than Carol. Then Alice’s utility function gets normalised to have a higher marginal utility than Carol’s, and all the resources go to Alice. If Carol instead brought a tiny bit more resources to the table, then all the resources would go to Carol.

I find it puzzling that the post is getting upvoted so much with so little discussion. Is anyone else looking at whether the solution produces outcomes that make intuitive sense?

It’s even worse. Suppose Alice and Carol are as you say, with Alice having x more resources than Carol. Before the trade, Alice is given the chance to benefit them both; if she accepts the offer, she will gain y resources, and Carol will gain y+2x. This is a Pareto improvement, and prima facie it seems she should do so.

But alas! If she does so, Carol will have x more resources than Alice does—so all the resources will go to Carol! So Alice definitely does not want to accept the offer—even though accepting Omega’s offer could leave them both better off post-trade.

Ideally the default point would be set before Omega’s offer. After all, that is a decision point of Alice’s, that Carol would value being able to decide...

I’m certainly not looking at the solution; I have no idea how to work any of this maths, and am mostly skimming along with a bazillion other blogs anyway! I know I trust Lesswrong, so obviously if there’s a confident-sounding post here I assume it’s correct.

[/confession]

You are correct. But if both utilities are linear in resources, then the outcome space forms a line in the joint utility space. And when maximising utility versus a line, the generic outcome is to pick one end of the line or the other, never a midpoint.

This is a feature of any approach that maximises a summed utility: you (almost always) will pick the end point of any straight segment.

It is the fundamental theorem of linear programming.

To me that suggests looking at approaches (like the NBS) that don’t involve directly maximizing summed utility. The solution could still guarantee a Pareto optimal outcome, which means it ends up being maximal for some summed utility function, but you’re much more likely to end up in the middle of a line segment (which often seems to be the most intuitively acceptable outcome).

Why? If you can end up on either end of a long line segment, then you have a chance of winning a lot or losing a lot. But you shouldn’t be risk averse with your utility—risk aversion should already be included. So “towards the middle” is no better in expectation than “right end or left end”.

Maybe you’re thinking we shouldn’t be maximising expected utility? I’m actually quite sympathetic to that view...

And with complex real world valuations (eg anything with a diminishing marginal utility), then any Pareto line segments are likely to be short.

Nonlinear utility functions (as a function of resources) do not accurately model human risk aversion. That could imply that we should either change our (or their) risk aversion, or not be maximising expected utility.

Nonlinear jumps in utility from different amounts of a resource seem common for humans at least at some points in time. Example: Either I have enough to pay off the loan shark, or he’ll break my legs.

Yep. Humans are not expected utility maximisers. But there’s strong arguments that an AI would be...

If you have a line segment that crosses the quadrant of joint utility space that represents Pareto improvements over the status quo, then ending up on an end means one of the parties is made worse off. To generalize this observation, it’s hard to guarantee that no one is made worse off, unless the bargaining solution explicitly tries to do that. If you maximize summed utility, and your weights are not picked to ensure that the outcome is a Pareto improvement (which generally involves picking the Pareto improvement first and then working backwards to find the weights), then there will be situations where it makes one party worse off.

You talk about “in expectation” but I’m not sure that changes the picture. It seems like the same argument applies: you can’t guarantee that nobody is made worse off in expectation, unless you explicitly try to.

You can restrict the MWBS to only consider strict Pareto improvements over default, if you want. That’s another bargaining solution—call it PMWBS.

My (informal) argument was that in situations of uncertainty as to who you are facing, MWBS gives you a higher expected value than PMWBS (informally because you expect to gain more when the deal disadvantages your opponent, than you expect to lose when it disadvantages you). Since the expected value of PMWBS is evidently positive, that of MWBS must be too.

I think you may have to formalize this to figure out what you need to assume to make the argument work. Clearly MWBS doesn’t always give positive expected value in situations of uncertainty as to who you are facing. For example suppose Alice expects to face either Carol or Dave, with 50/50 probability. Carol has slightly more resources than Alice, and Dave has almost no resources. All three have linear utility. Under MWBS, Alice now has a 50% probability of losing everything and a 50% probability of gaining a small amount.

I was trying to assume maximum ignorance—maximum uncertainty as to who you might meet, and their abilities and values.

If you have a better idea as to what you face, then you can start shopping around bargaining solutions to get the one you want. And in your example, Alice would certainly prefer KSBS and NBS over PMWBS, which she would prefer over MWBS.

But if, for instance, Dave had slightly fewer resources than Alice, then it’s no longer true. And if any of them depart from (equal) linear utility in every single resource, then it’s likely no longer true either.

Indeed. That is discussed in the last part of the “properties” section.

I think the argument is that this possible outcome is acceptable to Alice because she expects an equal chance of encountering trade opportunities where she benefits from the bargain.

I see a similarity between this risk and the Newcomb problem, but I’m not sure what additional assumptions this brings into the theory. What knowledge of your trading partner’s decision mechanisms (source code) is necessary to commit to this agreement?

Seems correct. But high marginal utility in strategic places can work too. What if Alice had linear utility, up until 1.05, where she suddenly has utility 2? Then the split is Alice 1.05, Bob 0.95.
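A sketch checking this variant numerically (reusing Bob’s piecewise utility from the earlier example; treating Alice’s utility as flat at 2 beyond the jump is an assumption about behaviour the comment leaves unstated):

```python
# Alice: linear up to 1.05, then a sudden jump to utility 2.
def A(a):
    return a if a < 1.05 - 1e-9 else 2.0

# Bob: Wei_Dai's piecewise-linear utility.
def B(b):
    if b <= 1:
        return b
    return 1 + (b - 1) / 9 if b <= 1.9 else 1.1

# Normalise: default is owning 1 unit each, utopia is owning both units.
A_norm = lambda a: (A(a) - A(1)) / (A(2) - A(1))
B_norm = lambda b: (B(b) - B(1)) / (B(2) - B(1))

# Search over transfers t from Alice to Bob (negative t: Bob pays Alice).
best_t = max((t / 1000 for t in range(-1000, 1001)),
             key=lambda t: A_norm(1 - t) + B_norm(1 + t))
print(round(1 - best_t, 3), round(1 + best_t, 3))  # 1.05 0.95
```

Alice’s strategically placed jump wins her the small transfer, as the comment says.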

Diminishing marginal returns isn’t so much the issue, rather it’s the low utopia point (as a consequence of diminishing marginal returns).

I do not think that it is true that it is “very likely” that the solution will be net positive for both players. If players have a variety of marginal utilities from resources, it seems reasonable to expect that this will cause most ‘negotiations’ to result in pure redistribution, and there are many cases (such as Wei_Dai’s second example) where one can simply lose all their resources.

It also seems like a very bad assumption for agents to assume that they’ll be exposed to these situations symmetrically; most agents should be able to have a rough idea where they lie on the spectrum compared to their likely trading partners.

More than that, in a world where this was an enforced negotiating style, it seems that you have a dystopia where the best way to gain utility is do a combination of modifying your utility function such that you gain transfers of resources, and/or seeking out trading partners who will be forced to give you resources, and that such efforts will rapidly consume a growing share of the resources. That is certainly what happens when I game out a real world test, with Omega enforcing the rules!

There’s a followup that describes more problems.

In the triangle of possible outcomes, if any of the joint utility points lie in the region bounded by x+y>1 (which occupies a quarter of the space of possibilities), then a net loss for either player becomes impossible (and that’s a sufficient, not a necessary, condition).

But if you want, you can restrict to strict Pareto improvements over the default...

True, but points are decreasingly likely to be possible as they become more positive—it’s relatively easy to find trades that are bad ideas, or that have really bad distributions (especially assuming multiple resources, which is presumably why you trade in the first place). They’re also highly correlated: chances are either there are no such points available, or a lot of such points are available.

I’ve looked at it a number of ways, and in each case x+y>1 seems unlikely to exist in a given negotiation unless what is brought to the table is very narrowly defined and the gains from trade are very large relative to quantities traded.

What problem are we trying to solve? What are we trying to optimize?

(e.g.: What determines a better or worse outcome of the sum of these deals? Which agents have to agree to it, and what information are they assumed to have access to about the other agents? Which of that information do they have before agreeing to use this method in general? Is it imposed from on high? How much self-modification can agents make to achieve better deals? Etc, etc.)

I think this is worth putting on the main page, as opposed to discussion.

Not sure I’m understanding fully, but it sounds like this reasoning might fall prey to the two envelopes paradox.

The assumption about situations and their symmetric complements seems like it implies that you’re equally likely to trade with an agent with twice or with half as much power as yourself. In which case, drawing the conclusion that MWBS comes out ahead in expectation is analogous to deciding to switch in the two envelopes problem.

So it seems like you can’t make that assumption. Is that not the case?

You can face both situation A, and complement A, against agents weaker and more powerful than yourself (not strictly true, but true if you don’t look at options worse than the default point, which you don’t care about anyway).

Hmm, okay. I guess what I really want to know is whether your relative level of power compared to other agents you expect to meet affects whether you’d want to employ the MWBS vs something else.

Based on my naive understanding, it sounds to me like an agent who believes themselves to have low expected relative power might prefer the h utility of KSBS versus the ~min(x,y) utility of MWBS. I’m not sure whether the details actually work out that way though.

I believe that is incorrect (low relative power is a high risk-high reward situation). But we’d have to analyse it properly, with prior probs, etc… Which I have no time for now! :-(

Ah, yeah, I was thinking the high risk, high reward thing might be the answer, based on other statements in your post. Fair enough. Thanks for taking the time to respond!

Change presumes comparison to something else. You determine your decisions, but there doesn’t need to be anything that can be compared to them, so there doesn’t need to be any “change”. (I understand that you are talking in the context of “status quo”, so this is more of a foundational nitpick.) In determining your decisions, what matters are the reasons/causes of these decisions. If decisions are such that they optimize your values, that gives them “power” of enacting these values. Someone else’s decisions can also have the “power” to optimize your values, which is when “trade” is useful.

I knew this would come up! :-)

I shouldn’t have been so informal, I was just trying to get a concept across. But yeah, philosophers disagree and there’s more complexity and issues and caveats.

Due to the Pareto improvement problem, I don’t think this actually describes what people mean by the word “trade”.

I asked this in person, but I don’t think you’ve addressed it in the write-up:

The use of utility functions to try to capture the power dynamic seems to run into problems. Wei_Dai has an example with non-linear utility functions, but we can make it even more obvious.

In your original example, say the slug just doesn’t really care about anything other than his leaf. It would marginally prefer the entire galaxy to be turned into a garden of leaves, but that preference is really minuscule compared to its normal preferences about its own domain. Then we are in the situation where the two agents—the galactic empire and slug—each care a lot about their own domain, and very little about that of the other. If it happens that the slug is even more self-centred than the empire, then with your solution the slug’s preferences win out … to the limited degree permitted by the constraint that the empire can’t lose more than it can gain.

Even if you don’t think you’d naturally find utility functions like this (although many people seem to have preferences of roughly this shape), if this were a standard bargaining system it would become pragmatic to modify your own utility function to care very little about things you don’t expect to be able to affect.

As a side note, I wonder if it would be better to name apart what you’re trying to do here as something other than ‘bargaining’. It runs against my intuitive understanding of the word, and I believe the standard use in the literature (but I’m not certain), to establish a defection point then allow ‘bargains’ which are worse for one side than the defection point.

If the slug gains little, it can lose little. That’s the only constraint; we don’t know whose preferences will “win”.

This was an interesting post.

A potential extension of this problem is to worry about what happens when agents can lie about their utility function. In business this isn’t usually a problem, since everyone is trying to maximize profit, but it often is in social interactions.

See this: http://lesswrong.com/lw/i20/even_with_default_points_systems_remain/

It’s about as bad as it can be :-(

There’s something to be said for this model… it looks rather realistic in the case where there is a large imbalance of power between trading partners. The powerful partner is likely to get everything it wants, and the powerless one may quite plausibly be worse off than if the trade never happens. I think poor little Slug isn’t going to keep that leaf for very long.

Currently, I see no evidence for that claim (apart from the utility-equally-linear-in-every-resource case). Slug can easily have a utility that gives it two leaves, for instance.

The only constraint is that no player can lose more than they could theoretically gain. Within that constraint, there exist utilities that give Slug the maximum that he could get.

I can’t see how that works, provided the Galaxy partner has declining marginal utility in leaves. Suppose the New Republic has N leaves already (where N is a very large number). By definition of the zero-point, and scaling, New Republic has Utility(N leaves) = 0, Utility(N+1 leaves) = 1. With declining marginal utility, it must have Utility(N-1 leaves) < −1.

At most, Slug can have utility(2 leaves) = 1, in the case where its limited mind thinks one extra leaf is as good as a whole galaxy of extra leaves, but then it is still impossible for the optimal trade to be a transfer of one leaf from Galaxy to Slug. The same applies for transfer of two or more leaves. The only exception I can see is if the Slug happens to have some sort of really exotic leaf which is more valuable to Galaxy than two or more regular leaves. Then there could be an exchange. The problem is that Slug is then likely to be adapted to its exotic leaf, and can’t eat the regular ones.

The only other possible trades are: no exchange at all; Slug loses its leaf and gets nothing in return; Slug loses its leaf and gets something else in return (such as a slug pellet). Whether Slug keeps its leaf seems to depend on whether it has utility(0 leaves) < −1, that is, whether its own marginal utility declines so rapidly that its single existing leaf is more valuable than the incremental value from a whole galaxy of additional leaves! That seems a bit unlikely if it has any sort of Darwinian utility (the expected fitness gain from N leaves is truly enormous, compared to the fitness loss of its own life).

Basically, under any plausible utility functions, Slug is going to get squashed by Galaxy. Which sounds realistic, as I said.

In your example, you setup the galaxy to care more about losing a leaf than gaining one (which is fine!), hence triggering the “you can only lose as much as you can gain”.

But within these constraints, there is no reason to suspect one player will come out ahead.

Your x+y ≥ 2h proof is flawed, since my utility may be normalised differently in different scenarios, but this does not mean I will personally weight scenarios where it is normalised to a large number more highly than those where it is normalised to a small number. I would give an example if I had more time.

Yes. But you can face both situation A and its symmetric complement in a high-normalisation scenario, and in a low one.

We need proper priors over unknown players’ utilities to solve this correctly (finding a specific counter example is easy).

If situation A is one where I am more powerful, then I will always face it at high-normalisation, and always face its complement at low normalisation. Since this system generally gives almost everything to the more powerful player, if I make the elementary error of adding the differently normalised utilities I will come up with an overly rosy view of my future prospects.

It does not. See this post ( http://lesswrong.com/lw/i20/even_with_default_points_systems_remain/ ): any player can lie about their utility to force their preferred outcome to be chosen (as long as it’s admissible). The weaker player can thus lie to get the maximum possible out of the stronger player. This means that there are weaker players with utility functions that would naturally give them the maximum possible. We can’t assume either the weaker player or the stronger one will come out ahead in a trade, without knowing more.

If you don’t know the opposing player, then you don’t know what you’ll find important with them and what they’ll find important with you. Suppose for instance that you can produce ten million different goods, at various inefficiencies and marginal prices. Then you meet someone who only cares about good G, and only offers good H. Then the shape of your trade situation is determined entirely by each player’s valuation of G and H and their ability to produce them. Even if you’re extraordinarily powerful, the two players’ valuations of, and ability to produce, G and H can make the situation take any shape at all (the default point removes most of your options from consideration, so only a very few of them matter).

I don’t have time to do the maths, but if your values are complicated enough, you can certainly face both A and symmetric A against (different) weaker players (and against stronger ones).

Alice has $1000. Bob has $1100. The only choices available to them are to give some of their money to the other. With linear utility on both sides, the most obvious utility function, Alice gives all her money to Bob. There is no pair of utility functions under which Bob gives all his money to Alice.

Are you enforcing that choice? Because it’s not a natural one.

Linear utility is not the most obviously correct utility function: consider diminishing marginal returns, for instance.

Let Alice value $2100 at 1, $1000 at 0, and $0 at −1. Let Bob value $2100 at 1, $1100 at 0, and $0 at −0.5 (interpolate utility linearly between these values).

These utility functions are already normalised for the MVBS, and since they interpolate linearly, only three points are possible solutions: Alice gets $2100; the default ($1000, $1100); or Bob gets $2100. The first has a summed utility of 0.5, the second 0, and the third 0 as well.

Thus Alice gets everything.
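Here is a quick check of that arithmetic (my own sketch; the piecewise-linear utilities are exactly the three-point ones given above):

```python
def interp(points, x):
    """Linear interpolation through sorted (money, utility) points."""
    for (x0, u0), (x1, u1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return u0 + (u1 - u0) * (x - x0) / (x1 - x0)
    raise ValueError("x out of range")

alice = [(0, -1.0), (1000, 0.0), (2100, 1.0)]   # Alice's stated utility
bob   = [(0, -0.5), (1100, 0.0), (2100, 1.0)]   # Bob's stated utility

total = 2100
for alice_money, label in [(2100, "Alice gets everything"),
                           (1000, "default split"),
                           (0, "Bob gets everything")]:
    s = interp(alice, alice_money) + interp(bob, total - alice_money)
    print(label, s)   # 0.5, 0.0, 0.0 respectively
```

Summed utility is maximised at 0.5 by the first option, so Alice gets everything, as stated.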

That example is artificial, but it shows that unless you posit that everyone has (equal) linear utility in every resource, there is no reason to assume the powerful player will get everything: varying marginal valuations can push the solution in one direction or the other.

You’re right; I made a false statement because I was in a rush. What I meant to say was that as long as Bob’s utility is linear, then whatever utility function Alice has, there is no way for her to get all the money.

It simplifies the scenario, and is suggestive.

Why are diminishing marginal returns any more obvious than accelerating marginal returns? The former happens to be the human attitude to the thing humans most commonly gamble with (money), but there is no reason to privilege it in general. If Alice and Bob have accelerating returns, then in general the money will always be given to Bob; if they have linear returns, it will always be given to Bob; if they have diminishing returns, it could go either way. This does not seem fair to me.

This is true, but the default is for them to go to the powerful player.

Look at a moderately more general example, the treasure-splitting game. In this version, if Alice and Bob work together, they can get a large treasure haul, consisting of a variety of different desirable objects. We will suppose that if they work separately, Bob is capable of getting a much smaller haul for himself, while Alice can get nothing, making Bob the more powerful player.

In this game, Alice’s value for the whole treasure gets sent to 1, and Bob’s value for the whole treasure gets sent to a constant more than 1, call it k. For any given object in the treasure, we can work out what proportion of the total value each thinks it is; if Alice’s proportion is at least k times Bob’s, she gets the object, otherwise Bob does. This means that if their valuations are identical, or even roughly similar, Bob gets everything. There are ways for Alice to get some of it if she values it more, but there are symmetric solutions that favour Bob just as much. The ‘central’ solution is vastly favourable to Bob.
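As a sketch (my own construction, just to illustrate the allocation rule described above), the per-object rule can be written as:

```python
def allocate(alice_vals, bob_vals, k):
    """For each object, Alice wins it iff her value as a fraction of her
    total is at least k times Bob's value as a fraction of his total."""
    a_total, b_total = sum(alice_vals), sum(bob_vals)
    winners = []
    for a, b in zip(alice_vals, bob_vals):
        alice_share = a / a_total        # Alice's normalised value (sums to 1)
        bob_share = k * (b / b_total)    # Bob's normalised value (sums to k)
        winners.append("Alice" if alice_share >= bob_share else "Bob")
    return winners

# Identical valuations: Bob takes everything, as claimed.
print(allocate([3, 2, 1], [3, 2, 1], k=1.5))   # ['Bob', 'Bob', 'Bob']
# Alice values the first object disproportionately: she can still win it.
print(allocate([10, 1, 1], [3, 2, 1], k=1.5))  # ['Alice', 'Bob', 'Bob']
```

With similar valuations the k > 1 handicap hands every object to Bob; Alice only wins an object by valuing it much more than he does, relative to the rest of the hoard.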

Technically true: if he’s linear, Bob can’t lose more than $1000, because he can’t gain more than $1000.

But Alice can certainly get almost everything. Say her utility is: $1999.99 (or above): utility 1; $1000 up to $1999.99: utility 0; below $1000: utility −100. Then Alice gets $1999.99 and Bob loses $999.99.
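A quick numerical check of this counter-example (my own sketch; I normalise Bob’s linear utility so that gaining the full $1000 is +1 and losing $1000 is −1, with 0 at his $1100 default):

```python
def u_alice(m):
    """Alice's step utility as described above."""
    if m >= 1999.99:
        return 1.0
    if m >= 1000:
        return 0.0
    return -100.0

def u_bob(m):
    """Bob's linear utility: 0 at the $1100 default, +1 at $2100, -1 at $100."""
    return (m - 1100) / 1000.0

total = 2100
# Scan all splits in 1-cent steps and find the one maximising summed utility.
best = max((u_alice(a) + u_bob(total - a), a)
           for a in [x / 100.0 for x in range(0, total * 100 + 1)])
print(best)   # the maximising split gives Alice $1999.99
```

The summed utility is maximised (just barely above the default’s 0) by handing Alice $1999.99, so her threat of the −100 region does the work.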

If the value of the hoard is large, then k is very close to 1. Alice will get the things she really likes (relative to Bob’s valuation of them).

In the default, Alice gets nothing. If k is small, she’ll likely get a good chunk of the stuff. If k is large, that means that Bob can generate most of the value on his own: Alice isn’t contributing much at all, but will still get something if she really cares about it. I don’t see this as ultra-unfavourable to Alice!

I admit there is an issue with (quasi-)linear preferences if both players have similar relative valuations. However, I don’t see anything that argues that “the default is for them to go to the powerful player”, apart from in that linear case. In the real world, agents’ marginals vary a lot, and the gains from trade are huge, so this isn’t likely to come up.

I doubt this claim, particularly the second part.

True, many interactions have gains from trade, but I suspect the weight of these interactions is overstated in most people’s minds because they are the sort of thing that springs to mind when you talk about making deals.

Probably the most common form of interaction I have with people is when we walk past each other in the street and neither of us hands the other the contents of their wallet. I admit I am using the word ‘interaction’ quite strangely here, but you have given no reason why this shouldn’t count as a game for the purposes of bargaining solutions; we certainly both stand to gain more than the default outcome if we could control the other. My reaction to all but a tiny portion of humanity is to not even think about them, and in a great many cases there is not much to be gained by thinking about them.

I suspect the same is true of marginal preferences: in games with small amounts at stake, preferences should be roughly linear, and where desirable objects are fungible, as they often are, they will be very similar across agents.

If k is moderately large, say at least 1.5, then Alice will probably get less than half of the remaining treasure (i.e. treasure Bob couldn’t have acquired on his own), even by her own valuation. Of course there are individual differences, but it seems pretty clear to me that, compared to other bargaining solutions, this one is quite strongly biased towards the powerful.

This question isn’t precisely answerable without a good prior over games, and any such prior is essentially arbitrary, but I hope I have made it clear that it is at the very least not obvious that there is any degree of symmetry between the powerful and the weak. This renders the x+y > 2h ‘proof’ in your post bogus, as x and y are normalised differently, so adding them is meaningless.

Your “walking by in the street” example is interesting. But the point of weighting your utilities is to split the gains from every single future transaction and interaction with someone. Since you’re both part of the same economic system, you will have (implicit or explicit) interactions in the future. Though I don’t yet know the best way of normalising multiple agents’ utilities, which we’d need to make this fully rigorous.

And seeing how much world GDP is dependent on trade, I’d say the gains from trade are immense! I note your treasure hunting example has rather large gains from trade...

So, what we do know:

1) If everyone has utility equally linear in every resource (which we know is false), then the more powerful player wins everything (note that this is one of the rare cases where there is an unarguable “most powerful player”).

2) In general, within the usual constraints of not losing more than you can win, any player can get anything out of the deal (http://lesswrong.com/r/discussion/lw/i20/even_with_default_points_systems_remain/ , though there you consider these utilities naturally occurring rather than the product of lying).

I don’t therefore see strong evidence I should reject my informal proof at this point.

I think you and I have very different understandings of the word ‘proof’.

It’s a proof based on premises of uncertain validity. So it certainly proves something, in some situations—the question is whether these situations are narrow, or broad.

Would it be possible to make those clearer in the post?

I had thought, from the way you phrased it, that the assumption was that for any game, I would be equally likely to encounter a game with the choices and power levels of the original game reversed. This struck me as plausible, or at least a good point to start from.

What you in fact seem to need is that I am equally likely to encounter a game with the outcome under this scheme reversed, but the power levels kept the same. This continues to strike me as a very substantive and almost certainly false assertion about the games I am likely to face.

After the baby, when I have time to do it properly :-)

Fair enough

[Emphasis added]

Why is the difference normalized to one instead of zero when considering ARthUrpHilIpDenu?

It’s normalised to one for both players.

I did not understand the math. I have more to learn. Thanks.

The “default point” sounds like it’s a special case of a Schelling point.

which is itself a special case of a Nash equilibrium.

This is probably (and quite possibly by an order of magnitude) the most important contribution from LessWrong in its entirety in several months.

I like your skilled use of understatement! ;-)

It’s interesting, but it assumes that human desires can be meaningfully mapped into something like a utility function, in a way which makes me skeptical about its usefulness. (Though I have a hard time articulating my objection more clearly than that.)

I recognise that argument, but surely we can use consideration of utility functions in models in order to make progress in thinking about these things.

Even if we crudely imagine a typical human who happens to be ticking all Maslow’s boxes, with access to happiness, meaning and resources, tending towards our (current...) normalised ‘1’, and someone in solitary confinement, under psychological torture, tending towards our normalised ‘0’ as a utility point – even then the concept is sufficiently coherent and grokkable to allow the use of these kinds of models?

Do you disagree? I am curious – I have encountered this point several times and I’d like to see where we differ.

I don’t believe this is possible in a useful way. However, having a utility solution may mean we can generalise to other situations...

Do you mean not possible for humans with current tools, or theoretically impossible? (It seems to me that in principle human preferences can be mapped to something like a utility function in a way that is at least useful, even if not ideal.)

That’s a whole conversation! I probably shouldn’t start talking about this, since I don’t have the time to do it justice.

In the main, I feel that humans are not easily modelled by a utility function, but we have meta-preferences that cause us to hate facing the kind of trade-offs that utility functions imply. I’d bet most people would pay to not have their preferences replaced with a utility function, no matter how well defined it was.

But that’s a conversation for after the baby!