# Don’t Get Distracted by the Boilerplate

Author’s Note: Please don’t get scared off by the first sentence. I promise it’s not as bad as it sounds.

There’s a theorem from the early days of group theory which says that any continuous, monotonic function which does not depend on the order of its inputs can be transformed to addition. A good example is multiplication of positive numbers: f(x, y, z) = x*y*z. It’s continuous, it’s monotonic (increasing any of x, y, or z increases f), and we can change around the order of inputs without changing the result. In this case, f is transformed to addition using a logarithm: log(f(x, y, z)) = log(x) + log(y) + log(z).
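The multiplicative example can be checked numerically. A quick sketch (ordinary floating-point arithmetic; the numbers are arbitrary, nothing here goes beyond the formula above):

```python
import math

# f(x, y, z) = x * y * z: continuous, monotonic, order-independent.
def f(x, y, z):
    return x * y * z

# The logarithm turns the product into a sum of per-input terms:
#   log(f(x, y, z)) = log(x) + log(y) + log(z)
x, y, z = 2.0, 5.0, 10.0
lhs = math.log(f(x, y, z))
rhs = math.log(x) + math.log(y) + math.log(z)
assert abs(lhs - rhs) < 1e-12

# Order-independence survives the transform too:
assert math.isclose(math.log(f(z, x, y)), rhs)
```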

Now, at first glance, we might say this is a very specialized theorem. “Continuous” and “monotonic” are very strong conditions; they’re not both going to apply very often. But if we actually look through the proof, it becomes clear that these assumptions aren’t as important as they look. Weakening them does change the theorem, but the core idea remains. For instance, if we remove monotonicity, then our function can still be written in terms of vector addition.

Many theorems/proofs contain pieces which are really just there for modelling purposes. The central idea of the proof can apply in many different settings, but we need to pick one of those settings in order to formalize it. This creates some mathematical boilerplate. Typically, we pick a setting which keeps the theorem simple—but that may involve stronger boilerplate assumptions than are strictly necessary for the main idea.

In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.

Unfortunately, the boilerplate occasionally distracts people who aren’t familiar with the full idea underlying the proof. For some reason, I see this problem most with theorems in economics, game theory and decision theory—the sort of theorems which say “either X, or somebody is giving away free money”. People will come along and say “but wait, the theorem assumes Y, which is completely unrealistic!” But really, Y is often just boilerplate, and the core ideas still apply even if Y is relaxed to something more realistic. In fact, in many cases, the confusion is over the wording of the boilerplate! Just because we use the word “bet” doesn’t mean people need to be at a casino for the theorem to apply.

A few examples:

• “VNM utility theorem is unrealistic! It requires that we have preferences over every possible state of the universe.” Response: Completeness is really just there to keep the math clean. The core ideas of the proof still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.

• “All these rationality theorems are unrealistic! They’re only relevant to worlds where evil agents are constantly running around looking to exploit us.” Response: We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. Whether an evil person manipulates us into it, or we stumble into it, isn’t really relevant. The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.

• “VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both), rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.

• “Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous. Alternatively, we can build bid-ask spreads into the model, and most of the structure remains.

• “Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.
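To make the “giving away free money” structure concrete, here is a minimal Dutch-book sketch with made-up prices (my illustration, not from any particular theorem statement): an agent whose prices for a bet and its complement sum to more than 1 loses money in every outcome. A bid-ask spread would only widen the band of coherent prices, which is the sense in which most of the structure remains.

```python
# An agent posts prices for two complementary tickets (made-up numbers):
#   - a ticket paying 1 if event A happens, priced at 0.6
#   - a ticket paying 1 if A does not happen, also priced at 0.6
# Coherent prices for complementary events sum to 1; these sum to 1.2,
# so a bookie who sells the agent both tickets profits in every outcome.
price_a, price_not_a = 0.6, 0.6

for a_happens in (True, False):
    cost = price_a + price_not_a                    # agent buys both tickets
    payout = (1 if a_happens else 0) + (0 if a_happens else 1)
    net = payout - cost                             # agent's net result
    assert abs(net + 0.2) < 1e-9                    # loses 0.2 either way
```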

In closing, one important note: I definitely do not want to claim that all objections to the use of the VNM utility theorem, Dutch Book theorems, etc., make this kind of mistake.

• I think your post needs a counterpoint: to deserve that kind of trust, a result needs to also have a good empirical reputation. Not all theoretical results are like that. For example, Aumann agreement makes perfect sense in theory and is robust to small changes, but doesn’t happen in reality. A big part of studying econ is figuring out which parts have empirical backing and how much.

• Is Aumann robust to untrustworthiness?

• I really liked this post, because I have been on both sides of the coin here: that is to say, I have been the person who thought a theory was irrelevant because its assumptions were too extreme, and I have been the person trying to apply the core insights of the theory, and been criticized because the situation to which I was applying it did not meet various assumptions. I was confused each time, and I am pretty sure I have even been on both sides of the same theory at least once.

It is practically inevitable that either side is the correct answer depending on the situation, and possible that I was closer to correct than the person I was disagreeing with. But then, I was confused each time; the simpler explanation by far is that I was confused about the theories under discussion.

When I am trying to learn about a particular theory or field, I now set as the first priority the historical context for its development. This is very reliable for communicating the underlying intuitions, and also can be counted on to describe the real-life situations that inspired them or to which they were applied.

• I don’t think this post passes the Intellectual Turing Test for people (like me) who object to the sorts of theorems you cite.

You say:

In such cases, we can usually relax the boilerplate assumptions and end up with slightly weaker forms of the theorem, which nonetheless maintain the central concepts.

But in most such cases, whether the “weaker forms” of the theorems do, in fact, “maintain the central concepts”, is exactly what is at issue.

Let’s go through a couple of examples:

The core ideas of the proof [of the VNM theorem] still show that, if we don’t have a utility function over some neighborhood of world-states, then we can be exploited using only those world-states.

This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

“VNM utility theorem is unrealistic! It assumes we’re willing to accept either a trade or its opposite (or both), rather than just ignoring offers.” Response: We face trade-offs in the real world where “ignore” is not an option, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily.

Ditto.

“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

“Dutch Book Theorems are unrealistic! They assume we’re constantly making bets on everything possible.” Response: every time we make a decision under uncertainty, we make a bet. Do so inconsistently, and we throw away resources unnecessarily.

Once again, please provide some real-world examples of when this applies.

In summary: you seem to think, and claim, that people simply aren’t aware that there’s a weaker form of the theorem, which is still claimed to be true. I submit to you that if your interlocutor is intelligent and informed, then this is almost always not the case. Rather, people are aware of the “weaker form”, but do not accept it as true!

(After all, the “strong form” has a proof, which we can, like, look up on the internet and so on. The “weak form” has… what? Usually, nothing but hand-waving… or that’s how it seems, anyway! In any case, making a serious, convincing case for the “weak form”, with real-world examples, that engages with doubters and addresses objections, etc., is where the meat of this sort of argument has to be.)

• This one is not a central example, since I’ve not seen any VNM-proponent put it in quite these terms. A citation for this would be nice. In any case, the sort of thing you cite is not really my primary objection to VNM (insofar as I even have “objections” to the theorem itself rather than to the irresponsible way in which it’s often used), so we can let this pass.

VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented; it’s the whole point of VNM. One wonders what you thought VNM was about.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Ditto.

Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Ditto.

Once again, please provide some real-world examples of when this applies.

OP said it: every time we make a decision under uncertainty. Every decision under uncertainty can be modeled as a bet, and Dutch book theorems are derived as usual.

• VNM is used to show why you need to have utility functions if you don’t want to get Dutch-booked. It’s not something the OP invented; it’s the whole point of VNM. One wonders what you thought VNM was about.

This is a confused and inaccurate comment.

The von Neumann-Morgenstern utility theorem states that if an agent’s preferences conform to the given axioms, then there exists a “utility function” that will correspond to the agent’s preferences (and so that agent can be said to behave as if maximizing a “utility function”).

We may then ask whether there is any normative reason for our preferences to conform to the given axioms (or, in other words, whether the axioms are justified by anything).

If the answer to this latter question turned out to be “no”, the VNM theorem would continue to hold. The theorem is entirely agnostic about whether any agent “should” hold the given axioms; it only tells us a certain mathematical fact about agents that do hold said axioms.

It so happens to be the case that for at least some[1] of the axioms, an agent that violates that axiom will agree to a Dutch book. Note, however, that the truth of this fact is independent of the truth of the VNM theorem.

Once again: if the VNM theorem were false, it could still be the case that an agent that violated one or more of the given axioms would agree to a Dutch book; and, conversely, if the latter were not the case, the VNM theorem would remain as true as ever.

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

We face trade-offs in the real world, and if we don’t choose between the options consistently, we’ll end up throwing away resources unnecessarily. … The more situations where we locally violate VNM utility, the more situations where we’ll lose resources.

Yes, this is exactly the claim under dispute. This is the one you need to be defending, seriously and in detail.

That we face trade-offs in the real world is a claim under dispute?

Your questions give the impression that you’re being deliberately dense.

Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.

“Dutch Book theorems are unrealistic! They assume we’re willing to accept either a bet or its opposite, rather than ignoring both.” Response: same as previous.

Ditto again. I have asked for a demonstration of this claim many times, when I’ve seen Dutch Books brought up on Less Wrong and in related contexts. I’ve never gotten so much as a serious attempt at a response. I ask you the same: demonstrate, please, and with (real-world!) examples.

Another way of phrasing it is that we can model “ignore” as a choice, and derive the VNM theorem just as usual.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.

• [1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or doesn’t prefer either”. Offer me an example of an agent that doesn’t conform to the completeness axiom.

Obviously it’s true that we face trade-offs. What is not so obvious is literally the entire rest of the section I quoted.

The entire rest of the section is a straightforward application of the theorem. The objection is that X doesn’t happen in real life, and the counter-objection is that something like X does happen in real life, meaning the theorem does apply.

As I explained above, the VNM theorem is orthogonal to Dutch book theorems, so this response is a non sequitur.

Yeah, sorry for being imprecise in my language. Can you just be charitable and see that my statement makes sense if you replace “VNM” with “Dutch book”? Your behavior does not really send the vibe of someone who wants to approach this complicated issue honestly, and more sends the vibe of someone looking for Internet debate points.

More generally, however… I have heard glib responses such as “Every decision under uncertainty can be modeled as a bet” many times. Yet if the applicability of Dutch book theorems is so ubiquitous, why do you (and others who say similar things) seem to find it so difficult to provide an actual, concrete, real-world example of any of the claims in the OP? Not a class of examples; not an analogy; not even a formal proof that examples exist; but an actual example. In fact, it should not be onerous to provide—let’s say—three examples, yes? Please be specific.

• If I cross the street, I make a bet about whether a car will run over me.

• If I eat a pizza, I make a bet about whether the pizza will taste good.

• If I’m posting this comment, I make a bet about whether it will convince anyone.

• etc.
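The first of these can be spelled out as an explicit expected-value bet, with entirely hypothetical numbers, just to show the modeling step:

```python
# Crossing the street as a bet (all numbers hypothetical, for illustration).
p_accident = 1e-6           # assumed probability of being hit crossing now
loss_accident = -1_000_000  # assumed disutility of an accident
cost_waiting = -0.01        # assumed disutility of waiting for the light

ev_cross_now = p_accident * loss_accident   # expected value of crossing now
ev_wait = cost_waiting                      # expected value of waiting

choice = "wait" if ev_wait > ev_cross_now else "cross now"
# Under these assumed numbers the consistent bettor waits; change any
# parameter and the consistent choice may flip. "Deciding = betting" is
# just this translation of a decision into stakes and probabilities.
```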

• (Note: I ask that you not take this as an invitation to continue arguing the primary topic of this thread; however, one of the points you made is interesting enough on its own, and tangential enough from the main dispute, that I wanted to address it for the benefit of anyone reading this.)

[1] It would be rather audacious to claim that this is true for each of the four axioms. For instance, do please demonstrate how you would Dutch-book an agent that does not conform to the completeness axiom!

How can an agent not conform to the completeness axiom? It literally just says “either the agent prefers A to B, or B to A, or doesn’t prefer either”. Offer me an example of an agent that doesn’t conform to the completeness axiom.

This turns out to be an interesting question.

One obvious counterexample is simply an agent whose preferences are not totally deterministic; suppose that when choosing between A and B (though not necessarily in other cases involving other choices), the agent flips a coin, preferring A if heads, B otherwise (and thenceforth behaves according to this coin flip). However, until they actually have to make the choice, they have no preference. How do you propose to construct a Dutch book for this agent? Remember, the agent will only determine their preference after being provided with your offered bets.
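A toy version of this coin-flip agent (my sketch, not from the thread) makes the timing point explicit: the preference simply does not exist until the choice is forced, and afterwards it is applied consistently, so there is no pre-existing inconsistency for a book to be built on:

```python
import random

class CoinFlipAgent:
    """Has no preference between two options until forced to choose;
    the first forced choice fixes the preference thereafter."""

    def __init__(self):
        self.preference = None  # undefined until the first choice

    def choose(self, a, b):
        if self.preference is None:
            # The preference comes into existence only now, via a coin flip.
            self.preference = a if random.random() < 0.5 else b
        return self.preference

agent = CoinFlipAgent()
assert agent.preference is None          # no preference exists yet
first = agent.choose("A", "B")
assert agent.choose("A", "B") == first   # consistent once determined
```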

A less trivial example is the case of bounded rationality. Suppose you want to know if I prefer A to B. However, either or both of A/B are outcomes that I have not considered yet. Suppose also (as is often the case in reality) that whenever I do encounter this choice, I will at once perceive that to fully evaluate it would be computationally (or otherwise cognitively) intractable given the limitations of time and other resources that I am willing to spend on making this decision. I will therefore rely on certain heuristics (which I have inherited from evolution, from my life experiences, or from god knows where else), I will consider certain previously known data, I will perhaps spend some small amount of time/effort on acquiring information to improve my understanding of A and B, and then form a preference.

My preference will thus depend on various contingent factors (what heuristics I can readily call to mind, what information is easily available for me to use in deciding, what has taken place in my life up to the point when I have to decide, etc.). Many, if not most, of these contingent factors are not known to you; and even were they known to you, their effects on my preference are likely to be intractable to determine. You therefore are not able to model me as an agent whose preferences are complete. (We might, at most, be able to say something like “Omega, who can see the entire manifold of existence in all dimensions and time directions, can model me as an agent with complete preferences”, but certainly not that you, nor any other realistic agent, can do so.)

Finally, “Expected Utility Theory without the Completeness Axiom” (Dubra et al., 2001) is a fascinating paper that explores some of the implications of completeness axiom violation in some detail. Key quote:

Before stating more carefully our goal and the contribution thereof, let us note that there are several economic reasons why one would like to study incomplete preference relations. First of all, as advanced by several authors in the literature, it is not evident if completeness is a fundamental rationality tenet the way the transitivity property is. Aumann (1962), Bewley (1986) and Mandler (1999), among others, defend this position very strongly from both the normative and positive viewpoints. Indeed, if one takes the psychological preference approach (which derives choices from preferences), and not the revealed preference approach, it seems natural to define a preference relation as a potentially incomplete preorder, thereby allowing for the occasional “indecisiveness” of the agents. Secondly, there are economic instances in which a decision maker is in fact composed of several agents each with a possibly distinct objective function. For instance, in coalitional bargaining games, it is in the nature of things to specify the preferences of each coalition by means of a vector of utility functions (one for each member of the coalition), and this requires one to view the preference relation of each coalition as an incomplete preference relation. The same reasoning applies to social choice problems; after all, the most commonly used social welfare ordering in economics, the Pareto dominance, is an incomplete preorder. Finally, we note that incomplete preferences allow one to enrich the decision making process of the agents by providing room for introducing to the model important behavioral traits like status quo bias, loss aversion, procedural decision making, etc.

I encourage you to read the whole thing (it’s a mere 13 pages long).

• P.S. Here’s the aforementioned “Aumann (1962)” (yes, that very same Robert J. Aumann)—a paper called “Utility Theory without the Completeness Axiom”. Aumann writes in plain language wherever possible, and the paper is very readable. It includes this line:

Of all the axioms of utility theory, the completeness axiom is perhaps the most questionable.[8] Like others of the axioms, it is inaccurate as a description of real life; but unlike them, we find it hard to accept even from the normative viewpoint.

The full elaboration for this (perhaps quite shocking) comment is too long to quote; I encourage anyone who’s at all interested in utility theory to read the paper.

• Though there’s a great deal more I could say here, I think that when accusations of “looking for Internet debate points” start to fly, that’s the point at which it’s best to bow out of the conversation.