# A Pure Math Argument for Total Utilitarianism

Summary: I sketch an argument that population ethics should, in a certain technical sense, be similar to addition. I show that a surprising theorem of Hölder’s implies that this means we should be total utilitarians.

Addition is a very special operation. Despite the wide variety of esoteric mathematical objects known to us today, none of them share the basic desirable properties of grade-school arithmetic.

This fact was intuited by 19th-century philosophers in the development of what we now call “total” utilitarianism. In this ethical system, we assign each person a real number to indicate their welfare, and the value of an entire population is the sum of each individual’s welfare.

Using modern mathematics, we can now prove the intuition of Mill and Bentham: because addition is so special, any ethical system which is in a certain technical sense “reasonable” is equivalent to total utilitarianism.

### What do we mean by ethics?

The most basic premise is that we have some way of ordering individual lives.

We don’t need to say how much better one life is than another; we just need to be able to put them in order. We might have some uncertainty as to which of two lives is better:

In this case, we aren’t certain whether “Medium” or “Medium 2” is better. However, we know they’re both better than “Bad” and worse than “Good”.

In the case when we always know which of two lives is better, we say that lives are totally ordered. If there is uncertainty, we say they are lattice ordered.

In either case, we require that the ranking remain consistent when we add people to the population. Here we add a person of “Medium” utility to each population:

The ranking on the right side of the figure above is legitimate because it keeps the order: if some life X is worse than Y, then (X + Medium) is still worse than (Y + Medium). This ranking below, for example, would fail that:

This ranking is inconsistent because it sometimes says that “Bad” is worse than “Medium” and other times says “Bad” is better than “Medium”. A basic principle of ethics is that rankings should be consistent, and so rankings like the latter are excluded.
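To make the consistency requirement concrete, here is a minimal sketch in Python. The specific welfare numbers, and the use of total welfare as the comparison, are my own illustrative assumptions, standing in for any order-preserving ranking:

```python
# Illustrative sketch: populations are tuples of welfare numbers, and we
# check that adding the same person to two populations preserves their order.
# Using total welfare as the comparison is an assumption for illustration.

def total(pop):
    return sum(pop)

def better(p, q):
    return total(p) > total(q)

bad, medium, good = 0, 1, 2
p, q = (bad,), (good,)

assert better(q, p)
# Adding a "Medium" person to both populations keeps the order:
assert better(q + (medium,), p + (medium,))
```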

### Increasing population size

The most obvious way of defining an ethics of populations is to just take an ordering of individual lives and “glue them together” in an order-preserving way, like I did above. This generates what mathematicians would call the free group (more precisely, the free abelian group, since the order in which we list people doesn’t matter). (The only tricky part is that we need good and bad lives to “cancel out”, something which I’ve talked about before.)

It turns out that merely gluing populations together in this way gives us a highly structured object known as a “lattice-ordered group”. Here is a snippet of the resulting lattice:

This ranking is similar to what philosophers often call “Dominance”: if everyone in population P is better off than everyone in population Q, then P is better than Q. However, this is somewhat stronger; it allows us to compare populations of different sizes, something that the traditional dominance criterion doesn’t let us do.

Let’s take a minute to think about what we’ve done. Using only the fact that individuals’ lives can be ordered and the requirement that population ethics respects this ordering in a certain technical sense, we’ve derived a robust population ethics, about which we can prove many interesting things.

### Getting to total utilitarianism

One obvious facet of the above ranking is that it’s not total. For example, we don’t know whether “Very Good” is better than “Good, Good”, i.e. whether it’s better to have welfare “spread out” across multiple people or concentrated in one. This obviously prohibits us from claiming that we’ve derived total utilitarianism, because under that system we always know which is better.

However, we can still derive a form of total utilitarianism which is equivalent in a large set of scenarios. To do so, we need to use the idea of an embedding. This is merely a way of assigning each welfare level a number. Here is an example embedding:

• Medium = 1

• Good = 2

• Very Good = 3

Here’s that same ordering, except I’ve tagged each population with the total “utility” resulting from that embedding:

This is clearly not identical to total utilitarianism: “Very Good” has a higher total utility than “Medium, Medium”, but we don’t know which is better, for example.

However, this ranking never disagrees with total utilitarianism: there is never a case where P is better than Q yet P has less total utility than Q.
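To illustrate, here is a small Python check. The embedding values are the ones from the post; the person-by-person dominance relation and the exhaustive loop over two-person populations are my own illustration:

```python
from itertools import product

# Check: with the example embedding (Medium=1, Good=2, Very Good=3), whenever
# population P dominates Q person-by-person (everyone at least as well off,
# someone strictly better), P also has strictly greater total utility.
embed = {"Medium": 1, "Good": 2, "Very Good": 3}
levels = list(embed)

def dominates(p, q):
    pairs = list(zip(p, q))
    return all(embed[a] >= embed[b] for a, b in pairs) and \
           any(embed[a] > embed[b] for a, b in pairs)

def total(pop):
    return sum(embed[x] for x in pop)

for p in product(levels, repeat=2):
    for q in product(levels, repeat=2):
        if dominates(p, q):
            assert total(p) > total(q)
```

This only exercises same-size populations; the post’s ranking is stronger, since it also compares populations of different sizes.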

Due to a surprising theorem of Hölder’s which I have discussed before, as long as we disallow “infinitely good” populations, there is always some embedding like this. Thus, we can say that:
Total utilitarianism is the moral “baseline”. There might be circumstances where we are uncertain whether or not P is better than Q, but if we are certain, then it must be that P has greater total utility than Q.

### An application

Here is one consequence of these results. Many people, including myself, have the intuition that inequality is bad. In fact, it is so bad that there are circumstances where increasing equality is good even if people are, on average, worse off.

If we accept the premises of this blog post, this intuition simply cannot be correct. If the inequitable society has greater total utility, it must be at least as good as the equitable one.

### Concluding remarks

There are certain restrictions we want the “addition” of a person to a population to obey. It turns out that there is only one way to obey them: by using grade-school addition, i.e. total utilitarianism.

[For those interested in the technical result: Hölder showed that any archimedean l-group is l-isomorphic to a subgroup of (R,+). The proof can be found in Glass’s *Partially Ordered Groups* as Corollary 4.1.4. This article was originally posted here.]

• The most basic premise is that we have some way of ordering individual lives.

I reject this premise. Specifically, I believe I have some ordering, and you have some ordering, but strongly suspect those orderings disagree, so I don’t think we have one unambiguous joint ordering.

In either case, we require that the ranking remain consistent when we add people to the population.

I reject this premise. Specifically, I believe that lives interact. Suppose Bob by himself has a medium-quality life, and Alice by herself has a medium-quality life. Putting them in a universe together by no means guarantees that each of them will have a medium-quality life.

Total utilitarianism is a dead simple conclusion from its premises; you don’t need to bring in group theory. This is only a “pure math” argument for total utilitarianism because you’re talking about the group (R,+) instead of addition, but the two are the same, and the core of the argument remains the contentious moral premises.

• Specifically, I believe I have some ordering, and you have some ordering, but strongly suspect those orderings disagree, so don’t think we have one unambiguous joint ordering.

I’m not certain this proves what you want it to; it would still hold that you and I are individually total utilitarians. We would just disagree about what those utilities are.

Specifically, I believe that lives interact

I guess I don’t find this very convincing. Any reasonably complicated argument is going to say “ceteris paribus” at some point; I don’t think you can just reject the conclusion because of this.

This is only a “pure math” argument for total utilitarianism because you’re talking about the group (R,+) instead of addition, but the two are the same

I guess I don’t know what you mean. By (R,+) I was trying to refer to addition, so I apologize if this has some other meaning and you thought I was “proving” them equivalent.

• I’m not certain this proves what you want it to; it would still hold that you and I are individually total utilitarians. We would just disagree about what those utilities are.

I was unclear, and agree that the stated rejection is weak. Here’s the stronger version: I see the central premise underlying total and average utilitarianism as “Preferences are determined over life-histories, rather than universe-histories.” If you accept this premise, then you need some way to aggregate life-utilities to get a universe-utility. But if you reject that premise, and see all preferences as over universe-histories, then it’s not clear that an aggregation procedure is necessary.

I guess I don’t find this very convincing. Any reasonably complicated argument is going to say “ceteris paribus” at some point; I don’t think you can just reject the conclusion because of this.

But look at the horrible world you’ve created! Any sort of empathy is banned. Bob cannot delight in Alice’s happiness, and Alice cannot suffer because of Bob’s sadness. They cannot even be heartless traders, who are both made wealthier and happier by the other’s existence, even though they are otherwise indifferent to whether the other lives or dies.

The argument against various repugnant conclusions often hinges on ceteris paribus being violated. The “mere addition” paradox, for example, is easily dispensed with if each person has a slight negative penalty in their utility function for the number of other people that exist, or that exist below a certain utility threshold, or so on. It’s worth pointing out that many moral sensations seem like they could be internalizations of practical constraints: when you talk about adding more and more people to the world, an instinctual backlash against crowding is probably not due to any malevolence, but rather due to the combined effects of traffic and pollution and scarcity which, in the real world, accompany such crowding.

I, for one, find it ludicrous to posit that the utility functions of a social species would not depend on the sort of society they find themselves in, and that their utilities cannot contain any relative measures.

I guess I don’t know what you mean. By (R,+) I was trying to refer to addition, so I apologize if this has some other meaning and you thought I was “proving” them equivalent.

I was objecting to the title, mostly. In my mind, the core of the argument in this post is “if you believe that preferences are expressed over individual lives, and that the number of lives shouldn’t be relevant to preferences, then total utilitarianism must follow,” which I think is a correct argument. But I disagree that preferences are expressed over individual lives (or at least I think that is a contentious claim which should not be taken as a premise).

• Empathy banned? Nature does that for you: “Brain cells we use to mull over our past must switch off when we do sums, say researchers, who have been spying on a previously inaccessible part of the brain.”

• Many people, including myself, have the intuition that inequality is bad. In fact, it is so bad that there are circumstances where increasing equality is good even if people are, on average, worse off. If we accept the premises of this blog post, this intuition simply cannot be correct.

Don’t arguments related to the badness of inequality often rely on the existence of envy, such that if I envy you then my utility goes down as yours increases?

• Yes, one way to rescue this is to value equality instrumentally, instead of intrinsically.

• (Similarly, I tentatively am an average utilitarian, but I still value population size instrumentally.)

• If we accept the premises of this blog post, this intuition simply cannot be correct. If the inequitable society has greater total utility, it must be at least as good as the equitable one.

Not sure if that’s an application as much as a tautology. Valuing equality means that you reject the assumption of “we require that the ranking remain consistent when we add people to the population”, so of course accepting that assumption is incompatible with valuing equality.

At least, that’s assuming that you value equality as an intrinsic good. As James Miller pointed out, one can also oppose inequality on the grounds that it ends up making people’s lives worse off, which is an empirical claim separate from utilitarianism.

• Not sure if that’s an application as much as a tautology

It’s a proof, so sure, it’s a tautology.

Here’s a better way of masking it though: suppose we believe:

1. We should be non-sadistic: X < 0 ==> X+Y < Y

2. Accepting of dominance: X > 0 ==> X+Y > Y

This is exactly what it means to be order preserving, but maybe when phrased this way the result seems more surprising (in the sense that those axioms are harder to refute)?
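Ordinary integer addition satisfies both axioms; a quick brute-force check (my own illustration, over a small range of welfare values):

```python
# Check the two axioms for ordinary integer addition over a small range:
#   non-sadism:  X < 0  implies  X + Y < Y
#   dominance:   X > 0  implies  X + Y > Y
for X in range(-5, 6):
    for Y in range(-5, 6):
        if X < 0:
            assert X + Y < Y
        if X > 0:
            assert X + Y > Y
```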

• The only part that makes this total utilitarianism is the ranking you match the embedding to. So what, mathematically, goes wrong if you embed the average of your individual numbers into a directed graph like (Very Good) > (Good, Good, Good, Good) ~~ (Good) > (Medium)?

• I think this is a great question, as people who accept the premises of this article are likely to accept some sort of utilitarianism, so a major result is that average utilitarianism doesn’t work.

If we are average utilitarians, then we believe that (2) ~~ (1,2,3). But to be order preserving, this must mean that (2,6) ~~ (1,2,3,6), which is not true. (The former’s average utility is 4, the latter’s 3.)
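The arithmetic in the counterexample above can be checked directly (a sketch, with `mean` standing in for average utility):

```python
from statistics import mean

# Average utilitarianism says (2) ~~ (1,2,3): both average to 2.
assert mean([2]) == mean([1, 2, 3]) == 2

# But appending a person of welfare 6 to both sides breaks the tie,
# so averaging is not order preserving:
assert mean([2, 6]) == 4
assert mean([1, 2, 3, 6]) == 3
```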

• Ah, great, I understand more now; the linchpin is the premise that what we really want is to preserve order when we add another person. So what sort of premise would lead to average utilitarianism?

How about: order should be preserved if we shift the zero point of our happiness measurement. That seems pretty common-sense. And yet it rules out total utilitarianism. (2,2,2) > (5), but (1,1,1) < (4).
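Checking those numbers (a quick sketch):

```python
# Total utilitarianism ranks (2,2,2) above (5)...
assert sum([2, 2, 2]) > sum([5])   # 6 > 5
# ...but shifting every welfare level down by 1 flips the ranking:
assert sum([1, 1, 1]) < sum([4])   # 3 < 4
```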

Or maybe we could allow average utilitarianism just by weakening the premise, so that we want to preserve the ordering only if we add an average member.

• How about: order should be preserved if we shift the zero point of our happiness measurement. That seems pretty common-sense. And yet it rules out total utilitarianism. (2,2,2) > (5), but (1,1,1) < (4).

The usual definition of “zero point” is “it doesn’t matter whether that person exists or not”. By that definition, there is no (universal) zero point in average utilitarianism. (2,2,0) != (2,2), etc.
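That can be verified numerically (a sketch):

```python
from statistics import mean

# Under averaging, adding a welfare-0 person changes the population's value,
# so welfare 0 is not a universal "their existence doesn't matter" point:
assert mean([2, 2, 0]) != mean([2, 2])   # 4/3 vs 2

# Under summing, it doesn't change anything:
assert sum([2, 2, 0]) == sum([2, 2])
```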

By the way, it’s true you can’t shift by a constant in total utilitarianism, but you can scale by a constant.

• ...Or you could notice that requiring that order be preserved when you add another member is outright assuming that you care about the total and not about the average. You assume the conclusion as one of your premises, making the argument trivial.

• Near the beginning you write this:

Using modern mathematics, we can now prove the intuition of Mill and Bentham: because addition is so special, any ethical system which is in a certain technical sense “reasonable” is equivalent to total utilitarianism.

but then your actual argument includes steps like these:

The most obvious way of defining an ethics of populations is to just take an ordering of individual lives and “glue them together” in an order-preserving way, like I did above.

which, please note, does not amount to any sort of argument that we must or even should just glue values-of-lives together in this sort of way.

I do not see any sign in what you have written that Hölder’s theorem is doing any real work for you here. It says that an archimedean totally ordered group is isomorphic to a subgroup of (R,+) -- but all the contentious stuff about total utilitarianism is already there by the time you suppose that utilities form an archimedean totally ordered group and that combining people is just a matter of applying the group operation to their individual utilities.

• which, please note, does not amount to any sort of argument that we must or even should just glue values-of-lives together in this sort of way.

Thanks for the feedback, I should’ve used clearer terminology.

I do not see any sign in what you have written that Hölder’s theorem is doing any real work for you here

This seems to be the consensus. It’s very surprising to me that we get such a strong result from only the l-group axioms, and the fact that his result is so celebrated seems to indicate that other mathematicians find it surprising too, but the commenters here are rather blasé.

Do you think giving examples of how many things completely unrelated to addition are groups (wallpaper groups, Rubik’s cube, functions under composition, etc.) would help show that the really restrictive axiom is the archimedean one?

• I should’ve used clearer terminology

It doesn’t seem to me like the issue is one of terminology, but maybe I’m missing something.

Do you think giving examples [...] would help show that the really restrictive axiom is the archimedean one?

I’m not convinced that it is. The examples you give aren’t ordered groups, after all.

It’s unclear to me whether your main purpose here is to exhibit a surprising fact about ethics (which happens to be proved by means of Hölder’s theorem) or to exhibit an interesting mathematical theorem (which happens to have a nice illustration involving ethics). From the original posting it looked like the former, but what you’ve now written seems to suggest the latter.

My impression is that the blasé-ness is aimed more at the alleged application to ethics rather than denying that the theorem, as a mathematical theorem, is interesting and surprising.

• Two points:

1. I don’t know the Hölder theorem, but if it actually depends on the lattice being a group, that includes an extra assumption of the existence of a neutral element and inverse elements. The neutral element would have to be a life of exactly zero value, so that killing that person off wouldn’t matter at all, either positively or negatively. The inverse elements would mean that for every happy life you can imagine an exactly opposite unhappy life, so that killing off both leaves the world exactly as good as before.

2. Proving this might be hard for infinite cases, but it would be trivial for finitely generated groups. Most Less Wrong utilitarians would believe there are only finitely many brain states (otherwise simulations are impossible!) and utility is a function of brain states. That would mean only finitely many utility levels, and then the result is obvious. The mathematically interesting part is that it still works if we go infinite on some things but not on others, but that’s not relevant to the general Less Wrong belief system.

(Also, here I’m discussing the details of utilitarian systems arguendo, but I’m sticking with the general claim that all of them are mathematically inconsistent or horrible under Arrow’s theorem.)

• it would be trivial for finitely generated groups… That would mean only finitely many utility levels and then the result is obvious

Z^2 lexically ordered is finitely generated, and can’t be embedded in (R,+). [EDIT: I’m now not sure if you meant “finitely generated” or “finite” here. If it’s the latter, note that any ordered group must be torsion-free, which obviously excludes finite groups.]
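For concreteness, Python tuples already compare lexicographically, which makes the non-archimedean behaviour of Z^2 easy to see (my own illustration):

```python
# Z^2 with the lexicographic order is a finitely generated totally ordered
# group, yet it is non-archimedean: no multiple of (0,1) ever reaches (1,0),
# which is why it cannot embed in (R,+).
def times(n, g):
    return (n * g[0], n * g[1])

for n in range(1, 10_000):
    assert times(n, (0, 1)) < (1, 0)   # tuples compare lexicographically
```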

But your implicit point is valid (+1): I should’ve spent more time explaining why this result is surprising. Just about every comment on this article is “this is obvious because …”, which I guess is an indication LWers are so immersed in utilitarianism that counter-examples don’t even come to mind.

• I’m a bit out of my depth here. I understood an “ordered group” as a group with an order on its elements. That clearly can be finite. If it’s more than that, the question would be why we should assume whatever further axioms characterize it.

• If it’s more than that the question would be why we should assume whatever further axioms characterize it

From Wikipedia:

a partially ordered group is a group (G,+) equipped with a partial order “≤” that is translation-invariant; in other words, “≤” has the property that, for all a, b, and g in G, if a ≤ b then a+g ≤ b+g and g+a ≤ g+b

So if a > 0, then a+a > a, etc., which means the group has to be torsion-free.

• If the inequitable society has greater total utility, it must be at least as good as the equitable one.

No, the premises don’t necessitate that. “A is at least as good as B”, in our language, is ¬(A < B). But you’ve stated that the lack of an edge from A to B says nothing about whether A < B; now you’re talking like if the premises don’t conclude that A < B they must conclude ¬(A < B), which is kinda affirming the consequent.

It might have been a slip of the tongue, or it might be an indication that you’re overestimating the significance of this alignment. These premises don’t prove that a higher-utility inequitable society is at least as good as a lower-utility equitable one. They merely don’t disagree.

I may be wrong here, but it looks as though, just as the premises support (A < B) ⇒ (utility(A) < utility(B)), they also support (A < B) ⇒ (normalizedU(A) < normalizedU(B)), such that normalizedU(World) = sum(log(utility(life)) for life in elements(World)), a perfectly reasonable sort of population utilitarianism where utility monsters are fairly well seen to. In this case equality would usually yield greater betterness than inequality, despite it being permitted by the premises.

• But you’ve stated that the lack of an edge from A to B says nothing about whether A < B, now you’re talking like if the premises don’t conclude that A < B they must conclude ¬(A < B), which is kinda affirming the consequent.

This is a good point; what I was trying to say is slightly different. Basically, we know that (A < B) ==> (f(A) < f(B)), where f is our order embedding. So it is indeed true that f(A) > f(B) ==> ¬(A < B), by modus tollens.

just as the premises support (A < B) ⇒ (utility(A) < utility(B)), they also support (A < B) ⇒ (normalizedU(A) < normalizedU(B)), such that normalizedU(World) = sum(log(utility(life)))

Yeah, that’s a pretty clever way to get around the constraint. I think my claim “If the inequitable society has greater total utility, it must be at least as good as the equitable one” would still hold though, no?

• “If the inequitable society has greater total utility, it must be at least as good as the equitable one” would still hold though, no?

Well… yeah, technically. But consider, for example, the model ( worlds={A, B}, f(W)=sum(log(felicity(e)) for e in population(W)) ), with world A=(2,2,2,2) and world B=(1,1,1,9). f(A) ≥ f(B), i.e. ¬(f(A) < f(B)), so ¬(A < B); i.e., the equitable society is also at least as good as the inequitable, higher-sum-utility one. So if you want to support all embeddings via summation of an increasing function of the units’ QoL... I’d be surprised if those embeddings had anything in common aside from what the premises required. I suspect anything that agreed with all of them would require all worlds the original premises don’t relate to be equal, i.e., ¬(A<B) ∧ ¬(B<A).
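The numbers in that model check out (a sketch of the commenter’s construction; the felicity values are the ones from the comment):

```python
from math import log

# f(W) = sum of log-felicities, per the comment's model.
def f(world):
    return sum(log(x) for x in world)

A = (2, 2, 2, 2)   # equitable world
B = (1, 1, 1, 9)   # inequitable world with higher total utility
assert sum(B) > sum(A)   # 12 > 8
assert f(A) > f(B)       # the log embedding ranks A above B anyway
```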

… looking back, I’m opposed to your implicit definition of a “baseline”; the original population partial-ordering premises are the baseline here, not total utilitarianism.

• In your “Increasing population size” section, you put “Medium, Medium” as more valuable than “Medium”, but that doesn’t seem to derive from the premises you’d been using so far (apart from the “glue them together” part). I found that surprising, since you seem to go to bigger lengths to justify other things that seem more self-evident to me.

• Would Xodarap agree that the premises are (assuming we have operator overloads for multisets rather than sets)

• the better set is a superset: (A ⊂ B) ⇒ (A < B)

• or everything in the better set that’s not in the worse set is better than everything that’s in the worse set that’s not in the better set: (∀a∈(A\B), b∈(B\A): value(a) < value(b)) ⇒ (A < B)

• Yeah, maybe things just get worse and worse as you add more people, but uniformly, so that adding another person preserves ordering :P

• If you change the value of “Medium” from “1” to “-5” while leaving the other two states the same, your conclusion no longer holds. For example, on your last graph, (Very Good, Medium) would outrank (Very Good), even though the former has a value of −2 and the latter of +3. This suggests your system doesn’t allow negative utilities, which seems bad because intuitively it’s possible for utility to sometimes be negative (e.g. euthanasia arguments).

• This suggests your system doesn’t allow negative utilities, which seems bad because intuitively it’s possible for utility to sometimes be negative (e.g. euthanasia arguments).

It must allow negative numbers, or it’s not a group, as (R+,+) is not a group. (Each element must have an inverse which returns that element to the identity element, which for this particular free group is “no one alive”.)

However, I believe this specific issue is solved by the lattice structure. If “Medium” were “-5” instead of “1”, then when you add “Medium” to any universe, you create a lattice element below the original universe, because we know it is worse than the original universe.

• This is a good point; I am now regretting not having given more technical details on what it means to be “order preserving”.

The requirement is that `X > 0 ==> X + Y > Y`. I’ve generated the graph under the assumption that `Medium > 0`, which results in (Very Good, Medium) > (Very Good). Clearly the antecedent doesn’t hold if `Medium < 0`, in which case the graph would go the other direction, as you point out.
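A quick check of that antecedent (a sketch; the embedding values are the ones discussed in the thread):

```python
# With Very Good = 3, adding a "Medium" life raises the total only when
# Medium > 0; with Medium = -5 the edge reverses, as described above.
very_good = 3
for medium in (1, -5):
    combined = very_good + medium
    if medium > 0:
        assert combined > very_good   # (Very Good, Medium) > (Very Good)
    else:
        assert combined < very_good   # the edge points the other way
```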

• First, I think that what you call lattice order is more like partial order, unless you can also show that a join always exists. The pictures have it, but I am not convinced that they constitute a proof.

There might be circumstances where we are uncertain whether or not P is better than Q, but if we are certain, then it must be that P has greater total utility than Q.

It looks like all you have “shown” is that if you embed some partial order into a total order, then you can map this total ordering into the integers. I am not a mathematician, but this seems rather trivial.

• First, I think that what you call lattice order is more like partial order, unless you can also show that a join always exists. The pictures have it, but I am not convinced that they constitute a proof.

I agree, I didn’t show this. It’s not hard, but it’s a bit of writing to prove that (x1x2 ∨ y1y2) = (x1 ∨ y1)(x2 ∨ y2), which inductively shows that this is an l-group.

It looks like all you have “shown” is that if you embed some partial order into a total order, then you can map this total ordering into the integers. I am not a mathematician, but this seems rather trivial.

It’s not a total order, nor is it true that all totally ordered groups can be embedded into Z (consider R^2, lexically ordered, for example. Heck, even R itself can’t be mapped to Z, since it’s uncountable!). So not only would this be a non-trivial proof, it would be an impossible one :-)

• Not all, just countable...

• Not all, just countable...

Z^2 lexically ordered is countable but can’t be embedded in Z.

It seems like your intuition is shared by a lot of LW though; people seem to think it’s “obvious” that these restrictions result in total utilitarianism, even though it’s actually pretty tricky.

• 27 Oct 2013 19:05 UTC

If the inequitable society has greater total utility, it must be at least as good as the equitable one.

Well, yes. The badness of inequality will show up in the utilities. Once you’ve mapped states of society onto utilities, you’ve already taken it into account. You still need an additional empirical argument to say anything interesting (for example, that a society with an equal distribution of wealth is not as good as a society with slightly more total wealth in an inequitable distribution; that may or may not be what you had in mind, but it seemed worth clarifying).

• The badness of inequality will show up in the utilities

Sure. This is probably not a majority opinion on LW, but there are a lot of people who believe that equality is good even beyond utility maximization (cf. Rawls). That’s what I was trying to get at when I said:

In fact, it is so bad that there are circumstances where increasing equality is good even if people are, on average, worse off.