Eudaimonic Utilitarianism

Eliezer Yudkowsky has on several occasions used the term “Eudaimonia” to describe an objectively desirable state of existence. While the meta-ethics sequence on Less Wrong has been rather emphatic that simple universal moral theories are inadequate due to the complex nature of human values, one wonders what would happen if we tried anyway to build a moral theory around the notion of Eudaimonia. The following is a cursory attempt to do so. Even if you don’t agree with everything I say here, I ask that you please bear with me to the end before making judgments about this theory. Also, if you choose to downvote this post, please offer some criticism in the comments to explain why. I am admittedly new to posting in the Less Wrong community and would greatly appreciate your comments and criticisms. Even though I use imperative language to argue my ideas, I consider this theory to be a work in progress at best. So without further ado, let us begin…

Classical Utilitarianism allows for situations where you could theoretically justify universal drug addiction as a way to maximize happiness, if you could find some magical drug that made people super happy all the time with no side effects. In Aldous Huxley’s Brave New World, a drug called Soma is used to sedate the entire population, making them docile, dependent, and very, very happy. Now, John Stuart Mill does argue that some pleasures are of a higher quality than others, but how exactly do you define and compare that quality? What exactly makes Shakespeare better than reality TV? Arguably a lot of people are bored by Shakespeare and made happier by reality TV.

Enter Aristotle. Aristotle had his own definition of happiness, which he called Eudaimonia. Roughly translated, it means “human flourishing”. It is a complex concept, but I like to think of it as “reaching your full potential as a human being”, “being the best that you can be”, “fulfilling your purpose in life”, and “authentic happiness” (based on the existential notion of authenticity). Perhaps a better way to explain it is this: the Classical Utilitarian concept of happiness is subjective. It is just the happiness that you feel, given your limited understanding of everything. The Eudaimonic Utilitarian concept of happiness is objective. It is the happiness you would have if you knew everything that was really happening. If you, from the perspective of an impartial observer, knew the total truth (perfect information), would you be happy with the situation? You would probably only be truly happy if you were in the process of being the best possible you, and if it was the best possible reality. Theists have another name for this: God’s Will (see Divine Benevolence, or an Attempt to Prove That the Principal End of the Divine Providence and Government is the Happiness of His Creatures (1731) by Thomas Bayes; yes, that Bayes).

Looking at the metaphor of God, an omnibenevolent God wants everyone to be happy. But more than just happy as docile creatures, He wants them to fulfill their purpose and destiny and achieve their fullest potential for greatness, because doing so allows them to contribute so much more to everything, and makes the whole universe and His creation better. Now, it’s quite possible that God does not exist. But His perspective, that of the impartial observer with perfect information and rationality, is still a tremendously useful one from which to make the best moral decisions, and it is essentially the perspective that Eudaimonic Utilitarianism would like to be able to reason from.

Such happiness would be based on perfect rationality, and on the assumption that happiness is the emotional goal state: the state that we achieve when we accomplish our goals, which is to say, when we are being rational and engaging in rational activity, also known as Arête. For this reason, Eudaimonia as a state is not necessarily human-specific. Any rational agent with goals, including, say, a Paperclip Maximizer, might reach a Eudaimonic state even if it isn’t “sentient” or “intelligent” in the way that we would understand it. It need not “feel happy” in a biochemical manner; it need only be goal-directed and have some sort of desired success state. Though I could argue that this desired success state would be the mechanical equivalent of happiness to a Really Powerful Optimization Process: that in its own way the Paperclip Maximizer feels pleasure when it succeeds at maximizing paperclips, and pain when it fails to do so.

Regardless, Eudaimonia would not be maximized by taking Soma. Nor would Eudaimonia be achieved by hooking up to the Matrix, even if the Matrix were a perfect utopia of happiness, because that utopia and that happiness aren’t real. They’re a fantasy, a drug that prevents you from actually living and being who you’re supposed to be, who you can be. You would be living a lie. Eudaimonia is based on the truth. It is based on reality and what can and should be done. It requires performing rational activity and actually achieving goals. It is an optimization given all the data.

I have begun by explaining how Eudaimonic Utilitarianism is superior to Classical Utilitarianism. I will now try to explain how Eudaimonic Utilitarianism is both superior to and compatible with Preference Utilitarianism. Regular Preference Utilitarianism is arguably even more subjective than Classical Utilitarianism. With Preference Utilitarianism, you’re essentially saying that whatever people think is in their interests is what should be maximized. But this assumes that their preferences are rational. In reality, most people’s preferences are strongly influenced by emotions and bounded rationality.

For instance, take the example of a suicidal and depressed man. Due to emotional factors, this man has the irrational desire to kill himself. Preference Utilitarianism would either have to accept this preference even though most would agree it is objectively “bad” for him, or do something like declare this “manifest” preference inferior to the man’s “true” preferences. “Manifest” preferences are what a person’s actual behaviour would suggest, while “true” preferences are what they would have if they could view the situation with all relevant information and rational care. But how do we go about determining a person’s “true” preferences? Do we not have to resort to some kind of objective criterion of what is rational behaviour?

But where does this objective criterion come from? A Classical Utilitarian would argue that suicide would negate all the potential happiness that the person could feel in the future, and that rationality is what maximizes happiness. A Eudaimonic Utilitarian would go further and state that if the person knew everything, both their happiness and their preferences would be aligned towards rational activity; therefore not only would their objective happiness be maximized by not committing suicide, but their “true” preferences would be satisfied as well. Eudaimonia therefore is the objective criterion of rational behaviour. It is not merely subjective preference, but a kind of objective preference based on perfect information and perfect rationality.

Preference Utilitarianism only really works as a moral theory if the person’s preferences are based on rationality and complete knowledge of everything. This is precisely the position that Eudaimonic Utilitarianism assumes. It holds that what should be maximized is the person’s preferences as they would be if that person were completely rational and knew everything, because those preferences would naturally align with achieving Eudaimonia.

Therefore, Eudaimonic Utilitarianism can be seen as a merging, a unification, of Classical and Preference Utilitarianism, because from the perspective of an objective, impartial observer, the state of Eudaimonia is simultaneously happiness and rational preference, achieved through Arête, or rational activity, which is equivalent to “doing your best” or “maximizing your potential”.

Preference Utilitarianism is neutral as to whether or not to take Soma or plug into the Utopia Matrix. For Preference Utilitarianism, it’s up to the individual’s “rational” preference. Eudaimonic Utilitarianism, on the other hand, would argue that it is only rational to take Soma or plug into the Utopia Matrix if doing so still allows you to achieve Eudaimonia, which is unlikely, as doing so prevents one from performing Arête in the real world. At the very least, rather than basing the decision on a subjective preference, we are now using an objective evaluation function.

The main challenge of Eudaimonic Utilitarianism, of course, is that we as human beings with bounded rationality do not have access to the position of God with regards to perfect information. Nevertheless, we can still apply Eudaimonic Utilitarianism in everyday scenarios.

For instance, consider the problem of adultery. A common criticism of Classical Utilitarianism is that it doesn’t condemn acts like adultery, because at first glance such an act seems like it would increase net happiness and therefore be condoned. This does not take into account the probability of being caught, however. Given uncertainty, it is usually safe to assume a uniform distribution of probabilities, which means that getting caught has a 0.5 probability. We must then compare the utilities of not getting caught and getting caught. It doesn’t really matter what the exact numbers are, so much as the relative relationship of the values. So for instance, we can say that adultery in the not-getting-caught scenario has a +5 effect on each party to the adultery, for a total of +10. However, in the getting-caught scenario, there is a +5 to the uncoupled member, but a −20 to the coupled member and a −20 to the wronged partner, due to the potential falling out and loss of trust resulting from the discovered adultery.

|  | Commit Adultery | Don’t Commit Adultery |
| --- | --- | --- |
| Truth Discovered | -35 effect x 0.5 probability | 0 effect x 0.5 probability |
| Truth Not Discovered | +10 effect x 0.5 probability | 0 effect x 0.5 probability |
| Potential Consequences | -12.5 | 0 |

Thus the net total effect of adultery in the getting-caught scenario is −35. If we assign the probabilities to each scenario, +10 × 0.5 = +5, while −35 × 0.5 = −17.5, and +5 − 17.5 = −12.5. The probable net effect of adultery is therefore negative, and the act is therefore morally wrong.
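For concreteness, here is a minimal Python sketch of this calculation. The payoff numbers are the illustrative ones from the table above, and the `expected_utility` helper is just something I am introducing for the example:

```python
# Illustrative payoffs from the table above (arbitrary units).
P_CAUGHT = 0.5  # assumed uniform prior over getting caught or not

# Net effect across all parties in each scenario.
caught_effect = 5 - 20 - 20  # uncoupled member (+5), coupled member (-20), wronged partner (-20)
not_caught_effect = 5 + 5    # +5 to each party to the adultery

def expected_utility(p_caught, caught, not_caught):
    """Probability-weighted sum of the net effects of the two scenarios."""
    return p_caught * caught + (1 - p_caught) * not_caught

print(expected_utility(P_CAUGHT, caught_effect, not_caught_effect))  # -12.5
```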

But what if getting caught is very unlikely? Well, we can show that to a true agnostic at least, the probability of getting caught would be at least 0.5, because if we assume total ignorance, the probability that God and/or an afterlife exist would be a uniform distribution, as suggested by the Principle of Indifference and the Principle of Maximum Entropy. Thus there is at least a 0.5 chance that eventually the other partner will find out. But assuming instead a strong atheistic view, there is the danger that, hypothetically, if the probability of the truth not being discovered were 1, this calculation would actually suggest that committing adultery is moral.
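To make the dependence on that probability explicit, here is a short sketch, using the same assumed payoffs, that locates the break-even point at which the classical verdict flips:

```python
# Classical expected utility as a function of the probability p of being caught:
#   E(p) = p * (-35) + (1 - p) * (+10) = 10 - 45p
# which crosses zero at the break-even probability p* = 10/45.
p_star = 10 / 45
print(round(p_star, 3))  # 0.222

# Below p*, the classical criterion scores adultery as net positive.
for p in (0.0, 0.1, 0.25, 0.5, 1.0):
    print(p, 10 - 45 * p)
```

So under the classical criterion, the verdict hinges entirely on how detectable the act is.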

The previous example is based on the subjective happiness of Classical Utilitarianism, but what if we used a criterion of Eudaimonia, or the objective happiness we would feel if we knew everything? In that case the adultery scenario looks even more negative.

In this instance, we can say that adultery in the not-getting-caught scenario has a +5 effect on each party to the adultery, but also a −20 to the partner who is being wronged, because that is how much they would suffer if they knew, for a net −10. In the getting-caught scenario, there is a +5 to the uncoupled member, but a −20 to the coupled member and an additional −20 to the partner being wronged, due to the potential falling out and loss of trust resulting from the discovered adultery.

|  | Commit Adultery | Don’t Commit Adultery |
| --- | --- | --- |
| Truth Discovered | -35 effect x 0.5 probability | 0 effect x 0.5 probability |
| Truth Not Discovered | -10 effect x 0.5 probability | 0 effect x 0.5 probability |
| Potential Consequences | -22.5 | 0 |

As you can see, with a Eudaimonic Utilitarian criterion, even if the probability of the truth not being discovered were 1, the expected effect would still be negative, and the act therefore morally wrong. Thus, whereas Classical Utilitarianism based on subjective happiness rests its case against adultery on the probability of being caught and the potential negative consequences, Eudaimonic Utilitarianism makes a stronger case: adultery is always wrong because, regardless of the probability of being caught, the consequences are inherently negative. It is therefore unnecessary to resort to traditional Preference Utilitarianism to capture our moral intuitions about adultery.
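Expressed in the same sketch as before, the eudaimonic criterion simply counts the wronged partner’s −20 in both scenarios, and the sign of the result then no longer depends on the probability of being caught (all numbers are still the illustrative ones from the tables):

```python
def expected_utility(p_caught, caught, not_caught):
    return p_caught * caught + (1 - p_caught) * not_caught

classical = {"caught": -35, "not_caught": 10}
eudaimonic = {"caught": -35, "not_caught": -10}  # wronged partner's -20 counted even when undiscovered

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, expected_utility(p, **classical), expected_utility(p, **eudaimonic))
# The classical column turns positive as p falls toward 0;
# the eudaimonic column stays negative for every p.
```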

Consider another scenario. You are planning a surprise birthday party for your friend, and she asks you what you are doing. You can either tell the truth or lie. Classical Utilitarianism would say to lie, because the happiness of the surprise birthday party outweighs the happiness of being told the truth. Preference Utilitarianism, however, would argue that it is rational for the friend to want to know the truth and not have her friends lie to her generally, and that this would be her “true” preference. Thus, Preference Utilitarianism would argue in favour of telling the truth and spoiling the surprise. The happiness that the surprise would cause does not factor into Preference Utilitarianism at all, and the friend has no prior preference for a surprise party she doesn’t even know about.

What does Eudaimonic Utilitarianism say? Well, if the friend really knew everything that was going on, would she be happier and prefer to know the truth in this situation, or be happier and prefer not to know? I would suggest she would be happier and prefer not to know, in which case Eudaimonic Utilitarianism agrees with Classical Utilitarianism and says we should lie to protect the secret of the surprise birthday party.

Again, what’s the difference between Eudaimonia and preference-fulfillment? Basically, preference-fulfillment is based on people’s subjective preferences, while Eudaimonia is based on objective well-being, or as I like to explain it, the happiness they would feel if they had perfect information.

The difference is somewhat subtle, to the extent that a person’s “true” preferences are supposed to be “the preferences he would have if he had all the relevant factual information, always reasoned with the greatest possible care, and were in a state of mind most conducive to rational choice” (Harsanyi 1982). Note that relevant factual information is not the same thing as perfect information.

For instance, take the classic criticism of Utilitarianism in the form of the scenario where you hang an innocent man to satisfy the desire for justice of an unruly mob. Under both hedonistic and preference utilitarianism, the hanging of the innocent man can be justified, because hanging him satisfies both the happiness of the mob and the preferences of the mob. However, hanging an innocent man does not satisfy the Eudaimonia of the mob, because if the people in the mob knew that the man was innocent and were truly rational, they would not want to hang him after all. Note that they have this knowledge only under perfect information, as it is assumed that the man appears guilty to all rational parties even though he is actually innocent.

So, Eudaimonia assumes that in a hypothetical state of perfect information and rationality (that is to say, objectivity), a person’s happiness would best be satisfied by actions that might differ from what they might prefer in their normal subjective state, and that we should commit to the actions that satisfy this objective happiness (or well-being), rather than satisfy subjective happiness or subjective preferences.

For instance, we can take John Rawls’s example of the grass-counter: “Imagine a brilliant Harvard mathematician, fully informed about the options available to her, who develops an overriding desire to count the blades of grass on the lawns of Harvard.” Under both hedonistic and preference utilitarianism, this would be acceptable. However, a Eudaimonic interpretation would argue that counting blades of grass does not maximize her objective happiness, that there is an objective state of being that would actually make her happier, even if it went against her personal preferences, and that this state of being is what should be maximized. Similarly, consider the rational philosopher who has come to the conclusion that life is meaningless and not worth living, and therefore develops a preference to commit suicide. This would be his “true” preference, but it would not maximize his Eudaimonia. For this reason, we should try to persuade the suicidal philosopher not to commit suicide, rather than help him do so.

How does Eudaimonia compare with Eliezer Yudkowsky’s concept of Coherent Extrapolated Volition (CEV)? Like Eudaimonia, CEV is based on what an idealized version of us would want “if we knew more, thought faster, were more the people we wished we were, had grown up farther together”. This is similar to, but not the same thing as, an idealized version of us with perfect information and perfect rationality. Arguably, Eudaimonia is an extreme form of CEV that takes these idealizations to their limit.

Furthermore, CEV assumes that the desires of humanity converge. The concept of Eudaimonia does not require this. The Eudaimonia of different sentient beings may well conflict, in which case Eudaimonic Utilitarianism takes the Utilitarian route and suggests the compromise of maximizing Eudaimonia for the greatest number of sentient beings, with a hierarchical preference for more conscious beings, such as humans, over, say, ants. This is not to say that humans are necessarily absolute utility monsters to the ants. One could instead set things up so that humans are much more heavily weighted in the moral calculus by their level of consciousness, though that could conceivably lead to a situation where a billion ants are more heavily weighted than a single human. If such a notion is anathema to you, then perhaps making humans absolute utility monsters may seem reasonable to you after all. However, keep in mind that the same argument can be made that a superintelligent A.I. is a utility monster to humans. The idea that seven billion humans might outweigh one superintelligent A.I. in the moral calculus may then not seem so bad.
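As a toy illustration of that weighting worry, here is a sketch in which the consciousness weights and population counts are entirely made-up numbers of my own, not anything proposed in this post or elsewhere:

```python
def weighted_standing(populations, weights):
    """Total moral weight: sum of count times consciousness weight per kind of being."""
    return sum(count * weights[kind] for kind, count in populations.items())

# Hypothetical consciousness weights: an ant counted at two-billionths of a human.
weights = {"human": 1.0, "ant": 2e-9}

print(weighted_standing({"human": 1}, weights))    # 1.0
print(weighted_standing({"ant": 10**9}, weights))  # 2.0, so a billion ants outweigh one human
```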

In any case, Eudaimonic Utilitarianism does away with many of the unintuitive weaknesses of both Classical Hedonistic Utilitarianism and Preference Utilitarianism. It validates our intuitions about the importance of authenticity and rationality in moral behaviour. It also attempts to unify morality and rationality. Though it is not without its issues, not least that it incorporates a very simplified view of human values, I nevertheless offer it as an alternative to other existing forms of Utilitarianism for your consideration.