Cultivating our own gardens

This is a post about moral philosophy, approached with a mathematical metaphor.

Here’s an interesting problem in mathematics. Let’s say you have a graph, made up of vertices and edges, with weights assigned to the edges. Think of the vertices as US cities and the edges as roads between them; the weight on each road is the length of the road. Now, knowing only this information, can you draw a map of the US on a sheet of paper? In mathematical terms, is there an isometric embedding of this graph in two-dimensional Euclidean space?
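To make the question concrete, here is a minimal sketch (in Python, with made-up distances; not from the paper discussed below) of the basic placement step: put down two points, locate a third by the law of cosines, and then check whether a fourth point's claimed distances can all be honored at once.

```python
import math

def place_third(d12, d13, d23):
    """Embed three points with the given pairwise distances:
    p1 at the origin, p2 on the x-axis, p3 by the law of cosines."""
    x = (d12**2 + d13**2 - d23**2) / (2 * d12)
    y = math.sqrt(max(d13**2 - x**2, 0.0))
    return (0.0, 0.0), (d12, 0.0), (x, y)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Distances taken from a unit square, so an exact embedding exists:
p1, p2, p3 = place_third(1.0, math.sqrt(2), 1.0)   # p3 lands at (1, 1)

# Place a fourth point using only its distances to p1 (1.0) and p2 (sqrt 2):
x4 = (1.0**2 + 1.0**2 - 2.0) / (2 * 1.0)
y4 = math.sqrt(max(1.0**2 - x4**2, 0.0))
p4 = (x4, y4)                                      # lands at (0, 1)

# Now the remaining edge p3-p4 is a consistency check, not a free choice:
ok  = abs(dist(p3, p4) - 1.0) < 1e-9   # a claimed d34 = 1 is realizable
bad = abs(dist(p3, p4) - 2.0) < 1e-9   # a claimed d34 = 2 cannot be honored
```

Each local placement is easy; the last edge either agrees with the choices already made or it doesn't, which is the synchronization problem in miniature.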

When you think about this for a minute, it’s clear that this is a problem about reconciling the local and the global. Start with New York and all its neighboring cities. You have a sort of star shape. You can certainly draw this on the plane; in fact, you have many degrees of freedom, so you can arbitrarily pick one way to draw it. Now start adding more cities and more roads, and eventually the degrees of freedom diminish. If you made the wrong choices early on, you might paint yourself into a corner and have no way to keep all the distances consistent when you add a new city. This is known as a “synchronization problem.” Getting it to work locally is easy; getting all the local pieces reconciled with each other is hard.

This is a lovely problem, and some acquaintances of mine have written a paper about it (http://www.math.princeton.edu/~mcucurin/Sensors_ASAP_TOSN_final.pdf). I’ll pick out some insights that seem relevant to what follows. First, some obvious approaches don’t work very well. It might be thought that we want to optimize over all possible embeddings, picking the one that has the lowest error in approximating distances between cities. You come up with a “penalty function” that’s some sort of sum of errors, and use standard optimization techniques to minimize it. The trouble is, these approaches tend to work spottily; in particular, they sometimes pick out local rather than global optima (so the error can be quite high after all).
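A bare-bones sketch of what such a penalty-function approach looks like (a numeric gradient descent on the sum of squared distance errors, with invented step sizes; the paper's authors use more sophisticated machinery than this):

```python
import math, random

def stress(coords, dists):
    """Penalty function: sum of squared errors between embedded
    and target distances."""
    s = 0.0
    for (i, j), d in dists.items():
        cur = math.hypot(coords[i][0] - coords[j][0],
                         coords[i][1] - coords[j][1])
        s += (cur - d) ** 2
    return s

def descend(coords, dists, lr=0.02, steps=2000):
    """Plain gradient descent on the stress; may settle in a local optimum."""
    n = len(coords)
    for _ in range(steps):
        grad = [[0.0, 0.0] for _ in range(n)]
        for (i, j), d in dists.items():
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            cur = math.hypot(dx, dy) or 1e-9
            g = 2 * (cur - d) / cur
            grad[i][0] += g * dx; grad[i][1] += g * dy
            grad[j][0] -= g * dx; grad[j][1] -= g * dy
        coords = [[coords[k][0] - lr * grad[k][0],
                   coords[k][1] - lr * grad[k][1]] for k in range(n)]
    return coords

# Six target distances from a unit square (corners 0..3, diagonals included):
SQUARE = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0,
          (0, 2): math.sqrt(2), (1, 3): math.sqrt(2)}

random.seed(0)
start = [[random.random(), random.random()] for _ in range(4)]
finish = descend(start, SQUARE)
```

The penalty drops from the random start, but which optimum it settles in depends on the initialization, which is exactly the spottiness complained about above.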

The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can only be embedded in one way (that’s called rigidity), and then “stitch” them together consistently. The “stitching” is done with a very handy trick involving eigenvectors of sparse matrices. But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.
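A toy version of the stitching step (only the rotation-and-translation alignment of two patches along a shared edge; the eigenvector machinery that makes the reflections and the global assembly consistent is the paper's real contribution and is not shown here):

```python
import math

def rigid_align(src, dst):
    """Return the rotation+translation taking src[0] -> dst[0] and
    src[1] -> dst[1] (assumes both pairs are the same distance apart)."""
    ang = (math.atan2(dst[1][1] - dst[0][1], dst[1][0] - dst[0][0])
           - math.atan2(src[1][1] - src[0][1], src[1][0] - src[0][0]))
    c, s = math.cos(ang), math.sin(ang)
    sx, sy = src[0]
    ox, oy = dst[0]
    def apply(p):
        x, y = p[0] - sx, p[1] - sy
        return (ox + c * x - s * y, oy + s * x + c * y)
    return apply

# Patch 1, already embedded: the shared edge B-C plus a private vertex A.
patch1 = {'B': (0.0, 0.0), 'C': (1.0, 0.0), 'A': (0.5, 1.0)}
# Patch 2, embedded in its own arbitrary local frame, sharing B and C.
patch2 = {'B': (2.0, 3.0), 'C': (2.0, 4.0), 'D': (3.0, 3.5)}

f = rigid_align((patch2['B'], patch2['C']), (patch1['B'], patch1['C']))
stitched = {name: f(p) for name, p in patch2.items()}
# B and C now coincide with patch 1's copies; D is carried along.
```

Because each patch is rigid, its internal distances survive the move unchanged; all the stitching has to decide is where the patch sits as a whole.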

Now, rather daringly, I want to apply this idea to ethics. (This is an expansion of an earlier post that people seemed to like.)

The thing is, human values differ enormously. The diversity of values is an empirical fact. The Japanese did not have a word for “thank you” until the Portuguese gave them one; this is a simple example, but it absolutely shocked me, because I thought “thank you” was a universal concept. It’s not. (Edited for lack of fact-checking.) And we do not all agree on what virtues are, or what the best way to raise children is, or what the best form of government is. There may be no principle that all humans agree on—dissenters who believe that genocide is a good thing may be pretty awful people, but they undoubtedly exist. Creating the best possible world for humans is a synchronization problem, then—we have to figure out a way to balance values that inevitably clash. Here, nodes are individuals, each individual is tied to its neighbors, and a choice of embedding is a particular action. The worse the embedding near an individual fits the “true” underlying manifold, the greater the “penalty function” and the more miserable that individual is, because the action goes against what he values.

If we can extend the metaphor further, this is a problem for utilitarianism. Maximizing something globally—say, happiness—can be a dead end. It can hit a local maximum—the maximum for those people who value happiness—but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor. We can’t really optimize, because a lot of people’s values are other-regarding: we want Aunt Susie to stop smoking, because of the principle of the thing. Or more seriously, we want people in foreign countries to stop performing clitoridectomies, because of the principle of the thing. And Aunt Susie or the foreigners may feel differently. When you have a set of values that extends to the whole world, conflict is inevitable.

The analogue to breaking down the graph is to keep values local. You have a small star-shaped graph of people you know personally and actions you’re personally capable of taking. Within that star, you define your own values: what you’re ready to cheer for, work for, or die for. You’re free to choose those values for yourself—you don’t have to drop them because they’re perhaps not optimal for the world’s well-being. But beyond that radius, opinions are dangerous: both because you’re more ignorant about distant issues, and because you run into this problem of globally reconciling conflicting values. Reconciliation is only possible if everyone is minding their own business, if things are really broken down into rigid components. It’s something akin to what Thomas Nagel said against utilitarianism:

“Absolutism is associated with a view of oneself as a small being interacting with others in a large world. The justifications it requires are primarily interpersonal. Utilitarianism is associated with a view of oneself as a benevolent bureaucrat distributing such benefits as one can control to countless other beings, with whom one can have various relations or none. The justifications it requires are primarily administrative.” (Mortal Questions, p. 68.)

Anyhow, trying to embed our values on this dark continent of a manifold seems to require breaking things down into little local pieces. I think of that as “cultivating our own gardens,” to quote Candide. I don’t want to be so confident as to have universal ideologies, but I think I may be quite confident and decisive in the little area that is mine: my personal relationships; my areas of expertise, such as they are; my own home and what I do in it; everything that I know I love and is worth my time and money; and bad things that I will not permit to happen in front of me, so long as I can help it. Local values, not global ones.

Could any AI be “friendly” enough to keep things local?

• The Japanese did not have a word for “thank you” until the Portuguese gave them one.

This is apocryphal, as can be seen on the Wikipedia page for Portuguese loanwords into Japanese.

• I should add, though, that there are some surprising exceptions to universality out there, like the lack of certain (prima facie important) colour terms or numbers.

However, as someone who used to study languages passionately, I came to reject the stronger versions of the Sapir-Whorf hypothesis (language determines thought). One gradually comes to see that, though differences can be startling, the really odd omissions are often just got around by circumlocutions. For example, Russian lacks the highly specific past tenses English has (was, have been, had been, would have been, would be), but if there is any actual confusion, they just get around it with a few seconds’ explanation. Or in the Japanese example, even supposing it were true, I would expect some other formalized way of showing gratitude; hence the concept of “thanks” would live on even if there was no word used in contexts similar to English “thanks.”

• Indeed; for another example, classical Latin did not use words for “yes” and “no”. A question such as “Do you see it?” would have been answered with “I see it”/“I don’t see it”.

• Alicorn’s note about Chinese probably explains the basis for the eventual “Do not want!” meme, which came from a reverse translation of a crude Chinese translation of Darth Vader saying “Noooooooooooooo!” Link

The Chinese translator probably looked up what “No” means. Translation dictionaries, in turn, recognize that “No” doesn’t have a direct translation, so they list several options, given the context. In the case that “no” is a refusal of something, the translation in Chinese should take the form “[I] do not want [that]”. (If they have to list only one option, they pick the most likely meaning, and that may have been it.)

Then, clumsily using this option, the Chinese translator picked something that translates back as “do not want”.

• Chinese does something similar. “Do you see that?” would be answered affirmatively by saying the word for “See”, or negatively by saying “Don’t see”. In some contexts, the words for “correct” and “incorrect” can be used a bit like “yes” and “no”.

• The common part is that in both Latin and Chinese the subject can be (or is) implicitly included in the verb. Using “I” explicitly, at least in Chinese, would emphasize something along the lines of “but you may not” (due to whatever). (This is at least what I’ve been told; the standard disclaimer on insufficient knowledge applies.)

• Or in the Japanese example, even supposing it were true, I would expect some other formalized way of showing gratitude; hence the concept of “thanks” would live on even if there was no word used in contexts similar to English “thanks.”

Very true. “Thank you” doesn’t really have a meaning the same way other words do, since it’s more of an interjection. When finding the translation for “thank you” in other languages, you just look at what the recipient of a favor says to express appreciation in that language, and call that their “thank you”.

Otherwise, you could argue that “Spanish doesn’t have a word for thank you—but hey, on an unrelated note, native Spanish speakers have this odd custom of saying ‘gratitude’ (gracias) whenever they want to thank someone...”

• Advising people to have values that are convenient for the purposes of creating a social utility function ought not to move them. E.g., if you already disvalue the genital mutilation of people outside your social web, it ought not to weigh morally that such a value is less convenient for the arbiters of overall social utility.

The math certainly sounds interesting, though!

EDIT:

“But if we eat babies the utility function will be a perfect square! C’mon, it’ll make the math SO much easier”

• The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can only be embedded in one way (that’s called rigidity), and then “stitch” them together consistently. The “stitching” is done with a very handy trick involving eigenvectors of sparse matrices. But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.

Forget the context for a moment—this note is a very general, very useful observation!

• 1 Jun 2010 2:29 UTC
4 points

The trouble is, not everyone wants to mind their own business, especially because there are incentives for cultures to engage in economic interaction (which inevitably leads to cultural exchange and thus a conflict of values). Though it would theoretically be beneficial if everyone cultivated their own garden, it seems nearly impossible in practice. Instead of keeping value systems in isolation, which is an extremely difficult task even for an AI, wouldn’t it be better to allow cultures to interact until they homogenize?

• There are optimization problems where a bottom-up approach works well, but sometimes top-down methods, or in most cases methods not so easily labeled, are necessary.

If mathematical optimization is a proper analogy (or even framework) for solving social and ethical problems, then the logical conclusion would be: the approach must depend heavily on the nature of the problem at hand.

Locality has its very important place, but I can’t see how one could address planet-wide “tragedy of the commons”-type issues by purely local methods.

• This post, and the prior comment it refers to, have something to say, but they’re attacking a straw-man version of utilitarianism.

Utilitarianism doesn’t have to mean that you take everybody’s statements about what they prefer about everything in the world and add them together linearly. Any combining function is possible. Utilitarianism just means that you have a function, and you want to optimize it.

Maximizing something globally—say, happiness—can be a dead end. It can hit a local maximum—the maximum for those people who value happiness—but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor.

Then you make a new utility function that takes those values into account.

You might not be able to find a very good optimum using utilitarianism. But, by definition, not being a utilitarian means not looking for an optimum, which means you won’t find any optimum at all except by chance.

If you still want to argue against utilitarianism, you need to come up with some other plausible way of optimizing. For instance, you could make a free-market or evolutionary argument: that it’s better to provide a free market with free agents (or an evolutionary ecosystem) than to construct a utility function, because the agents can optimize collective utility better than a bureaucracy can, without ever needing to know what the overall utility function is.

• I agree with the beginning of your comment. I would add that the authors may believe they are attacking utilitarianism, when in fact they are commenting on the proper methods for implementing utilitarianism.

I disagree that attacking utilitarianism involves arguing for a different optimization theory. If a utilitarian believed that the free market was more efficient at producing utility, then the utilitarian would support it: it doesn’t matter by what means the free market, say, achieved that greater utility.

Rather, attacking utilitarianism involves arguing that we should optimize for something else: for instance, something like the categorical imperative. A famous example of this is Kant’s argument that one should never lie (since lying could never be willed to be a universal law, according to him), and the utilitarian philosopher loves to retort that lying is essential if one is hiding a Jewish family from the Nazis. But Kant would be unmoved (if you believe his writings); all that would matter is these universal principles.

• If you’re optimizing, you’re a form of utilitarian. Even if all you’re optimizing is “minimize the number of times Kant’s principles X, Y, and Z are violated”.

This makes the utilitarian/non-utilitarian distinction useless, which I think it is. Everybody is either a utilitarian of some sort; a nihilist; or a conservative, mystic, or gambler saying “Do it the way we’ve always done it / Leave it up to God / Roll the dice”. It’s important to recognize this, so that we can get on with talking about “utility functions” without someone protesting that utilitarianism is fundamentally flawed.

The distinction I was drawing could be phrased as between explicit utilitarianism (trying to compute the utility function) and implicit utilitarianism (constructing mechanisms that you expect will maximize a utility function that is implicit in the action of a system but not easily extracted from it and formalized).

• I think what you’re calling utilitarianism is typically called consequentialism. Utilitarianism usually connotes something like what Mill or Bentham had in mind—determine each individual’s utility function, then construct a global utility function that is the sum/average of the individuals’. I say connotes because no matter how you define the term, this seems to be what people think when they hear it, so they bring up the tired old cached objections to Mill’s utilitarianism that just don’t apply to what we’re typically talking about here.

• There is a meaningful distinction between believing that utility should be agent-neutral and believing that it should be agent-relative. I tend to assume people are advocating an agent-neutral utility function when they call themselves utilitarian, since, as you point out, it is rather a useless distinction otherwise. What terminology do you use to reflect this distinction, if not utilitarian/non-utilitarian?

It’s the agent-neutral utilitarians that I think are dangerous and wrong. The other kind (if you want to still call them utilitarians) are just saying that the best way to maximize utility is to maximize utility, which I have a hard time arguing with.

• There is a meaningful distinction between believing that utility should be agent-neutral and believing that it should be agent-relative.

Yes; but I’ve never thought of utilitarianism as being on one side or the other of that choice. Very often, when we talk about a utility function, we’re talking about an agent’s personal, agent-centric utility function.

• As an ethical system, it seems to me that utilitarianism strongly implies agent-neutral utility. See the Wikipedia entry, for example. I get the impression that this is what most people who call themselves utilitarians mean.

• I would argue that deriving principles using the categorical imperative is a very difficult optimization problem, and that there is a very meaningful sense in which one is a deontologist and not a utilitarian. If one is a deontologist, then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e., they cannot be violated). In the Kantian approach: given a situation, one has to derive, via moral thinking, the constraints under which one must act in that situation, and then one must act in accordance with those constraints.

This is very closely related to combinatorial optimization problems. I would argue that often there is a “moral dual” (in the sense of a dual program) where those constraints are no longer treated as absolute; you can assign different costs to each violation and then find a most moral strategy. I think very often we have something akin to strong duality, where the utilitarian dual is equivalent to the deontological problem, but it’s an important distinction to remember that the deontologist has hard constraints and zero gradient on their objective function (by some interpretations).

The utilitarian performs a search over a continuous space for the greatest expected utility, while the deontologist (in an extreme case) has a discrete set of choices, from which the immoral ones are successively weeded out.

Both are optimization procedures, and they can be shown to produce very similar output behavior, but the approach and philosophy are very different. The predictions of the behavior of the deontologist and the utilitarian can become quite different under the sorts of situations that moral philosophers love to come up with.
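The contrast can be made concrete with a deliberately crude sketch (all names, options, and numbers are invented for illustration, not a claim about either theory): the utilitarian ranks options by a utility function, while the deontologist merely filters out the options that violate a hard constraint, with no gradient among the survivors.

```python
def utilitarian_choice(options, utility):
    """Soft trade-offs: pick whichever option scores highest."""
    return max(options, key=utility)

def deontologist_choice(options, constraints):
    """Hard constraints, zero gradient: keep every option that violates
    no constraint; the survivors are not ranked against each other."""
    return [o for o in options if all(ok(o) for ok in constraints)]

# Kant's example from above: hiding a family, a liar at the door.
options = ["lie", "stay silent", "tell the truth"]
utility = {"lie": 10, "stay silent": 2, "tell the truth": -5}.get
never_lie = lambda o: o != "lie"

utilitarian_choice(options, utility)        # "lie"
deontologist_choice(options, [never_lie])   # ["stay silent", "tell the truth"]
```

On mundane inputs the two often agree; they diverge exactly when the highest-utility option is one the constraints forbid, which is the kind of situation moral philosophers construct on purpose.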

• If one is a deontologist, then one needs to solve a series of constraint-satisfaction problems with hard constraints (i.e., they cannot be violated).

If all you require is to not violate any constraints, and you have no preference between worlds where equal numbers of constraints are violated, and you can regularly achieve worlds in which no constraints are violated, then perhaps constraint satisfaction is qualitatively different.

In the real world, linear programming typically involves a combination of hard constraints and penalized constraints. If I say the hard-constraint solver isn’t utilitarian, then what term would I use to describe the mixed-case problem?

The critical thing to me is that both are formalizing the problem and trying to find the best solution they can. The objections commonly made to utilitarianism would apply equally to moral absolutism phrased as a hard-constraint problem.

There’s the additional, complicating problem that non-utilitarian approaches may simply not be intelligible. A moral absolutist needs a language in which to specify the morals; the language is so context-dependent that the morals can’t be absolute. Non-utilitarian approaches break down when the agents are not restricted to a single species; they break down further when “agent” means something like “set”.

• To be clear, I see the deontologist optimization problem as being a pure “feasibility” problem: one has hard constraints and zero gradient (or approximately zero gradient) on the moral objective function given all decisions that one can make.

Of the many, many critiques of utilitarianism, some argue that it’s not sensible to actually talk about a “gradient” or marginal improvement in moral objective functions. Some argue this on the basis of computational constraints: there’s no way that you could ever reasonably compute a moral objective function (because the consequences of any activity are much too complicated). Other critiques argue that the utilitarian notion of “utility” is ill-defined and incoherent (hence the moral objective function has no meaning). These sorts of arguments argue against the possibility of soft constraints and moral objective functions with gradients.

The deontological optimization problem, on the other hand, is not susceptible to such critiques, because the objective function is constant and the satisfaction of constraints is a binary event.

I would also argue that the most hard-core utilitarian practically acts pretty similarly to a deontologist. The reason is that we only consider a tiny subspace of all possible decisions, our estimates of the moral gradient will be highly inaccurate over most possible decision axes (I buy the computational-constraint critique), and it’s not clear that we have enough information about human experience in order to compute those gradients. So, practically speaking, we only consider a small number of different ways to live our lives (hence we optimize over a limited range of axes), and the directions we optimize over are non-random for the most part. Think about how most activists, and most individuals who perform any sort of advocacy, focus on a single issue.

Also consider the fact that most people don’t murder or commit certain forms of horrendous crimes. These single-issue, law-abiding types may not think of themselves as deontologists, but a deontologist would behave very similarly to them, since neither attempts to estimate moral gradients over decisions, and both treat many moral rules as binary events.

The utilitarian and the deontologist are distinguished in practice in that the utilitarian computes a noisy estimate of the moral gradient along a few axes of their potential decision-space, while everywhere else we think of hard constraints and no gradients on the moral objective. The pure utilitarian is at best a theoretical concept that has no potential basis in reality.

Some argue this on the basis of computational constraints: there’s no way that you could ever reasonably compute a moral objective function (because the consequences of any activity are much too complicated)

This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn’t even a problem.

Other critiques argue that the utilitarian notion of “utility” is ill-defined and incoherent (hence the moral objective function has no meaning).

A utility function is more well-defined than any other approach to ethics. How do a deontologist’s rules fare any better? A utility function provides meaning. A set of rules is just an incomplete utility function, where someone has picked out a set of values but hasn’t bothered to prioritize them.

• This attacks a straw-man utilitarianism, in which you need to compute precise results and get the one correct answer. Functions can be approximated; this objection isn’t even a problem.

Not every function can be approximated efficiently, though. I see the scope of morality as addressing human activity, where human activity is a function space itself. In this case the “moral gradient” that the consequentialist is computing is based on a functional defined over a function space. There are plenty of function spaces and functionals which are very hard to approximate efficiently (the Bayes predictors for speech recognition and machine vision fall into this category), and often naive approaches will fail miserably.

I think the critique of utility functions is not that they don’t provide meaning, but that they don’t necessarily capture the meaning we would like. The incoherence argument is that there is no utility function which can represent the thing we want to represent. I don’t buy this argument, mostly because I’ve never seen a clear presentation of what it is that we would prefer to represent, but many people do (and a lot of these people study decision-making and behavior, whereas I study speech signals). I think it is fair to point out that there is only a very limited biological theory of “utility”, and generally we estimate “utility” phenomenologically by studying what decisions people make (we build a model of utility and try to refine it so that it fits the data). There is a potential that no utility model is actually going to be a good predictor (i.e., that there is some systematic bias). So I put a lot of weight on the opinions of decision experts in this regard: some think utility is coherent and some don’t.

The deontologist’s rules seem to do pretty well, as many of them are currently sitting in law books right now. They form the basis for much of the morality that parents teach their children. Most utilitarians follow most of them all the time, anyway.

My personal view is to do what I think most people do: accept many hard constraints on one’s behavior and attempt to optimize over estimates of projections of a moral gradient along a few dimensions of decision-space. I.e., I try to think about how my research may be able to benefit people, I try to help out my family and friends, and I try to support things good for animals and the environment. These are areas where I feel more certain that I have some sense of where some sort of moral objective function points.

• What is the justification for the incoherence argument? Is there a reason, or is it just “I won’t believe in your utility function until I see it”?

• A moral absolutist needs a language in which to specify the morals; the language is so context-dependent that the morals can’t be absolute.

Wait, that applies equally to utilitarianism, doesn’t it?

• I would like you to elaborate on the incoherence of deontology, so I can test how my optimization perspective on morality handles the objections.

• Can you explain the difference between deontology and moral absolutism first? Because I see it as deontology = moral absolutism, and claims that they are not the same seem based on blending deontology + consequentialism and calling the blend deontology.

• That is a strange comment. Consequentialists, by definition, believe that doing the action that leads to the best consequences is a moral absolute. Why would deontologists be any more moral absolutists?

• I think that this post has something to say about political philosophy. The problem as I see it is that we want to understand how our local decision-making affects the global picture, and what constraints we should put on our local decisions. This is extremely important because, arguably, people make a lot of local decisions that make us globally worse off, such as pollution (“externalities” in econo-speak). I don’t buy the author’s belief that we should ignore these global constraints: they are clearly important. Indeed, it’s the fear of the potential global outcomes of careless local decision-making that arguably led to the creation of this website.

However, just like computers, we have a lot of trouble integrating global constraints into our decision-making (which is necessarily a local operation), and we probably have a great deal of bias in our estimates of what is the morally best set of choices for us to make. Just like the algorithm, we would like to find some way to lessen the computational burden on us in order to achieve these moral ends.

There is an approach in economics to understanding social norms, advocated by Herbert Gintis [PDF], that is able to analyze these sorts of scenarios. The essential idea is this: agents can engage in multiple correlated equilibria (these are a generalized version of Nash equilibria) made possible by various social norms. These correlated equilibria are, in a sense, patched together by a social norm from the decisions of “rational” (self-interested, local expected-utility-maximizing) agents. Human rights could definitely be understood in this light (I think: I haven’t actually worked out the model).
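A minimal sketch of what a correlated equilibrium is, using a standard hawk-dove ("chicken") payoff table rather than anything from Gintis's paper: a public signal recommends an action to each player, and the distribution of recommendations is an equilibrium if neither player ever gains by disobeying.

```python
# payoffs[(a, b)] = (row player's payoff, column player's payoff)
# 'D' = dare, 'Y' = yield; an illustrative hawk-dove payoff table.
PAYOFFS = {('D', 'D'): (0, 0), ('D', 'Y'): (7, 2),
           ('Y', 'D'): (2, 7), ('Y', 'Y'): (6, 6)}

def is_correlated_eq(payoffs, signal, actions=('D', 'Y')):
    """Check the obedience constraints: conditional on each recommendation,
    no player prefers some deviation to the recommended action."""
    for player in (0, 1):
        for rec in actions:
            # distribution over the opponent's recommendation, given ours
            cond = {opp: signal.get((rec, opp) if player == 0 else (opp, rec), 0.0)
                    for opp in actions}
            total = sum(cond.values())
            if total == 0:
                continue  # this recommendation is never issued
            def expected(act):
                return sum(p * payoffs[(act, opp) if player == 0 else (opp, act)][player]
                           for opp, p in cond.items()) / total
            if any(expected(dev) > expected(rec) + 1e-12 for dev in actions):
                return False
    return True

# A "traffic light" norm: never recommend that both players dare.
norm = {('D', 'Y'): 1/3, ('Y', 'D'): 1/3, ('Y', 'Y'): 1/3}
is_correlated_eq(PAYOFFS, norm)                  # True
is_correlated_eq(PAYOFFS, {('D', 'D'): 1.0})     # False
```

The norm here plays the "patching together" role: each agent acts locally and selfishly on the signal it receives, yet the outcome distribution avoids the globally bad dare-dare profile.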

Similar reasoning may also be used to understand certain types of laws and government policies. It is via these institutions (norms, human organizations, etc.) that we may efficiently impose global constraints on people’s local decision-making. The karma system on Less Wrong, for instance, probably changes the way that people make their decision to comment.

There is probably a computer science/economics crossover paper here that would describe how institutions can lower the computational burden on individuals in their decision-making: so that when individuals make decisions in these simpler domains, we can be sure that we will still be globally better off.

One word of caution is that this is precisely the rationale behind “command economies”, and those didn’t work out so well during the 20th century. So choosing the “patching together” institution well is absolutely essential.

• Could any AI be “friendly” enough to keep things local?

Any goal, any criterion for action, any ethical principle or decision procedure can be part of how an AI makes its choices. Whether it attaches utility to GDP, national security, or paperclips, it will act accordingly. If it is designed to regard localism as an axiomatic virtue, or if its traits otherwise incline it to agree with Voltaire, then it will act accordingly. The question for FAI designers is not, could it be like that; the question is, should it be like that.

• I’m lost from the beginning because I see a conflict between the pure mathematical problem and your application of it to the US. At a fine enough level, the road system in the US loses planarity—it has overpasses, direct connections in highway interchanges, etc. So if you were to encode the map such that all paths were included, you definitely wouldn’t be able to make it a planar graph.

On the other hand, under the simplifying assumption that there are no overpasses, you can just put a node at every road intersection, and then every edge is a connection between intersections. In that case, the synchronization problem is immediately solved (per the synchronization rules resulting from the curvature of the earth).

So I’m unable to use this metaphor to gain insight on ethics. Can anyone help?

• I also am confused by the metaphor; however, it’s worth noting that the problem is not to embed a graph in the plane without crossings, but rather to embed a weighted graph in the plane (possibly with crossings) such that the given distances equal the Euclidean distances in the embedding. And adding nodes would be changing the problem, no?

• This is correct.

Silas, the problem isn’t a perfect match to the actual US—it assumes straight-line highways, for example.

The graph indeed doesn’t have to be planar. We just want to embed it in the plane while preserving distances. And adding nodes does change the problem.

• But if all highways are straight, and the graph can have crossovers, doesn’t the existing road map already preserve distances, meaning your solution can just copy the map?

I guess I’m not understanding the problem constraints.

• The problem is to derive the map, based on the limited set of data you’re given. Copying a map would be cheating.

I think you’re trying to interpret this as a practical problem of cartography, when in reality it’s more of a computer-science-y graph theory problem.

• You don’t have a road map to start with. You’re ONLY given a list of cities and the distances between some of them. From that list, draw a map. That is not a trivial task.

• Okay, that makes more sense then. Maybe you should have referred to an imaginary land (instead of the US), so as not to imply people already know what it looks like from above.

• Here’s an equivalent problem that may make more sense: you have a group of soldiers on a battlefield without access to GPS equipment, and they need to figure out where they are in relation to each other. They each have radios and can measure propagation latency between one another, telling them the linear distance separating each pair but nothing about directionality; from that data they need to construct a map of their locations.
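In the easy case where every pairwise distance is known exactly, this recovery can be sketched with classical multidimensional scaling. This is not the algorithm from the linked paper (which handles sparse, noisy distance graphs); the function name and example points are my own, and the recovered coordinates are only determined up to rotation, reflection, and translation:

```python
import numpy as np

def positions_from_distances(D, dim=2):
    """Classical MDS: recover point coordinates (up to a rigid motion and
    reflection) from a full, exact pairwise-distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # Gram matrix of the centered points
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Hypothetical "soldier" positions; the solver only ever sees the distances.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [5.0, 5.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

recovered = positions_from_distances(D)
D2 = np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1)
print(np.allclose(D, D2))  # → True: all pairwise distances are preserved
```

The hard part discussed in the post is exactly what this sketch assumes away: here every pair’s distance is measured, whereas the synchronization problem arises when each node only knows distances to a few neighbors and the local patches must be stitched together consistently.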

• And adding nodes would be changing the problem, no?

It depends on whether the US-cities bit was just an illustrative example, or a typical constraint on the problem.

Does the problem take for granted, e.g., that roads can be winding, so that the weight need not equal the Euclidean distance (Riemannian distance, really, on curved paper, but whatever), and you have to make a planar map that locates the nodes so that the weight is (proportional to) the Euclidean distance?

• I don’t see how this is relevant to the statement that adding nodes would be changing the problem. You’re given a specific graph of distances; the challenge is to realize it in the plane. You can’t just add nodes and decide to realize a different graph in the plane instead; where would the distances even come from, anyway, if you haven’t yet computed an embedding?

• SarahC cleared it up, so I understand what you do and don’t know in the problem, and why I assumed certain things were given that aren’t.

Though I agree with Roko’s comment that this doesn’t seem to provide insight on resolving ethical differences.

• Local does not necessarily mean that you’re knowledgeable and free of values conflicts, and distant does not necessarily mean that you’re ignorant or that values conflict. Within a household, a person might have values disagreements with their spouse about religion, work/family balance, or how to raise their kids, or with their kids about their sexuality or their career. Across the world, many efforts at acting morally for the benefit of foreigners are aimed at shared values like health & survival, as with charities that provide food and medical assistance (there may be difficulties in implementation, but not because the recipients of the aid don’t similarly value their health & survival).

• 1 Jun 2010 23:35 UTC
1 point

After reading the first two or three paragraphs, I pondered for a moment. The first approach I came up with was to start with a triangle and begin adding everything that has at least three links to existing things. The second I came up with was to start with a bunch of triangles on independent “maps” and begin stitching those maps together.

And hey, the second one turns out to be the one they used, judging by the fourth paragraph.

Given that I’m not consciously being deceitful, what should I conclude about myself?

• Given that I’m not consciously being deceitful, what should I conclude about myself?

Nothing. It’s absolutely natural to do it that way. If there are any triangles in the graph.
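For what it’s worth, the triangle-seeded approach described in this thread comes down to repeated trilateration: once three nodes are placed, any node with exact distances to all three can be located by solving a small linear system. This is an illustrative sketch of that step under noiseless distances (my own function names, not code from the paper):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Place a new node in the plane from exact distances to three
    already-placed anchor points (noiseless case)."""
    p1, p2, p3 = anchors
    d1, d2, d3 = dists
    # Subtracting |x - p1|^2 = d1^2 from the other two circle equations
    # cancels the quadratic term, leaving a 2x2 linear system in x.
    A = 2 * np.array([p2 - p1, p3 - p1])
    b = np.array([
        d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
        d1**2 - d3**2 + p3 @ p3 - p1 @ p1,
    ])
    return np.linalg.solve(A, b)

# Seed triangle (placed arbitrarily), then add a fourth "city".
anchors = [np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])]
true_pt = np.array([2.0, 2.0])
dists = [np.linalg.norm(true_pt - a) for a in anchors]
print(trilaterate(anchors, dists))  # close to [2. 2.]
```

With noisy measurements the three circles no longer meet at a point, which is why the incremental version can paint itself into a corner and the paper resorts to stitching locally rigid patches instead.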

• “Absolutism is associated with a view of oneself as a small being interacting with others in a large world. The justifications it requires are primarily interpersonal.”

This makes no sense to me. If “absolutism” means moral absolutism, then I think the quote is not even wrong; it has no obvious point of contact with absolutism.

• I think there is definitely potential to the idea, but I don’t think you pushed the analogy quite far enough. I can see an analogy between what is presented here and both human rights and Kantian moral philosophy.

Essentially, we can think of human rights as what many people believe to be essential bare-minimum conditions on human treatment. I.e., in the class of all “good and just” worlds, everybody’s human rights will be respected. Here human rights correspond to the “local rigidity” condition on the subgraph. In general, too, human rights are only meaningful for the people one immediately interacts with in one’s social network.

This does simplify the question of just government and moral action in the world (political philosophers are quite desirous of such simplifying arguments). I don’t think, however, that the local conditions for human existence are as easy to specify as in the case of a sensor-network graph.

In some sense there is a tradition, largely inspired by Kant, that attempts to do the moral equivalent of what you are talking about: use global regularity conditions (on morals) to describe local conditions (say, the ability to will a moral decision as a universal law). Kant generally just assumed that these local conditions would achieve the necessary global requirements for morality (perhaps this is what he meant by a Kingdom of Ends). For Kant, the local conditions on your decision-making were necessary and sufficient for globally moral decision-making.

In your discussion (and in the approach of the paper), however, the local conditions placed (on morals, or on each patch) are not sufficient to achieve the global conditions (for morality, or on the embedding). So it’s a weakening of the approach advanced by Kant. The idea seems to be that once some aspects (but not all) of the local conditions have been worked out, one can then piece together the local decision rules into something cohesive.

Edit: I rambled, so I put my other idea into another comment.

• This is an interesting idea, to be sure. It sounds like you might be sympathetic to virtue ethics (as indeed am I, though I become a consequentialist when the stakes get high).

Also, have you read Bernard Williams’ critique of utilitarianism? Because

You’re free to choose those values for yourself—you don’t have to drop them because they’re perhaps not optimal for the world’s well-being.

would definitely be one of his arguments.