Cultivating our own gardens

This is a post about moral philosophy, approached through a mathematical metaphor.

Here’s an interesting problem in mathematics. Let’s say you have a graph, made up of vertices and edges, with weights assigned to the edges. Think of the vertices as US cities and the edges as roads between them; the weight on each road is the length of the road. Now, knowing only this information, can you draw a map of the US on a sheet of paper? In mathematical terms, is there an isometric embedding of this graph in two-dimensional Euclidean space?
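To make the question concrete, here is a minimal sketch in Python. It assumes the luxury of a complete table of pairwise distances (the real problem only gives you distances along existing roads), and the mileage figures are rough, made-up numbers. Classical multidimensional scaling recovers planar coordinates when such an embedding exists; if the cities really lived in a plane, only two eigenvalues would be significantly positive.

```python
import numpy as np

# Rough, made-up road distances between four cities (miles).
cities = ["New York", "Philadelphia", "Boston", "Washington"]
D = np.array([
    [  0,  95, 215, 225],
    [ 95,   0, 310, 140],
    [215, 310,   0, 440],
    [225, 140, 440,   0],
], dtype=float)

# Classical multidimensional scaling: double-center the squared distances,
# then eigendecompose.  If the cities genuinely fit in the plane, only two
# eigenvalues are significantly positive; the rest measure how badly the
# distances fail to be "flat".
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (D ** 2) @ J
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
print("eigenvalues:", np.round(vals[order], 1))

coords = vecs[:, order[:2]] * np.sqrt(np.maximum(vals[order[:2]], 0))
for name, (x, y) in zip(cities, coords):
    print(f"{name:13s} ({x:7.1f}, {y:7.1f})")
```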

When you think about this for a minute, it’s clear that this is a problem about reconciling the local and the global. Start with New York and all its neighboring cities. You have a sort of star shape. You can certainly draw this on the plane; in fact, you have many degrees of freedom; you can arbitrarily pick one way to draw it. Now start adding more cities and more roads, and eventually the degrees of freedom diminish. If you made the wrong choices earlier on, you might paint yourself into a corner and have no way to keep all the distances consistent when you add a new city. This is known as a “synchronization problem.” Getting it to work locally is easy; getting all the local pieces reconciled with each other is hard.
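Here is a toy illustration of that corner-painting, with invented coordinates and city names used purely as labels. Two early placements are each perfectly consistent with every road measured so far, but one of them silently picks the wrong mirror image, and a later city then has nowhere to go.

```python
import numpy as np

def dist(p, q):
    return float(np.hypot(p[0] - q[0], p[1] - q[1]))

# True (hidden) positions of five cities, used only to generate road lengths.
true = {"A": (0, 0), "B": (4, 0), "C": (2, 3), "D": (1, 2), "E": (2, 2)}

# Greedy embedding: A and B are pinned down, but C and D each come with a
# mirror-image choice.  Suppose we flip C below the A-B axis and leave D alone.
greedy = {"A": (0, 0), "B": (4, 0), "C": (2, -3), "D": (1, 2)}

# Every road measured so far is still perfectly consistent...
for u, v in [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D"), ("B", "D")]:
    assert np.isclose(dist(greedy[u], greedy[v]), dist(true[u], true[v]))

# ...but now city E arrives, with measured roads to C and D of length 1 each.
r_c = dist(true["E"], true["C"])          # 1.0
r_d = dist(true["E"], true["D"])          # 1.0
gap = dist(greedy["C"], greedy["D"])      # about 5.1
print(f"need {r_c + r_d:.1f} of reach, but C and D sit {gap:.2f} apart")
# The circles around C and D cannot intersect, so E cannot be placed
# without going back and redoing the earlier choice for C.
```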

This is a lovely problem and some acquaintances of mine have written a paper about it. (http://www.math.princeton.edu/~mcucurin/Sensors_ASAP_TOSN_final.pdf) I’ll pick out some insights that seem relevant to what follows. First, some obvious approaches don’t work very well. You might think we should optimize over all possible embeddings, picking the one with the lowest error in approximating the distances between cities: come up with a “penalty function” that is some sort of sum of errors, and use standard optimization techniques to minimize it. The trouble is, these approaches tend to work spottily—in particular, they sometimes pick out local rather than global optima (so the error can be quite high after all).
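A minimal sketch of that penalty-function approach, with a made-up five-vertex graph: the penalty is the summed squared mismatch between embedded and measured edge lengths, and a generic local optimizer is run from several random starting layouts. Depending on the start, it may settle at a layout with a clearly nonzero residual, i.e. a local rather than global optimum.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# A made-up planar configuration and the subset of pairwise distances
# we are allowed to see (the "roads").
true = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 5]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 4), (3, 4)]
lengths = {e: np.linalg.norm(true[e[0]] - true[e[1]]) for e in edges}

def penalty(x):
    """Sum of squared errors between embedded and measured edge lengths."""
    p = x.reshape(-1, 2)
    return sum((np.linalg.norm(p[i] - p[j]) - lengths[(i, j)]) ** 2
               for i, j in edges)

# The same local optimizer, started from several random layouts.  A residual
# near zero means the distances were reproduced (up to rotation/reflection);
# a clearly nonzero residual means the optimizer got stuck in a local optimum.
for trial in range(5):
    x0 = rng.normal(scale=5.0, size=true.size)
    result = minimize(penalty, x0, method="BFGS")
    print(f"start {trial}: residual penalty = {result.fun:.4f}")
```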

The approach in the paper I linked is different. We break the graph into overlapping smaller subgraphs, so small that they can be embedded in essentially only one way, up to rotations and reflections (that’s called rigidity), and then “stitch” them together consistently. The “stitching” is done with a very handy trick involving eigenvectors of sparse matrices. But the point I want to emphasize here is that you have to look at the small scale, and let all the little patches embed themselves as they like, before trying to reconcile them globally.
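The paper’s actual pipeline synchronizes the patches’ reflections, rotations, and translations; the sketch below is only a toy version of the first of those steps, with randomly generated patches. Each patch carries an unknown reflection (a sign), overlapping patches reveal relative signs, and the top eigenvector of the resulting sparse matrix recovers every sign at once, up to one global flip of the whole map.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(1)

# Each locally embedded patch carries an unknown reflection z_i in {+1, -1}.
n_patches = 50
z_true = rng.choice([-1, 1], size=n_patches)

# Overlapping patches can be compared, which reveals the relative
# reflection z_i * z_j for a sparse set of pairs (i, j).
rows, cols, vals = [], [], []
for i in range(n_patches):
    for j in rng.choice(n_patches, size=4, replace=False):
        if i != j:
            rows.append(i); cols.append(j)
            vals.append(z_true[i] * z_true[j])

H = coo_matrix((vals, (rows, cols)), shape=(n_patches, n_patches))
H = (H + H.T).astype(float)   # symmetric sparse matrix of relative signs

# The signs of the top eigenvector recover all the reflections at once,
# up to a single global flip of the whole map.
_, vec = eigsh(H, k=1, which="LA")
z_hat = np.sign(vec[:, 0])
agreement = max(np.mean(z_hat == z_true), np.mean(z_hat == -z_true))
print(f"patches synchronized correctly: {agreement:.0%}")
```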

Now, rather daringly, I want to apply this idea to ethics. (This is an expansion of a post people seemed to like: http://lesswrong.com/lw/1xa/human_values_differ_as_much_as_values_can_differ/1y)

The thing is, human values differ enormously. The diversity of values is an empirical fact. The Japanese did not have a word for “thank you” until the Portuguese gave them one; this is a simple example, but it absolutely shocked me, because I thought “thank you” was a universal concept. It’s not. (edited for lack of fact-checking.) And we do not all agree on what virtues are, or what the best way to raise children is, or what the best form of government is. There may be no principle that all humans agree on—dissenters who believe that genocide is a good thing may be pretty awful people, but they undoubtedly exist. Creating the best possible world for humans is a synchronization problem, then—we have to figure out a way to balance values that inevitably clash. Here, nodes are individuals, each individual is tied to its neighbors, and a choice of embedding is a particular action. The worse the embedding near an individual fits the “true” underlying manifold, the greater the “penalty function” and the more miserable that individual is, because the action goes against what he values.

If we can extend the metaphor further, this is a problem for utilitarianism. Maximizing something globally—say, happiness—can be a dead end. It can hit a local maximum—the maximum for those people who value happiness—but do nothing for the people whose highest value is loyalty to their family, or truth-seeking, or practicing religion, or freedom, or martial valor. We can’t really optimize, because a lot of people’s values are other-regarding: we want Aunt Susie to stop smoking, because of the principle of the thing. Or more seriously, we want people in foreign countries to stop performing clitoridectomies, because of the principle of the thing. And Aunt Susie or the foreigners may feel differently. When you have a set of values that extends to the whole world, conflict is inevitable.

The analogue to breaking down the graph is to keep values local. You have a small star-shaped graph of people you know personally and actions you’re personally capable of taking. Within that star, you define your own values: what you’re ready to cheer for, work for, or die for. You’re free to choose those values for yourself—you don’t have to drop them because they’re perhaps not optimal for the world’s well-being. But beyond that radius, opinions are dangerous: both because you’re more ignorant about distant issues, and because you run into this problem of globally reconciling conflicting values. Reconciliation is only possible if everyone’s minding their own business, if things are really broken down into rigid components. It’s something akin to what Thomas Nagel said against utilitarianism:

“Absolutism is associated with a view of oneself as a small being interacting with others in a large world. The justifications it requires are primarily interpersonal. Utilitarianism is associated with a view of oneself as a benevolent bureaucrat distributing such benefits as one can control to countless other beings, with whom one can have various relations or none. The justifications it requires are primarily administrative.” (Mortal Questions, p. 68.)

Anyhow, trying to embed our values on this dark continent of a manifold seems to require breaking things down into little local pieces. I think of that as “cultivating our own gardens,” to quote Candide. I don’t want to be so confident as to have universal ideologies, but I think I may be quite confident and decisive in the little area that is mine: my personal relationships; my areas of expertise, such as they are; my own home and what I do in it; everything that I know I love and think worth my time and money; and bad things that I will not permit to happen in front of me, so long as I can help it. Local values, not global ones.

Could any AI be “friendly” enough to keep things local?