Degrees of Freedom

Link post

Something I’ve been thinking about for a while is the dual relationship between optimization and indifference, and the relationship between both of them and the idea of freedom.

Optimization: “Of all the possible actions available to me, which one is best (by some criterion)? Ok, I’ll choose the best.”

Indifference: “Multiple possible options are equally good, or incommensurate (by the criterion I’m using). My decision algorithm equally allows me to take any of them.”

Total indifference between all options makes optimization impossible or vacuous. An optimization criterion which assigns a total ordering between all possibilities makes indifference vanishingly rare. So these notions are dual in a sense. Every dimension along which you optimize is in the domain of optimization; every dimension you leave “free” is in the domain of indifference.
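A toy sketch of this duality (my own illustration, not from the post): the options you are “free” to pick are exactly the ties at the top of your criterion. A fine-grained criterion that totally orders the options leaves one winner and no freedom; a coarser criterion leaves a whole set of equally good options, which is the space of indifference. The option names and scores below are hypothetical.

```python
# Toy illustration: freedom-as-indifference is the tie set at the top
# of whatever criterion you optimize.
def free_choices(options, criterion):
    """Return every option that is maximal under `criterion`.

    A criterion that scores every option distinctly (a total ordering)
    leaves one winner: pure optimization, no degrees of freedom.
    A coarse criterion leaves many ties: a space of indifference.
    """
    best = max(criterion(o) for o in options)
    return [o for o in options if criterion(o) == best]

options = ["read", "walk", "work", "nap"]

# Total ordering: every option scored distinctly -> one "free" choice.
strict = {"read": 3, "walk": 2, "work": 4, "nap": 1}
print(free_choices(options, strict.get))   # ['work']

# Coarse criterion ("is it restful?") -> three equally good choices.
restful = {"read": 1, "walk": 1, "work": 0, "nap": 1}
print(free_choices(options, restful.get))  # ['read', 'walk', 'nap']
```

The more finely the criterion discriminates, the smaller the tie set; total indifference is the degenerate case where the tie set is everything and optimization is vacuous.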

Being “free” in one sense can mean “free to optimize”. I choose the outcome that is best according to an internal criterion, unblocked by external barriers. A limit on freedom is a constraint that keeps me away from my favorite choice: either a natural limit (“I would like to do that but the technology doesn’t exist yet”) or a man-made limit (“I would like to do that but it’s illegal.”)

There’s an ambiguity here, of course, when it comes to whether you count “I would like to do that, but it would have a consequence I don’t like” as a limit on freedom. Is that a barrier blocking you from the optimal choice, or is it simply another way of saying that it’s not an optimal choice after all?

And, in the latter case, isn’t that basically equivalent to saying there is no such thing as a barrier to free choice? After all, “I would like to do that, but it’s illegal” is effectively the same thing as “I would like to do that, but it has a consequence I don’t like, such as going to jail.” You can get around this ambiguity in a political context by distinguishing natural from social barriers, but that’s not a particularly principled distinction.

Another issue with freedom-as-optimization is that it’s compatible with quite tightly constrained behavior, in a way that’s not consistent with our primitive intuitions about freedom. If you’re only “free” to do the optimal thing, that can mean you are free to do only one thing, all the time, as rigidly as a machine. If, for instance, you are only free to “act in your own best interests”, you don’t have the option to act against your best interests. People in real life can feel constrained by following a rigid algorithm even when they agree it’s “best”: “but what if I want to do something that’s not best?” Or, they can acknowledge they’re free to do what they choose, but be dismayed to learn that their choices are “dictated” as rigidly by habit and conditioning as they might have been by some human dictator.

An alternative notion of freedom might be freedom-as-arbitrariness: freedom in the sense of “degrees of freedom” or “free group”, derived from the intuition that freedom means breadth of possibility rather than optimization power. You are only free if you could equally do any of a number of things, which ultimately means something like indifference.

This is the intuition behind claims like Viktor Frankl’s: “Between stimulus and response there is a space. In that space is our power to choose a response. In our response lies our growth and our freedom.” If you always respond automatically to a given stimulus, you have only one choice, and that makes you unfree in the sense of “degrees of freedom.”

Venkat Rao’s concept of freedom is pretty much this freedom-as-arbitrariness, with some more specific wrinkles. He mentions degrees of freedom (“dimensionality”) as well as “inscrutability”, the inability to predict one’s motion from the outside.

Buddhists also often speak of freedom more literally in terms of indifference, and there’s a very straightforward logic to this: you can only choose equally between A and B if you have been “liberated” from the attractions and aversions that constrain you to choose A over B. Those who insist that Buddhism is compatible with a fairly normal life say that after Buddhist practice you still will choose systematically most of the time — your utility function cannot fully flatten if you act like a living organism — but that, like Viktor Frankl’s ideal human, you will be able to reflect with equanimity and consider choosing B over A; you will be more “mentally flexible.” Of course, some Buddhist texts simply say that you become actually indifferent, and that sufficient vipassana meditation will make you indistinguishable from a corpse.

Freedom-as-indifference, I think, is lurking behind our intuitions about things like “rights” or “ownership.” When we say you have a “right” to free speech — even a right bounded with certain limits, as it of course always is in practice — we mean that within those limits, you may speak however you want. Your rights define a space, within which you may behave arbitrarily. Not optimally. A right, if it’s not to be vacuous, must mean the right to behave “badly” in some way or other. To own a piece of property means that, within whatever limits the concept of ownership sets, you may make use of it in any way you like, even in suboptimal ways.

This is very clearly illustrated by Glen Weyl’s notion of radical markets, which neatly disassociates two concepts usually both considered representative of free-market systems: ownership and economic efficiency. To own something just is to be able to hang onto it even when it is economically inefficient to do so. As Weyl says, “property is monopoly.” The owner of a piece of land can sit on it, making no improvements, while holding out for a high price; the owner of intellectual property can sit on it without using it; in exactly the same way that a monopolist can sit on a factory and depress output while charging higher prices than he could get away with in a competitive market.

For better or for worse, rights and ownership define spaces in which you can destroy value. If your car was subject to a perpetual auction and ownership tax as Weyl proposes, bashing your car to bits with a hammer would cost you even if you didn’t personally need a car, because it would hurt the rental or resale value and you’d still be paying tax. On some psychological level, I think this means you couldn’t feel fully secure in your possessions, only probabilistically likely to be able to provide for your needs. You only truly own what you have a right to wreck.

Freedom-as-a-space-of-arbitrary-action is also, I think, an intuition behind the fact that society (all societies, but the US more than other rich countries, I think) is shaped by people’s desire for more discretion in decisionmaking as opposed to transparent rubrics. College admissions, job applications, organizational codes of conduct, laws and tax codes, all are designed deliberately to allow ample discretion on the part of decisionmakers rather than restricting them to following “optimal” or “rational”, simple and legible, rules. Some discretion is necessary to ensure good outcomes; a wise human decisionmaker can always make the right decision in some hard cases where a mechanical checklist fails, simply because the human has more cognitive processing power than the checklist. This phenomenon is as old as Plato’s Laws and as current as the debate over algorithms and automation in medicine. However, what we observe in the world is more discretion than would be necessary, for the aforementioned reasons of cognitive complexity, to generate socially beneficial outcomes. We have discretion that enables corruption and special privileges in cases that pretty much nobody would claim to be ideal — rich parents buying their not-so-competent children Ivy League admissions, favored corporations voting themselves government subsidies. Decisionmakers want the “freedom” to make illegible choices, choices which would look “suboptimal” by naively sensible metrics like “performance” or “efficiency”, choices they would prefer not to reveal or explain to the public. Decisionmakers feel trapped when there’s too much “accountability” or “transparency”, and prefer a wider sphere of discretion. Or, to put it more unfavorably, they want to be free to destroy value.

And this is true at an individual psychological level too, of course — we want to be free to “waste time” and resist pressure to account for literally everything we do. Proponents of optimization insist that this is simply a failure mode from picking the wrong optimization target — rest, socializing, and entertainment are also needs, the optimal amount of time to devote to them isn’t zero, and you don’t have to consider personal time to be “stolen” or “wasted” or “bad”; you can, in principle, legibilize your entire life including your pleasures. Anything you wish you could do “in the dark”, off the record, you could also do “in the light,” explicitly and fully accounted for. If your boss uses “optimization” to mean overworking you, the problem is with your boss, not with optimization per se.

The freedom-as-arbitrariness impulse in us is skeptical.

I see optimization and arbitrariness everywhere now; I see intelligent people who more or less take one or the other as an ideology, and see it as obviously correct.

Venkat Rao and Eric Weinstein are partisans of arbitrariness; they speak out in favor of “mediocrity” and against “excellence”, respectively. The rationale is that being highly optimized at some widely appreciated metric — being very intelligent, or very efficient, or something like that — is often less valuable than being creative, generating something in a part of the world that is “dark” to the rest of us, that is not even on our map as something to value and thus appears as a lack of value. Ordinary people being “mediocre”, or talented people being “undisciplined” or “disreputable”, may be more creative than highly-optimized “top performers”.

Robin Hanson, by contrast, is a partisan of optimization; he speaks out against bias and unprincipled favoritism and in favor of systems like prediction markets which would force the “best ideas to win” in a fair competition. Proponents of ideas like radical markets, universal basic income, open borders, income-sharing agreements, or smart contracts (I’d here include, for instance, Vitalik Buterin) are also optimization partisans. These are legibilizing policies that, if optimally implemented, can always be Pareto improvements over the status quo; “whatever degree of wealth redistribution you prefer”, proponents claim, “surely it is better to achieve it in whatever way results in the least deadweight loss.” This is the very reason that they are not the policies that public choice theory would predict would emerge naturally in governments. Legibilizing policies allow little scope for discretion, so they don’t let policymakers give illegible rewards to allies and punishments to enemies. They reduce the scope of the “political”, i.e. that which is negotiated at the personal or group level, and replace it with an impersonal set of rules within which individuals are “free to choose” but not very “free to behave arbitrarily”, since their actions are transparent and they must bear the costs of being in full view.

Optimization partisans are against weakly enforced rules — they say “if a rule is good, enforce it consistently; if a rule is bad, remove it; but selective enforcement is just another word for favoritism and corruption.” Illegibility partisans say that weakly enforced rules are the only way to incorporate valuable information — precisely that information which enforcers do not feel they can make explicit, either because it’s controversial or because it’s too complex to verbalize. “If you make everything explicit, you’ll dumb everything in the world down to what the stupidest and most truculent members of the public will accept. Say goodbye to any creative or challenging innovations!”

I see the value of arguments on both sides. However, I have positive (as opposed to normative) opinions that I don’t think everybody shares. I think that the world I see around me is moving in the direction of greater arbitrariness, and has been since WWII or so (when much of US society, including scientific and technological research, was organized along military lines). I see arbitrariness as a thing that arises in “mature” or “late” organizations. Bigger, older companies are more “political” and more monopolistic. Bigger, older states and empires are more “corrupt” or “decadent.”

Arbitrariness has a tendency to protect those in power rather than those out of power, though the correlation isn’t perfect. Zones that protect your ability to do “whatever” you want without incurring costs (which include zones of privacy or property) are protective, conservative forces — they allow people security. This often means protection for those who already have a lot; arbitrariness is often “elitist”; but it can also protect “underdogs” on the grounds of tradition, or protect them by shrouding them in secrecy. (James C. Scott thought “illegibility” was a valuable defense of marginalized peoples like the Roma. Illegibility is not always the province of the powerful and privileged.) No; the people such zones of arbitrary, illegible freedom systematically harm are those who benefit from increased accountability and revealing of information: whistleblowers and accusers; those who expect their merit/performance is good enough that displaying it will work to their advantage; those who call for change and want to display information to justify it; those who are newcomers or young and want a chance to demonstrate their value.

If your intuition is “you don’t know me, but you’ll like me if you give me a chance”, or “you don’t know him, but you’ll be horrified when you find out what he did”, or “if you gave me a chance to explain, you’d agree”, or “if you just let me compete, I bet I could win”, then you want more optimization.

If your intuition is “I can’t explain, you wouldn’t understand”, or “if you knew what I was really like, you’d see what an impostor I am”, or “malicious people will just use this information to take advantage of me and interpret everything in the worst possible light”, or “I’m not for public consumption, I am my own sovereign person, I don’t owe everyone an explanation or justification for actions I have a right to do”, then you’ll want less optimization.

Of course, these aren’t so much static “personality traits” of a person as one’s assessment of the situation around oneself. The latter cluster is an assumption that you’re living in a social environment where there’s very little concordance of interests — people knowing more about you will allow them to more effectively harm you. The former cluster is an assumption that you’re living in an environment where there’s a great deal of concordance of interests — people knowing more about you will allow them to more effectively help you.

For instance, being “predictable” is, in Venkat’s writing, usually a bad thing, because it means you can be exploited by adversaries. Free people are “inscrutable.” In other contexts, such as parenting, being predictable is a good thing, because you want your kids to have an easier time learning how to “work” the house rules. You and your kid are not, most of the time, wily adversaries outwitting each other; conflicts are more likely to come from too much confusion or inconsistently enforced boundaries. Relationship advice and management advice usually recommend making yourself easier for your partners and employees to understand, never more inscrutable. (Sales advice, however, and occasionally advice for keeping romance alive in a marriage, sometimes recommends cultivating an aura of mystery, perhaps because it’s more adversarial.)

A related notion: wanting to join discussions is a sign of expecting a more cooperative world, while trying to keep people from joining your (private or illegible) communications is a sign of expecting a more adversarial world.

As social organizations “mature” and become larger, it becomes harder to enforce universal and impartial rules, harder to keep the larger population aligned on similar goals, and harder to comprehend the more complex phenomena in this larger group. This means that there’s both motivation and opportunity to carve out “hidden” and “special” zones where arbitrary behavior can persist even when it would otherwise come with negative consequences.

New or small organizations, by contrast, must gain/create resources or die, so they have more motivation to “optimize” for resource production; and they’re simple, small, and/or homogeneous enough that legible optimization rules and goals and transparent communication are practical and widely embraced. “Security” is not available to begin with, so people mostly seek opportunity instead.

This theory explains, for instance, why US public policy is more fragmented, discretionary, and special-case-y, and less efficient and technocratic, than it is in other developed countries: the US is more racially diverse, which means, in a world where racism exists, that US civil institutions have evolved to allow ample opportunities to “play favorites” (giving special legal privileges to those with clout) in full generality, because a large population has historically been highly motivated to “play favorites” on the basis of race. Homogeneity makes a polity behave more like a “smaller” one, while diversity makes a polity behave more like a “larger” one.

Aesthetically, I think of optimization as corresponding to an “early” style, like Doric columns or like Masaccio: simple, martial, all form and principle. Arbitrariness corresponds to a “late” style, like Corinthian columns or like Rubens: elaborate, sensual, full of details and personality.

The basic argument for optimization over arbitrariness is that it creates growth and value while arbitrariness creates stagnation.

Arbitrariness can’t really argue for itself as well, because communication itself is on the other side. Arbitrariness always looks illogical and inconsistent. It kind of is illogical and inconsistent. All it can say is “I’m going to defend my right to be wrong, because I don’t trust the world to understand me when I have a counterintuitive or hard-to-express or controversial reason for my choice. I don’t think I can get what I want by asking for it or explaining my reasons or playing ‘fair’.” And from the outside, you can’t always tell the difference between someone who thinks (perhaps correctly!) that the game is really rigged against them at a profound level, and somebody who just wants to cheat or who isn’t thinking coherently. Sufficiently advanced cynicism is indistinguishable from malice and stupidity.

For a fairly sympathetic example, you see something like Darkness at Noon, where the protagonist thinks, “Logic inexorably points to Stalinism; but Stalinism is awful! Therefore, let me insist on some space free from the depredations of logic, some space where justice can be tempered by mercy and reason by emotion.” From the distance of many years, it’s easy to say that’s silly, that of course there are reasons not to support Stalin’s purges, that it’s totally unnecessary to reject logic and justice in order to object to killing innocents. But from inside the system, if all the arguments you know how to formulate are Stalinist, if all the “shoulds” and “oughts” around you are Stalinist, perhaps all you can articulate at first is “I know all this is right, of course, but I don’t like it.”

Not everything people call reason, logic, justice, or optimization is in fact reasonable, logical, just, or optimal; so a person needs some defenses against those claims of superiority; in particular, defenses that can shelter them even when they don’t know what’s wrong with the claims. And that’s the closest thing we get to an argument in favor of arbitrariness. It’s actually not a bad point, in many contexts. The counterargument usually has to boil down to hope — to a sense of “I bet we can do better.”