Steelmanning Inefficiency

When considering writing a hypothetical apostasy or steelmanning an opinion I disagreed with, I looked around for something worthwhile, both for me to write and others to read. Yvain/Scott has already steelmanned Time Cube, which cannot be beaten as an intellectual challenge, but probably didn’t teach us much of general use (except at interesting dinner parties). I wanted something hard, but potentially instructive.

So I decided to steelman one of the anti-sacred cows (sacred anti-cows?) of this community, namely inefficiency. It was interesting to find that it was a little easier than I thought; there are a lot of arguments already out there (though they generally don’t come out explicitly in favour of “inefficiency”), so it was a question of collecting them, stretching them beyond their domains of validity, and adding a few rhetorical tricks.

The strongest argument

Let’s start strong: efficiency is the single most dangerous thing in the entire universe. Then we can work down from that:

A superintelligent AI could go out of control and optimise the universe in ways that are contrary to human survival. Some people are very worried about this; you may have encountered them at some point. One big problem seems to be that there is no such thing as a “reduced impact AI”: if we give a superintelligent AI a seemingly innocuous goal such as “create more paperclips”, then it would turn the entire universe into paperclips. Even if it had a more limited goal such as “create X paperclips”, then it would turn the entire universe into redundant paperclips, methods for counting the paperclips it has, or methods for defending the paperclips it has—all because these massive transformations allow it to squeeze just a little bit more expected utility from the universe.

The problem is one of efficiency: of always choosing the maximal outcome. The problem would go away if the AI could be content with almost accomplishing its goal, or with being almost certain that its goal was accomplished. Under those circumstances, “create more paperclips” could be a viable goal. It’s only because a self-modifying AI drives towards efficiency that we have the problem in the first place. If the AI accepted being inefficient in its actions, even a little bit, the world would be much safer.
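
To make the contrast concrete, here is a minimal Python sketch of maximising versus satisficing. The actions, numbers, and the satisficing threshold are all my own invented illustrations, not anything from the AI safety literature:

```python
# Hypothetical toy model: each "action" yields some number of paperclips
# and has a side effect on the rest of the world. All names and numbers
# are invented for illustration.
actions = [
    {"name": "run one factory",       "paperclips": 1e6,  "world_damage": 0.01},
    {"name": "convert the biosphere", "paperclips": 1e12, "world_damage": 0.99},
    {"name": "convert the universe",  "paperclips": 1e40, "world_damage": 1.00},
]

def maximiser(options):
    """The 'efficient' agent: always picks the action with maximal paperclips."""
    return max(options, key=lambda a: a["paperclips"])

def satisficer(options, enough=1e5):
    """The 'inefficient' agent: takes the first action that is good enough."""
    for a in options:
        if a["paperclips"] >= enough:
            return a
    return options[0]  # nothing clears the bar; do the mildest thing

print("maximiser picks: ", maximiser(actions)["name"])   # convert the universe
print("satisficer picks:", satisficer(actions)["name"])  # run one factory
```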

So the first strike against efficiency is that it’s the most likely thing to destroy the world, humanity, and everything of worth and value in the universe. This could possibly give us some pause.

The measurement problem

The principal problem with efficiency is the measurement problem. In order to properly maximise efficiency, we have to measure how well we’re doing. So we have to construct some system of measurement, and then we maximise that.

And the problem with formal measurement systems is that they’re always imperfect. They’re almost never exactly what we really want to maximise. First of all, they’re constructed from the map, not the territory, so they depend on us having a perfect model of reality (little-known fact: we do not, in fact, have a perfect model of reality). This can have dramatic consequences—see, for instance, the various failures of central planners (in governments and in corporations) when their chosen measurement scale turned out not to correspond with what they truly wanted.

This could happen if we mix up correlations and causations—sadness cannot be prevented by banning frowns. But it’s also true if a true causation stops being true in new circumstances—exercise can prevent sadness, but only up to a point. Each component of the measurement scale has a “domain of validity”, a set of circumstances in which it corresponds truly to something desirable. Except that we don’t know the domain of validity ahead of time, we don’t know how badly it fails outside that domain, and we have only a very hazy and approximate impression of what “desirable” is in the first place.

On that last point, there’s often a mixup between instrumental and terminal goals. Many things that are seen as “intrinsically valuable” also have great instrumental advantages (eg freedom of speech, democracy, freedom of religion). As we learn, we may realise that we’ve overestimated the intrinsic value of that goal, and that we’d be satisfied with the instrumental advantages. This can be best illustrated by looking at the past: there were periods when “honour”, “reputation”, or “being a man of one’s word” were incredibly important and valuable goals. With the advent of modern policing, contract law, and regulations, this is far less important, and a once-critical terminal goal has been reduced to a slightly desirable human feature.

That was just a particular example of the general point that moral learning and moral progress become impossible once a measurement system has been fixed. So we’d better get it perfect the first time, or we’re going in the wrong direction. And—I hope I’m not stretching your credulity too far here—we won’t get it perfect the first time. Even if we allow a scale to be updated as we go along, note that this updating is not happening according to efficiency criteria (we don’t have a meta-scale that provides the efficient way of updating value scales). So the most important part of safe efficiency comes from non-efficient approaches.

The proof of the imperfection of measurement systems can be found by looking through the history of philosophy: many philosophers have come up with scales of value that they thought were perfect. Then these were subject to philosophical critiques that pointed out certain pathologies (repugnant conclusion! levelling down objection! 10^100 variants of the trolley problem!). The systems’ creators can choose to accept these pathologies into their systems, but they generally didn’t think of them beforehand. Thus any formal measurement system will contain unplanned-for pathologies.

Most critically, what cannot be measured (or what can only be measured badly) gets shunted aside. GDP, for instance, is well known to correspond poorly with anything of value, yet it’s often targeted because it can be measured much better than things we do care about, such as the happiness and preference satisfaction of individual humans. So the process of building a scale introduces uncountable distortions.

So efficiency relies on maximising a formal measurement system, while we know that maximising every single past formal system would have been a disaster. But don’t worry—we’ve certainly got it right, this time.

Inefficient efficiency implementation

Once the imperfect, simplified, and pathology-filled measurement system has been decided upon, then comes the question of efficiently maximising it. We can’t always measure exactly each component of the system, so we’ll often have to approximate or estimate the inputs—adding yet another layer of distortion.

More critically, if the task is hard, it’s unlikely that one person can implement it on their own. So the system of measurement must pass out of the hands of those that designed it, those that are aware of (some of) its limitations, to those that have nothing but the system to go on. They’ll no doubt misinterpret some of it (adding more distortions), but, more critically, they’re likely to implement it blindly, without understanding what it’s for. This might be because they don’t understand it, but the most likely option is that the incentives are misaligned: they are rewarded for efficiently maximising the measurement system, not the underlying principle. The purpose of the initial measurement system has been lost.

And it’s not just that institutions tend to have bad incentives (which is a given), it’s that any formal measurement system is exceptionally likely to produce bad incentives. This is because it offers a seemingly objective measure of what must be optimised, so the temptation is exceptionally strong to just use the measure, and forget about its subtleties. This reduces performance to a series of box-ticking exercises, of “teaching to the test” and other equivalents. There’s no use protesting that this was not intended: it’s a general trend for all formal measurement systems, when actually implemented in an organisation staffed by actual humans.

Indeed, Campbell’s law (or Goodhart’s law) revolves around this issue: when a measure becomes a target, it ceases to be a good measure. A formal standard of efficiency will not succeed in its goals, as it will become corrupted in the process of implementation. If it were easy to implement efficiency in a way that offered genuine gains, Campbell’s law would not exist. This strongly correlates with experience as well: how often have efficiency improvements achieved their stated goals, without causing unexpected losses? This almost never happens, whether they are implemented by governments or companies, individuals or institutions. Efficiency gains are never as strong as estimated ahead of time.
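
A minimal sketch of one standard story behind this corruption (my own toy model, not anything from Campbell or Goodhart themselves): if the measured score is the true value plus error, then optimising hard on the score also optimises hard on the error.

```python
import random

random.seed(0)

# A toy model of why targeting a measure corrupts it (one standard story,
# sometimes called "regressional Goodhart"; the numbers are invented).
# The measured score is the true value plus measurement error, so
# selecting hard on the score also selects hard on the error.
N = 100_000
candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

# Pick the candidate with the best *measured* score (true value + error).
best_true, best_err = max(candidates, key=lambda c: c[0] + c[1])

print(f"winner's measured score: {best_true + best_err:.2f}")
print(f"  of which true value:   {best_true:.2f}")
print(f"  of which pure noise:   {best_err:.2f}")
# In expectation, half the winning score is noise: once the measure
# became the target, it stopped tracking what we actually wanted.
```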

A further problem is that once the measurement system has been around for some time, it starts to become the standard. Rather than GDP/unemployment/equality being a proxy for human utility, it becomes a target in its own right, with many people coming to see it as a goal worth maximising/minimising for its own sake. Not only has the implementation been mangled, but that mangling has ended up changing future values in pernicious directions.

Efficiency is not as efficient as it seems (a point we will return to again and again), which undermines its whole paradigm that things can be improved through measurement. Empirically, if we looked at the predictions made by efficiency advocates, we would conclude that they have failed, and that efficiency is a strange pseudo-science with far more credibility than it deserves. And in practice, it leads to wasted and pointless efforts by those implementing it.

A fully general counterargument

Fortunately, those espousing efficiency have a fully general counterargument. If efficiency doesn’t work, the answer is… more efficiency! If efficiency falls short, then we must estimate the amount by which it falls short, analyse the implementation, improve incentives, etc… Do you see what’s going on there? The solution to a badly implemented system of measurement is to add extra complications to the system, to measure even more things, to add more constraints, more boxes to tick.

The beauty of the argument is that it cannot be wrong. If anything fails, then you weren’t efficient enough! Plug the whole thing into a new probability distribution, go up one level of meta if you need to, estimate the new parameters, and you’re off again. Efficiency can never fail, it can only be failed. It’s an infinite regress that never leads to questioning its foundational assumptions.

Efficiency, management, and undermining things that work

Another sneaky trick that efficiency proponents use is to sneak in any improvement under the banner of efficiency. Did some measure fail to improve outcomes? Then bring in some competent manager to oversee its implementation, with powers to put things right. If this fails, then more efficiency is needed (see above); maybe we should start estimating the efficiency of management? If this succeeds, then this is a triumph of efficiency.

But it isn’t. It’s likely a triumph of management. Most likely, there was no complicated cost-benefit estimate showing that good management would improve things; that it does is simply a generally known fact. There are many sensible procedures that can bring great good to organisations, or improve implementations; generally speaking, the effects of these procedures can’t be properly measured, but we do them anyway. This is a triumph of anti-efficiency, not of efficiency.

In fact, efficiency often worsens things in organisations, by undermining the unmeasured advantages that were causing them to function smoothly (see also the Burkean critique, below). If an organisational culture is destroyed by adherence to rigid objectives, then that culture is lost, no matter how many disasters the objectives end up causing in practice. Consider, for instance, recognition-primed decision theory, used successfully by naval ship commanders, tank platoon leaders, fire commanders, design engineers, offshore oil installation managers, infantry officers, commercial aviation pilots, and chess players. By its nature, it is inefficient (it doesn’t have a proper measure to maximise, it doesn’t compare enough options, etc...). So we have great performance, through inefficient means.

Yet if we insisted on efficiency (by, for instance, getting each of those professionals to fill out detailed paperwork justifying their decisions, or giving them more training in classical decision theory), we would dramatically reduce performance. As more and more experts got trapped in the new way of thinking (or of accounting for their thinking), the old expertise would wither away from disuse, and the performance of the whole field would degrade.

Everything else being equal...

Efficiency advocates have a few paradigmatic examples of efficiency. For instance, they set up a situation in which you can save one child for $100, or two for $50 each, conclude that you should do the second, and then pat themselves on the back for being rational and kind. Fair enough.

But where in the world are these people who are standing in front of rows of children with $100 or $50 cures in their hands, seriously considering going for the first option? They don’t exist; instead the problem is built by assuming “everything else being equal”. But everything else is not equal; if it were, there wouldn’t be a debate. It’s precisely because so many things are not equal that we can argue that, say, curing AIDS in a Ugandan ten-month-old whose mother was raped is not directly comparable to curing malaria in two Brazilian teenagers who picked it up on a trip abroad. This is a particularly egregious type of measurement problem: only one aspect of the situation (maybe the number of lives saved, maybe the years of life gained, maybe the quality-adjusted years of life gained… notice how the measure is continually getting more complex?) is deemed worthy of consideration. And all other aspects of the problem are deemed unworthy of measurement, and thus ignored. And the judgement of those closest to the problem—those with the best appreciation of the whole issue—is suspect, overruled by the abstract statistics decided upon by those far away.
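
To see how much hangs on the choice of measure, here is a small sketch with entirely invented numbers: the same two interventions get ranked differently as the measure is “refined”.

```python
# Invented numbers for two hypothetical interventions; nothing here is
# real cost-effectiveness data.
interventions = {
    "intervention A": {"lives": 1, "years_each": 60, "quality": 0.7},
    "intervention B": {"lives": 2, "years_each": 20, "quality": 0.9},
}

for name, d in interventions.items():
    life_years = d["lives"] * d["years_each"]
    qalys = life_years * d["quality"]
    print(f"{name}: lives saved = {d['lives']}, "
          f"life-years = {life_years}, QALYs = {qalys:.0f}")

# By lives saved, B wins (2 vs 1). By life-years, A wins (60 vs 40).
# By QALYs, A wins again (42 vs 36). The verdict depends entirely on
# which aspects of the situation were deemed worthy of measurement.
```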

Efficiency for evul!

Now, we might want efficiency in our own pet cause, but we’re probably pretty indifferent to efficiency gains for causes we don’t care about, and we’d be opposed to efficiency gains for causes that are antithetical to our own. Or let’s be honest, and replace “antithetical” with “evil”. There are, for instance, many groups dedicated to building AGIs with (in the view of many on this list) a dramatic lack of safeguards. We certainly wouldn’t want them to increase their efficiency! Especially since it’s quite likely that they would be far more successful at increasing their “build an AGI” efficiency than their safety efficiency.

Thus, even if efficiency worked well, it is very debatable whether we want it generally spread. Just like in a prisoner’s dilemma, we might want increased efficiency for us, but not for others; and the best equilibrium might be that we don’t increase our own efficiency, and instead accept the status quo. If opponents suddenly start breaking out the efficiency guns, we can always follow suit and retaliate.

At this point, people might argue that efficiency, like science and knowledge itself, is a neutral force that can be used for good or evil, and that how it is used is a separate problem. But I hope that people on this list have a slightly smarter understanding of the situation than that. There are such things as information hazards. If someone publishes detailed plans for building atomic weapons or weaponising anthrax or bird flu, we don’t buy the defence that “they’re just providing information; it’s up to others to decide how it is used”. Similarly, we can’t go around promoting a culture of efficiency without a clear view of the full consequences of such a culture on the world.

In practice it seems that a general lack of efficiency culture could be of benefit to everyone. This was the part of the essay where I was going to break out the alienation argument, and start bringing out the Marxist critiques. But that proved to be unnecessary. We can stick with Adam Smith:

The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgement concerning many even of the ordinary duties of private life… But in every improved and civilized society this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it.

An Inquiry into the Nature and Causes of the Wealth of Nations (1776), Adam Smith

This is not unexpected. There are some ways of increasing efficiency that also increase the experience of the employees (Google seems to manage that). But generally speaking, efficiency is targeted at some measure that is not employee satisfaction (most likely the target is profit). When you change something without optimising for feature X, it is likely that feature X will do worse. This is partially because feature X is generally carefully constructed and low entropy, so any random change is pernicious, and partially because of limited resources: effort that goes away from X will reduce X. At the lower-to-mid level of the income scale, it seems that this pattern has been followed exactly: more and more jobs are becoming lousy, even as economic efficiency is rising. Indeed, I would argue that they are becoming lousy precisely because economic efficiency is rising. The number of low-income jobs with dignity is in sharp decline.

The contrast can be seen in the difference between GDP (easy to measure and optimise for) and happiness (hard to measure and optimise for). The modern economy has been transformed by efficiency drives, doubling every 35 years or so. But it’s clear that human happiness has not been doubling every 35 years or so. The cult of efficiency has resulted in a lot of effort being put in inefficient directions in terms of what we truly value, with perverse results on the lower incomes. A little less efficiency, or at least a halt to the drive for ever greater efficiency, is certainly called for.
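
(For reference, the doubling figure is just standard compound-growth arithmetic; a one-line check:)

```python
import math

# Sanity check on the doubling claim: an economy that doubles every
# 35 years grows at ln(2) / 35 per year, i.e. roughly 2%.
growth_rate = math.log(2) / 35
print(f"implied annual growth: {growth_rate:.1%}")  # ~2.0%
```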

The proof can be seen in the status of different jobs. High status employees are much more likely to have flexible schedules or work patterns, to work without micromanaging superiors, and so on. Thus, as soon as they get enough power, people move away from imposed efficiency and fight to defend their independence and their right to not constantly be measured and directed. Often, this turns out to be better for their employers or clients as well. Reducing the drive towards efficiency results in outcomes that are better for everyone.

Self-improvement, fun, and games

Let’s turn for a moment from the large scale to the small. Would you want more efficiency in your life? Many people on this list have made (or claimed) great improvements through improved efficiency. But did they implement these after a careful cost-benefit analysis, keeping careful track of their effects, and only making the changes that could be strictly justified? Of course not: most of the details of the implementation were done through personal judgement, honed through years of (inefficient) experience (no one tried hammering rusty nails into their hands to improve concentration—and not because of a “Rusty Nail self-hammering and concentration: a placebo-controlled randomised trial” publication).

How do we know most of the process wasn’t efficiency based? For the same reason that it’s so hard to teach a computer to do anything subtle: most of what we do is implemented by complicated systems we do not consciously control. “Efficiency” is a tiny conscious tweak that we add to a process that relies on massive unconscious processes, as well as skills and judgements that we developed in inefficient ways. Those who have tried to do more than that—for instance, those who have tried to use an explicit utility function as their decision criterion—have generally failed.

For example, imagine you were playing a game, and wanted to optimise it. One solution is to program a complicated algorithm to spit out the perfect play, guaranteeing your victory. But this would be boring; the game is no longer a challenge. Actually, what we wanted to optimise is fun. We could try to measure this (see the fully general counterargument above), but the measure would certainly fail, as we forgot to include challenge, or camaraderie, or long-term replayability, or whatever. It’s the usual problem with efficiency—we just can’t list all the important factors. And nobody really tries. Instead, we can do what we always do: try different things (different games, different ways of playing, etc...), get a feel for what works for us, and gradually improve our playing experience, without needing efficiency criteria. Once again, this demonstrates that great improvements are possible, without them being “efficiency gains”.

For many people, there is a certain vicarious pleasure in seeing a project fail, if it was over-efficient, over-marketed, perfectly-designed-to-exploit-common-features. Bland and targeted Hollywood movies are like this, as are some sports teams; the triumph of the quirky, of the spontaneous, of the unexpected underdog, is something we value and enjoy. By definition, a complicated system that measures the “allowable spontaneity and underdog triumph” is not going to give us this enjoyment. Efficiency can never capture our many anti-efficiency desires, meaning it can never capture our desires, and optimising it would lose us much of what we value.

Burke, society, and co-evolution

Let’s get Burkean. One of Burke’s key insights was that the power and effectiveness of a given society are not found only in the explicit rules and resources. A lot of the strength is in the implicit organisation of society—in institutional knowledge and traditions. Constitutional rules about freedom of expression are of limited use without a strong civil society that appreciates freedom of expression and pushes back at attempts to quash or undermine it. The legal system can’t work efficiently without a culture of law-abiding among most citizens. People have set up their lives, their debts, their interactions, in ways that best benefit themselves, given the social circumstances they find themselves in. Thus we should suspect that there is a certain “wisdom” in the way that society has organised itself; a certain resilience and adaptation. Countries like the US have so many laws we don’t know how to count them; nevertheless, the country continues to function because we have reached an understanding as to which laws are enforced in which circumstances (so that the police investigate murder with more assiduity than suspected jaywalking, for instance). Without this understanding, neither the population nor the police could do anything, paralysed by uncertainty as to what was allowed and what wasn’t. And this understanding is an implicit, decentralised object: it’s written down nowhere, but is contained in people’s knowledge and expectations across the country.

Sweep away all these social structures in the name of efficient change, and a lot of value is destroyed—perhaps permanently. Transform the teaching profession into a chase for box-ticking and test results, and the culture of good teaching is slowly eradicated, never to return even if the changes are reversed. Consider bankers, for instance. There have been legal changes in the last decades, but the most important ones were cultural, transforming banking from a staid and dull profession into a high-risk casino (and this change was often justified in the name of economic efficiency).

The social capital of societies is being drained by change, and the faster the change (thus, the stricter we are in pursuing efficiency), the less time it has to reconstitute itself. Changing absolutely everything in the name of higher ideals (as happened in early communist Russia) is a recipe for disaster.

Having been Marxist/Adam Smithist before, let’s also be socially conservative for a moment. Drives for efficiency, whether directly or indirectly through capitalistic competition, tend to undermine the standard structures of society. Even without the Burkean argument above, these structures provide some value to many people. Some people appreciate being in certain hierarchies, in having society organised a particular way, in the stability of relationships within it. When you create change, some of these structures are destroyed, and the new structures almost never provide equal value—at least at first. Even if you disagree with the social conservative values here, they are genuine values held by genuine people, who genuinely suffer when these structures are destroyed. And we all share these values to some extent: humans are risk averse, so that if you exchanged the positions of the average billionaire and the average beggar, the lost value for the billionaire would dwarf the gain for the beggar. A proposition to randomise the position of people in society would never pass by majority vote.

Humans are complicated beings [citation needed], with complicated desires shaped by the society we find ourselves in. Our desires, our capital (of all kinds), our habits, all these have co-evolved with the social circumstances we find ourselves in. Similarly, our formal and informal institutions have co-evolved with the technological, social and legal facts of our society. As has been often demonstrated, if you take co-evolved traits and “improve” one of them, the result can often be disastrous. But efficiency seeks to do just that. You can best make change by making it less efficient, by slowing it down, and letting society and institutions catch up and adapt to the transformations.

The case for increasing inefficiency

So far, we have seen strong arguments for avoiding an increase in efficiency; but this does not translate into a case for increased inefficiency.

But it seems that this must be the case. First of all, we must avoid a misleading status quo bias. It is extraordinarily unlikely that we are currently at the “optimum level of efficiency”. Thus, if efficiency is suspect, it’s just as likely that we would need to decrease it as we need to increase it.

But we can make five positive points in favour of increased inefficiency. The first is that increased inefficiency gives more scope for developing the cultural and social structures that Burke valued and that blunt the sharp edge of changes. Such structures can never evolve if everything one does is weighed and measured.

Secondly, there is the efficiency-resilience tradeoff. Efficient systems tend to be brittle, with every effort bent towards optimising, and none left in reserve (as it is a cardinal sin to leave any resource under-utilised). Thus when disaster strikes, there is little left over to cope, and the carefully optimised, intricate machinery is at risk of collapsing all at once. A more inefficient system, on the other hand, has more reserves, more extras to draw upon, more room to adapt (a toy simulation after these five points illustrates the tradeoff).

Thirdly, increased inefficiency can allow a greater scope for moral compromises. Different systems of morality can differ strongly on what the best course of action is; that means that in an “efficient” society, the standard by which efficiency is measured is the target of an all-out war. Gain control of that measure of efficiency, and you have gained control of the entire moral framework. Less efficient societies allow more compromise, by leaving aside many issues around which there is no consensus: since we know that the status quo has a large inertia, the fight to control the direction of change is less critical. We generally see it as a positive thing that political parties lack the power to completely reorganise society every time they win an election. Similarly, a less efficient society might be a less unequal society, since it seems that gains in strongly efficient societies are distributed much more unevenly than in less efficient ones.

Fourthly, inefficiency adds more friction to the system, and hence more stability. People value the stability, and a bit more friction in many domains—such as financial trades—is widely seen as desirable.

Finally, inefficiency allows more exploration, more focus on speculative ideas. In a world where everything must reach the same rate of return, and do so quickly, there is much less tolerance of variety or difference in approaches. Long term R&D investments, for one, are made principally by governments and by monopolies, secure in their positions. Blue sky thinking and tinkering are luxuries that efficiency seldom tolerates.
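
Returning to the second point, a toy simulation can make the efficiency-resilience tradeoff vivid; the capacities, loads, and shock distribution below are my own invented assumptions, chosen only to illustrate the point:

```python
import random

random.seed(1)

# A toy simulation of the efficiency-resilience tradeoff. A system of
# capacity 100 faces a random demand shock each period; the "efficient"
# system runs at full utilisation, the "inefficient" one keeps 20% in
# reserve. Count how often each one breaks.
CAPACITY, PERIODS = 100, 10_000

def failures(base_load):
    fails = 0
    for _ in range(PERIODS):
        shock = max(0.0, random.gauss(5, 10))  # occasional large shocks
        if base_load + shock > CAPACITY:
            fails += 1
    return fails

print("efficient system  (load 100):", failures(100))  # breaks most periods
print("inefficient system (load 80):", failures(80))   # breaks only rarely
```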

Conclusion

I hope you’ve taken the time to read this. Enjoyed it. Maybe while taking a bath, or listening to soft music. Savoured it, or savoured the many mistakes within it. That it has added something to your day, to your wisdom and understanding. That you started it at one point, then grew bored, then returned later, or not. But, above all else, that you haven’t zoomed through it, seeking key ideas, analysing them and correcting them in a spirit of… you know. That thing. That “e”-word.

Real conclusion

I learnt quite a few things along the way of writing this apostasy, which was the point. Most valuable insight: the worth of “efficiency” is critically dependent on what improvements get counted under that heading—and it’s not always clear, at all. We do have a general tendency to label far too many improvements as efficiency gains. If someone smart applies efficiency and gets better, was the smartness or the efficiency the key? I also think the “exploration vs exploitation” point and the various problems with strict models and blind implementation are very valid, including the effect measurement can have on expertise.

I won’t critique my own apostasy; I think others will learn more from the challenge of taking it apart themselves. As to whether I believe this argument—it’s an apostasy, so… Of course I do. In some ways: I just found the bits in me that agreed with what I was writing, and gave them free rein for once. Though I had to dig very deep to find some of those bits (eg social conservatism).

EDIT: What, no-one’s taken the essay apart yet? Please go ahead!