Learning-Intentions vs Doing-Intentions

Epistemic Status: In truth, only a slight repackaging of familiar ideas with a new handle I’ve found myself wanting. See The Lean Startup and Riskiest Assumption Testing for other resources.

Suppose you are Bob Steele, structural engineer extraordinaire, and you’ve recently completed your doctoral thesis in advanced bridge aerodynamics. You see how a new generation of bridge technology could significantly improve human welfare. Bridges are not as direct as bed nets or cash transfers, but improved transport infrastructure in developing regions boosts economic productivity, flowing through to healthcare, education, and other life-improving services. There’s no time to waste. You found Bridgr.io, put the hard in hardware startup, and get to work bringing your revolutionary technologies to the world.

Common advice is that startups should have a few core metrics which capture their goals, help them track their progress, and ensure they stay focused. For Bridgr.io, those might reasonably be revenue, clients, and number of bridges built. There is a danger in this, however.

Although Bridgr.io’s ultimate goal is to have built bridges in the right places, the most pressing tasks are not construction tasks. They’re research tasks: refining the designs and the construction process. Until Bridgr.io hits on a design which works and can be scaled, there is no point sourcing steel and construction workers for a thousand bridges. The first step should be building a sufficient number of test and prototype bridges (or simulations), not with the goal that these bridges will transport anyone, just with the goal of learning.

Phase 1: Figure out what to do and how to do it.

Phase 2: Do it.

It’s true that if Bridgr.io tries to build as many bridges as possible as quickly as possible, they will learn along the way what works and what doesn’t; R&D will automatically happen. But I claim that the learning which happens as a byproduct of (prematurely) trying to do the thing is often inefficient, ineffective, and possibly lethal to your venture.

Superficially, building bridges to have bridges and building bridges to figure out which bridges to build both involve building bridges. Yet in the details they diverge. If you’re trying to do the thing, you often spend your time trying to mobilize enough resources for the all-out effort. You throw everything you’ve got at it, because that’s what it would take to build a thousand bridges all over the globe. Acting to learn is different. Rather than scale, it’s about taking carefully-selected, targeted actions to reduce uncertainty. You don’t seek a contract for fifty bridges; instead, you focus on building three wildly different designs to help you test your assumptions.

Among other things, the value of information can decline rapidly with scale. If you can build five bridges, as far as the fundamentals go, you can build fifty. And scaling your current process doesn’t necessarily test the uncertainty which matters. Perhaps building fifty bridges in the United States doesn’t test the viability of building them in Central Africa. If you were building to learn, you’d build a couple here and a couple there.
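To make the declining value of information concrete, here is a minimal sketch using a toy Beta-Bernoulli model of an unknown “this design works in this environment” probability. Everything in it is a made-up assumption for illustration (the 80% success rate, the trial counts, the idealized outcomes); it is not an analysis of real bridge engineering.

```python
# A toy sketch of diminishing information value per additional test.
# Assumptions (all hypothetical): each test bridge independently
# succeeds with some unknown probability, modeled with a Beta prior.

from math import sqrt

def posterior_sd(successes: int, failures: int) -> float:
    """Standard deviation of the Beta(1 + successes, 1 + failures)
    posterior over the unknown success probability."""
    a, b = 1 + successes, 1 + failures
    variance = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return sqrt(variance)

true_rate = 0.8  # hypothetical underlying success rate of the design
for n in [1, 2, 5, 10, 50]:
    successes = round(true_rate * n)  # idealized observed outcomes
    sd = posterior_sd(successes, n - successes)
    print(f"after {n:2d} test bridges: posterior sd ~ {sd:.3f}")
```

Uncertainty shrinks roughly like 1/sqrt(n): going from one test to five buys far more information than going from ten to fifty. And all fifty identical tests tell you nothing about a different environment, which would need its own small batch.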

The mistake I see, and the motivation for this post, is many people skipping over the learning phase, or trying to smush it into the actual doing. They seek to maximize their metrics now rather than first investing in figuring out what it is they really should be doing, and what will work at all. The mistake is always operating with a doing-intention when really a learning-intention is needed first.

Doing-Intention

You’re building a bridge because you want a bridge. You want a physical outcome in the world. You’re doing the actual thing.

Learning-Intention

You’re building a bridge because you’re trying to understand bridges better. It’s true that ultimately you want an actual physical bridge, but this bridge isn’t for that. This bridge is just about gaining information about what doesn’t fall down.

In the context of Effective Altruism

I have some concern that this error is common among those doing directly altruistic work. If, like Bob Steele, you believe that your intervention could be helping people right now, then it’s tempting to want to ramp up production and just do the good thing. Every delay might result in the loss of lives. When the work is very real, it’s hard to step back and treat it like an abstract information problem. (Possibly the pressures are no weaker in the startup world, but that realm might benefit from stronger cultural wisdom exhorting people not to scale before they have “product-market fit.”)

Possible causes of this error-mode

Why do people make this class of mistake? A few guesses:

  • The pressure to present results now. Donors, funders, and employees especially want to see something for time and money invested.

  • The dislike of uncertainty. It’s more comfortable to decide to fully run with a plausibly good Plan A, whose likelihood of success you can trump up, than to stay in limbo while you test Plans A, B, and C.

  • The underestimation of how much uncertainty remains even after early evidence suggests a plan or direction might be a good idea. As an example, a company I once worked for spent over a year pursuing a misguided strategy because, using it, they had landed one large deal with what turned out to be an atypical client.

  • Although people have the notion of an experimental mindset and value of information, there’s a failure to adopt an experimental/research mindset once certainty rises above a certain level. People think of conducting experiments when they don’t know whether something will work at all, but not when the overall picture looks promising and what remains is implementation details. For instance, if I have a program to distribute bed nets, I might have 75% credence that it will do a lot of good, even if I’m uncertain about just how much good, what my opportunity costs are, and the true best way to implement it. At the point of 75% confidence (or much less), I might stop thinking of my program as experimental and fall into a maximizing, doing-intention. Show everyone them big results. (See the toy calculation after this list for why high credence alone needn’t settle the question.)

    • This is lethal if your goals are extremely long-term with minimal feedback, e.g. long-termist effective altruists. There will be many plausibly good things to do, but if you scale up prematurely by turning your experiments into all-out interventions, then you might either miss far greater opportunities or fail to implement your intervention in a way that works at all on the long-term scale.

    • Community feedback can also push in the wrong direction. People looking in from the outside at an EA project will approve of efforts to do good backed by a decent plausibility story for effectiveness. After that, scale and certainty are probably perceived as more impressive than an array of small-scale experiments and a list of uncertainties.
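Here is the toy calculation promised above: a hedged expected-value comparison of “commit to Plan A now” versus “run small tests first.” Every number is a made-up assumption (the credences for the three plans, a payoff of 100 if the scaled plan works, a test cost of 5 per plan), and it leans on the very generous simplification that a small test perfectly reveals whether a plan works.

```python
# A toy comparison of committing now vs. testing first.
# All probabilities, payoffs, and costs below are hypothetical.

plans = {"A": 0.75, "B": 0.50, "C": 0.40}  # assumed success credences
payoff = 100   # assumed value if the plan you scale actually works
test_cost = 5  # assumed cost of one small-scale test

# Strategy 1: run with the most promising plan immediately.
ev_commit = plans["A"] * payoff  # 75.0

# Strategy 2: test every plan (assume tests are perfectly informative),
# then scale any plan that passed.
p_all_fail = 1.0
for p in plans.values():
    p_all_fail *= 1 - p  # 0.25 * 0.50 * 0.60 = 0.075
ev_test = (1 - p_all_fail) * payoff - test_cost * len(plans)  # 77.5

print(f"commit to Plan A now:   EV = {ev_commit:.1f}")
print(f"test first, then scale: EV = {ev_test:.1f}")
```

Even with the numbers tilted toward Plan A, testing first comes out ahead here. Real tests are noisy and plans can be correlated, so the arithmetic will differ in practice, but the structural point stands: a 75% credence isn’t by itself a reason to abandon the learning-intention.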

Final caveat: the perils of Learning-Intention

As much as I’m advocating for them here, there are of course a great many perils associated with learning-intentions too. Learning can easily become divorced from real-world goals, and picking the right actions to learn the information you actually need is no small challenge. Faced with a choice between degenerate doing-intention and degenerate learning-intention, I think I would pick the former, given it is more likely to have empiricism on its side.