An Equilibrium of No Free Energy

Follow-up to: Inadequacy and Modesty

I am now going to introduce some concepts that lack established names in the economics literature—though I don’t believe that any of the basic ideas are new to economics.

First, I want to distinguish between the standard economic concept of efficiency (as in efficient pricing) and the related but distinct concepts of inexploitability and adequacy, which are what usually matter in real life.


Depending on the strength of your filter bubble, you may have met people who become angry when they hear the phrase “efficient markets,” taking the expression to mean that hedge fund managers are particularly wise, or that markets are particularly just.1

Part of where this interpretation appears to be coming from is a misconception that market prices reflect a judgment on anyone’s part about what price would be “best”—fairest, say, or kindest.

In a pre-market economy, when you offer somebody fifty carrots for a roasted antelope leg, your offer says something about how impressed you are with their work hunting down the antelope and how much reward you think that deserves from you. If they’ve dealt generously with you in the past, perhaps you ought to offer them more. This is the only instinctive notion people start with for what a price could mean: a personal interaction between Alice and Bob reflecting past friendships and a balance of social judgments.

In contrast, the economic notion of a market price is that for every loaf of bread bought, there is a loaf of bread sold; and therefore actual demand and actual supply are always equal. The market price is the input that makes the decreasing curve for demand as a function of price meet the increasing curve for supply as a function of price. This price is an “is” statement rather than an “ought” statement, an observation and not a wish.
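
To make the “is, not ought” point concrete, here is a toy sketch with made-up linear curves (the numbers are invented for illustration, not anything from the text): the market price is simply the point where the two curves cross, found by scanning rather than by anyone’s judgment of fairness.

```python
# Toy illustration (made-up numbers): the market price is just where a
# decreasing demand curve crosses an increasing supply curve.

def demand(price):      # loaves buyers want at this price
    return 100 - 2 * price

def supply(price):      # loaves sellers offer at this price
    return 3 * price

# Scan candidate prices for the one where the curves meet.
equilibrium = min(range(0, 51), key=lambda p: abs(demand(p) - supply(p)))
print(equilibrium, demand(equilibrium), supply(equilibrium))  # 20 60 60
```

At a price of 20, exactly 60 loaves are demanded and 60 supplied: for every loaf bought, a loaf sold.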

In particular, an efficient market, from an economist’s perspective, is just one whose average price movement can’t be predicted by you.

If that way of putting it sounds odd, consider an analogy. Suppose you asked a well-designed superintelligent AI system to estimate how many hydrogen atoms are in the Sun. You don’t expect the superintelligence to produce an answer that is exactly right down to the last atom, because this would require measuring the mass of the Sun more finely than any measuring instrument you expect it to possess. At the same time, it would be very odd for you to say, “Well, I think the superintelligence will underestimate the number of atoms in the Sun by 10%, because hydrogen atoms are very light and the AI system might not take that into account.” Yes, hydrogen atoms are light, but the AI system knows that too. Any reason you can devise for how a superintelligence could underestimate the amount of hydrogen in the Sun is a possibility that the superintelligence can also see and take into account. So while you don’t expect the system to get the answer exactly right, you don’t expect that you yourself will be able to predict the average value of the error—to predict that the system will underestimate the amount by 10%, for example.

This is the property that an economist thinks an “efficient” price has. An efficient price can update sharply: the company can do worse or better than expected, and the stock can move sharply up or down on the news. In some cases, you can rationally expect volatility; you can predict that good news might arrive tomorrow and make the stock go up, balanced by a counter-possibility that the news will fail to arrive and the stock will go down. You could think the stock is 30% likely to rise by $10 and 20% likely to drop by $15 and 50% likely to stay the same. But you can’t predict in advance the average value by which the price will change, which is what it would take to make an expected profit by buying the stock or short-selling it.2
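
The stock example above can be checked by hand: the three possible moves are individually large, but their probability-weighted average is zero, which is exactly what “no predictable average change” means. A quick sketch:

```python
# The example odds from the text: volatility can be high while the
# *average* (expected) price change is still zero.
outcomes = [(0.30, +10), (0.20, -15), (0.50, 0)]  # (probability, move in $)

expected_move = sum(p * move for p, move in outcomes)
variance = sum(p * (move - expected_move) ** 2 for p, move in outcomes)

print(expected_move)   # 0.0  -> no exploitable average drift
print(variance)        # 75.0 -> but plenty of expected volatility
```

You rationally expect the price to jump around; you just can’t expect a profit from buying or shorting in advance.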

When an economist says that a market price is efficient over a two-year time horizon, they mean: “The current price that balances the supply and demand of this financial instrument well reflects all public information affecting a boundedly rational estimate of the future supply-demand balancing point of this financial instrument in two years.” They’re relating the present intersection of these two curves to an idealized cognitive estimate of the curves’ future intersection.

But this is a long sentence in the language of a hunter-gatherer. If somebody doesn’t have all the terms of that sentence precompiled in their head, then they’re likely to interpret the sentence in the idiom of ordinary human life and ordinary human relationships.

People have an innate understanding of “true” in the sense of a map that reflects the territory, and they can imagine processes that produce good maps; but probability and microeconomics are less intuitive.3 What people hear when you talk about “efficient prices” is that a cold-blooded machine has determined that some people ought to be paid $9/hour. And they hear the economist saying nice things about the machine, praising it as “efficient,” implying that the machine is right about this $9/hour price being good for society, that this price well reflects what someone’s efforts are justly worth. They hear you agreeing with this pitiless machine’s judgment about how the intuitive web of obligations and incentives and reputation ought properly to cash out for a human interaction.

And in the domain of stocks, when stock prices are observed to swing widely, this intuitive view says that the market can’t be that smart after all. For if it were smart, would it keep turning out to be “wrong” and need to change its mind?

I once read a rather clueless magazine article that made fun of a political prediction market on the basis that when a new poll came out, the price of the prediction market moved. “It just tracks the polls!” the author proclaimed. But the point of the prediction market is not that it knows some fixed, objective chance with high accuracy. The point of a prediction market is that it summarizes all the information available to the market participants. If the poll moved prices, then the poll was new information that the market thought was important, and the market updated its belief, and this is just the way things should be.

In a liquid market, “price moves whose average direction you can predict in advance” correspond to both “places you can make a profit” and “places where you know better than the market.” A market that knows everything you know is a market where prices are “efficient” in the conventional economic sense—one where you can’t predict the net direction in which the price will change.

This means that the efficiency of a market is assessed relative to your own intelligence, which is fine. Indeed, it’s possible that the concept should be called “relative efficiency.” Yes, a superintelligence might be able to predict price trends that no modern human hedge fund manager could; but economists don’t think that today’s markets are efficient relative to a superintelligence.

Today’s markets may not be efficient relative to the smartest hedge fund managers, or efficient relative to corporate insiders with secret knowledge that hasn’t yet leaked. But the stock markets are efficient relative to you, and to me, and to your Uncle Albert who thinks he tripled his money through his incredible acumen in buying


Not everything that involves a financial price is efficient. There was recently a startup called Color Labs, aka, whose putative purpose was to let people share photos with their friends and see other photos that had been taken nearby. They closed $41 million in funding, including $20 million from the prestigious Sequoia Capital.

When the news of their funding broke, practically everyone on the online Hacker News forum was rolling their eyes and predicting failure. It seemed like a nitwit me-too idea to me too. And then, yes, Color Labs failed and the 20-person team sold themselves to Apple for $7 million and the venture capitalists didn’t make back their money. And yes, it sounds to me like the prestigious Sequoia Capital bought into the wrong startup.

If that’s all true, it’s not a coincidence that neither I nor any of the other onlookers could make money on our advance prediction. The startup equity market was inefficient (a price underwent a predictable decline), but it wasn’t exploitable.4 There was no way to make a profit just by predicting that Sequoia had overpaid for the stock it bought. Because, at least as of 2017, the market lacks a certain type and direction of liquidity: you can’t short-sell startup equity.5

What about houses? Millions of residential houses change hands every year, and they cost more than stock shares. If we expect the stock market to be well-priced, shouldn’t we expect the same of houses?

The answer is “no,” because you can’t short-sell a house. Sure, there are some ways to bet against aggregate housing markets, like shorting real estate investment trusts or home manufacturers. But in the end, hedge fund managers can’t make a synthetic financial instrument that behaves just like the house on 6702 West St. and sell it into the same housing market frequented by consumers like you. Which is why you might do very well to think for yourself about whether the price seems sensible to you before buying a house: because you might know better than the market price, even as a non-specialist relying only on publicly available information.

Let’s imagine there are 100,000 houses in Boomville, of which 10,000 have been for sale in the last year or so. Suppose there are 20,000 fools who think that housing prices in Boomville can only go up, and 10,000 rational hedge fund managers who think that the shale-oil business may collapse and lead to a predictable decline in Boomville house prices. There’s no way for the hedge fund managers to short Boomville house prices—not in a way that satisfies the optimistic demand of 20,000 fools for Boomville houses, not in a way that causes house prices to actually decline. The 20,000 fools just bid on the 10,000 available houses until the skyrocketing price of the houses makes 10,000 of the fools give up.

Some smarter agents might decline to buy, and so somewhat reduce demand. But the smarter agents can’t actually visit Boomville and make hundreds of thousands of dollars off of the overpriced houses. The price is too high and will predictably decline, relative to public information, but there’s no way you can make a profit on knowing that. An individual who owns an existing house can exploit the inefficiency by selling that house, but rational market actors can’t crowd around the inefficiency and exploit it until it’s all gone.

Whereas a predictably underpriced house, put on the market for predictably much less than its future price, would be an asset that any of a hundred thousand rational investors could come in and snap up.

So a frothy housing market may see many overpriced houses, but few underpriced ones.

Thus it will be easy to lose money in this market by buying stupidly, and much harder to make money by buying cleverly. The market prices will be inefficient—in a certain sense stupid—but they will not be exploitable.

In contrast, in a thickly traded market where it is easy to short an overpriced asset, prices will be efficient in both directions, and any day is as good a day to buy as any other. You may end up exposed to excess volatility (an asset with a 50% chance of doubling and a 50% chance of going bankrupt, for example), but you won’t actually have bought anything overpriced—if it were predictably overpriced, it would have been short-sold.6

We can see the notion of an inexploitable market as generalizing the notion of an efficient market as follows: in both cases, there’s no free energy inside the system. In both markets, there’s a horde of hungry organisms moving around trying to eat up all the free energy. In the efficient market, every predictable price change corresponds to free energy (easy money) and so the equilibrium where hungry organisms have eaten all the free energy corresponds to an equilibrium of no predictable price changes. In a merely inexploitable market, there are predictable price changes that don’t correspond to free energy, like an overpriced house that will decline later, and so the no-free-energy equilibrium can still involve predictable price changes.7

Our ability to say, within the context of the general theory of “efficient markets,” that houses in Boomville may still be overpriced—and, additionally, to say that they are much less likely to be underpriced—is what makes this style of reasoning powerful. It doesn’t just say, “Prices are usually right when lots of money is flowing.” It gives us detailed conditions for when we should and shouldn’t expect efficiency. There’s an underlying logic about powerfully smart organisms, any single one of which can consume free energy if it is available in worthwhile quantities, in a way that produces a global equilibrium of no free energy; and if one of the premises is invalidated, we get a different prediction.


At one point during the 2016 presidential election, the PredictIt prediction market—the only one legally open to US citizens (and only US citizens)—had Hillary Clinton at a 60% probability of winning the general election. The bigger, international prediction market Betfair had Clinton at 80% at that time.

So I looked into buying Clinton shares on PredictIt—but discovered, alas, that PredictIt charged a 10% fee on profits, a 5% fee on withdrawals, had an $850 limit per contract bet… and on top of all that, I’d also have to pay 28% federal and 9.3% state income taxes on any gains. Which, in sum, meant I wouldn’t be getting much more than $30 in expected return for the time and hassle of buying the contracts.

Oh, if only PredictIt didn’t charge that 10% fee on profits, that 5% fee on withdrawals! If only they didn’t have the $850 limit! If only the US didn’t have such high income taxes, and didn’t limit participation in overseas prediction markets! I could have bought Clinton shares at 60 cents on PredictIt and Trump shares at 20 cents on Betfair, winning a dollar either way and getting a near-guaranteed 25% return until the prices were in line! Curse those silly rules, preventing me from picking up that free money!
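
The frictionless arbitrage in that complaint is simple arithmetic: 80 cents staked, a dollar back no matter who wins. A minimal sketch, using the prices from the text and deliberately ignoring the fees, limits, and taxes (as the complaint does):

```python
# The frictionless version of the bet described above: buy Clinton at $0.60
# on PredictIt and Trump at $0.20 on Betfair.  Exactly one contract pays $1.
clinton_price = 0.60   # PredictIt
trump_price = 0.20     # Betfair

cost = clinton_price + trump_price   # $0.80 staked per paired bet
payout = 1.00                        # one side pays $1 whichever way it goes

profit = payout - cost
print(f"{profit / cost:.0%}")        # 25% -> near-guaranteed return, sans fees
```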

Does that complaint sound reasonable to you?

If so, then you haven’t yet fully internalized the notion of an inefficient-but-inexploitable market.

If the taxes, fees, and betting limits hadn’t been there, the PredictIt and Betfair prices would have been the same.


Suppose it were the case that some cases of Seasonal Affective Disorder proved resistant to sitting in front of a 10,000-lux lightbox for 30 minutes (the standard treatment), but would nonetheless respond if you bought 130 or so 60-watt-equivalent high-CRI LED bulbs, in a mix of 5000K and 2700K color temperatures, and strung them up over your two-bedroom apartment.

Would you expect that, supposing this were true, there would already exist a journal report somewhere on it?

Would you expect that, supposing this were true, it would already be widely discussed (or at least rumored) on the Internet?

Would you expect that, supposing this were true, doctors would already know about it and it would be on standard medical pages about Seasonal Affective Disorder?

And would you, failing to observe anything on the subject after a couple of hours of Googling, conclude that your civilization must have some unknown good reason why not everyone was doing this already?

To answer a question like this, we need an analysis not of the world’s efficiency or inexploitability but rather of its adequacy—whether all the low-hanging fruit have been plucked.

A duly modest skepticism, translated into the terms we’ve been using so far, might say something like this: “Around 7% of the population has severe Seasonal Affective Disorder, and another 20% or so has weak Seasonal Affective Disorder. Around 50% of tested cases respond to standard lightboxes. So if the intervention of stringing up a hundred LED bulbs actually worked, it could provide a major improvement to the lives of 3% of the US population, costing on the order of $1000 each (without economies of scale). Many of those 9 million US citizens would be rich enough to afford that as a treatment for major winter depression. If you could prove that your system worked, you could create a company to sell SAD-grade lighting systems and have a large market. So by postulating that you can cure SAD this way, you’re postulating a world in which there’s a huge quantity of metaphorical free energy—a big energy gradient that society hasn’t traversed. Therefore, I’m skeptical of this medical theory for more or less the same reason that I’m skeptical you can make money on the stock market: it postulates a $20 bill lying around that nobody has already picked up.”
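
The skeptic’s Fermi estimate takes a few lines to reproduce. The 7%, 50%, and $1000 figures are from the text; the ~320 million US population is my assumption, and the text rounds the resulting 3.5% and ~11 million down to “3%” and “9 million”:

```python
# Back-of-the-envelope version of the skeptic's market-size estimate.
us_population = 320e6           # assumed, not from the text
severe_sad_rate = 0.07          # severe Seasonal Affective Disorder
lightbox_nonresponse = 0.50     # fraction not helped by standard lightboxes
cost_per_person = 1000          # dollars, without economies of scale

treatable = us_population * severe_sad_rate * lightbox_nonresponse
print(f"{treatable / 1e6:.0f} million people")                 # ~11 million
print(f"${treatable * cost_per_person / 1e9:.0f}B addressable market")
```

The point is not the exact number but the order of magnitude: a cure this cheap would imply billions of dollars of unclaimed free energy.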

So the distinction is:

  • Efficiency: “Microsoft’s stock price is neither too low nor too high, relative to anything you can possibly know about Microsoft’s stock price.”

  • Inexploitability: “Some houses and housing markets are overpriced, but you can’t make a profit by short-selling them, and you’re unlikely to find any substantially underpriced houses—the market as a whole isn’t rational, but it contains participants who have money and understand housing markets as well as you do.”

  • Adequacy: “Okay, the medical sector is a wildly crazy place where different interventions have orders-of-magnitude differences in cost-effectiveness, but at least there’s no well-known but unused way to save ten thousand lives for just ten dollars each, right? Somebody would have picked up on it! Right?!”

Let’s say that within some slice through society, the obvious low-hanging fruit that save more than ten thousand lives for less than a hundred thousand dollars total have, in fact, been picked up. Then I propose the following terminology: let us say that that part of society is adequate at saving 10,000 lives for $100,000.

And if there’s a convincing case that this property does not hold, we’ll say this subsector is inadequate (at saving 10,000 lives for $100,000).

To see how an inadequate equilibrium might arise, let’s start by focusing on one tiny subfactor of the human system, namely academic research.

We’ll even further oversimplify our model of academia and pretend that research is a two-factor system containing academics and grantmakers, and that a project can only happen if there’s both a participating academic and a participating grantmaker.

We next suppose that in some academic field, there exists a population of researchers who are individually eager and collectively opportunistic for publications—papers accepted to journals, especially high-impact journal publications that constitute strong progress toward tenure. For any clearly visible opportunity to get a sufficiently large number of citations with a small enough amount of work, there are collectively enough academics in this field that somebody will snap up the opportunity. We could say, to make the example more precise, that the field is collectively opportunistic in 2 citations per workday—if there’s any clearly visible opportunity to do 40 days of work and get 80 citations, somebody in the field will go for it.

This level of opportunism might be much more than the average paper gets in citations per day of work. Maybe the average is more like 10 citations per year of work, and lots of researchers work for a year on a paper that ends up garnering only 3 citations. We’re not trying to ask about the average price of a citation; we’re trying to ask how cheap a citation has to be before somebody somewhere is virtually guaranteed to try for it.

But academic paper-writers are only half the equation; the other half is a population of grantmakers.

In this model, can we suppose for argument’s sake that grantmakers are motivated by the pure love of all sentient life, and yet we still end up with an academic system that is inadequate?

I might naively reply: “Sure. Let’s say that those selfish academics are collectively opportunistic at two citations per workday, and the blameless and benevolent grantmakers are collectively opportunistic at one quality-adjusted life-year (QALY) per $100.8 Then everything which produces one QALY per $100 and two citations per workday gets funded. Which means there could be an obvious, clearly visible project that would produce a thousand QALYs per dollar, and so long as it doesn’t produce enough citations, nobody will work on it. That’s what the model says, right?”
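
The naive model in that reply amounts to a two-threshold filter: a project happens only if it clears both the citation bar and the QALY bar. A minimal sketch with an invented project list:

```python
# A minimal sketch of the naive two-factor model above: a project happens
# only if it clears BOTH thresholds.  The project list is made up.
CITATIONS_THRESHOLD = 2.0       # citations per workday (academics' price)
QALY_THRESHOLD = 1 / 100        # QALYs per dollar (grantmakers' price)

projects = [
    # (name, citations per workday, QALYs per dollar)
    ("trendy incremental result", 3.0, 0.01),
    ("obvious altruistic win",    0.5, 1000.0),
]

for name, citations, qalys in projects:
    funded = citations >= CITATIONS_THRESHOLD and qalys >= QALY_THRESHOLD
    print(name, "->", "funded" if funded else "ignored")
# The 1,000-QALY/$ project is ignored: it never clears the citation bar.
```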

Ah, but this model has a fragile equilibrium of inadequacy. It only takes one researcher who is opportunistic in QALYs and willing to take a hit in citations to snatch up the biggest, lowest-hanging altruistic fruit if there’s a population of grantmakers eager to fund projects like that.

Assume the most altruistically neglected project produces 1,000 QALYs per dollar. If we add a single rational and altruistic researcher to this model, then they will work on that project, whereupon the equilibrium will be adequate at 1,000 QALYs per dollar. If there are two rational and altruistic researchers, the second one to arrive will start work on the next-most-neglected project—say, a project that has 500 QALYs/$ but wouldn’t garner enough citations for whatever reason—and then the field will be adequate at 500 QALYs/$. As this free energy gets eaten up (it’s tasty energy from the perspective of an altruist eager for QALYs), the whole field becomes less inadequate in the relevant respect.
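
The dynamic described here is a greedy one: each arriving altruist takes the most neglected remaining project, so the level at which the field is inadequate falls step by step. A sketch with made-up QALY/$ values:

```python
# Sketch of the dynamic above: each altruistic researcher grabs the most
# neglected remaining project, so the inadequacy frontier falls greedily.
neglected_qalys_per_dollar = [1000, 500, 200, 50]  # made-up project values

def adequacy_after(n_altruists):
    """QALYs/$ of the best project still unclaimed after n altruists arrive."""
    remaining = sorted(neglected_qalys_per_dollar, reverse=True)[n_altruists:]
    return remaining[0] if remaining else 0

print(adequacy_after(0))  # 1000 -> nobody working; field inadequate at 1000
print(adequacy_after(1))  # 500  -> the 1000-QALY/$ fruit has been eaten
print(adequacy_after(2))  # 200  -> and so on down the gradient
```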

But this assumes the grantmakers are eager to fund highly efficient QALY-increasing projects.

Suppose instead that the grantmakers are not cause-neutral scope-sensitive effective altruists assessing QALYs/$. Suppose that most grantmakers pursue, say, prestige per dollar. (Robin Hanson offers an elementary argument that most grantmaking to academia is about prestige.9 In any case, we can provisionally assume the prestige model for purposes of this toy example.)

From the perspective of most grantmakers, the ideal grant is one that gets their individual name, or their boss’s name, or their organization’s name, in newspapers around the world in close vicinity to phrases like “Stephen Hawking” or “Harvard professor.” Let’s say for the purpose of this thought experiment that the population of grantmakers is collectively opportunistic in 20 microHawkings per dollar, such that at least one of them will definitely jump on any clearly visible opportunity to affiliate themselves with Stephen Hawking for $50,000. Then at equilibrium, everything that provides at least 2 citations per workday and 20 microHawkings per dollar will get done.

This doesn’t quite follow logically, because the stock market is far more efficient at matching bids between buyers and sellers than academia is at matching researchers to grantmakers. (It’s not like anyone in our civilization has put as much effort into rationalizing the academic matching process as, say, OkCupid has put into their software for hooking up dates. It’s not like anyone who did produce this public good would get paid more than they could have made as a Google programmer.)

But even if the argument is still missing some pieces, you can see the general shape of this style of analysis. If a piece of research will clearly visibly yield lots of citations with a reasonable amount of labor, and make the grantmakers on the committee look good for not too much money committed, then a researcher eager to do it can probably find a grantmaker eager to fund it.

But what if there’s some intervention which could save 100 QALYs/$, yet produces neither great citations nor great prestige? Then if we add a few altruistic researchers to the model, they probably won’t be able to find a grantmaker to fund it; and if we add a few altruistic grantmakers to the model, they probably won’t be able to find a qualified researcher to work on it.

One systemic problem can often be overcome by one altruist in the right place. Two systemic problems are another matter entirely.

Usually when we find trillion-dollar bills lying on the ground in real life, it’s a symptom of (1) a central-command bottleneck that nobody else is allowed to fix, as with the European Central Bank wrecking Europe, or (2) a system with enough moving parts that at least two parts are simultaneously broken, meaning that single actors cannot defy the system. To modify an old aphorism: usually, when things suck, it’s because they suck in a way that’s a Nash equilibrium.

In the same way that inefficient markets tend systematically to be inexploitable, grossly inadequate systems tend systematically to be unfixable by individual non-billionaires.

But then you can sometimes still insert a wedge for yourself, even if you can’t save the whole system. Something that’s systemically hard to fix for the whole planet is sometimes possible to fix in your own two-bedroom apartment. So inadequacy is even more important than exploitability on a day-to-day basis, because it’s inadequacy-generating situations that lead to low-hanging fruits large enough to be worthwhile at the individual level.


A critical analogy between an inadequate system and an efficient market is this: even systems that are horribly inadequate from our own perspective are still in a competitive equilibrium. There’s still an equilibrium of incentives, an equilibrium of supply and demand, an equilibrium where (in the central example above) all the researchers are vigorously competing for prestigious publications and using up all available grant money in the course of doing so. There’s no free energy anywhere in the system.

I’ve seen a number of novice rationalists committing what I shall term the Free Energy Fallacy, which is something along the lines of, “This system’s purpose is supposed to be to cook omelettes, and yet it produces terrible omelettes. So why don’t I use my amazing skills to cook some better omelettes and take over?”

And generally the answer is that maybe the system from your perspective is broken, but everyone within the system is intensely competing along other dimensions and you can’t keep up with that competition. They’re all chasing whatever things people in that system actually pursue—instead of the lost purposes they wistfully remember, but don’t have a chance to pursue because it would be career suicide. You won’t become competitive along those dimensions just by cooking better omelettes.

No researcher has any spare attention to give your improved omelette-cooking idea because they are already using all of their labor to try to get publications into high-impact journals; they have no free work hours.

The journals won’t take your omelette-cooking paper because they get lots of attempted submissions that they screen, for example, by looking for whether the researcher is from a high-prestige institution or whether the paper is written in a style that makes it look technically difficult. Being good at cooking omelettes doesn’t make you the best competitor at writing papers to appeal to prestigious journals—any publication slot would have to be given to you rather than someone else who is intensely trying to get it. Your good omelette technique might be a bonus, but only if you were already doing everything else right (which you’re not).

The grantmakers have no free money to give you to run your omelette-cooking experiment, because there are thousands of researchers competing for their money, and you are not competitive at convincing grantmaking committees that you’re a safe, reputable, prestigious option. Maybe they feel wistfully fond of the ideal of better omelettes, but it would be career suicide for them to give money to the wrong person because of that.

What inadequate systems and efficient markets have in common is the lack of any free energy in the equilibrium. We can see the equilibrium in both cases as defined by an absence of free energy. In an efficient market, any predictable price change corresponds to free energy, so thousands of hungry organisms trying to eat the free energy produce a lack of predictable price changes. In a system like academia, the competition for free energy may not correspond to anything good from your own standpoint, and as a result you may label the outcome “inadequate”; but there is still no free energy. Trying to feed within the system, or do anything within the system that uses a resource the other competing organisms want—money, publication space, prestige, attention—will generally be as hard for you as it is for any other organism.

Indeed, if the system gave priority to rewarding better performance along the most useful or socially beneficial dimensions over all competing ways of feeding, the system wouldn’t be inadequate in the first place. It’s like wishing PredictIt didn’t have fees and betting limits so that you could snap up those mispriced contracts.

In a way, it’s this very lack of free energy, this intense competition without space to draw a breath, that keeps the inadequacy around and makes it non-fragile. In the case of US science, there was a brief period after World War II where there was new funding coming in faster than universities could create new grad students, and scientists had a chance to pursue ideas that they liked. Today Malthus has reasserted himself, and it’s no longer generally feasible for people to achieve career success while going off and just pursuing the research they most enjoy, or just going off and pursuing the research with the largest altruistic benefits. For any actor to do the best thing from an altruistic standpoint, they’d need to ignore all of the system’s internal incentives pointing somewhere else, and there’s no free energy in the system to feed someone who does that.10


Since the idea of civ­i­liza­tional ad­e­quacy seems fairly use­ful and gen­eral, I ini­tially won­dered whether it might be a known idea (un­der some other name) in eco­nomics text­books. But my friend Robin Han­son, a pro­fes­sional economist at an aca­demic in­sti­tu­tion well-known for its economists, has writ­ten a lot of ma­te­rial that I see (from this the­o­ret­i­cal per­spec­tive) as do­ing back­wards rea­son­ing from in­ad­e­quacy to in­cen­tives.11 If there were a wide­spread eco­nomic no­tion of ad­e­quacy that he were in­vok­ing, or stan­dard mod­els of aca­demic in­cen­tives and aca­demic in­ad­e­quacy, I would ex­pect him to cite them.

Now look at the above para­graph. Can you spot the two im­plicit ar­gu­ments from ad­e­quacy?

The first sen­tence says, “To the ex­tent that this way of gen­er­al­iz­ing the no­tion of an effi­cient mar­ket is con­cep­tu­ally use­ful, we should ex­pect the field of eco­nomics to have been ad­e­quate to have already ex­plored it in pa­pers, and ad­e­quate at the task of dis­sem­i­nat­ing the re­sult­ing knowl­edge to the point where my economist friends would be fa­mil­iar with it.”

The sec­ond and third sen­tences say, “If some­thing like in­ad­e­quacy anal­y­sis were already a well-known idea in eco­nomics, then I would ex­pect my smart economist friend Robin Han­son to cite it. Even if Robin started out not know­ing, I ex­pect his other economist friends would tell him, or that one of the many economists read­ing his blog would com­ment on it. I ex­pect the pop­u­la­tion of economists read­ing Robin’s blog and pa­pers to be ad­e­quate to the task of tel­ling Robin about an ex­ist­ing field here, if one already ex­isted.”

Ad­e­quacy ar­gu­ments are ubiquitous, and they’re much more com­mon in ev­ery­day rea­son­ing than ar­gu­ments about effi­ciency or ex­ploita­bil­ity.


Re­turn­ing to that busi­ness of string­ing up 130 light bulbs around the house to treat my wife’s Sea­sonal Affec­tive Di­sor­der:

Be­fore I started, I tried to Google whether any­one had given “put up a ton of high-qual­ity lights” a shot as a treat­ment for re­sis­tant SAD, and didn’t find any­thing. Where­upon I shrugged, and started putting up LED bulbs.

Observing these choices of mine, we can infer that my inadequacy analysis was something like this: First, I did spend a fair amount of time Googling, and tried harder after the first search terms failed. This implies I started out thinking my civilization might have been adequate to think of the “more light” treatment and test it.

Then when I didn’t find any­thing on Google, I went ahead and tested the idea my­self, at con­sid­er­able ex­pense. I didn’t as­sign such a high prob­a­bil­ity to “if this is a good idea, peo­ple will have tested it and prop­a­gated it to the point where I could find it” that in the ab­sence of Google re­sults, I could in­fer that the idea was bad.

I ini­tially tried or­der­ing the cheap­est LED lights from Hong Kong that I could find on eBay. I didn’t feel like I could rely on the US light­ing mar­ket to equal­ize prices with Hong Kong, and so I wasn’t con­fi­dent that the pre­mium price for US LED bulbs rep­re­sented a qual­ity differ­ence. But when the cheap lights fi­nally ar­rived from Hong Kong, they were dim, in­effi­cient, and of visi­bly low color qual­ity. So I de­cided to buy the more ex­pen­sive US light bulbs for my next de­sign iter­a­tion.

That is: I tried to save money based on a pos­si­ble lo­cal in­effi­ciency, but it turned out not to be in­effi­cient, or at least not in­effi­cient enough to be eas­ily ex­ploited by me. So I up­dated on that ob­ser­va­tion, dis­carded my pre­vi­ous be­lief, and changed my be­hav­ior.

Some­time af­ter putting up the first 100 light bulbs or so, I was work­ing on an ear­lier draft of this chap­ter and there­fore re­flect­ing more in­ten­sively on my pro­cess than I usu­ally do. It oc­curred to me that some­times the best aca­demic con­tent isn’t on­line and that it might not be ex­pen­sive to test that. So I or­dered a used $6 ed­ited vol­ume on Sea­sonal Affec­tive Di­sor­der, in case my Google-fu had failed me, hop­ing that a stan­dard col­lec­tion of pa­pers would men­tion a light-in­ten­sity re­sponse curve that went past “stan­dard light­box.”

Well, I’ve flipped through that volume, and so far it doesn’t seem to contain any account of anyone having ever tried to cure resistant SAD using more light, either substantially higher-intensity or substantially higher-duration. I didn’t find any table of response curves to light levels above 10,000 lux, or any experiments with all-day artificial light levels comparable to my apartment’s roughly 2,000-lux illuminance.

I say this to em­pha­size that I didn’t lock my­self into my at­tempted rea­son­ing about ad­e­quacy when I re­al­ized it would cost $6 to perform a fur­ther ob­ser­va­tional check. And to be clear, or­der­ing one book still isn’t a strong check. It wouldn’t sur­prise me in the least to learn that at least one re­searcher some­where on Earth had tested the ob­vi­ous thought of more light and pub­lished the re­sponse curve. But I’d also hes­i­tate to bet at odds very far from 1:1 in ei­ther di­rec­tion.

And the higher-intensity light therapy does seem to have mostly cured Brienne’s SAD. It wasn’t cheap, but it was cheaper than sending her to Chile for 4 months.

If more light really is a simple and effective treatment for a large percentage of otherwise resistant patients, is it truly plausible that no academic researcher out there has ever conducted the first investigation that crossed my own mind? “Well, since the Sun itself clearly does work, let’s try more light throughout the whole house—never mind these dinky lightboxes or 30-minute exposure times—and then just keep adding more light until it frickin’ works.” Is that really so non-obvious? With so many people around the world suffering from severe or subclinical SAD that resists lightboxes, with whole countries in the far North or South where the syndrome is common, could that experiment really have never been tried in a formal research setting?

On my model of the world? Sure.

Am I run­ning out and try­ing to get a SAD re­searcher in­ter­ested in my anec­do­tal data? No, be­cause when some­thing like this doesn’t get done, there’s usu­ally a deeper rea­son than “no­body thought of it.”

Even if nobody did think of it, that says something about a lack of incentives to be creative. If academics expected working solutions to SAD to be rewarded, there would already be a much larger body of literature on weird things researchers had tried, not just lightbox variant after lightbox variant. Inadequate systems tend to be systemically unfixable; I don’t know the exact details in this case, but there’s probably a blockage somewhere.

So I don’t expect to get rich or famous, because I don’t expect the system to be that exploitable in dollars or esteem, even though it is exploitable in personalized SAD treatments. Empirically, lots of people want money and acclaim, and base their short- and long-term career decisions around their pursuit; so achieving these things in unusually large quantities shouldn’t be as simple as having one bright idea. But there aren’t large groups of competent people visibly organizing their day-to-day lives around producing outside-the-box new lightbox alternatives with the same intensity we can observe people organizing their lives around paying the bills, winning prestige or the acclaim of peers, etc.

Peo­ple pre­sum­ably care about cur­ing SAD—if they could effortlessly push a but­ton to in­stantly cure SAD, they would do so—but there’s a big differ­ence be­tween “car­ing” and “car­ing enough to pri­ori­tize this over nearly ev­ery­thing else I care about,” and it’s the lat­ter that would be needed for re­searchers to be will­ing to per­son­ally trade away non-small amounts of ex­pected money or es­teem for new treat­ment ideas.12

In the case of Ja­pan’s mon­e­tary policy, it wasn’t a co­in­ci­dence that I couldn’t get rich by un­der­stand­ing macroe­co­nomics bet­ter than the Bank of Ja­pan. Ja­panese as­set mar­kets shot up as soon as it be­came known that the Bank of Ja­pan would cre­ate more money, with­out any need to wait and see—so it turns out that the mar­kets also un­der­stood macroe­co­nomics bet­ter than the Bank of Ja­pan. Part of our civ­i­liza­tion was be­ing, in a cer­tain sense, stupid: there were trillion-dol­lar bills ly­ing around for the tak­ing. But they weren’t trillion-dol­lar bills that just any­one could walk over and pick up.

From the stand­point of a sin­gle agent like my­self, that ecol­ogy didn’t con­tain the par­tic­u­lar kind of free en­ergy that lots of other agents were com­pet­ing to eat. I could be un­usu­ally right about macroe­co­nomics com­pared to the PhD-bear­ing pro­fes­sion­als at the Bank of Ja­pan, but that weirdly low-hang­ing epistemic fruit wasn’t a low-hang­ing fi­nan­cial fruit; I couldn’t use the ex­cess knowl­edge to eas­ily get ex­cess money de­liv­er­able the next day.

Where re­ward doesn’t fol­low suc­cess, or where not ev­ery­one can in­di­vi­d­u­ally pick up the re­ward, in­sti­tu­tions and coun­tries and whole civ­i­liza­tions can fail at what is usu­ally imag­ined to be their tasks. And then it is very much eas­ier to do bet­ter in some di­men­sions than to profit in oth­ers.

To state all of this more pre­cisely: Sup­pose there is some space of strate­gies that you’re com­pe­tent enough to think up and ex­e­cute on. In­ex­ploita­bil­ity has a sin­gle unit at­tached, like “$” or “effec­tive SAD treat­ments,” and says that you can’t find a strat­egy in this space that know­ably gets you much more of the re­source in ques­tion than other agents. The kind of in­ex­ploita­bil­ity I’m in­ter­ested in typ­i­cally arises when a large ecosys­tem of com­pet­ing agents is gen­uinely try­ing to get the re­source in ques­tion, and has ac­cess to strate­gies at least as good (for ac­quiring that re­source) as the best op­tions in your strat­egy space.

Inadequacy with respect to a strategy space has two units attached, like “effective SAD treatments / research hours” or “QALYs / $,” and says that there is some set of strategies a large ecosystem of agents could pursue that would convert the denominator unit into the numerator unit at some desired rate, but the agents are pursuing strategies that in fact result in a lower conversion rate. The kind of inadequacy I’m most interested in arises when many of the agents in the ecosystem would prefer that the conversion occur at the rate in question, but there’s some systemic blockage preventing this from happening.
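Stated as a toy check (my own sketch; the function names, thresholds, and numbers here are invented for illustration, not anything standard in economics):

```python
# Toy sketch of the two definitions above. All numbers are invented.

def exploitable(my_best_gain, ecosystem_best_gain, margin=0.0):
    """Inexploitability (one unit, e.g. '$'): you can't knowably
    gain much more of the resource than the competing agents do."""
    return my_best_gain > ecosystem_best_gain + margin

def inadequate(actual_rate, achievable_rate):
    """Inadequacy (two units, e.g. 'QALYs / $'): the ecosystem
    converts the denominator into the numerator at a lower rate
    than some available set of strategies would permit."""
    return actual_rate < achievable_rate

# A system can be inadequate in one unit pair yet inexploitable in money:
print(inadequate(actual_rate=0.02, achievable_rate=0.10))      # True
print(exploitable(my_best_gain=1.0, ecosystem_best_gain=1.0))  # False
```

The point of the two separate functions is just that the first compares like against like in a single resource, while the second compares a conversion rate against a better rate that the agents could, but don’t, achieve.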

Sys­tems tend to be in­ex­ploitable with re­spect to the re­sources that large ecosys­tems of com­pe­tent agents are try­ing their hard­est to pur­sue, like fame and money, re­gard­less of how ad­e­quate or in­ad­e­quate they are. And if there are other re­sources the agents aren’t ad­e­quate at con­vert­ing fame, money, etc. into at a widely de­sired rate, it will of­ten be due to some sys­temic block­age. In­so­far as agents have over­lap­ping goals, it will there­fore of­ten be harder than it looks to find real in­stances of ex­ploita­bil­ity, and harder than it looks to out­perform an in­ad­e­quate equil­ibrium. But more lo­cal goals tend to over­lap less: there isn’t a large com­mu­nity of spe­cial­ists speci­fi­cally try­ing to im­prove my wife’s well-be­ing.

The aca­demic and med­i­cal sys­tem prob­a­bly isn’t that easy to ex­ploit in dol­lars or es­teem, but so far it does look like maybe the sys­tem is ex­ploitable in SAD in­no­va­tions, due to be­ing in­ad­e­quate to the task of con­vert­ing dol­lars, es­teem, re­searcher hours, etc. into new SAD cures at a rea­son­able rate—in­ad­e­quate, for ex­am­ple, at in­ves­ti­gat­ing some SAD cures that Ran­dall Mun­roe would have con­sid­ered ob­vi­ous,13 or at do­ing the ba­sic in­ves­tiga­tive ex­per­i­ments that I would have con­sid­ered ob­vi­ous. And when the world is like that, it’s pos­si­ble to cure some­one’s crip­pling SAD by think­ing care­fully about the prob­lem your­self, even if your civ­i­liza­tion doesn’t have a main­stream an­swer.


There’s a whole lot more to be said about how to think about in­ad­e­quate sys­tems: com­mon con­cep­tual tools in­clude Nash equil­ibria, com­mons prob­lems, asym­met­ri­cal in­for­ma­tion, prin­ci­pal-agent prob­lems, and more. There’s also a whole lot more to be said about how not to think about in­ad­e­quate sys­tems.

In par­tic­u­lar, if you re­lax your self-skep­ti­cism even slightly, it’s triv­ial to come up with an a pri­ori in­ad­e­quacy ar­gu­ment for just about any­thing. Talk about “effi­cient mar­kets” in any less than stel­lar fo­rum, and you’ll soon get half a dozen com­ments from peo­ple de­rid­ing the stu­pidity of hedge fund man­agers. And, yes, the fi­nan­cial sys­tem is bro­ken in a lot of ways, but you still can’t dou­ble your money trad­ing S&P 500 stocks. “Find one thing to de­ride, con­clude in­ad­e­quacy” is not a good rule.

At the same time, lots of real-world so­cial sys­tems do have in­ad­e­quate equil­ibria and it is im­por­tant to be able to un­der­stand that, es­pe­cially when we have clear ob­ser­va­tional ev­i­dence that this is the case. A blan­ket dis­trust of in­ad­e­quacy ar­gu­ments won’t get us very far ei­ther.

This is one of those ideas where other cog­ni­tive skills are re­quired to use it cor­rectly, and you can shoot off your own foot by think­ing wrongly. So if you’ve read this far, it’s prob­a­bly a good idea to keep read­ing.

Next: Moloch’s Toolbox part 1.

The full book will be available Novem­ber 16th. You can go to equil­ibri­ to pre-or­der the book, or sign up for no­tifi­ca­tions about new chap­ters and other de­vel­op­ments.

  1. If the per­son gets an­gry and starts talk­ing about lack of liquidity, rather than about the pit­falls of cap­i­tal­ism, then that is an en­tirely sep­a­rate class of dis­pute.

  2. You can of­ten pre­dict the likely di­rec­tion of a move in such a mar­ket, even though on av­er­age your best guess for the change in price will always be 0. This is be­cause the me­dian mar­ket move will usu­ally not equal the mean mar­ket move. For similar rea­sons, a ra­tio­nal agent can usu­ally pre­dict the di­rec­tion of a fu­ture Bayesian up­date, even though the av­er­age value by which their prob­a­bil­ity changes should be 0. A high prob­a­bil­ity of a small up­date in the ex­pected di­rec­tion can be offset by a low prob­a­bil­ity of a larger up­date in the op­po­site di­rec­tion.
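A worked example of this footnote’s point, with numbers I’ve made up: take a hypothesis at 90% probability, and a test that always comes up positive if the hypothesis is true and positive half the time otherwise. You can predict the update’s direction (probably a small move up), yet the expected posterior still equals the prior:

```python
# Hypothetical numbers, purely for illustration.
prior = 0.9                    # P(H)
p_pos_given_h = 1.0            # test always positive if H is true
p_pos_given_not_h = 0.5        # false-positive rate

p_pos = prior * p_pos_given_h + (1 - prior) * p_pos_given_not_h  # 0.95
posterior_pos = prior * p_pos_given_h / p_pos                    # ~0.947: small move up
posterior_neg = 0.0                                              # large move down, prob 0.05

expected_posterior = p_pos * posterior_pos + (1 - p_pos) * posterior_neg
print(round(expected_posterior, 10))  # 0.9 -- equal to the prior
```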

  3. Anyone who tries to spread probability literacy quickly runs into the problem that a weather forecast giving an 80% chance of clear skies is deemed “wrong” on the 1-in-5 occasions when it in fact rains, prompting people to wonder what mistake the weather forecaster made this time around.
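A quick simulation (a toy setup with my own numbers) of what a perfectly calibrated 80% forecaster looks like from the outside:

```python
import random

random.seed(0)
N = 100_000
# Each day the forecaster says "80% chance of clear skies."
# Suppose the forecaster is perfectly calibrated: it rains 20% of the time.
rainy_days = sum(1 for _ in range(N) if random.random() < 0.2)
print(rainy_days / N)  # ~0.2: the forecast was "wrong" about 1 day in 5
```

Being “wrong” one time in five is not a mistake here; it’s exactly what a correct 80% forecast entails.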

  4. More pre­cisely, I would say that the mar­ket was in­ex­ploitable in money, but in­effi­ciently priced.

  5. To short-sell is to bor­row the as­set, sell it, and then buy it back later af­ter the price de­clines; or some­times to cre­ate a syn­thetic copy of an as­set, so you can sell that. Short­ing an as­set al­lows you to make money if the price goes down in the fu­ture, and has the effect of low­er­ing the as­set’s price by in­creas­ing sup­ply.
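With made-up numbers, the arithmetic of a short sale looks like this:

```python
def short_sale_profit(shares, sell_price, buyback_price, borrow_fee=0.0):
    """Borrow shares, sell them now, buy them back later.
    Profit if the price falls; loss if it rises."""
    proceeds = shares * sell_price      # cash from selling borrowed shares
    cost = shares * buyback_price       # cash to buy them back and return them
    return proceeds - cost - borrow_fee

# Borrow 100 shares, sell at $50, buy back at $40, pay a $25 borrow fee:
print(short_sale_profit(100, 50.0, 40.0, borrow_fee=25.0))  # 975.0
```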

  6. Though beware that even in a stock market, some stocks are harder to short than others—like stocks that have just IPOed. Drechsler and Drechsler found that, in recent years, a broad market fund holding only easy-to-short assets would have produced 5% higher returns (!) than index funds that don’t kick out hard-to-short assets. Unfortunately, I don’t know of any index fund that actually tracks this strategy; otherwise that’s what I’d own as my main financial asset.

  7. Robert Shiller cites Ed­ward Miller as hav­ing ob­served in 1977 that effi­ciency re­quires short sales, and ei­ther Shiller or Miller ob­serves that houses can’t be shorted. But I don’t know of any stan­dard eco­nomic term for mar­kets that are in­effi­cient but “in­ex­ploitable” (as I termed it). It’s not a new idea, but I don’t know if it has an old name.

    I men­tion par­en­thet­i­cally that a reg­u­la­tor that gen­uinely and deeply cared about pro­tect­ing re­tail fi­nan­cial cus­tomers would just con­cen­trate on mak­ing ev­ery­thing in that mar­ket easy to short-sell. This is the ob­vi­ous and only way to en­sure the as­set is not over­priced. If the Very Se­ri­ous Peo­ple be­hind the JOBS Act to en­able crowd­funded star­tups had hon­estly wanted to pro­tect nor­mal peo­ple and un­der­stood this phe­nomenon, they would man­date that all equity sales go through an ex­change where it was easy to bet against the equity of dumb star­tups, and then de­clare their work done and go on per­ma­nent va­ca­tion in Aruba. This is the easy and only way to pro­tect con­sumers from over­priced fi­nan­cial as­sets.

  8. “Qual­ity-ad­justed life year” is a mea­sure used to com­pare the effec­tive­ness of med­i­cal in­ter­ven­tions. QALYs are a pop­u­lar way of re­lat­ing the costs of death and dis­ease, though they’re gen­er­ally defined in ways that ex­clude non-health con­trib­u­tors to qual­ity of life.

  9. Han­son, “Academia’s Func­tion.”

  10. This is also why, for ex­am­ple, you can’t get your pro­ject funded by ap­peal­ing to Bill Gates. Every minute of Bill Gates’s time that Bill Gates makes available to philan­thropists is a highly prized and fought-over re­source. Every dol­lar of Gates’s that he makes available to philan­thropy is already highly fought over. You won’t even get a chance to talk to him. Bill Gates is sur­rounded by a cloud of money, but you’re very naive if you think that cor­re­sponds to him be­ing sur­rounded by a cloud of free en­ergy.

  11. Robin of­ten says things like, for ex­am­ple: “X doesn’t use a pre­dic­tion mar­ket, so X must not re­ally care about ac­cu­rate es­ti­mates.” That is to say: “If sys­tem X were driven mainly by in­cen­tive Y, then it would have a Y-ad­e­quate equil­ibrium that would pick low-hang­ing fruit Z. But sys­tem X doesn’t do Z, so X must not be driven mainly by in­cen­tive Y.”

  12. Even the at­ten­tion and aware­ness needed to ex­plic­itly con­sider the op­tion of mak­ing such a trade­off, in an en­vi­ron­ment where such trade­offs aren’t already nor­mally made or dis­cussed, is a limited re­source. Re­searchers will not be mo­ti­vated to take the time to think about pur­su­ing more so­cially benefi­cial re­search strate­gies if they’re cur­rently pour­ing all their at­ten­tion and strate­gic think­ing into find­ing ways to achieve more of the other things they want in life.

    Con­ven­tional cyn­i­cal eco­nomics doesn’t re­quire us to posit Machi­avel­lian re­searchers who ex­plic­itly con­sid­ered pur­su­ing bet­ter strate­gies for treat­ing SAD and de­cided against them for self­ish rea­sons; they can just be too busy and dis­tracted pur­su­ing more ob­vi­ous and im­me­di­ate re­wards, and never have a per­cep­ti­ble near-term in­cen­tive to even think very much about some other con­sid­er­a­tions.

  13. See: What If? Laser Poin­ter.