# Blind Spot: Malthusian Crunch

In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.

TLDR, mathy version:

let f(m, t) be the population that can be supported using the fraction m of Earth’s theoretical resource limit that we can exploit at technology level t

let t = k(x) be the technology level at year x

let p(x) be the population at year x

What conditions must the constant m and the functions f(m, k(x)), k(x), and p(x) satisfy in order to ensure that p(x) - f(m, k(x)) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
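To make the question concrete, here is a minimal numerical sketch. Every functional form and constant below (the growth rates, the 10-billion scaling factor, m = 1) is an illustrative assumption, not a claim about the real trajectories; the only thing taken from the post is the condition p(x) - f(m, k(x)) > 0.

```python
M = 1.0  # assumed: fraction of Earth's theoretical resource limit we can exploit

def k(x):
    """Assumed technology level at year x: 1% annual improvement."""
    return 1.01 ** (x - 2013)

def f(m, t):
    """Assumed supportable population: scales linearly with m and t."""
    return 10e9 * m * t

def p(x):
    """Assumed population at year x: 2% annual growth from 7 billion."""
    return 7e9 * 1.02 ** (x - 2013)

# The crunch condition p(x) - f(M, k(x)) > 0 first holds in:
first_crunch_year = next(
    (x for x in range(2013, 2500) if p(x) - f(M, k(x)) > 0), None
)
print(first_crunch_year)
```

Under these toy numbers population growth (2%) outpaces the expansion of carrying capacity (1%) and the condition first holds in 2050; swap the two growth rates and it never holds. The point is only that the question is well posed once the functions are pinned down.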

Long version:

Here I would like to explore the evidence for and against the possibility that the following assertions are true:

1. Without human intervention, the carrying capacity of our environment (broadly defined¹) is finite, while there are no *intrinsic* limits on population growth.

2. Therefore, if the carrying capacity of our environment is not extended at a sufficient rate to outpace population growth, and/or population growth does not slow enough for carrying capacity to keep up, carrying capacity will eventually become the limit on population growth.

3. Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs, meaning that the population size doesn’t just remain at a sustainable level but rather plummets drastically, sometimes to the point of extinction.

4. The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth rates in various ways) is what we have to rely on to prevent the above scenario; let’s call it the Malthusian Crunch.

5. Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.

6. The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the “safety zone” between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won’t prevent die-offs on Earth, but it will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
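The overshoot dynamic in assertion 3 can be sketched with a delayed logistic model, a standard toy from population ecology. All the parameters here (growth rate, lag, carrying capacity) are illustrative assumptions; the point is only that a reproductive lag lets momentum carry a population past its carrying capacity before the brakes take hold.

```python
def delayed_logistic(r=0.25, K=1000.0, lag=5, steps=120):
    """Logistic growth where this step's growth depends on the population
    `lag` steps ago, so the brakes come on too late and the population
    overshoots the carrying capacity K before falling back toward it."""
    pop = [10.0] * (lag + 1)  # initial history, well below K
    for _ in range(steps):
        growth = r * pop[-1] * (1.0 - pop[-1 - lag] / K)
        pop.append(max(pop[-1] + growth, 0.0))
    return pop

trajectory = delayed_logistic()
peak = max(trajectory)  # exceeds K: that excess is the overshoot
```

With a longer lag or a faster growth rate the same model produces deeper crashes rather than a gentle settling, which is the qualitative pattern the zoology data are said to show.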

Please note: I’m not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:

Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can’t come true, so we don’t ignore the risk. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.

Shouldn’t we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation² of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?

Footnotes:

1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalination plants).

2: This is a hard question. I’m not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I’m asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that’s worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don’t have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
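Footnote 2’s survival-curve framing can be sketched directly. The two hazard functions below are purely illustrative assumptions (nothing here estimates any real risk); the sketch only shows why a risk with a lower per-year hazard today can still dominate later, so the ordering of the curves changes over time.

```python
def survival_curve(hazard, years):
    """Fraction of branches where the catastrophe has not yet occurred
    by each year, given a per-year hazard function hazard(year) -> prob."""
    surviving, curve = 1.0, []
    for year in range(years):
        surviving *= 1.0 - hazard(year)
        curve.append(surviving)
    return curve

# Assumed hazards: one constant, one starting lower but rising over time.
constant = survival_curve(lambda y: 0.001, 100)
rising = survival_curve(lambda y: 0.0001 * (1 + y / 2), 100)
```

Early on the rising-hazard curve lies above the constant one; by year 100 it has crossed below it. That crossing is exactly why “most likely to wipe us out in the next 50 years” and “most likely in the 50 years after that” can name different catastrophes.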

• Resources inside a light cone grow as T cubed, while population growth is exponential: thus we see resource limitation ubiquitously. Malthus was (essentially) correct.

Maybe “T cubed” will turn out to be completely wrong, and there will be some way of getting hold of exponential resources, but few will be holding their breath for news of this.
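The arithmetic behind “T cubed loses to exponential” is easy to check: any fixed-rate exponential eventually overtakes t³, no matter how generously the cubic is scaled. A small sketch (the 1% growth rate and the 10¹² scale factor are arbitrary assumptions):

```python
def crossover(rate=1.01, scale=1e12):
    """Smallest t at which rate**t (exponentially growing population)
    exceeds scale * t**3 (light-cone-limited resources)."""
    t = 1
    while rate ** t <= scale * t ** 3:
        t += 1
    return t
```

Even with a 10¹²-fold head start for resources, 1% growth wins within several thousand periods, and raising `scale` further only delays the crossing logarithmically.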

• population growth is exponential

Stein’s Law: If something cannot go on forever, it will stop.

On a more basic note, population growth is exponential only under certain conditions, which tend to be rare and do not persist.

• It’s not just population growth. It’s resource growth. Our entire modern system of economy is based on the idea of exponential growth. This system must collapse eventually; it’s only a question of how many planets we consume before that happens.

• Our entire modern system of economy is based on the idea of exponential growth.

I’ve heard this phrase before. I see no reason to believe it’s true. Japan has been at basically zero growth for more than 20 years by now and seems to be doing fine. Sure, it could be doing better, but it’s not like its system of economy collapsed.

Now, government social programs tend to be based on the hope of exponential growth, but that’s a different problem altogether.

• Politicians and other decision makers base their economic decisions on the assumption of growth. You are correct that continued exponential growth is not necessary for a healthy society. In fact, we must eventually learn to live with zero or near-zero growth, so we had best start doing it now and adjusting our policies accordingly.

• it’s only a question of how many planets we consume before that happens.

Hopefully more than one. There are a lot of underutilized planets out there, even within our own solar system.

• Upvote.

This just illustrates the craziness. You present a fact of basic algebra in the abstract, and nobody has a problem with it, even though it’s a direr prediction because it is fully general in its relevance.

I say the same thing but on a local scale, and get a very vigorous reaction.

• Advocating population control as the most important priority there is damages efforts at vaccination.

If it’s plausible that your morals are okay with giving vaccinations in a way that damages human reproductive capacity, your effort to vaccinate people against important diseases runs into trouble.

There are enough conspiracy theorists out there claiming that the UN cares about population control enough to vaccinate in a way that reduces reproductive capacity that it’s an issue. It’s valuable to signal that you care more about saving lives than about population control when you want an African nation to welcome your help in vaccinating its population to get rid of nasty diseases.

The politics of going to an African nation and saying “We come with an engineering solution to reduce your population growth” are just terrible.

An African community is less likely to take your condoms when they think that you want to reduce their population growth than when they think you care about protecting them from AIDS.

Politics matter. Trying to tackle the issue of population growth while ignoring politics carries the danger that you make a lot of political mistakes that hurt your cause.

• Yes, it’s a socially tough question. It might be so tough that the bulk of mitigation efforts will have to be put into the technological-advancement side of the equation, and that seems to be what’s happening, though it’s unclear how deliberate this is.

But just because publicly acknowledging the nature of a problem will make one unpopular doesn’t mean that one should privately start to deny it. On the contrary, one should correct for the Kool-Aid by privately reminding oneself what the real problem is, and that a socially acceptable framing of the problem has to be part of any solution that one expects to work.

• The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the “safety zone” between expansion of carrying capacity and population growth.

The Jevons paradox: technological improvements make each unit of natural resources more useful, increasing the rate at which they are used up. (Though I’m not convinced that most environmentalists actually are opposed to all relevant technological improvements. I’ve definitely never heard any complain about solar energy research, for example.) Additionally, a safety margin maintained through an ever-increasing rate of technological advancement is brittle and seems like it should increase catastrophic risk. An analogy: “let’s not build below sea level” is more robust than “intricate dyke system vulnerable to catastrophic failure.”

I like the idea of space colonization, but it’s not clear that it’s a practical, let alone robust, way to get our eggs into more baskets.

On existential risk overall, my reading on AI has been pushing me toward the point of view that global warming → civilizational collapse may actually be our best hope for the future, if it can only happen fast enough to prevent the development of a superintelligence.

• An analogy: “let’s not build below sea level” is more robust than “intricate dyke system vulnerable to catastrophic failure.”

I am not sure that’s true. Consider a similar analogy: “let’s not develop agriculture” is more robust than “dependence on fickle weather or an intricate irrigation system.” Is that so? Not likely; you just get hit by a different set of risks. One day a lot of pale people with thundersticks appear, they kill your men, and they herd your women and children into reservations to die.

Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.

• Given the fate of the societies which did not climb the technological tree sufficiently fast, I’d say throttling down progress sure doesn’t look like a wise choice.

I completely agree; that’s a great point! The sixth one, to be exact.

• Being willing to take on greater existential risks will definitely help in short- or medium-term competition, as your example shows (greater risk of famine in exchange for the ability to conquer other societies). So no, I don’t think we can necessarily coordinate to avoid a “brittle” situation in which we are vulnerable to catastrophic failure. That doesn’t mean it’s not desirable.

• I like the idea of space colonization, but it’s not clear that it’s a practical, let alone robust, way to get our eggs into more baskets.

I read somewhere that to calibrate the logistics of getting everyone off Earth, you should consider how much it would cost and how long it would take to load every human onto a passenger jet and fly them all to the same continent. I wish I could find that essay. Long story short, it would take a loooot of resources. So it probably won’t be our eggs in particular getting into more baskets, but at least the eggs of some fellow humans.

On existential risk overall, my reading on AI has been pushing me towards the point of view that actually global warming → civilizational collapse may be our best hope for the future, if it can only happen fast enough to prevent the development of a superintelligence.

I see two outcomes: either there are enough exploitable resources left to rebuild a technological civilization, in which case someone will get back to pursuing superintelligence, or there are not enough exploitable resources left to rebuild a technological civilization, in which case we piss away our last days throwing spears and dying of dysentery. Or maybe we evolve into non-tool-using creatures like in Galapagos. In any case, the left-hand side of the Drake Equation remains at zero. Breaking out of the overshoot/collapse cycle means the risk of going out with a bang, but the alternative is the certainty of going out with a whimper.

• As far as x-risk is concerned, we all have the same eggs.

• The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement

Individual technological advances can increase the efficiency of resource utilization, but presently and historically, higher levels of technological development are correlated with higher per capita resource consumption.
Anyway, even if future technologies could lower per capita resource consumption, how do you accelerate the rate of technological advancement?

Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system.

That’s a pretty strong claim. How do you support it?

• how do you accelerate the rate of technological advancement?

That should be a new discussion.

That’s a pretty strong claim. How do you support it?

The fact that all serious criticisms of Mars One have to do with whether or not they’ll raise enough money to send a private mission to Mars in 2023, rather than with any question of technological feasibility.

By colonizing, I don’t mean a Dyson Cloud within our lifetimes, obviously. Just a permanent foothold outside Earth from which to start.

• That should be a new discussion.

You claimed that people ignore or outright oppose trying to accelerate the rate of technological advancement. Could it be instead that nobody has any idea how to do it?

The fact that all serious criticisms of Mars One have to do with whether or not they’ll raise enough money to send a private mission to Mars in 2023, rather than with any question of technological feasibility.

I’m under the impression that Mars One is a hoax, most likely intended to be the premise of a Survivor-like “reality” TV show about the selection of the prospective “colonists.”

If I understand correctly, even a manned flyby mission to Mars is considered technologically difficult, mainly due to ionizing-radiation concerns.
Setting up a settlement that constantly depended on Earth for supplies (a Martian version of the ISS, essentially) might be technologically possible, but only at enormous cost, many orders of magnitude more than what is claimed by Mars One.
An independent settlement seems quite beyond the possibilities of present and foreseeable technology.

• many orders of magnitude more than what is claimed by Mars One.

And global GDP is about four orders of magnitude greater than NASA’s budget. What requirements do you see as being difficult for an independent settlement? I find both a solar array capable of delivering several terawatts, and a system that, given enough energy, can recycle all the air, food, and water used by a colony, to be well within the “foreseeable technology” category, especially if we were to start pouring in several billion dollars a year in research.

• An independent settlement has to locally manufacture all its food, consumable supplies, and broken equipment. Since it can’t realistically trade anything with Earth, it must have a self-sustaining closed economy.

Most of the stuff we consume in our everyday lives, even food, is the product of complex industrial processes involving large factories that use lots of energy, many different kinds of resources that come from every corner of the world, and lots of labour, exploiting economies of scale.
The type of stuff that would be needed in a Martian settlement would be even more hi-tech. There is no practical way to do all this hi-tech manufacturing on a small scale in a hostile, resource-starved environment with current technology.

Keep in mind that even most of the Earth’s surface is uninhabited. There are no permanent settlements in the middle of the Sahara desert, or at the South Pole, or in the oceans. Anything like that would be far more technologically feasible than a space settlement, and it wouldn’t even need to be fully independent, yet we don’t settle there.

EDIT:

if we were to start pouring in several billion dollars a year in research.

For reference, the ISS already costs several billion dollars a year, and it’s far from independent. NASA estimates that a manned mission to Mars would cost about 100 billion dollars.

• That should be a new discussion.

You claimed that people ignore or outright oppose trying to accelerate the rate of technological advancement. Could it be instead that nobody has any idea how to do it?

Very, very possible.

An independent settlement seems quite beyond the possibilities of present and foreseeable technology.

I’m not saying it’s easy. I guess I calibrate my concept of foreseeable technology as: sleeker, faster mobile devices being trivially predictable, fusion as possible, and general-purpose nanofactories as speculative.

On that scale, I would place permanent off-world settlements as closer than nanofactories, around the same proximity as fusion. Closer, since no new discoveries are required, only an enormous outpouring of resources into existing technologies.

• If the permanent Martian settlements are to do their own manufacturing, it seems that they would need both fusion power and nanofactories, or something equivalent. The types of energy sources and resource ores we use on Earth for manufacturing would probably not be available in any sufficient amount.

• You might be right. I hope not, though, because that would mean it will take even longer to escape from the planetary cycle of overshoot and collapse.

Then again, it’s good to be ready for the worst and be pleasantly surprised if things turn out better than expected.

1. If eugenics = Nazi, it’s time to re-evaluate all this talk of FAI and transhumanism.

2. Eugenics can be negative (breed out) or positive (breed in / maintain), and it can be state run or individually run. The line between birth control / family planning and eugenics is like the line between erotica and porn; the good things are good because they are good, not because of any quantifiable thing in the thing itself.

Your assumptions and questions point to a desire for future generations to be as healthy and happy as we are today, or more so, and there is a name for that. A name that is out of fashion, but what the name describes is older than 1930s-40s Germany and is more practiced and discussed than ever. The power may be in the state (China’s child limits) or in individuals (India’s use of abortion and birth control to have more boy children), but it’s here.

I favor access to birth control by individuals, and I am against state decisions on family planning and health.

• What’s the connection to re-evaluating FAI and transhumanism?

I didn’t say I think eugenics = Nazi. I just said the Nazis advocated a particularly murderous and arbitrary form of eugenics, so now that’s all that comes to mind for most people today when they think about eugenics, if they think about it at all.

With a lot of work, though, we may eventually make that issue moot through in-vivo gene therapy.

• I have encountered in others a severely limited ability to accurately understand that, when speaking on behalf of others, you are not speaking your own opinion. I recommend trying to be as explicit as possible in explaining public perception.

• I favor access to birth control by individuals and am against state decisions on family planning and health.

So do I. But I bet I can come up with a demographic trend or two that would make the above position a difficult one to defend.

• Eugenics can be negative (breed out) or positive (breed in / maintain), and it can be state run or individually run. The line between birth control / family planning and eugenics is like the line between erotica and porn; the good things are good because they are good, not because of any quantifiable thing in the thing itself.

The word “eugenics” has generally been used for involuntary attempts to “eliminate undesirable traits,” either by state-run, top-down efforts or, in some cases, by pressure from the medical community, in order to make general long-term changes in the human gene pool as a whole.

It really has nothing to do with individuals making decisions that affect the genetic health of their children (for example, women choosing sperm donors with college degrees in the hopes of having smarter children, people using pre-implantation genetic selection in IVF, etc.). Positive long-term effects on the human genome in general may be side effects of that, but they are not the main goal.

In any case, I think that eugenics (trying to make long-term changes in the human genotype through selective breeding, forced sterilization, etc.) is a foolish idea at this point. Even if you had some kind of species-wide eugenics program, it would take many, many generations for it to have any real effect, and long before then we should be selecting our genes directly (even without any kind of singularity or GAI, genetic science alone should get us there quite soon).

People who are in favor of transhumanism shouldn’t talk about it in terms of eugenics. Any eugenic effects (positive or negative) are unlikely to be significant in either the short run or the long run, and eugenics has a well-deserved reputation for totalitarianism, abuse, and taking away people’s fundamental freedoms.

• What conditions must functions f(m,t), k(x), and p(x) satisfy in order to insure that p(x) - f(m,t) > 0 for all x > today()?

Did you mean to ask “What conditions must functions f(m,t), k(x), and p(x) satisfy in order to insure that p(x) - f(m,k(x)) > 0 for all x > today()?”

If so, that still leaves m as a free variable.

• Fixed, thanks.

• Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone

I’d be suspicious of that ‘many’ unless you plan on moving lots of asteroids in-system. Earth is some prime real estate for humans.

• I’m envisioning a slowly growing Dyson Cloud, limited by the total output of the sun, the availability of atoms in the solar system, and the 5 or so billion years until the sun burns out.

So, if not “many” orders of magnitude, then would perhaps “several” be appropriate?

• That’s not a ‘niche’; that’s completely rearranging the place.

• There are quite a few who argue that we are already overshooting the carrying capacity. One way to measure it is the global hectare: http://en.wikipedia.org/wiki/Global_hectare

And according to the Footprint Network, we are already using up 150% of Earth’s carrying capacity, thus using up the available resources faster than they are (re)generated: http://www.footprintnetwork.org/en/index.php/GFN/page/basics_introduction/

• We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.

I think it’s fair to say that the danger of grey goo is greater now than it was in the 1980s. How well does the engineering mindset work for that problem?

On the other hand, when it comes to overpopulation, political solutions such as the Chinese one make massive amounts of progress.

1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area.

Given that some people use up no EM spectrum at all, it can be confusing to speak of something like it as “carrying capacity”. We tend to use a lot of resources because we can, not because we have to.

Advocating for population control the way the Chinese do is politically unpopular, and population control isn’t easy. Cutting resource consumption the way environmentalists propose seems to be easier.

Bill Gates’s journey is also interesting for this question. He started his philanthropic efforts by focusing on reducing population growth. Given that empowered working women don’t tend to have more than two children, he switched the focus of his efforts.

That’s why Bill Gates fights malaria. It’s also the cultural background in which GiveWell recommends funding more bed nets and improving local economies through direct money transfers. It’s no accident that the effective altruist crowd doesn’t focus on reducing population. It’s certainly not because they are not smart enough to think of ways to do so that don’t involve Chinese-style policy tools.

• Oh, by the way, I thought of a few practical benefits I can hope to achieve with this discussion:

• Next time someone who has read enough of this post wanders into a debate about global warming or deforestation or whatever, they will be armed with a constructive alternative to the standard green-vs-blue talking points.

• Conversely, you can find here arguments for full-steam-ahead technological progress that luddites won’t be expecting, because they follow directly from some of their favorite “we’re all doomed” arguments. I even suspect the reason I’m getting such a drubbing here is that I’m being mistaken for a Greenpeace-er.

• If I’m right that most environmental and a fair number of political/economic/social problems are sequelae of overpopulation, that would be very useful to know, because it would focus efforts on the root cause instead of mistaking the overwhelming array of symptoms for independent problems. A unified theory of doom and gloom, if you will.

...edit: one more

• The greater the probability I assign to the shit hitting the fan before singularity/space/nano-Claus happens, the more of my resources I should divert from my research to measures that will increase the chances of me and my immediate monkey-sphere surviving and preserving the information needed to rebuild.

• I even suspect the reason I’m getting such a drubbing here is that I’m being mistaken for a Greenpeace-er.

What reason do you have for that suspicion? You make it clear that you think environmentalists are wrong.

• The closest I’ve come to a GUAT is general incompetence as the root cause. Tracing the cause of incompetence brings up... incompetence. I figure if it’s recursive, it’s probably something we definitely need to focus on. If there’s a more severely recursive cause, I’ve yet to discover it.

• What is GUAT?

• Grand Unified Armageddon Theory. :p “What will be the root cause of the end of the world?”

• We’re all incompetent compared to the theoretical limits of competence permitted by our current brain architecture. But with a smaller population, the stakes are lower for whatever blunders we do commit (up to a point, of course; there is obviously such a thing as a dangerously low population, but not even Wrongians would claim that we’re close to that boundary).

So, what is the first tier of secondary causes after the root cause of incompetence?

• It varies widely, but that is the nature of opinion.

• Here’s a talk on population growth by the head of ‘Population Matters’ at the 2011 annual British Humanist Association conference.

• Thanks. I’ll watch it as soon as I’m someplace where I won’t be waking people up by doing so (or once I find my headphones).

• To me this looks like a very familiar mulberry bush around which plenty of people have been going since the early 1970s.

Are you claiming something different from the classic population-bomb, limits-to-growth arguments? Because if you do not, there seems little reason to revisit this well-trampled territory.

• Let me just step back and ask you what your goal is. Is it...

• Convincing me to stop discussing existential risks?

• Convincing me to stop discussing some class of existential risks that includes the Malthusian Crunch?

• Convincing me to stop discussing the Malthusian Crunch specifically?

How do you hope to benefit from discouraging the discussion of this topic or topics?

Were you all over Robin Hanson for his Malthusian scenario as well?

• The Malthu­sian Crunch is not an ex­is­ten­tial risk. It leads to a smaller and poorer hu­man­ity, but not to the ab­sence of hu­man­ity.

My goal is to lead you to light and wis­dom, of course :-P

Other than that I’m just ex­press­ing my views and point­ing out holes in your con­struc­tions.

• Fair enough.

It leads to a smaller and poorer hu­man­ity, but not to the ab­sence of hu­man­ity.

But each time we have to rebuild from a collapse, we have a degraded carrying capacity and fewer easily exploitable resources with which to rebuild. This becomes an existential risk if the cycle repeats so many times that we can no longer rebuild. Or if it keeps us stuck on one planet long enough for one of the other existential risks to get us.

Like I said some­where else, over­shoot is like AIDS—it doesn’t kill you, it just pre­dis­poses you to other prob­lems that do.

At any rate, there’s always the self­ish mo­tive. I don’t want to have to waste time re­build­ing even af­ter one col­lapse be­cause I might not live to see it to com­ple­tion, and what I want even less is to already be cryosus­pended when the next one hap­pens be­cause I won’t be re­vived and won’t be around to do any­thing about it.

• I think that any­thing that risks a col­lapse of civ­i­liza­tion is an ex­is­ten­tial risk.

When the Ro­man Em­pire fell, peo­ple in Europe were able to fall back on iron-age tech­nolo­gies to sur­vive; there was a mass die-off, but hu­man­ity in Europe was able to sur­vive and even­tu­ally re­cover. In most of the first world, that wouldn’t re­ally be an op­tion to­day.

If our civilization were to collapse now, the human race would be in a dramatically resource-exhausted world, in the middle of a mass-extinction event, without our technology to help us; it’s possible the human race might not survive. And even if it does, I’m not sure that it’s guaranteed that we will come back up to a technological civilization.

• I think that any­thing that risks a col­lapse of civ­i­liza­tion is an ex­is­ten­tial risk.

I think such an approach dilutes the useful concept of “existential risk” into uselessness.

I would agree that the collapse of Western civilization would be unpleasant for everyone involved. But that’s a bit of a different thing.

• I think such an approach dilutes the useful concept of “existential risk” into uselessness.

Let me be a little more clear. My rough estimate would be that a complete collapse of modern civilization in the next 50 years would have in the neighborhood of a 25% chance of resulting in complete human extinction, from a combination of natural factors, resource depletion, environmental depletion, and the inevitable wars that would accompany the collapse.

I think that kind of scenario is far more likely in the near future than many other existential risks people worry about.

• First, I think we’re using the word “civilization” in different senses. You’re talking about the single global human civilization, where civilization means having running water and taking tea in the afternoon. I’m talking about multiple civilizations, which are, basically, long-lived cultural agglomerations (e.g. there is a Western civilization but China isn’t part of it).

a complete collapse of modern civilization in the next 50 years would have in the neighborhood of a 25% chance of resulting in complete human extinction

That will probably depend on exactly how modern civilization collapses. An all-out nuclear exchange will have different consequences than a snowballing freeze-up of the financial payments system.

In any case, I find com­plete hu­man ex­tinc­tion as the re­sult of the civ­i­liza­tion col­lapse to be highly un­likely. There are peo­ples who haven’t changed much for thou­sands of years—would they even no­tice? And ab­sent things like nu­clear win­ter, why would they die out?

More­over, let’s even say 99% of the North Amer­i­can pop­u­la­tion will die. OK. But what would kill the re­main­ing 1%? Sure, tech­nol­ogy will re­vert to a much more prim­i­tive form, but then hu­mans have already been there, they sur­vived quite nicely.

• First, I think we’re using the word “civilization” in different senses. You’re talking about the single global human civilization, where civilization means having running water and taking tea in the afternoon. I’m talking about multiple civilizations, which are, basically, long-lived cultural agglomerations (e.g. there is a Western civilization but China isn’t part of it).

I would say that the whole global sys­tem is so in­ter­min­gled and global right now that a com­plete col­lapse of civ­i­liza­tion of the type I am talk­ing about would likely have to in­clude the en­tire world if it hap­pened at all. 1500 years ago Ro­man civ­i­liza­tion could fall with­out badly hurt­ing Chi­nese civ­i­liza­tion, but I don’t think that’s true any­more.

In any case, I find com­plete hu­man ex­tinc­tion as the re­sult of the civ­i­liza­tion col­lapse to be highly un­likely. There are peo­ples who haven’t changed much for thou­sands of years—would they even no­tice?

In the kind of global demographic overexertion and resource exhaustion leading to a total collapse that we’re talking about, a lot of traditional food sources would be exhausted before the collapse. In the face of impending global starvation, I would expect every major fishery in the world to be rapidly wiped out, I would expect the rainforests to be burned for more farmland, I would expect decent soil and easily available water to be completely exhausted, etc. I would expect that process to take away most of the resources that people need to survive, and that people living in a traditional hunter-gatherer existence or a traditional subsistence-farming existence would probably have had their land and resources taken from them before the end. If we’re talking about billions of people facing potential starvation, I suspect that all thought of environmental preservation or sustainability would go right out the window, as well as concern for the well-being of aboriginal people.

There might be some pock­ets of peo­ple left liv­ing tra­di­tional lifestyles some­where (that’s ac­tu­ally what I was think­ing about when I put the ex­tinc­tion pos­si­bil­ity at 25%, in­stead of higher), but even they would also be af­fected by global en­vi­ron­men­tal de­struc­tion. (And, of course, small pock­ets of hu­mans sur­viv­ing on their own can have is­sues from lack of ge­netic di­ver­sity and such.)

More­over, let’s even say 99% of the North Amer­i­can pop­u­la­tion will die. OK. But what would kill the re­main­ing 1%?

What would they live on?

When the Roman Empire collapsed, the population of Europe dropped dramatically, perhaps by half according to some estimates, but people still remembered how to farm using old iron-age technology, people still had the knowledge of how to build houses out of wood and straw when better building materials stopped coming from distant parts of the Empire, etc. It was a catastrophe, but people still had enough knowledge of how to survive without the civilization to hang on.

How many people in North America today do you think have the knowledge of how to farm without any technology at all? How many have the knowledge to forge their own farming tools? A few do; but places known to have organic farms or traditional farming knowledge (the Amish, for example) would likely be swamped by millions of starving refugees. And besides that, once a stretch of land has been farmed using industrial farming techniques for several decades, it is very hard to change it back into something that can be farmed with old-fashioned techniques; the soil is basically completely exhausted of all its natural nutrients by that point, and can only be farmed with advanced techniques.

To­tal hu­man ex­tinc­tion might not be the re­sult, but I wouldn’t rule it out as a sig­nifi­cant pos­si­bil­ity.

And even if we didn’t end up with total extinction, remember that an existential risk is anything that prevents mankind from achieving its potential; you have to not just consider the risk of extinction, but then try to estimate the chances of us re-developing advanced technology after a collapse. That’s harder to estimate, but I don’t think it’s 100%.

• Yes, and I suspect collapsing due to overpopulation is a much smaller risk than collapsing due to bad policy decisions made by people who overestimated overpopulation risks.

• What kind of policy decisions are we talking about? As I posted elsewhere in this thread, I think the best way to control population is education, access to birth control, economic development in the third world, and women’s rights; that has worked better than anything else that has been tried. (Bizarrely, I was downvoted for that; are people somehow opposed to education and aid for the third world? I don’t really understand.)

• Seriously, you think policy decisions based on an overestimated overpopulation risk are an existential threat?

Or is it just fun to turn arguments around and say stuff like that? My Bayesian posteriors are screaming that THIS is what you are doing here, to me.

• For bet­ter or worse, there are peo­ple mak­ing policy de­ci­sions and I know of no rea­son why that would change on the time scales we’re work­ing with.

At the mo­ment, these de­ci­sion mak­ers are act­ing as though they be­lieve:

• Over­pop­u­la­tion is not re­lated to en­vi­ron­men­tal degra­da­tion, vi­o­lent con­flict, and re­source de­ple­tion.

• Technological progress is not the main risk mitigator against overpopulation and its various consequences.

Suppose the above conventional wisdom is incorrect. Either way, policy makers will make policy, and even if that is inherently a bad thing (a strong assumption), isn’t it better to limit the damage by having them work from a better approximation of reality?

So, if you agree with the conventional view, you have nothing to worry about (but I have yet to see here convincing arguments why I should agree with this view if I don’t already). If you disagree with the conventional view, that has implications at the very least for whether you allow its public apologists to stand un-debated and for which public policies and charitable activities you endorse. If you are undecided, then perhaps you’re curious to develop better estimates, because they may have bearing on your survival and prosperity. I know I am.

• It could lead to ab­sence of hu­man civ­i­liza­tion as we know it. A lot of things that we take for granted—like tech­nolog­i­cal de­vel­op­ment—are ac­tu­ally quite frag­ile and de­pend on a lot of con­di­tions be­ing ‘just right’. Through­out his­tory there are many ex­am­ples of great civ­i­liza­tions re­duced to poverty and stag­na­tion.

• It could lead to ab­sence of hu­man civ­i­liza­tion as we know it.

I think I can pretty safely guaran­tee that in a thou­sand years there will be no hu­man civ­i­liza­tion as we know it. There will be some­thing differ­ent, I have no idea what.

And there is a huge differ­ence be­tween “re­duced to poverty and stag­na­tion” and “there are no hu­mans any more”.

• I think I can pretty safely guaran­tee that in a thou­sand years there will be no hu­man civ­i­liza­tion as we know it.

Agreed; the question is: will it be better or worse? I can’t imagine a future where resources are scarce as being anything but bad.

• This is not well-trampled territory, just willfully ignored territory, unless someone has anything like a refutation of any of the sequence of assertions I make above, rather than that it just hasn’t happened yet (in recent times).

Because here are other predicted disasters that haven’t happened yet: grey goo, asteroid impact (in recent times), nuclear war, global pandemic (in recent times), and unfriendly AI. All that any of these scenarios have going for them is that there are a priori reasons why they are possible. If the Club of Rome underestimated the impact of technological growth and so predicted disaster several decades early, that no more invalidates the underlying threat than if Eliezer made a falsifiable prediction (which he hasn’t as far as I know) about when self-augmenting AI will arrive but overestimated Moore’s Law and so predicted disaster several decades early.

But in re­sponse to your ques­tion, as­ser­tions 4-6 are a differ­ent per­spec­tive from main­stream en­vi­ron­men­tal­ism. Your re­ply sounds like you didn’t even read that far. Did you?

If so, which spe­cific points do you dis­agree with? For­get what you think has been re­futed by oth­ers long ago. What, if any­thing, do you see that’s im­plau­si­ble or con­tra­dic­tory?

• global pan­demic (in re­cent times)

Does the 1918 flu count?

• It’s the rea­son I put (in re­cent times).

• 1918 isn’t re­cent in this con­text?

• How does whether or not I call it re­cent al­ter the point I’m try­ing to make?

That point being: if we understand why something can happen, but we don’t understand why it hasn’t happened, we need to understand that before we decide that it’s safe to ignore.

• Maybe it would help if you as­signed one of the fol­low­ing la­bels: { the­o­ret­i­cally pos­si­ble but un­likely | likely | very likely | in­evitable } to the fu­ture sce­nar­ios you’re con­cerned about.

I’m perfectly fine with al­lo­cat­ing the same amount of money/​at­ten­tion/​en­ergy to the pop­u­la­tion over­shoot prob­lem as we are al­lo­cat­ing now to the grey goo prob­lem. As far as I know that’s in­dis­t­in­guish­able from zero in any kind of a large-pic­ture view.

Why don’t you think nor­mal mar­ket mechanisms (the more scarce re­source X is, the higher its price, the larger the in­cen­tives to use less, the more in­tense search for its re­place­ment) will han­dle the prob­lem?

• Edit: in re­sponse to thought pro­vok­ing com­men­tary from hairyfig­ment I up­dated the first set of hu­man-made risks from marginal to con­di­tional on no over­shoot, and down­graded the risk of over­shoot to likely. Thanks for your help.

Within the next 50 years...

grey goo : the­o­ret­i­cally pos­si­ble but unlikely

me­teor im­pact : the­o­ret­i­cally pos­si­ble but unlikely

Yel­low­stone Caldera: the­o­ret­i­cally pos­si­ble but unlikely

gamma ray burster: the­o­ret­i­cally pos­si­ble but unlikely

so­lar flare: the­o­ret­i­cally pos­si­ble but unlikely

green goo | no over­shoot: some­what likely

global nu­clear war | no over­shoot: some­what likely

global pan­demic | no over­shoot: some­what likely

new dark ages | no over­shoot: some­what likely

near ex­tinc­tion due to cli­mate change | no over­shoot: the­o­ret­i­cally pos­si­ble but unlikely

wide­spread and se­vere suffer­ing and death due to cli­mate change | no over­shoot: likely

over­shoot: some­what likely

green goo | over­shoot: some­what likely

global nu­clear war | over­shoot: likely

global pan­demic | over­shoot: likely

new dark ages | over­shoot: very likely

near ex­tinc­tion due to cli­mate change | over­shoot: some­what likely

wide­spread and se­vere suffer­ing and death due to cli­mate change | over­shoot: inevitable

• Could you use probability numbers instead of words like likely/unlikely/very likely? It’s difficult to know what you mean by them.

• I no­tice you have the prob­a­bil­ity of var­i­ous sce­nar­ios con­di­tional on the over­shoot but no prob­a­bil­ity for the over­shoot it­self.

• Shouldn’t matter; I don’t assign high weight to amateur probabilities. I believe bokov’s argument is that this threat should be taken seriously purely on the grounds that we take much more theoretical dangers seriously. Do we only take the hypotheticals seriously? If so, this is a serious oversight.

• That is pre­cisely my main ar­gu­ment.

• Hmm… I would have pegged your main ar­gu­ment as be­ing more re­lated to over­pop­u­la­tion than blind spots speci­fi­cally. Although.… I ad­mit I skimmed a lit­tle. X_X

At least I man­aged to pick up that it was a crit­i­cal part of the ar­ti­cle!

Now that I think about it… I’m not ac­tu­ally wor­ried about over­pop­u­la­tion/​re­source col­lapse, but I am wor­ried about LessWrong be­ing willfully ig­no­rant with­out in­tend­ing to be so. I guess I re­ally dropped the ball here in terms of … Wait, some­thing about the ar­ti­cle made me skim and I didn’t catch it on the first pass. This is in­trigu­ing. It’s been a long time since I’ve had this many in­tro­spec­tive re­al­iza­tions in one thought train. I have to won­der how many oth­ers skimmed as well, what our col­lec­tive rea­son to do so was, and what is the best route to solve this prob­lem.

...Or else you just mis­spoke and re­source col­lapse is ac­tu­ally your main con­cern/​ar­gu­ment.

But even in that case, I skimmed, and I can see skim­ming be­ing a prob­lem. Yay for or­thog­o­nal prop­er­ties!

All this from the mere state­ment of ac­cu­racy. …Did try­ing to avoid in­fer­en­tial silence play any role in your mak­ing this com­ment?

• Now that I think about it… I’m not ac­tu­ally wor­ried about over­pop­u­la­tion/​re­source col­lapse, but I am wor­ried about LessWrong be­ing willfully ig­no­rant with­out in­tend­ing to be so.

I think the chances of a significant portion of LessWrong not having thought about the issue are low. Population growth is a well-understood issue compared to existential risks like grey goo.

bokov makes a series of arguments that most people have probably heard before and many consider to be refuted, and then suggests that because people don’t agree with him, they have a blind spot.

• What makes you think most LessWrongers have thought about it to a de­gree to which the is­sue can be con­sid­ered in the pro­cess of be­ing solved? (For what­ever needs to be done to “solve” it, whether that is “Do noth­ing differ­ent” or not.)

• What makes you think most LessWrongers have thought about it to a de­gree to which the is­sue can be con­sid­ered in the pro­cess of be­ing solved?

I haven’t used the word solved in the post you quote. That word misses the point. No­body claims that the is­sue of cli­mate change is solved.

The question is whether it’s useful to model issues like climate change in a way that centers around carrying capacity and ignores politics.

It looks like an “if you have a hammer, everything looks like a nail” issue. Yes, you can model the world’s problems that way, but that model isn’t very productive.

If you think about pop­u­la­tion amounts it makes sense to men­tally sep­a­rate differ­ent coun­tries and con­ti­nents.

Let’s say you start in the US. As an engineer you see a clear solution: we should increase the number of abortions that happen in the US to get near the carrying capacity. If you try to push that policy, you will see that you run into problems that are highly political.

The abortion debate is at the moment about the sacred value of life against the sacred value of women’s control over their own bodies. If you come into that debate and say that you want more abortions because it has utility to keep the US population down, you are not helping.

You have to remember that the US is a country where a good portion of the population waits for the second coming of Christ and thinks that the Bible says they should procreate as much as possible.

Political issues like that make reducing population growth a very different issue than getting more telescopes to detect potentially dangerous asteroids or cooling down Yellowstone by building a giant lake on top of it.

It makes sense to use an engineering lens to talk about asteroids because there is no significant political group that considers watching asteroids with telescopes to be immoral. With Yellowstone you might get some people who think that you are harming endangered species that live in that area, but those are people with whom you can argue directly; they aren’t as politically powerful as anti-abortion Christians.

Another way to approach population growth is to look at Africa. Deciding as an American or European that there should be fewer Africans raises issues of neocolonialism. That produces political problems.

It also turns out that increasing wealth seems to be a good way to reduce the number of children that a woman has. That insight caused Bill Gates to focus his philanthropic efforts in a way where he says things like:

The world to­day has 6.8 billion peo­ple. That’s headed up to about nine billion. Now, if we do a re­ally great job on new vac­cines, health care, re­pro­duc­tive health ser­vices, we could lower that by, per­haps, 10 or 15 per­cent.

You might find that GiveWell’s highest recommended charity is about malaria bed nets. Health care for the third world. Again, that’s a point where we can make different arguments to encourage people to spend money on African bed nets. Saving a life for $2,000 seems to be a good argument to convince people.

GiveWell-style effective altruism is an alternative to approaching Africa with “What can we do to reduce the African population as effectively as possible?”

I think that population is an area obvious enough that I would expect smart people on LessWrong and in the effective altruism community to not be ignorant about the topic.

If you want to get a good feel for the data about population growth, I would also recommend you play a bit with Gapminder (press play to see how the children-per-woman ratio has changed over the last 60 years).

• I think that population is an area obvious enough that I would expect smart people on LessWrong and in the effective altruism community to not be ignorant about the topic.

Why? It seems like your comment was intended for someone asking a different question than the one I’m asking. I’m not asking for arguments and reasoning you can come up with that are population/resource-usage related, but rather why you think a moderate portion of LessWrong and the effective altruism community have put sufficient thought into it that it no longer needs to be discussed in contexts like LessWrong. I had thought it was obvious that that was the point I was questioning, and so would be the focus of any question I asked in response to your response, but it seems it was not as obvious as I thought.

Ba­si­cally: Why do you think pop­u­la­tion growth is an “ob­vi­ous” is­sue?

• we take much more theoretical dangers seriously.

Do we? How many resources are allocated to the risk of grey goo or, say, the Yellowstone supervolcano?

Talk is cheap.

• Well then, bokov is talk­ing about the over­shoot so we’re good, right?

• Depends on how mo­ti­vated oth­ers will now be to bring up this is­sue.

• But oth­ers are already al­lo­cat­ing re­sources to the over­shoot, in my opinion way more than it de­serves.

• In a use­ful way? Quite frankly, I don’t trust very many peo­ple at all to spend their re­sources in use­ful ways. And this in­cludes peo­ple who fre­quent LessWrong.

• I re­al­ized as I was writ­ing this that the over­shoot is kind of like AIDS or ag­ing—it doesn’t kill you di­rectly, just pre­dis­poses you to­ward things that will.

I’ll edit it so the union of the whole set of con­di­tional-on-over­shoot dis­asters plus “other” is the like­li­hood of over­shoot it­self.

• OK then, you put for­ward an es­ti­mate: an over­shoot is very likely. Now what makes you think so?

• This looks in­co­her­ent. You call over­shoot “very likely” and “near ex­tinc­tion due to cli­mate change con­di­tional on over­shoot: some­what likely”. Even if I in­ter­pret those as .7 and .2 re­spec­tively, we wind up with an un­con­di­tional prob­a­bil­ity of at least .14, which I hope is not what you mean by “the­o­ret­i­cally pos­si­ble but un­likely”. If that is what you meant then I do not un­der­stand how the world looks to you, or why you’re not spend­ing this time fundrais­ing for CSER /​ tak­ing heroin.
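The arithmetic in this comment can be sketched in a few lines; the .7 and .2 figures are the illustrative readings of the verbal labels used above, not measured values:

```python
# A minimal sketch of the consistency check above. The numbers are
# illustrative interpretations of "very likely" and "somewhat likely",
# not estimates anyone in the thread has actually committed to.

p_overshoot = 0.7            # P(overshoot), read as "very likely"
p_ext_given_overshoot = 0.2  # P(near extinction | overshoot), "somewhat likely"

# The overshoot branch alone contributes at least this much probability mass:
# P(extinction) >= P(extinction | overshoot) * P(overshoot)
lower_bound = p_ext_given_overshoot * p_overshoot
print(round(lower_bound, 2))  # 0.14
```

The point of the lower bound is that an unconditional claim of “theoretically possible but unlikely” has to be at least as large as any single conditional branch’s contribution.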

• If that is what you meant then I do not un­der­stand how the world looks to you, or why you’re not spend­ing this time fundrais­ing for CSER /​ tak­ing heroin.

Classy.

I have only 5 bins here with which to span everything in (0,1): theoretically possible but unlikely, somewhat likely, likely, very likely, and inevitable. The goal is a rough ranking; at this point I don’t have enough information to meaningfully estimate actual probabilities. You have a good point, though: it would be more self-consistent to say conditional on no overshoot for the first set.

If flam­ing me is what it takes for you to think se­ri­ously about this, then maybe it’s worth it.
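One hypothetical way to pin the five bins to rough intervals of (0, 1) is shown below; the cutoffs are my own assumption for illustration, since the comment above deliberately avoids committing to numbers:

```python
# Hypothetical mapping from the five verbal bins to probability intervals.
# The cutoff values are illustrative assumptions, not the commenter's.

BINS = [
    ("theoretically possible but unlikely", 0.00, 0.10),
    ("somewhat likely",                     0.10, 0.35),
    ("likely",                              0.35, 0.65),
    ("very likely",                         0.65, 0.95),
    ("inevitable",                          0.95, 1.01),  # closed at 1.0
]

def label(p: float) -> str:
    """Map a probability p in (0, 1] to its verbal bin."""
    for name, lo, hi in BINS:
        if lo <= p < hi:
            return name
    raise ValueError(f"probability out of range: {p}")

print(label(0.7))   # very likely
print(label(0.14))  # somewhat likely
```

A mapping like this would at least make the rough ranking mechanically checkable against any derived conditional or unconditional figures.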

• Why don’t you think nor­mal mar­ket mechanisms...

It is not clear that markets can see that far into the future. Part of the value of a high-ish discount rate is that it discounts the more unreliable far-future predictions relative to the closer-in ones. To someone with better future vision, the optimum discount rate would be lower.

• It is not clear that markets can see that far into the future.

Yes, of course. On the other hand what are your al­ter­na­tives for bet­ter far-seers?

To some­one with bet­ter fu­ture vi­sion...

Some­one who ac­tu­ally has bet­ter fu­ture vi­sion can be­come very rich via the fi­nan­cial mar­kets.

• Why don’t you think nor­mal mar­ket mechanisms (the more scarce re­source X is, the higher its price, the larger the in­cen­tives to use less, the more in­tense search for its re­place­ment) will han­dle the prob­lem?

• “The market will handle it” is a curiosity killer rather than an explanation, no different from “God will provide”. How will the market handle it? Why hasn’t it done so already? How long will it take? Is it possible for the market to fail? How do we estimate the chances of failure?

• If a bias or blind spot is wide­spread enough, the mar­ket will not be im­mune to it ei­ther.

• The mar­ket is just an­other op­ti­miza­tion pro­cess. A use­ful and suc­cess­ful one most of the time, but not to be blindly trusted any more than any other op­ti­miza­tion pro­cess (es­pe­cially an op­ti­miza­tion pro­cess that is not un­der­stood).

• Mar­kets are prone to op­ti­miz­ing over a short time win­dow. The hu­man race won’t just spon­ta­neously opt to take the equiv­a­lent of a 10-year pay-cut to avoid dy­ing hor­ribly on year 11.

• Mar­kets are not in­vin­cible. They can fail to keep up with events. They may sys­tem­at­i­cally un­der- or over-value cer­tain items. There might sim­ply not be an ad­e­quate solu­tion in the part of the solu­tion-space that is ac­cessible by a mar­ket.

• Hid­ing in the phrase “more in­tense search for its re­place­ment” is an un­known un­known. If es­ti­mat­ing how and whether the mar­ket will han­dle the prob­lem de­pends on es­ti­mat­ing the out­comes and timescales of on­go­ing re­search, that doesn’t in­spire much op­ti­mism.

Am I say­ing that the UN or some gov­ern­ment should step in with re­source quo­tas and com­pul­sory ster­il­iza­tion? No, be­cause cen­tral­ized bu­reau­cra­cies have an even worse track record. I’m just say­ing that mag­i­cal think­ing com­pro­mises our prob­lem solv­ing abil­ities, and “we are in trou­ble if one of us doesn’t soon come up with a bet­ter plan to ac­cel­er­ate tech­nol­ogy and/​or limit pop­u­la­tion” is a more pro­duc­tive state of mind than a com­fort­ing black box like “the mar­kets will han­dle it”.

How about AI? Do you think nor­mal mar­ket mechanisms (the more peo­ple want to not be turned into pa­per­clips the larger the in­cen­tive to make a friendly AI) can be trusted to han­dle the friendly AI prob­lem?

• “The market will handle it” is a curiosity killer rather than an explanation

Nope, it’s nei­ther (un­less you think of the mar­ket as mag­i­cal, a sur­pris­ingly pop­u­lar at­ti­tude).

In this con­text it’s a fore­cast, a pre­dic­tion of what will hap­pen when some re­source X be­comes scarce. The mar­ket is a par­tic­u­lar mechanism in a hu­man so­ciety and “the mar­ket will han­dle it” is an as­ser­tion about al­lo­ca­tion of re­sources un­der spe­cific con­di­tions.

No one is say­ing that the mar­kets are “in­vin­cible” or any other non­sense like that. How­ever if you look at em­piri­cal ev­i­dence aka his­tory, the mar­kets helped hu­man so­cieties adapt and flour­ish in a wide va­ri­ety of con­di­tions, most of which were char­ac­ter­ized by scarcity of some re­sources.

Given this, I am happy to have “the mar­kets will han­dle this” as my prior.

If you think that in this par­tic­u­lar case there will be a mar­ket failure, please provide ar­gu­ments and ev­i­dence. If you think that there is a bet­ter al­ter­na­tive, please name it and again, provide ar­gu­ments and ev­i­dence why it’s bet­ter.

Other­wise you’re just en­gag­ing in a nir­vana fal­lacy.

• How­ever if you look at em­piri­cal ev­i­dence aka his­tory, the mar­kets helped hu­man so­cieties adapt and flour­ish in a wide va­ri­ety of con­di­tions, most of which were char­ac­ter­ized by scarcity of some re­sources.

So? The two broad de­faults for re­spond­ing to scarcity are trade and vi­o­lence, his­tory has plenty of ex­am­ples of both, and poli­ties that were suc­cess­ful at ei­ther will be more thor­oughly doc­u­mented in his­tory due to sur­vivor bias.

Nev­er­the­less, ev­ery com­plex civ­i­liza­tion pre­vi­ous to ours even­tu­ally failed and col­lapsed. If mar­kets ex­plained their suc­cess, do mar­ket failures ex­plain their demise? What rea­son do you have to be so con­fi­dent that ours has noth­ing to worry about?

• Well, the econ­omy of the Ro­man Em­pire col­lapsed when Dio­cle­tian un­der­mined the mar­kets by im­pos­ing price con­trols.

• Nev­er­the­less, ev­ery com­plex civ­i­liza­tion pre­vi­ous to ours even­tu­ally failed and col­lapsed. If mar­kets ex­plained their suc­cess, do mar­ket failures ex­plain their demise?

What do you mean by civilisation in that sentence? Are you referring to Fermi, or are you talking about human civilisations?

• In this con­text, in­di­vi­d­ual hu­man civ­i­liza­tions.

• Nev­er­the­less, ev­ery com­plex civ­i­liza­tion pre­vi­ous to ours even­tu­ally failed and col­lapsed.

Hu­man­ity is still here and looks pretty com­plex to me :-) In­di­vi­d­ual civ­i­liza­tions come and go, sure, but that’s not the ques­tion we’re dis­cussing. If e.g. the Western civ­i­liza­tion col­lapses, there will be oth­ers ready and will­ing to take its place.

• Ac­tu­ally, for the first time in his­tory, we might have achieved a global civ­i­liza­tion, as well as a global sin­gle point of failure.

• Nope, it’s nei­ther (un­less you think of the mar­ket as mag­i­cal, a sur­pris­ingly pop­u­lar at­ti­tude).

How well did the market handle real estate in the mid-2000s? How well did it handle tech stocks in 1999? Tulip bulbs back in the day?

Who thinks it is magic?

• How well did the mar­ket han­dle real es­tate in the mid 2000′s?

Sum­ner likes to point out that in many coun­tries which were claimed to be ‘bub­bles’, the bub­ble never popped. Also true of many re­gions in the USA—how’s that SF bub­ble go­ing?

How well did it han­dle tech stocks in 1999?

How high are the stock prices of Ama­zon, Google, and Ap­ple now? Oh look, Bit­coin is at \$160, how did that hap­pen when ev­ery­one knew it was a bub­ble which popped?

Tulip bulbs back in the day?

Every­thing you know about Tulipo­ma­nia is false or in­com­plete. I sug­gest read­ing Fa­mous First Bub­bles.

• How well did it han­dle tech stocks in 1999?

How high are the stock prices of Ama­zon, Google, and Ap­ple now?

Bit of a glib re­sponse. (One could ask, equally rhetor­i­cally, “How high are the stock prices of Tis­cali, last­minute.com, and In­foS­pace/​Blu­cora now?”) But since you elab­o­rated be­low with ac­tual ar­gu­ments I won’t press this point.

Tulip bulbs back in the day?

Every­thing you know about Tulipo­ma­nia is false or in­com­plete. I sug­gest read­ing Fa­mous First Bub­bles.

Does the book go be­yond Gar­ber’s pa­pers on tulip­ma­nia? My read­ing of Gar­ber’s ar­gu­ment in those pa­pers is:

1. Most peo­ple get their ideas about the tulip mar­ket from Charles Mackay, but he pla­gia­rized his ac­count, and it ul­ti­mately comes from “three anony­mously writ­ten pam­phlets”.

2. Mackay ex­ag­ger­ated, among other things, the amount of na­tional-level eco­nomic dis­tress re­sult­ing from the tulip ma­nia.

3. “Mackay did not re­port trans­ac­tion prices for the rare bulbs im­me­di­ately af­ter the col­lapse”, which are the prices one would need to es­tab­lish the pop­ping of a bub­ble. In­stead he quoted high prices from be­fore the bub­ble popped, and prices “from 60 or 200 years af­ter the col­lapse”. But what he found could be con­sis­tent with the prices ac­cu­rately re­flect­ing changes in fun­da­men­tals. Why? Be­cause a new & at­trac­tive va­ri­ety of flower might grad­u­ally come into fash­ion (rais­ing its price) and then suffer a glut over time as more bulbs be­come available (low­er­ing its price).

4. One can con­firm that’s how things nor­mally worked by look­ing at changes over time in prices long af­ter the bub­ble. Even in non-bub­ble times, bulb prices would con­sis­tently start high and then fall steadily.

I don’t dis­agree with those claims, as far as they go, but high­light­ing a lack of con­clu­sive ev­i­dence for a bub­ble doesn’t mean there wasn’t a bub­ble. Even Gar­ber’s seem­ingly damn­ing re­view of the price data doesn’t mean much, be­cause Gar­ber (like Mackay) fails to quote prices from im­me­di­ately af­ter the col­lapse.

What Gar­ber ac­tu­ally does is calcu­late that tulip bulb prices de­pre­ci­ated by 24%-76% per year over the 5-6 years af­ter the peak. He com­pares that to the 2%-40% an­nual de­pre­ci­a­tion of bulb prices in the next cen­tury, says the ear­lier rates are only mod­estly higher than the later rates, and so there wasn’t a bub­ble-in­di­cat­ing de­vi­a­tion from nor­mal de­pre­ci­a­tion.

But Gar­ber would likely have seen the same thing even if there had been an abrupt bub­ble pop. Sup­pose a tulip bulb’s price peaked at 1000 guilders, crashed to 200 guilders within a week, then sank grad­u­ally to 100 guilders over the next five years. An economist who, know­ing only the start & end points, in­ter­po­lated to es­ti­mate the an­nual de­pre­ci­a­tion would (if I’ve done the sums right) get a 37% rate, which gives no sign of the ini­tial crash. Ob­serv­ing a nor­mal de­pre­ci­a­tion rate isn’t good ev­i­dence against a bub­ble; one has to know prices closer to the event.
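
The interpolation in that scenario is easy to check; a minimal Python sketch, using the hypothetical guilder prices above:

```python
# Hypothetical prices from the scenario above: a peak of 1000 guilders,
# an abrupt crash to 200 within a week, then a gradual slide to 100
# over the next five years.
peak, post_crash, end, years = 1000.0, 200.0, 100.0, 5

# Annualized depreciation implied by the two endpoints alone:
rate = 1 - (end / peak) ** (1 / years)
print(f"implied annual depreciation: {rate:.0%}")

# The slide from 200 to 100 is only ~13%/yr, so the endpoint comparison
# completely hides the 80% one-week crash.
slide_rate = 1 - (end / post_crash) ** (1 / years)
print(f"post-crash annual depreciation: {slide_rate:.0%}")
```

The 37% figure in the comment falls out of the first calculation; nothing in it distinguishes a smooth decline from a crash-then-slide path.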

Does Gar­ber’s book have those data, or at least a novel ar­gu­ment miss­ing from his pa­pers?

• Bit of a glib re­sponse.

Yes, but it hope­fully wakes up peo­ple who glibly point at one stock or one price change as proof pos­i­tive of bub­bles: the claim for bub­bles is a long-term statis­ti­cal claim, and can­not be sup­ported by sim­ply go­ing “Tulips!”

Does the book go be­yond Gar­ber’s pa­pers on tulip­ma­nia?

I don’t know. Not re­ally in­ter­ested in tak­ing the time to com­pare them in de­tail. Pre­sum­ably the book form in­cludes much more de­tail than space-re­stricted pa­pers.

I don’t dis­agree with those claims, as far as they go, but high­light­ing a lack of con­clu­sive ev­i­dence for a bub­ble doesn’t mean there wasn’t a bub­ble.

Given how many people cite Tulipomania as an irrefutable smackdown in these sorts of discussions (‘Bitcoins are worthless—at least you could plant tulips!’), learning that there is minimal evidence for what is popularly considered a large, irrefutable, historically established, unquestionable bubble should badly damage one’s confidence in other claims about bubbles, since it tells one a lot about what passes for evidence in those discussions.

Ob­serv­ing a nor­mal de­pre­ci­a­tion rate isn’t good ev­i­dence against a bub­ble; one has to know prices closer to the event.

It’s been a while since I read the book, but doesn’t he do exactly that, comparing depreciation from peak prices in places? For example, on pg. 64 of my copy:

Even from the peaks of Fe­bru­ary 1637, the price de­clines of the rarer bulbs, English Ad­miral, Ad­miral van der Eyck, and Gen­eral Rot­gans, over the course of six years was not un­usu­ally rapid. We shall see be­low that they fit the pat­tern of de­cline typ­i­cal of a prized va­ri­ety...Prices for these bulbs de­clined at an av­er­age an­nual per­centage rate of 28.5 per­cent. From table 9.1, the three costly bulbs of Fe­bru­ary 1637 (English Ad­miral, Ad­mirael van der Eyck, and Gen­eral Rot­gans) had an av­er­age an­nual price de­cline of 32 per­cent from the peak of the spec­u­la­tion through 1642. Us­ing the eigh­teenth-cen­tury price de­pre­ci­a­tion rate as a bench­mark also fol­lowed by ex­pen­sive bulbs af­ter the ma­nia, we can in­fer that any price col­lapse for rare bulbs in Fe­bru­ary 1637 could not have ex­ceeded 16 per­cent of peak prices. Thus, the crash of Fe­bru­ary 1637 for rare bulbs was not of ex­traor­di­nary mag­ni­tude and did not greatly af­fect the nor­mal time se­ries pat­tern of rare bulb prices.

• Pre­sum­ably the book form in­cludes much more de­tail than space-re­stricted pa­pers.

I’d hope so, al­though I can imag­ine an aca­demic padding things out with ir­rele­vant side de­tail or other yakkety-yak-yak. In those cases one may as well stick with the pa­pers.

Ob­serv­ing a nor­mal de­pre­ci­a­tion rate isn’t good ev­i­dence against a bub­ble; one has to know prices closer to the event.

It’s been a while since I read the book, but doesn’t he do exactly that, comparing depreciation from peak prices in places? For example, on pg. 64 of my copy:

Not based on that quote. That’s the same rea­son­ing he uses in his pa­pers. (Your quoted bit ap­pears, al­most word-for-word, on pages 550 & 553 of the “Tulip­ma­nia” pa­per.) The flaw is the same; es­ti­mat­ing a de­pre­ci­a­tion rate based on data points 5-6 years apart won’t tell you whether there was an abrupt dip that took only a few days or weeks.

learn­ing that there is min­i­mal ev­i­dence for what is pop­u­larly con­sid­ered to be a large, ir­refutable, his­tor­i­cally es­tab­lished, un­ques­tion­able bub­ble should badly dam­age one’s con­fi­dence in other claims re­lat­ing to bub­bles since it tells one a lot about what passes for ev­i­dence in those dis­cus­sions.

It is a good ex­am­ple of why one shouldn’t take peo­ple’s claims that some­thing’s a bub­ble at face value. Although I don’t think the mag­ni­tude of the tulip­ma­nia has much bear­ing on whether tech stocks, Bit­coins, or real es­tate are/​were bub­bling; for those last three things, there are time se­ries data that’re a lot more rele­vant than what hap­pened to Dutch tulip bulbs 376 years ago.

(I also won­der whether I up­dated too much on the ba­sis of one economist’s con­trar­i­anism. Really, I went too far in my last com­ment by refer­ring to “a lack of con­clu­sive ev­i­dence for a bub­ble” — it’s not as if I’ve looked for that ev­i­dence. I’ve just taken Gar­ber’s word for it.)

• The flaw is the same; es­ti­mat­ing a de­pre­ci­a­tion rate based on data points 5-6 years apart won’t tell you whether there was an abrupt dip that took only a few days or weeks.

But com­par­ing peak prices to prices years later does tell you that any ‘abrupt dip’ must have been com­pen­sated for by other price in­creases or main­te­nance of prices. If prices, from the peak, abruptly go down and then abruptly go up, and then fol­low their usual de­pre­ci­a­tion curve, that’s not a very bub­bly story.

Although I don’t think the mag­ni­tude of the tulip­ma­nia has much bear­ing on whether tech stocks, Bit­coins, or real es­tate are/​were bub­bling; for those last three things, there are time se­ries data that’re a lot more rele­vant than what hap­pened to Dutch tulip bulbs 376 years ago.

Sure. It drives me nuts how peo­ple con­stantly bring up Tulipo­ma­nia. Whether or not one agrees with Gar­ber’s find­ings, it should still be ob­vi­ous to them that ar­gu­ing about mod­ern fi­nance based on Tulipo­ma­nia is like try­ing to crit­i­cize Amer­i­can gov­ern­ment based on an­cient Greek poli­tics—the sources are bad and don’t an­swer the ques­tions we want to know, and even if we did have perfect knowl­edge of what hap­pened so long ago, the cir­cum­stances were so differ­ent and the world was so differ­ent that it can tell us very lit­tle about vaguely similar mod­ern situ­a­tions.

I also won­der whether I up­dated too much on the ba­sis of one economist’s con­trar­i­anism.

Maybe! I won­der that some­times my­self. But hon­estly, Tulipo­ma­nia has the feel of one of those parables which are too good to be true, so I don’t ex­pect a later economist to come along and say ‘ev­ery­thing you thought you knew from Gar­ber is false! yes, the stuff about tulip-break­ing virus is false! and tulip bulbs don’t de­pre­ci­ate ex­tremely fast! the fu­tures con­tracts weren’t can­celed! there were no ex­ten­u­at­ing cir­cum­stances like plague!’ etc

• But com­par­ing peak prices to prices years later does tell you that any ‘abrupt dip’ must have been com­pen­sated for by other price in­creases or main­te­nance of prices.

I don’t fol­low. Gar­ber’s data are con­sis­tent with the sce­nario I sketched in the penul­ti­mate para­graph of this com­ment, where I as­sume away any com­pen­sa­tion for the ini­tial dip.

If prices, from the peak, abruptly go down and then abruptly go up, and then fol­low their usual de­pre­ci­a­tion curve, that’s not a very bub­bly story.

Yeah, Gar­ber’s data are also con­sis­tent with an ini­tial re­bound.

Sure. It drives me nuts how peo­ple con­stantly bring up Tulipo­ma­nia.

Fair enough.

I don’t ex­pect a later economist to come along and say ‘ev­ery­thing you thought you knew from Gar­ber is false! yes, the stuff about tulip-break­ing virus is false! and tulip bulbs don’t de­pre­ci­ate ex­tremely fast! the fu­tures con­tracts weren’t can­celed! there were no ex­ten­u­at­ing cir­cum­stances like plague!’

Plague? Now that’s some­thing I don’t think he men­tions in the pa­pers. (Must...re­sist...urge to bor­row...yet an­other...book.)

• where I as­sume away any com­pen­sa­tion for the ini­tial dip...Gar­ber’s data are also con­sis­tent with an ini­tial re­bound.

No, you don’t. You bury it in the ‘sank grad­u­ally’ part:

But Gar­ber would likely have seen the same thing even if there had been an abrupt bub­ble pop. Sup­pose a tulip bulb’s price peaked at 1000 guilders, crashed to 200 guilders within a week, then sank grad­u­ally to 100 guilders over the next five years.

You can get an abrupt pop inside a normal-looking beginning/end comparison if something compensates for the pop, like another rise (unlikely) or prices then falling slower than they normally would (‘gradually’). The ground lost in the pop is then made up later.
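
A quick Python sketch of that compensation argument, reusing the hypothetical guilder prices from the earlier comment:

```python
# If the price had crashed from 1000 to 200 and then kept depreciating at
# the 'normal' ~37%/yr rate implied by the endpoints, the five-year price
# would land far below the observed 100 guilders -- so those endpoints
# force a slower-than-normal slide after any such crash.
peak, observed_end, years = 1000.0, 100.0, 5
normal_rate = 1 - (observed_end / peak) ** (1 / years)  # ~37%/yr
crash_then_normal = 200.0 * (1 - normal_rate) ** years
print(f"crash + normal decay ends at {crash_then_normal:.0f} guilders, "
      f"not the observed {observed_end:.0f}")
```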

Plague? Now that’s some­thing I don’t think he men­tions in the pa­pers.

His book’s cap­sule sum­mary of that bit goes

The spec­u­la­tion in com­mon bulbs was a phe­nomenon last­ing one month in the dreary Dutch win­ter of 1637. A drink­ing phe­nomenon held in the tav­erns, it oc­curred in the midst of a mas­sive out­break of bubonic plague and had no real con­se­quence.

It’s the topic of chap­ter 5, “The Bubonic Plague”.

(Must...re­sist...urge to bor­row...yet an­other...book.)

(It’s on Lib­gen, and isn’t a very long book.)

• No, you don’t. You bury it in the ‘sank grad­u­ally’ part: [...] You can get an abrupt pop in­side an nor­mal-look­ing be­gin­ning/​end com­par­i­son if some­thing com­pen­sates for the pop, like an­other rise (un­likely) or prices then fal­ling slower than they nor­mally would (‘grad­u­ally’). The ground lost in the pop is then made up later.

Ohh, I see what you’re get­ting at. I’d in­ter­preted “com­pen­sa­tion” more nar­rowly as some­thing halt­ing or out­right re­vers­ing the fall in prices, not merely de­cel­er­at­ing it.

Yeah, my sce­nario im­plies an un­usu­ally slow price drop af­ter the ini­tial speedy crash. That wouldn’t sur­prise me in the wake of the un­rav­el­ling of a self-fulfilling spec­u­la­tive ma­nia.

It’s the topic of chap­ter 5, “The Bubonic Plague”. [...] (It’s on Lib­gen, and isn’t a very long book.)

Good to know, thanks. Added that to my men­tal things-to-look-at-on-a-rainy-day list.

• gwern, I find your position against bubbles to be incredibly unlikely, and that is post my studying economics and finance informally for the last 3 decades. But you are gwern, whom my posterior (as opposed to my prior) warns me against dismissing.

If you can sug­gest any read­ing that you found par­tic­u­larly com­pel­ling against the usual in­ter­pre­ta­tion of mar­ket ma­nias, I’d love to take a look. I will google Fa­mous First Bub­bles, haven’t done that yet.

As far as the real estate bubble, first I would point at Mortgage Backed Securities (MBS) rather than the direct real estate market. These were rated AAA, insured for less than a penny on the dollar, and purchased by ancient and venerable banks and others. And then in 2007/2008 they almost uniformly as a class blew up. Returned pennies on the dollar. Caused multiple firms and banks around the world to go bankrupt. Resulted in governments around the world pumping trillions of dollars of liquidity into the system in a process analogous to foaming the runway when a plane crashes. And the essence of it was predicted publicly by many of the smartest minds in finance and investing. I am thinking of Buffett and Munger referring to MBS derivatives as Weapons of Financial Mass Destruction BEFORE the blowup, and I had in print, in a book published before the destruction, a speech by Munger talking about how there was going to be a tremendously horrible event because of derivatives “in the next 5 to 10 years”, in a speech he gave in, I think, 2002. While MBS were hot, they were so in demand that brokers such as Salomon would create “synthetic” MBS, which were essentially just well-documented bets that would pay off exactly as an MBS would pay off over their life, but were made up because there was still demand for MBS even after the last homeless person with a pulse in the US had been given a 100% non-doc mortgage to buy a house which would not be sellable for even 80% of what was financed two years later.

Is even this not a bub­ble? Not the mar­ket chas­ing a dream in­stead of a busi­ness propo­si­tion and try­ing to fly up to heaven with the dream and failing?

How high are the stock prices of Ama­zon, Google, and Ap­ple now? Oh look, Bit­coin is at \$160, how did that hap­pen when ev­ery­one knew it was a bub­ble which popped?

The NASDAQ com­pos­ite peaked in early 2000 at over 4000. More than 13 years later it is STILL not back up to that level. Per­haps at least some of the in­vestors in AMZN and AAPL in 1999 were not caught in a bub­ble, but what about the bulk of the money, of which about 70% of the value evap­o­rated in less than 3 years, and which on the whole has not crept back up to even yet? And the NASDAQ com­pos­ite is not the only place to find this re­sult, CSCO, INTC, and QCOM were all bid up much higher in 2000 than they are sel­l­ing for even now. Proof that they were over­val­ued in 2000, no? By a fac­tor of a few? I’d like to know the er­ror I make when I think of this as a bub­ble, as mo­men­tum over­shoot­ing value and ra­tio­nal­ity by a fac­tor of a few?

• Mort­gage Backed Se­cu­ri­ties (MBS) … re­turned pen­nies on the dol­lar.

No, the AAA-rated MBS did very well; 90% suffered no losses. It was the ABS CDOs (Asset Backed Security Collateralised Debt Obligations) that did badly.

• No, the AAA-rated MBS did very well; 90% suffered no losses. It was the ABS CDOs (Asset Backed Security Collateralised Debt Obligations) that did badly.

Indeed, the data you cite show that it was Aaa-rated CDOs that had default rates of about 90%. CDOs were backed by mortgages as well.

Extending what you say about MBS to some more accurate statements: the AAA-rated MBS had about a 9% or 10% default rate out to 4 years. There are 26 more years of life in those mortgages in which they can still default, potentially pushing the cumulative default rate even higher.

Characterizing a 9% default rate on triple-A securities as “did very well” is quite wrong. Historically, triple-A corporate bonds default at a rate of 0.6% or less, and triple-A municipals default at a 0.00% rate. A 9% default rate is 15 times higher than the ratings were intended to suggest. And the Baa MBS defaulted at over an 80% rate, more than 15 times the ~5% rate on Baa corporate bonds prior to 2007.

The rat­ings were CRAP, sug­gest­ing a de­fault rate which should have been 1 to 2 or­ders of mag­ni­tude lower.
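
The multiples in that comment are straightforward ratios; a sketch using the default rates as quoted above (the figures are the comment’s, not independently verified data):

```python
# Default rates as quoted in the comment above (not independently verified).
aaa_mbs, aaa_corp = 0.09, 0.006   # AAA MBS vs. historical AAA corporates
baa_mbs, baa_corp = 0.80, 0.05    # Baa MBS vs. pre-2007 Baa corporates

print(f"AAA MBS defaulted at ~{aaa_mbs / aaa_corp:.0f}x the corporate rate")
print(f"Baa MBS defaulted at ~{baa_mbs / baa_corp:.0f}x the corporate rate")
```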

• The rat­ings aren’t in­tra-class in­de­pen­dent. Which is perfectly nor­mal; junk cor­po­rate failures are cor­re­lated too.

• gw­ern, I find your po­si­tion against bub­bles to be in­cred­ibly un­likely, and that is post my study­ing eco­nomics and fi­nance in­for­mally for the last 3 decades.

(Forgive me if I mentally read this as “And that is post my being a random Internet pundit for decades”.)

As far as real es­tate bub­ble, first I would point at Mort­gage Backed Se­cu­ri­ties (MBS) rather than the di­rect real es­tate mar­ket. Th­ese were rated AAA, in­sured for less than a penny on the dol­lar, and pur­chased by an­cient and ven­er­a­ble banks and oth­ers. And then in 2007/​2008 they al­most uniformly as a class blew up. Re­turned pen­nies on the dol­lar. Caused mul­ti­ple firms and banks around the world to go bankrupt.

I don’t think it’s very use­ful to define a ‘bub­ble’ as “any large price in­crease fol­lowed by a price de­crease”.

I’d rather use a more pow­er­ful EMH-fo­cused defi­ni­tion: a bub­ble is large price in­crease which rep­re­sents an in­effi­ciency in the mar­ket which is pre­dictable in ad­vance (not in hind­sight), ex­ploitable, and worth ex­ploit­ing. Merely point­ing out some dis­aster, or some large price de­crease, does not demon­strate the ex­is­tence of bub­bles, be­cause that ob­ser­va­tion could re­sult from un­avoid­able or un­ob­jec­tion­able causes like the in­her­ent con­se­quences of risk-tak­ing, mis­taken analy­ses, per­verse in­cen­tives, etc.

Peo­ple make mis­takes; dis­asters hap­pen. If they never hap­pened, and AAA never went bust, couldn’t one make a lot of money by ex­ploit­ing that in­effi­ciency in the mar­ket and pick­ing up pen­nies in front of the non-ex­is­tent steam­rol­ler?

I am thinking of Buffett and Munger referring to MBS derivatives as Weapons of Financial Mass Destruction BEFORE the blowup, and I had in print, in a book published before the destruction, a speech by Munger talking about how there was going to be a tremendously horrible event because of derivatives “in the next 5 to 10 years”, in a speech he gave in, I think, 2002. While MBS were hot, they were so in demand that brokers such as Salomon would create “synthetic” MBS, which were essentially just well-documented bets that would pay off exactly as an MBS would pay off over their life, but were made up because there was still demand for MBS even after the last homeless person with a pulse in the US had been given a 100% non-doc mortgage to buy a house which would not be sellable for even 80% of what was financed two years later.

How much money did Munger & Buffett make off their shorts of housing, exactly? How much has Paulson made post-housing? (Does making billions off housing, and then losing billions on gold & China, look more like skill & inefficient markets or luck & selection effects?) How many economists did one hear of post-2008 who suddenly turned out to be Cassandras? You can go onto Bitcoin forums and tech websites right now, and watch people predict 20 out of the last 3 Bitcoin ‘bubbles’. Finance is just the same. Post hoc selection of people warning of something vaguely similar (derivatives? that’s a rather roundabout way of predicting a housing bubble, which could have been powered by all sorts of financial instruments, not just derivatives) is worthless.

Is even this not a bub­ble? Not the mar­ket chas­ing a dream in­stead of a busi­ness propo­si­tion and try­ing to fly up to heaven with the dream and failing?

Hous­ing prices in SF, Aus­tralia, Lon­don, Canada, Man­hat­tan, China are hold­ing steady at bub­bleli­cious prices or try­ing to fly up to heaven. (Again, I bor­row this point from Sum­ner.) Per­haps they are us­ing tech­nol­ogy from the Apollo pro­gram.

The NASDAQ com­pos­ite peaked in early 2000 at over 4000. More than 13 years later it is STILL not back up to that level. Per­haps at least some of the in­vestors in AMZN and AAPL in 1999 were not caught in a bub­ble, but what about the bulk of the money, of which about 70% of the value evap­o­rated in less than 3 years, and which on the whole has not crept back up to even yet?

Why is this not just mis­taken be­liefs about the value of those loser com­pa­nies and about high-tech busi­ness mod­els? (No­tice how the big IPOs lately all have pretty clear rev­enue streams from ad­ver­tis­ing.) How could one know in ad­vance that Pets.com would not be Ama­zon.com, or vice-versa? How does a VC know which of his in­vest­ments will go bankrupt and which will own an in­dus­try? Tell me: if to­mor­row a break is dis­cov­ered in the core Bit­coin pro­to­col/​cryp­tog­ra­phy and the price goes to \$0.00, was Bit­coin a bub­ble or a mis­take?

To summarize: I think you are grasping at surface features, are not thinking about the anti-bubble arguments (or are just unfamiliar with them), and are engaged in post hoc analysis where you select, out of the buzzing hive of argument and disagreement, a few strands which seem right to you with the benefit of many years of data.

• OK you like EMH so much that you think 9 students from one professor all outperforming for decades is cherry picking and data mining. I think it is finding a small group of people who claim to be learning from someone who has empirically verified methods, and who, when they apply these methods, get the predicted results consistently for decades. I think characterizing this as cherry picking and data mining is more likely to be a bad explanation for what is being seen than mine, which is that they are doing what they say they are doing, and it is working.

Even a broad index fund is “managed.” The conditions for being listed are quite stringent, and involve “survival bias” filters: if stocks fall below a certain value they are delisted. I actually don’t think that the difficulty of beating the SP500 is much of a proof of EMH as much as it is a proof that very straightforward standards applied on a slow timescale capture almost all of the value available from managing a portfolio. I think people investing more broadly than SP500, people investing with people who come in to their living rooms seeking “angel” investors do a lot worse. If the market was efficient in principle, then one wouldn’t need the SP500 or even the NASDAQ seal of approval to wind up with results that were at the market mean. If using your brain is required to pick the SP500 over the living-room pitchman, then in principle, using your brain is required to get reasonable results.

I think if a proposition of efficiency is to be proved true, it is not by looking at the average performance of every Tom, Dick, and Harry and noticing that with mathematical necessity they tend to have the same mean as the market, which of course they comprise. I think a proper proof of efficiency requires showing in detail that there are no consistent outliers of high performance: that funds with decades-long records of outperformance occur at the proper rate to be consistent with pure luck. Indeed, it requires showing that while it appears that some people predictably outperform, for all these actors past performance is no predictor of future performance, and that the hangers-on who joined Buffett in the 60s or 70s or 80s or 90s after seeing his record THOUGHT their outperformance was due to their identifying a winner, but that it was consistent with just pure dumb continuous luck.
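
The “proper rate to be consistent with pure luck” standard can be made concrete with a toy null model; this sketch assumes each manager independently has a 50% chance of beating the market each year, an illustrative assumption rather than market data:

```python
# Toy null model: n managers, each beating the market in any given year
# with probability 0.5 by pure luck. What is the chance that at least one
# compiles an unbroken k-year winning streak?
def p_some_streak(n_managers: int, k_years: int) -> float:
    p_one = 0.5 ** k_years                  # one manager wins k years straight
    return 1 - (1 - p_one) ** n_managers    # at least one of n managers does

# Among 10,000 managers, a 10-year streak is nearly guaranteed by luck
# alone, while a 20-year streak is not.
print(f"10-year streak: {p_some_streak(10_000, 10):.1%}")
print(f"20-year streak: {p_some_streak(10_000, 20):.1%}")
```

Under this null, long streaks in a small, pre-identified group (like Graham’s students) are far less probable than streaks found by scanning the whole population after the fact, which is the selection-effect question at issue here.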

I think there is a gigantic difference between “we cannot prove that there is alpha” and “the most likely explanation of what we see is that there is no alpha.”

As to identifying bubbles that were not bubbles, the only bubbles I have identified are tech and real estate. I identified a “bubble” in a small-company stock (Conductus) where a company with no real products generated excitement by talking about how they were getting into the cellular industry, driving their stock price from 3 to about 80 before they crashed back down to 3. I shorted them at about 70, took my returns a few weeks later at 60 or so; they proceeded to rise to 80 and then within a year drop back to 2.5. I identified another mispricing in NHCS where numerically they were spinning out a company which was being completely undervalued in their current stock price. I asked others, “Can this really be true?” They said only, in general, yeah, stuff like that happens. I bought a few thousand dollars worth, made the 20% or so return it seemed I was seeing laying on the table a few months later.

The main sense in which the mar­ket seems effi­cient is that the prices are pre­dom­i­nantly set us­ing sen­si­ble analy­ses, pre­sum­ably be­cause those who do not fol­low a proven tech­nique of pick­ing sen­si­ble prices do not sur­vive, so the main com­po­nent of mar­ket effi­ciency is that the pro­cesses for beat­ing the mar­ket are broadly ex­er­cised and dom­i­nat­ing the mar­ket. So it is hard to do bet­ter than free-rid­ing on that. But does it turn out that some peo­ple do bet­ter at that pro­cess than oth­ers? I think the best ex­pla­na­tion for what we see is that yes, some do, and that they are a small­ish minor­ity is not be­cause they are just the tail of a ran­dom dis­tri­bu­tion, but be­cause of math­e­mat­i­cal ne­ces­sity beat­ing the av­er­age sig­nifi­cantly can only be done by a minor­ity.

Any­way, thanks for stick­ing with it and ex­plain­ing your po­si­tion to me.

• OK you like EMH so much that you think 9 stu­dents from one pro­fes­sor all out­perform­ing for decades is cherry pick­ing and data min­ing.

To expand even further on my critique: you are placing a huge amount of weight on 9 students, of unknown veracity, out of an unknown number of students (itself out of an unknown number of millions of people who have tried to beat the market over the past century), who have not released audited records, much less ones comparing them to indexing, who started half a century ago (which is the investing dark ages compared to what goes on now, in 2013), and at least one of whose successes seems to be partially explained by non-efficiency-related factors?

This is roughly as con­vinc­ing as Acts of the Apos­tles doc­u­ment­ing the 12 apos­tles’ suc­cesses in beat­ing the (re­li­gious) mar­ket and earn­ing con­verts.

I think peo­ple in­vest­ing more broadly than SP500, peo­ple in­vest­ing with peo­ple who come in to their liv­ing rooms seek­ing “an­gel” in­vestors do a lot worse. If the mar­ket was effi­cient in prin­ci­ple, then one wouldn’t need the SP500 or even the NASDAQ seal of ap­proval to wind up with re­sults that were at the mar­ket mean.

Those an­gel in­vestors are forfeit­ing di­ver­sifi­ca­tion and so can eas­ily earn be­low-av­er­age re­turns. EMH doesn’t mean that you can­not de­liber­ately con­trive to lose money.

I think there is a gigantic difference between “we cannot prove that there is alpha” and “the most likely explanation of what we see is that there is no alpha.”

I think that in an adversarial environment, where everyone claims to be able to beat the market and says you should give them your money, and where there are compelling theoretical reasons that any beating of the market would wipe out whatever advantage was possessed, there is not such a gigantic difference.

I bought a few thou­sand dol­lars worth, made the 20% or so re­turn it seemed I was see­ing lay­ing on the table a few months later.

Con­grat­u­la­tions on your day-trad­ing suc­cess. You know what hap­pens to most of them, right?

• EMH doesn’t mean that you can­not de­liber­ately con­trive to lose money.

Under EMH it is pretty hard to deliberately and consistently lose money. It’s very easy to get additional risk (e.g. by not diversifying), but I don’t think EMH envisions assets with negative expected return.

• Mm, the way I remembered it was that by not diversifying, you were taking on additional uncompensated risk; not diversifying wasn’t completely neutral, expected-value-wise. (Also, there are obvious ways to guarantee losing money: trade a lot. The fees will kill you.)
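
The fee point is easy to quantify; a toy sketch with an assumed, purely illustrative round-trip cost:

```python
# Toy fee drag: an asset with zero expected return, traded weekly at an
# assumed 0.5% round-trip cost, loses over half its value in three years.
fee = 0.005            # illustrative round-trip transaction cost
trades_per_year = 52
capital = 1.0
for _ in range(3 * trades_per_year):  # three years of weekly round trips
    capital *= 1 - fee
print(f"capital remaining after 3 years: {capital:.1%}")
```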

• Yep, that’s what I said—that you can eas­ily get ad­di­tional risk by not di­ver­sify­ing.

And the trad­ing fees are out­side of EMH—there are cer­tainly plenty of ways to re­li­ably lose money in the real world, but not in the EMH world.

• Yep, that’s what I said—that you can eas­ily get ad­di­tional risk by not di­ver­sify­ing.

I said ‘un­com­pen­sated’ risk.

• EMH doesn’t say any­thing about un­com­pen­sated risks.

To get to risk pre­mium you need some­thing like CAPM or APT which are a differ­ent ket­tle of fish.

• … who have not re­leased au­dited records much less ones com­par­ing them to in­dex­ing, …,

Ac­tu­ally the records ARE au­dited, they ARE com­pared to in­dex­ing, and those records and com­par­i­sons are re­ported by the origi­nal ar­ti­cle I men­tioned, which I fi­nally link to here.

you are plac­ing a huge amount of weight on 9 students

If a pro­fes­sor’s stu­dents dom­i­nate some part of en­g­ineer­ing or biol­ogy or chem­istry, it is gen­er­ally taken as ev­i­dence that the pro­fes­sor was teach­ing some­thing real. I sup­pose if we had an Effi­cient Knowl­edge The­ory we would un­der­stand that go­ing to Caltech or MIT was as waste­ful as pick­ing up \$20 bills on the side­walk (which don’t ex­ist in a clas­sic EMH joke).

Should we be ques­tion­ing whether a good ed­u­ca­tion in philos­o­phy or math or physics or en­g­ineer­ing or biol­ogy or… is just a mis­match be­tween the power of ran­dom chance and the hu­man bias to­wards see­ing pat­terns? Or is there some­thing spe­cial about learn­ing how to value com­pa­nies that puts it in a cat­e­gory of anal­y­sis that is differ­ent from all other ob­ser­va­tions of the effects of knowl­edge?

In any case, the ar­ti­cle linked dis­cusses the ran­dom­ness hy­poth­e­sis ex­ten­sively point­ing out among other things that the var­i­ous in­vestors re­ported upon had ex­ceed­ingly small amounts of over­lap in what they ac­tu­ally in­vested in.

Gw­ern, these com­ments are not so much aimed at you, you have ob­vi­ously been down these roads and de­cided which way you would turn. They are aimed at any­body read­ing this who is still not sure about EMH. The ar­ti­cle linked is ex­cel­lent and writ­ten by a guy who walks the walk bet­ter than any­body else in hu­man his­tory (so far).

• Ac­tu­ally the records ARE au­dited, they ARE com­pared to in­dex­ing, and those records and com­par­i­sons are re­ported by the origi­nal ar­ti­cle I mentioned

I don’t see any mention of how they were audited (Buffett merely says that they ‘were audited’, no mention of by whom, when, what the audits said, whether he saw the results, etc, and offers as reassurance that checks were paid for the appropriate amounts, which is not my problem here). And if you really want to nitpick, then I would bring up that Buffett does not talk about ‘9’ students: he actually talks about 4 people who worked for Graham, tells us that ‘it’s possible to trace the record of three’ (well, there’s some selection bias right there...), and does not explain how the 3 partners did (more selection bias). Some of his other examples are questionable at best, including his very good friend Munger, and two funds he ‘influenced’ (while disclaiming that he might have influenced any other funds and that this isn’t cherrypicking, though I don’t understand how he can honestly say he knows for sure he has not similarly influenced any others). He also reports different metrics for different examples (why is Munger compared against the Dow while others are compared against the S&P?), does not compare against an index in one case (Table 8), and some do not beat the comparison index at all (Table 9: Becker underperforms the S&P by 3%).

If a pro­fes­sor’s stu­dents dom­i­nate some part of en­g­ineer­ing or biol­ogy or chem­istry, it is gen­er­ally taken as ev­i­dence that the pro­fes­sor was teach­ing some­thing real.

Buffett doesn’t dominate the markets, and the proper comparison is to ideas, not students. If a single professor’s students dominated, I’d be more inclined to suspect corruption, or logrolling, or the professor being a genius at academic infighting and bureaucracy...

I sup­pose if we had an Effi­cient Knowl­edge The­ory we would un­der­stand that go­ing to Caltech or MIT was as waste­ful as pick­ing up \$20 bills on the side­walk...Should we be ques­tion­ing whether a good ed­u­ca­tion in philos­o­phy or math or physics or en­g­ineer­ing or biol­ogy or… is just a mis­match be­tween the power of ran­dom chance and the hu­man bias to­wards see­ing pat­terns?

Mar­kets are very differ­ent from elec­tronic cir­cuits or par­ti­cle physics or philos­o­phy or en­g­ineer­ing. Cir­cuits don’t care if you found a more effi­cient way to de­sign them. The prop­er­ties of steel will not change when you dis­cover it lets you build prof­itable bridges.

Or is there some­thing spe­cial about learn­ing how to value com­pa­nies that puts it in a cat­e­gory of anal­y­sis that is differ­ent from all other ob­ser­va­tions of the effects of knowl­edge?

Er, yes, there is. That’s kind of the point of the effi­cient mar­kets con­cept! Mar­kets are un­usual and spe­cial in that the at­tempt to find pre­dictable reg­u­lar­i­ties leads to the ex­ploita­tion of the reg­u­lar­i­ties and their dis­ap­pear­ance. (Eliezer de­scribes this as “mar­kets are anti-in­duc­tive”, which is not wrong, but I’m con­vinced there must be some more in­tu­itively un­der­stand­able phrase than that.)

Gw­ern, these com­ments are not so much aimed at you, you have ob­vi­ously been down these roads and de­cided which way you would turn. They are aimed at any­body read­ing this who is still not sure about EMH.

Is that one ar­ti­cle re­ally the best, solidest, most con­vinc­ing crit­i­cism of EMH you can come up with, which you think will per­suade peo­ple read­ing this con­ver­sa­tion that EMH is to a mean­ingful de­gree false and mar­kets are of­ten beat­able—some cher­ryp­icked ques­tion­able ex­am­ples from the dawn of time?

• Is that one ar­ti­cle re­ally the best, solidest, most con­vinc­ing crit­i­cism of EMH you can come up with, which you think will per­suade peo­ple read­ing this con­ver­sa­tion that EMH is to a mean­ingful de­gree false and mar­kets are of­ten beat­able—some cher­ryp­icked ques­tion­able ex­am­ples from the dawn of time?

In its way, yes it is. You get a guy who has im­pec­ca­ble cre­den­tials, a mas­sive pub­lic record who thinks he has been in­vest­ing in­tel­li­gently for decades, who if he IS perform­ing ran­domly is a few sigma out on the pos­i­tive side of the ran­dom dis­tri­bu­tion. You get to see what he has to say about what he thought he was do­ing, how it fits with what a whole bunch of other peo­ple were do­ing, a co­gent de­scrip­tion for why it might work, and a bunch of num­bers about how it does in­deed seem to work. Buffett un­der­stands the idea that he could just be lucky and he ad­dresses it.

If you think the best ex­pla­na­tion of Buffett’s life and re­sults are that he has been fooled by ran­dom­ness, then you are a very differ­ent judge of char­ac­ter and in­for­ma­tion than me or mil­lions of oth­ers like me.

If the EMH was “the mar­kets are re­ally re­ally effi­cient, it is hard to pro­duce alpha (out­perfor­mance), hard to know when you have alpha, and easy to fool your­self be­cause of hu­man bi­ases” then who would ar­gue with that? Not me. But that step from “re­ally hard” to “im­pos­si­ble” is un­rea­son­able. It is not im­pos­si­ble to be a great base­ball player. It is not im­pos­si­ble to con­sis­tently beat other play­ers at poker, even though ev­ery­body play­ing has the same in­for­ma­tion, on av­er­age across all the hands. It is not im­pos­si­ble to un­der­stand 10 lan­guages, even though to most of us most of them sound like noise.

If EMH was right, wouldn’t the smartest, most quan­ti­ta­tive par­ti­ci­pants in the mar­ket have figured that out? Wouldn’t Re­nais­sance Tech­nolo­gies have 1) failed, and 2) figured out that their failure was con­sis­tent with ran­dom­ness where they thought there was or­der?

EMH is the hy­poth­e­sis that be­cause bunches of smart peo­ple all work to figure out what the best in­vest­ment is, there can be no ex­cess re­turns available to the smart peo­ple who all work hard to figure out what the best in­vest­ment is. Well if there are not ex­cess re­turns available to them, why do they do it?

Isn’t EMH the hy­poth­e­sis that, for EVERYBODY in the mar­ket, it would be more effi­cient to free ride and use your in­tel­li­gence on some­thing where you can ac­tu­ally pro­duce a re­turn?

Isn’t EMH ul­ti­mately a big floppy tent held up by a tent pole which the EMH’ers deny ex­ists?

• In its way, yes it is. You get a guy who has im­pec­ca­ble cre­den­tials, a mas­sive pub­lic record who thinks he has been in­vest­ing in­tel­li­gently for decades, who if he IS perform­ing ran­domly is a few sigma out on the pos­i­tive side of the ran­dom dis­tri­bu­tion. You get to see what he has to say about what he thought he was do­ing, how it fits with what a whole bunch of other peo­ple were do­ing, a co­gent de­scrip­tion for why it might work, and a bunch of num­bers about how it does in­deed seem to work. Buffett un­der­stands the idea that he could just be lucky and he ad­dresses it. If you think the best ex­pla­na­tion of Buffett’s life and re­sults are that he has been fooled by ran­dom­ness, then you are a very differ­ent judge of char­ac­ter and in­for­ma­tion than me or mil­lions of oth­ers like me.

First, let me point out that I put a fair amount of work into point­ing out all those flaws and holes in your last best cita­tion, and I’m a lit­tle an­noyed that you com­pletely ig­nored all of them in fa­vor of say­ing “but Buffett is so high-sta­tus and I like him so much”. Yes, and Ge­orge W. Bush fa­mously said of Vladimir Putin, “I looked the man in the eye. I found him to be very straight for­ward and trust­wor­thy and we had a very good di­alogue. I was able to get a sense of his soul. He’s a man deeply com­mit­ted to his coun­try and the best in­ter­ests of his coun­try and I ap­pre­ci­ate very much the frank di­alogue and that’s the be­gin­ning of a very con­struc­tive re­la­tion­ship.” We all know how that turned out.

What you think about Buffett’s “char­ac­ter” is ir­rele­vant to me, and for me, fur­ther em­pha­sizes your ex­tremely poor rea­son­ing in this area—that when pushed back, you re­sort to one man and your be­liefs about his “char­ac­ter”.

If EMH was right, wouldn’t the smartest, most quan­ti­ta­tive par­ti­ci­pants in the mar­ket have figured that out? Wouldn’t Re­nais­sance Tech­nolo­gies have 1) failed, and 2) figured out that their failure was con­sis­tent with ran­dom­ness where they thought there was or­der?

I don’t know why RenTech performs as well as they seem to. Pre­sum­ably it’s not the same rea­son that Mad­off was able to beat the mar­ket for so many years in con­tra­ven­tion of EMH. Per­haps it was the same rea­son SAC did well (in­sider trad­ing) and they sim­ply haven’t been caught yet. Or maybe there were some in­effi­cien­cies back when they started which they erased and have since been coast­ing on their rep­u­ta­tion. Given that it’s a very pri­vate hedge fund, we’ll prob­a­bly never know.

EMH is the hy­poth­e­sis that be­cause bunches of smart peo­ple all work to figure out what the best in­vest­ment is, there can be no ex­cess re­turns available to the smart peo­ple who all work hard to figure out what the best in­vest­ment is. Well if there are not ex­cess re­turns available to them, why do they do it?

Be­cause there is de­mand for in­vest­ment ser­vices, con­sid­er­able cog­ni­tive bi­ases at play along with wish­ful think­ing (‘I will be the next Buffett!’), and nor­mal prof­its available. After all, if no one was there tak­ing even the nor­mal prof­its, there would im­me­di­ately be ex­cess re­turns at­tract­ing peo­ple to the en­ter­prise...

Isn’t EMH the hy­poth­e­sis that, for EVERYBODY in the mar­ket, it would be more effi­cient to free ride and use your in­tel­li­gence on some­thing where you can ac­tu­ally pro­duce a re­turn?

No.

Isn’t EMH ul­ti­mately a big floppy tent held up by a tent pole which the EMH’ers deny ex­ists?

No.

• Be­cause there is de­mand for in­vest­ment ser­vices, con­sid­er­able cog­ni­tive bi­ases at play along with wish­ful think­ing (‘I will be the next Buffett!’), and nor­mal prof­its available. After all, if no one was there tak­ing even the nor­mal prof­its, there would im­me­di­ately be ex­cess re­turns at­tract­ing peo­ple to the en­ter­prise...

So your hypothesis is that some process ensures that all the people providing the skull sweat needed to get normal returns (and to create a market that is efficient for everybody else) perform equally well? It sure doesn’t work that way in any other human enterprise I can think of. Intel and AMD produce different quality chips for laptops. At the other end of the spectrum, Intel and Qualcomm produce very different quality chips for mobile. The physics department at Caltech produces a very different product, research-wise and teaching-wise, than the physics department at USC. Stephen King produces a very different quality of novel than do a thousand or more other authors populating the increasingly virtual shelves of bookstores. Even here on LessWrong, some of us write wonderful stuff which is read by many and admired, while others of us struggle to get our karma up to 1000 and then hang on by our fingernails, refraining from saying what we really want to say in order to keep it there.

So why on FSM’s tomato-col­ored earth would you ex­pect these fi­nan­cial cre­ators of effi­ciency to all get the same re­sults from their efforts?

And when shown the spread in effec­tive­ness in re­sults, to deny the ev­i­dence of your own eyes and de­clare it all to be the dis­tant tail of mil­lions of coin flip­pers?

It doesn’t seem like a stretch to you? It doesn’t seem that the ev­i­dence is strong that the mar­ket is VERY effi­cient, but that the ev­i­dence is not there that it is COMPLETELY effi­cient?

• First, let me point out that I put a fair amount of work into point­ing out all those flaws and holes in your last best cita­tion, and I’m a lit­tle an­noyed that you com­pletely ig­nored all of them in fa­vor of say­ing “but Buffett is so high-sta­tus and I like him so much”.

If you want I’ll go through them point by point.

I don’t see any men­tion of how they were au­dited (Buffett merely says that they ‘were au­dited’, no men­tion of by whom, when, what the au­dits said, whether he saw the re­sults, etc, and offers as re­as­surance that checks were paid for the ap­pro­pri­ate amounts, which is not my prob­lem here)

Presumably you can see the difference between your stating that these are NOT audited and, when it is pointed out that they are, backing off to this.

The re­sults of the au­dit are the re­sults in this ar­ti­cle. That is, these are re­sults re­ported which sur­vived the au­dits.

In many of the cases, the au­dits are “typ­i­cal” of the in­vest­ment ad­vi­sory busi­ness, but I do not know what that means ex­actly. But it is a level play­ing field against all other in­vest­ment ad­visers.

Also, a few (not all) of the investors cited here ran public investment businesses for decades. Isn’t the preponderance of your Bayesian posterior that, if the records of at least these members of this widely read, cited, and discussed “superinvestors” article were just wrong, that would at least have led to traceable reports of the discrepancies on the internet, findable with a Google search?

To the extent your objections amount to “Buffett could be an idiot and a fraud, either not knowing or not caring what it means to make these claims”, my answer is that we have 5 decades of impeccable record. If you think Buffett is that unreliable, then generally there is no arguing with you, since you will question anybody who says something you disagree with as an idiot or a fraud. And if you cannot tell that Buffett is neither an idiot nor a fraud, or have not followed him well enough to be sure one way or the other, then I would suggest you have no business weighing in on the subtle question of whether the market is so efficient that the best investors in the world are just coin-flippers.

What you think about Buffett’s “char­ac­ter” is ir­rele­vant to me, and for me, fur­ther em­pha­sizes your ex­tremely poor rea­son­ing in this area—that when pushed back, you re­sort to one man and your be­liefs about his “char­ac­ter”.

I suggest relying upon Buffett because you and everyone else out there who can read has infinitely more reason to rely upon Buffett than to rely upon me. And further, what is needed in the discussion of EMH vs non-EMH is not some brilliant new insight that I can provide that you haven’t seen somewhere else already. EMH vs non-EMH is a subtle question: is the market so efficient that Buffett can’t consistently beat it without committing a crime, either insider trading or some other information-twisting fraud, or is it just a little less efficient than that? The “insight” I have is that what pushes it towards efficiency is competing analyses on opposite sides of each trade. The “insight” I have is that every bit of evidence suggests that in business some people have superior skill or algorithms or SOMETHING and are more successful than others. And they can do it serially, command high prices in very competitive markets, blah blah blah, and show EVERY BIT as much evidence of being “real” as do great pitchers or tennis players or tenors or talk show hosts or porn stars. And your case is that no, with investing it is different: the people who do the work are so smart that they get it right in an unbeatable way, but so stupid that they don’t even realize they would be better off free riding.

What is needed is not any great in­sight from one or the other of us, I don’t think, but ev­i­dence that is hard to deny that yes, the mar­ket can be beat. I think that ev­i­dence would con­sist of mar­ket beat­ers com­ing from a nar­rowly defined group of peo­ple who set out to beat the mar­ket by study­ing it and al­low­ing ev­i­dence to drive their fu­ture hy­pothe­ses and efforts. And what do we find in the mar­ket? Ex­actly that, mar­ket beat­ers are smart and talk in terms of causal­ity, of what makes a busi­ness great, of where the mo­men­tum traders and the chartists missed the boat.

But my causal chain of how the mar­ket could be merely VERY effi­cient has been, I hope, pre­sented by now. Let me know if it hasn’t.

Mar­kets are very differ­ent from elec­tronic cir­cuits or par­ti­cle physics or philos­o­phy or en­g­ineer­ing. Cir­cuits don’t care if you found a more effi­cient way to de­sign them. The prop­er­ties of steel will not change when you dis­cover it lets you build prof­itable bridges.

As much as you might hy­poth­e­size that we will not see se­cu­ri­ties mar­kets make the same mis­takes they have made in the past, does the ev­i­dence sup­port that? And in any case, the idea that mar­kets do learn or have learned SOMETHING sup­ports only the VEMH, the very effi­cient mar­ket hy­poth­e­sis, which is not con­tro­ver­sial. By this I mean the hy­poth­e­sis that it is hard to beat the mar­ket, be­cause all the easy stuff has been figured out and is prop­erly ac­counted for by the bulk of the traded money in the mar­ket.

I tracked Chipotle stock on and off from around 2000 forward. There were two classes of shares, A and B, with the B’s trading at a very consistent 10% discount to the A’s. I would check once or twice a year to see if this difference persisted, and it did. The surprising thing was that the company’s documentation explained that these shares had equal value and represented identical fractions of the total company. I never saw an explanation of why they traded at a 10% difference, and I always questioned whether there was some detail I was missing. Here, in late 2007, is documentary evidence that the difference persisted. Here, two years later, is Chipotle’s report that they were eliminating the two classes in favor of one class, and that the exchange rate would be 1:1, just as I had always believed.

In my case, I am an electrical engineer/physicist, trying to concentrate on building new cell phone algorithms for at least a few hours a day. Instead of organizing the financing to exploit this weird inefficiency at low cost, I just checked in on it every year or two, wanting to see if I was right. Had I been a professional trader, I would have looked more at creating an arbitrage on the A and B shares and capturing the collapse of the arbitrary pricing difference. As an amateur I didn’t know if it would ever collapse, and the brokers are neither smart enough nor dumb enough to let me buy the B’s and short the A’s without a lot of capital in my account to anchor what they see as two uncorrelated risky bets.
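A minimal sketch of that convergence trade, with hypothetical prices and position size (the function name and numbers are mine, not from the thread): B shares trade at a ~10% discount to A shares despite identical claims, so buy the cheap B’s, short the rich A’s, and wait for a 1:1 unification.

```python
# Sketch (hypothetical numbers) of the A/B share convergence trade:
# short the expensive class, buy the discounted class.

def convergence_pnl(price_a, price_b, shares):
    """Profit from shorting `shares` of class A and buying `shares` of
    class B, assuming the two classes eventually converge to one price."""
    # Entry: collect price_a per short share, pay price_b per long share.
    # At convergence both legs close at the same price and cancel out,
    # so the locked-in profit is just the entry spread.
    return (price_a - price_b) * shares

# Hypothetical 10% discount: A at $100, B at $90, 1000 shares per leg.
print(convergence_pnl(100.0, 90.0, 1000))  # 10000.0
```

The catch, as the comment notes, is that the broker demands capital to carry both legs, and the convergence date is unknown.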

My point here is this is just ONE of MANY pos­si­ble sto­ries of mod­er­ate sized in­effi­cien­cies I have seen with my own eyes. Others I have traded. Yes, ev­ery one of them is an anec­dote. The plu­ral of anec­dote is not data. But a bunch of anec­dotes like that cre­ates, it would seem, mar­ket beat­ing perfor­mance for many traders trad­ing differ­ent stocks.

Maybe markets COULD be different than circuits and so on, and maybe as computers and AI take over more and more, they will get more and more efficient. But even then, the most powerful AIs will be beating the market, even as they essentially set the prices at levels that make it incredibly hard for anybody else to beat the market. The thing that drives market makers is not their stupidity, but their intelligence and rationality. Seems to me.

Er, yes, there is. That’s kind of the point of the effi­cient mar­kets con­cept! Mar­kets are un­usual and spe­cial in that the at­tempt to find pre­dictable reg­u­lar­i­ties leads to the ex­ploita­tion of the reg­u­lar­i­ties and their dis­ap­pear­ance.

THIS is a hy­poth­e­sis. And the only word in that hy­poth­e­sis I will ar­gue with is the last one: dis­ap­pear­ance. The pre­dictable reg­u­lar­i­ties don’t dis­ap­pear from the time-stream of prices, if there is a mis­pric­ing at 2:31 PM on Thurs­day it is frozen there in the per­ma­nent record. What changes is how long it takes for the record to close those var­i­ous gaps. Maybe be­fore com­put­ers a broad class of in­effi­cient prices were never traded away. Maybe in the 1980s a broad class of in­effi­cien­cies were cap­i­tal­ized upon by peo­ple with com­put­ers over the course of a two week pe­riod. Maybe by the 2000s those same in­effi­cien­cies were traded away within hours or min­utes.

But my points are: 1) we are not ar­gu­ing effi­ciency vs in­effi­ciency, we are ar­gu­ing too effi­cient to beat vs nearly too effi­cient to beat and 2) with­out the in­effi­cien­cies, no one would be there to pay the ac­tors mak­ing the mar­ket more effi­cient by trad­ing the in­effi­cien­cies, and that no, it is not their stu­pidity that keeps them work­ing for free.

I hope this is what you wanted when you sug­gested I was ig­nor­ing your point and merely ar­gu­ing pro hominem, cit­ing peo­ple who I thought should be much more be­liev­able than I am. If I missed any­thing that still seems crit­i­cal, flag it to me and I’ll an­swer it.

• I’d rather use a more pow­er­ful EMH-fo­cused defi­ni­tion: a bub­ble is large price in­crease which rep­re­sents an in­effi­ciency in the mar­ket which is pre­dictable in ad­vance (not in hind­sight), ex­ploitable, and worth ex­ploit­ing.

I’m happy with that defi­ni­tion. EMH (Effi­cient Mar­ket Hy­poth­e­sis) for those of you fol­low­ing along at home.

In my case I had amassed a small for­tune by Oc­to­ber of 1999 by sim­ply hold­ing the stock op­tions I had been granted on tak­ing the job 4 years ear­lier. They were up more than 10X at that point. Ac­tion­able? My very in­tel­li­gent col­lege room­mate owned his own fi­nan­cial ad­vis­ing firm. He spent two weeks on the phone with me con­vinc­ing me that it would be gi­gan­ti­cally more sen­si­ble to cash out these op­tions and give them to him to in­vest “in case, in the fu­ture, peo­ple get up in the morn­ing, put their clothes on, and go out­side in­stead of sit­ting in front of their PCs all day or­der­ing stuff off the in­ter­net.” He sent me books to read in­clud­ing this one first pub­lished in 1841. This de­scribes witch hunts as well as South Sea, Tulip and other fi­nan­cial bub­bles. Jim, my room­mate, had been refer­ring to tech as a bub­ble for a year or two be­fore I talked to him in Oc­to­ber of 1999. The ac­tion he was tak­ing with his other clients was to sim­ply not get in to tech. This was a hor­ribly un­satis­fy­ing strat­egy un­til about the mid­dle of 2000 when tech was well into its slide from the top.

By the time I cashed out and handed him the money in about December 1999, the stock had more than doubled again. The human in me wanted to hold on to it because, obviously, this was a stock which kept on doubling. He explained to the rationalist in me that whatever the case was for investing that money in something else at half the price, the case was TWICE as good at twice the price, unless we had learned something quite important and positive about the business in the last two months. Which we hadn’t, of course. What we had learned is that there was no shortage of “greater fools” willing to buy in AFTER all that price appreciation had already happened, on old information that was not changing nearly as fast as the price.

Over the next three years the stock I had sold in De­cem­ber 1999 gave back about 75% of its price gains. Mean­while, my friend in­vested my money in REITs, Berk­shire Hath­away, banks, and a bunch of other as­set classes not even dreamed about by most of my fel­low techies. The money I had given him grew by 40% more or less, I don’t re­mem­ber ex­actly, while the nearly half of my origi­nal stock grant I had kept in my em­ploy­ers stock con­tracted to 20% of its peak value.

So yes, to me the internet bubble appears to have been actionable before it burst. The “investors” who stayed with the bubble lost most of their gains; myself included, with what started out as nearly half of my fortune ending up as about a tenth of it. The shift of 60% of my money out of the bubble preserved my wealth at a level that may well have been unique among my peers at this company.
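As a rough sanity check on the story’s arithmetic (a sketch using the approximate round figures quoted above, not exact account values):

```python
# ~60% of the position was cashed out near the peak and grew ~40%;
# the ~40% left in the employer's stock fell to ~20% of its peak value.
moved_frac, kept_frac = 0.60, 0.40
diversified_growth = 1.40   # "grew by 40% more or less"
tech_residual = 0.20        # "contracted to 20% of its peak value"

final_vs_peak = moved_frac * diversified_growth + kept_frac * tech_residual
print(round(final_vs_peak, 2))  # 0.92
```

On these round numbers the blended outcome lands near 92% of the peak, versus 20% for holding everything, consistent with “preserved my wealth.”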

I re­al­ize you can’t get a drug ap­proved with this kind of ev­i­dence. But you re­al­ize that most of what we “know” is the best model we can come up with in the ab­sence of dou­ble blind stud­ies. I’ve de­tailed the one best ex­am­ple in my life. I agree it is HARD to act on bub­bles, short­ing them is scary and fraught with risk, you are bet­ting you can stay solvent longer than the mar­ket can stay stupid, which is quite a bet in­deed. So bub­bles, so spec­tac­u­larly ob­vi­ous in ret­ro­spect, may be no more re­li­ably use­ful for mak­ing money than is any mis­pric­ing, even smaller more tem­po­rary ones.

Out of curiosity, are you enough of an EMH’er that you don’t believe in mispricings? Or at least not in publicly traded financial securities markets? Do you think it is just a roll of the dice that 9 students of Ben Graham all ran funds which had long-term returns above market averages? I think a bubble is just a particular kind of mispricing, a particular kind of inefficiency. It may be no easier to exploit than the other kinds of mispricings, but it is probably not harder to exploit. And shorting is not the only way to exploit bubbles or mispricings: just sticking with a discipline which on average avoids them appears to work for a broad range of investors, including such low-entropy categories of investors as former students of one professor who espoused value investing.

• Ac­tion­able? My very in­tel­li­gent col­lege room­mate owned his own fi­nan­cial ad­vis­ing firm. He spent two weeks on the phone with me con­vinc­ing me that it would be gi­gan­ti­cally more sen­si­ble to cash out these op­tions and give them to him to in­vest “in case, in the fu­ture, peo­ple get up in the morn­ing, put their clothes on, and go out­side in­stead of sit­ting in front of their PCs all day or­der­ing stuff off the in­ter­net.”

This ac­tion­able ad­vice is also 100% jus­tifi­able with­out re­course to claims of su­pe­rior per­cep­tion sim­ply by the high value of di­ver­sifi­ca­tion. Keep­ing a large sum of money in a sin­gle stock’s op­tions is re­ally risky, even if you think it’s +EV, and even if you think some EMH con­di­tions don’t ap­ply (you had in­sider knowl­edge the mar­ket didn’t, the mar­ket was not deep or liquid, you had spe­cial cir­cum­stances, etc). Same rea­son I keep tel­ling kiba to cash out some of his bit­coins and di­ver­sify—I am bullish on Bit­coin, but he should not keep so much of his net worth in a sin­gle volatile & risky as­set.
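One textbook way to put numbers on this diversification point (my illustration, not from the thread; the inputs are hypothetical, and the g ≈ μ − σ²/2 volatility-drag formula is the standard lognormal approximation):

```python
# Two positions with the same arithmetic expected return can have very
# different long-run compound growth: volatility drags the geometric
# return down by roughly sigma^2 / 2 (lognormal approximation).

def approx_geometric_return(mu, sigma):
    """Approximate long-run compounded growth for arithmetic mean `mu`
    and annual volatility `sigma`."""
    return mu - sigma ** 2 / 2

# Both hypothetical positions have a 10% arithmetic expected return...
print(approx_geometric_return(0.10, 0.15))  # diversified portfolio: ~0.089
print(approx_geometric_return(0.10, 0.60))  # single volatile stock: negative
```

So a concentrated position can be +EV in the arithmetic sense and still be expected to compound to less than the diversified alternative, which is the sense in which holding one stock’s options is risky even if you are right about it.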

He sent me books to read in­clud­ing this one first pub­lished in 1841.

MacKay is not the most re­li­able au­thor­ity on these mat­ters, you know. The book I men­tion punc­tures a few of the myths MacKay ped­dles.

Jim, my room­mate, had been refer­ring to tech as a bub­ble for a year or two be­fore I talked to him in Oc­to­ber of 1999. The ac­tion he was tak­ing with his other clients was to sim­ply not get in to tech. This was a hor­ribly un­satis­fy­ing strat­egy un­til about the mid­dle of 2000 when tech was well into its slide from the top.

An anec­dote, as you well re­al­ize. You re­call the hits and for­get the misses. How many other bub­bles did Jim call over the years? Did his clients on net out­perform in­dices?

Mean­while, my friend in­vested my money in REITs, Berk­shire Hath­away, banks, and a bunch of other as­set classes not even dreamed about by most of my fel­low techies. The money I had given him grew by 40% more or less, I don’t re­mem­ber exactly

And would have grown by how much if they had been in REITs in 2008?

I agree it is HARD to act on bub­bles, short­ing them is scary and fraught with risk, you are bet­ting you can stay solvent longer than the mar­ket can stay stupid, which is quite a bet in­deed.

It’s not just that you’re bet­ting that you can stay solvent longer, you’re bet­ting that you have cor­rectly spot­ted a bub­ble. There was a guy on the Bit­coin fo­rums who en­tered into a short con­tract tar­get­ing Bit­coin at \$30. Last I heard, he was up­side-down by \$100k and it was as­sumed he would not be pay­ing out.

Do you think it is just a roll of the dice that 9 stu­dents of Ben Gra­ham all ran funds which had long term re­turns above mar­ket av­er­ages?

As a mat­ter of fact, some­one a while ago emailed me that to try to ar­gue that EMH was false. This is what I said to them:

A cute story from long ago, but methinks the lady doth protest too much: he may say he has not cherrypicked them, but that’s not true. The insidious thing about data mining and multiple comparisons is that there’s nothing false about the results; if you slice the data such-and-such a way, you will get their claimed result. And even if there are no other employees or contractors or students quietly omitted and we take everything at face value, he hasn’t shown that they aren’t among the coin-flipping orangutans given a loaded coin producing 7% returns a year. Why aren’t his former coworkers giving away dozens of billions of dollars? If they were beating 7% like he says they were, they should (in 2012, 28 years later) be sitting on immense fortunes. Buffett himself seems, these days, to generate a lot of Berkshire profit just from being so big and liquid, in selling all sorts of insurance and making huge purchases like his recent railway purchase.

I don’t think that even be­gins to over­turn effi­cient mar­kets, sorry.
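The multiple-comparisons worry in that email can be made concrete with a toy calculation (illustrative numbers only; nothing here is estimated from Buffett’s actual record): with enough managers, long market-beating streaks are near-certain to occur by luck alone.

```python
# Probability that at least one of n_managers beats the market every year
# for n_years purely by chance, if each year is an independent coin flip.

def p_some_lucky_streak(n_managers, n_years, p_beat=0.5):
    p_streak = p_beat ** n_years             # one manager's full streak
    return 1 - (1 - p_streak) ** n_managers  # at least one such manager

# A single manager's 10-year streak looks like strong evidence of skill...
print(p_some_lucky_streak(1, 10))       # ~0.001
# ...but across 10,000 managers, such streaks appear by luck alone.
print(p_some_lucky_streak(10_000, 10))  # ~1.0
```

This is why the argument keeps coming back to complete track records rather than isolated winners: the winners you hear about were selected after the fact.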

Speaking of Buffett’s magical returns, I found http://www.prospectmagazine.co.uk/economics/secrets-of-warren-buffett/ interesting although I’m not competent to evaluate the research claims.

Out of curiosity, are you enough of an EMH’er that you don’t believe in mispricings? Or at least not in publicly traded financial securities markets?

Pretty much. I be­lieve in in­effi­cien­cies in small or niche mar­kets like Bit­coin or pre­dic­tion mar­kets, but in big bonds or stocks? No way.

It may be no eas­ier to ex­ploit than the other kinds of mis­pric­ings, but it is prob­a­bly not harder to ex­ploit.

I have watched countless peo­ple, from Paul­son to Spitz­nagel to Dr Doom to Thiel, lose billions or sell their com­pa­nies or get out of fi­nance due to failed bets they made on ‘ob­vi­ous’ pre­dic­tions like hy­per­in­fla­tion and ‘bub­bles’ in US Trea­suries since that hous­ing bub­ble which they sup­pos­edly called based on their su­pe­rior ra­tio­nal­ity & in­vest­ing skills. It cer­tainly seems like it’s harder to ex­ploit. As I said, when you look at com­plete track records and not iso­lated ex­am­ples—do they look like luck & se­lec­tion effects, or skill & sus­tained in­effi­cien­cies?

• Speaking of Buffett's magical returns, I found http://www.prospectmagazine.co.uk/economics/secrets-of-warren-buffett/ interesting although I'm not competent to evaluate the research claims.

I heartily endorse this analysis. I would actually recommend the original paper rather than the review of that paper cited by gwern.

At no point that I could find in this paper did the authors need to appeal to luck or random outlier quality to explain Buffett's performance. Indeed (albeit decades after the fact), they explain it quantitatively in fairly simple terms: picking stocks that, the authors say, systematically outperform the market; sticking with that method in good times and bad for his portfolio and for the market as a whole; and using a moderate amount of leverage, which they estimate at about 1.6-to-1.

Not rocket sci­ence, not snake oil, and not a long se­quence of lucky coin-flips.
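For intuition on what a 1.6-to-1 leverage estimate means, here is a minimal sketch. Only the 1.6 figure is from the paper; the 10% stock return and 3% funding cost below are hypothetical round numbers, not the paper's estimates.

```python
# Hypothetical illustration of how modest leverage amplifies an edge.
# Only the 1.6x leverage figure comes from the paper; the 10% stock
# return and 3% funding cost are made-up round numbers.

def levered_return(base_return, leverage, borrow_cost):
    """Return on equity at `leverage`-to-1: earn base_return on the
    whole levered position, pay borrow_cost on the borrowed part."""
    return leverage * base_return - (leverage - 1) * borrow_cost

print(levered_return(0.10, 1.6, 0.03))  # a 10% edge becomes ~14.2%
```

Compounded over decades, that extra few percent per year is a large part of the gap between a good stock-picker and Buffett-sized fortunes.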

• Be­cause ‘Seek and ye shall find’ is not always true. If a re­source be­comes so scarce as to crip­ple the func­tion­ing of so­ciety, it may be already too late to start search­ing for an al­ter­na­tive.

It is best to have fore­sight and start in­ves­ti­gat­ing solu­tions to prob­lems be­fore they be­come, you know, ac­tual prob­lems. It’s ir­re­spon­si­ble to do oth­er­wise. As mwen­gler pointed out, mar­kets can’t see that far into the fu­ture.

• It is best to have foresight

It is, of course, best. The prob­lem is that we do not have it. At least for the last few cen­turies hu­mans have shown re­mark­able lack of abil­ity to pre­dict where the twists and turns of tech­nol­ogy will lead.

To use the clas­sic ex­am­ple,

In 50 years, ev­ery street in Lon­don will be buried un­der nine feet of ma­nure. -- The Lon­don Times, 1894

• This isn’t about tech­nol­ogy though. It’s about re­sources. The Lon­don Times cor­rectly iden­ti­fied that if the pro­lifer­a­tion of horses were to con­tinue, the streets would be cov­ered by ma­nure. The solu­tion to this prob­lem was to stop us­ing horses, which is ex­actly what hap­pened, al­though of course not ev­ery­one was able to see it at that time.

And that’s pre­cisely the point. Peo­ple in­ves­ti­gated al­ter­na­tive means of trans­porta­tion while they still had the lux­ury of do­ing so. Not be­cause of fear of ma­nure, of course, but be­cause they re­al­ized that horses were non-ideal in many other ways. Imag­ine what would have hap­pened if the streets had been cov­ered by nine feet of ma­nure, and only then peo­ple started think­ing about ways of get­ting out of that mess (liter­ally).

• Nor­mal mar­ket mechanisms: Imag­ine a world near­ing ma­te­rial limits, and a pop­u­la­tion where each in­di­vi­d­ual owns the same frac­tion of those ma­te­ri­als. If half the pop­u­la­tion has one child per cou­ple and the other half has four chil­dren per cou­ple, the sup­ply/​de­mand changes drive the price of hu­man cap­i­tal in terms of ma­te­ri­als way down, the first half of the pop­u­la­tion sees their already-larger in­her­i­tances grow fur­ther in value, and the lat­ter half of the pop­u­la­tion finds them­selves un­able to af­ford a sec­ond gen­er­a­tion. Near ma­te­rial limits there is an in­cen­tive to limit your own re­pro­duc­tion.

Nor­mal poli­ti­cal mechanisms: When half of our pop­u­la­tion has one child per cou­ple and the other half has four chil­dren per cou­ple, the first half of the pop­u­la­tion is out­voted four-to-one, and the in­her­i­tances and their gains get re­dis­tributed to sup­port the poor ma­jor­ity, un­til ma­te­rial to re­dis­tribute runs out any­way. Although the so­ciety as a whole has an in­cen­tive to limit its re­pro­duc­tion, this in­cen­tive takes the form of a billion-way Pri­soner’s Dilemma, and each mem­ber of the so­ciety has a strong in­cen­tive to defect.

(There are strong prob­lems with the above rea­son­ing. For in­stance the dis­cus­sion of “in­cen­tive” is from the per­spec­tive of genes ra­tio­nally try­ing to max­i­mize their pop­u­la­tion, which is a du­bi­ous de­scrip­tion of hu­man be­hav­ior at best. But most “mar­ket in­cen­tives will fix it” ar­gu­ments as­sume ra­tio­nal re­ac­tions to in­cen­tives to be­gin with, so if those aren’t pre­sent any­way then Q.E.D.)

• lat­ter half of the pop­u­la­tion finds them­selves un­able to af­ford a sec­ond generation

The tech­ni­cal term for that is “starv­ing to death”, so let’s call it what it is. Don’t worry, I won’t judge you—I’m a prag­ma­tist deeply skep­ti­cal of pre­scrip­tive moral­ity. I re­spect some­one who frankly doesn’t give a damn about the less for­tu­nate more than I re­spect some­one who pre­tends to give a damn. From a prag­matic point of view, though, that starv­ing half will re­sort to the other stan­dard re­sponse to scarcity: vi­o­lence.

Which brings us to...

Nor­mal poli­ti­cal mechanisms

Do we live in a world where nor­mal poli­ti­cal mechanisms op­er­ate? Uh-oh, we do. Does the im­pact of poli­ti­cal mechanisms out­weigh the op­po­site im­pact of mar­ket mechanisms to limit fam­ily size? Uh-oh, we don’t know.

So, if you want to continue believing that the markets will prevent overpopulation without any specific person thinking or doing anything about it, it's your turn to come up with estimates of the net effect government has on technological progress and population growth, and the net effect a pure free market would have on both.

• Imag­ine a world near­ing ma­te­rial limits, and a pop­u­la­tion where each in­di­vi­d­ual owns the same frac­tion of those ma­te­ri­als.

Can’t we imag­ine some­thing ei­ther more use­ful or more fun? This doesn’t re­sem­ble re­al­ity at all so I don’t see the point.

I’ll freely con­cede that you can imag­ine a world where the mar­kets won’t work at all—so what?

But most “mar­ket in­cen­tives will fix it” ar­gu­ments as­sume ra­tio­nal re­ac­tions to in­cen­tives to be­gin with

No they don’t. Th­ese ar­gu­ments point to the em­pirics of hu­man his­tory. Hu­mans are not ra­tio­nal and yet mar­kets work (again, em­piri­cally) re­mark­ably well.

• I was in­tend­ing to offer a de­liber­ately over­sim­plified ex­am­ple to illus­trate a much more gen­eral point.

If your physics pro­fes­sor talks about conic sec­tion or­bits, it doesn’t mean that he’s an idiot who thinks there are only two as­tro­nom­i­cal bod­ies in the uni­verse, that as­tro­nom­i­cal bod­ies are point masses, that gen­eral rel­a­tivity doesn’t ex­ist, that quan­tum me­chan­ics doesn’t ex­ist, etc.

(I am not sug­gest­ing that you are cur­rently en­rol­led in a physics class, but am again us­ing a sim­ple ex­am­ple to illus­trate a more gen­eral point.)

In both cases, do you see the point I was in­tend­ing? This isn’t a rhetor­i­cal ques­tion: I would be happy to try to “imag­ine some­thing more use­ful” if it’s ac­tu­ally nec­es­sary to com­mu­ni­cate, but I’m afraid I get the im­pres­sion that the failure to com­mu­ni­cate here is that you’re not try­ing to meet me halfway.

Your spe­cific point be­fore was not “em­piri­cally, mar­kets work, there­fore by in­duc­tion they will always con­tinue to work”, it was “prices cre­ate in­cen­tives to use less of scarce things”. I’m point­ing out that when things nec­es­sary to sur­vive and re­pro­duce be­come re­ally scarce, we em­piri­cally stop us­ing mar­kets and start us­ing poli­ti­cally-di­rected ra­tioning. Even at­tempts to trade tem­porar­ily scarce ne­ces­si­ties at non-short­age-in­duc­ing prices are vil­ified as “prof­i­teer­ing”. Do you dis­agree?

• do you see the point I was in­tend­ing? This isn’t a rhetor­i­cal question

No, I don’t. The point that hu­man so­cieties can and some­times do over­ride or sim­ply just ban the mar­kets is rather ob­vi­ous and I fail to see the rele­vance to the topic un­der dis­cus­sion.

it was “prices cre­ate in­cen­tives to use less of scarce things”

Not only. Notably prices cre­ate in­cen­tives to use sub­sti­tutes, as well as in­vent and pro­duce new and bet­ter sub­sti­tutes.

when things nec­es­sary to sur­vive and re­pro­duce be­come re­ally scarce, we em­piri­cally stop us­ing mar­kets and start us­ing poli­ti­cally-di­rected ra­tioning.

Some­times we do and some­times we don’t. All poli­ti­cally-di­rected ra­tioning is in­vari­ably ac­com­panied by a black mar­ket any­way. And I still don’t un­der­stand your point.

• Well, “I don’t un­der­stand your point” is a big im­prove­ment over “you’re just talk­ing about some­thing imag­i­nary”, so let’s start from here.

Let’s see what the re­main­ing in­fer­en­tial dis­tance might be com­posed of:

Is there even such a thing as “over­pop­u­la­tion”? I.e. is it even pos­si­ble for hu­mans to re­pro­duce faster than they can in­crease their effec­tive re­sources to sup­port the in­creased pop­u­la­tion? I’d say “yes”, but it’s start­ing to sound like your an­swer would be “no”.

If we were in an “over­pop­u­lated” world, what would the mar­ket solu­tion be?

What would ac­tu­ally hap­pen in that world when we tried to im­ple­ment the mar­ket solu­tion?

• is it even pos­si­ble for hu­mans to re­pro­duce faster than they can in­crease their effec­tive re­sources to sup­port the in­creased pop­u­la­tion?

Pos­si­ble, yes. But there are two fur­ther ques­tions: is that likely? and would re­source con­straints cause a “soft land­ing” for the global pop­u­la­tion or will there be a mas­sive crash to num­bers far be­low what the re­sources can sus­tain?

If we were in an “over­pop­u­lated” world, what would the mar­ket solu­tion be?

Make it more ex­pen­sive and less valuable to have chil­dren.

What would ac­tu­ally hap­pen in that world when we tried to im­ple­ment the mar­ket solu­tion?

No idea, de­pends on the par­tic­u­lars. Not to men­tion that the “mar­ket solu­tion” gen­er­ally doesn’t need to be im­ple­mented—all it needs is for the gov­ern­ment not to in­terfere.

• It looks like we’re closer than I feared—I’d agree with your first two an­swers, and “no idea” is hard to dis­agree with on the third. I’d have to also an­swer “no idea” to “is that likely?”, I’m afraid. If re­ally pressed for an an­swer I’d say it’s prob­a­bly not likely to hap­pen (sub 50%), but it’s likely enough (greater than 5%?) to be worth wor­ry­ing about, con­sid­er­ing the mag­ni­tude of the con­se­quences.

An­swer­ing your sec­ond ques­tion then only de­pends on a cou­ple is­sues:

First: is it pos­si­ble to “save” and “spend” wealth? I.e., can we turn long-term cap­i­tal into short-term con­sum­ables and vice-versa? I’d say the an­swer is “yes”, there are lots of ways we can di­vert re­sources be­tween lux­ury/​main­te­nance/​up­keep and im­me­di­ate sur­vival. This is usu­ally a good thing, since it means that we can ac­cu­mu­late sav­ings against dis­aster in a way that isn’t just push­ing ac­coun­tants’ num­bers around or shift­ing wealth be­tween de­mo­graph­ics… but it also opens up the pos­si­bil­ity of a mas­sive crash, in which it’s pos­si­ble to “eat our seed corn” and con­tinue to grow and sur­vive in an un­sus­tain­able way which can have sud­den dis­con­ti­nu­ities when the sav­ings start to run out.

Se­cond: what would ac­tu­ally hap­pen when we al­lowed the mar­ket solu­tion to oc­cur? (is that bet­ter lan­guage? you’re right that “tried to im­ple­ment” had some du­bi­ous con­no­ta­tions)

“No idea” is a good hon­est start, but it’s not hard to make a few ed­u­cated guesses. If poor kids are too nu­mer­ous for their par­ents and vol­un­tary char­ity to pay for, but there are still wealthy peo­ple around too, what hap­pens? We might ask for “the gov­ern­ment not to in­terfere”, but even if you can make a case for that be­ing the cor­rect de­fault nor­ma­tive ex­pec­ta­tion, is that truly your pos­i­tive ex­pec­ta­tion? Is this a world where gov­ern­ments typ­i­cally don’t in­terfere with mar­kets, and they won’t let some hun­gry kids stop them from stick­ing to those non-in­terfer­ence prin­ci­ples?
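The "eat our seed corn" dynamic from the first point can be sketched as a toy simulation. Every number below is an arbitrary illustrative assumption chosen to make the dynamic visible, not an estimate of anything real:

```python
# Toy "seed corn" model: a renewable flow supports 100 units of
# population; a depletable stock covers any shortfall. Eating the stock
# lets population overshoot the sustainable level, then crash when the
# stock runs out. All numbers are arbitrary illustrative choices.

def simulate(years=60, pop=50.0, stock=500.0, flow=100.0, growth=0.05):
    history = []
    for _ in range(years):
        shortfall = pop - flow
        if shortfall > 0:
            eaten = min(shortfall, stock)  # draw down the savings
            stock -= eaten
            pop = flow + eaten             # the unfed fraction is lost
        pop *= 1 + growth                  # survivors keep reproducing
        history.append(pop)
    return history

h = simulate()
print(round(max(h)))   # peak population, well above the sustainable 100
print(round(h[-1]))    # post-crash population, back near the flow level
```

The interesting feature is the discontinuity: the population keeps growing smoothly right up until the stock is exhausted, then snaps back to roughly the renewable-flow level in a single step.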

• can we turn long-term cap­i­tal into short-term con­sum­ables and vice-versa? I’d say the an­swer is “yes”

I am con­sid­er­ably more doubt­ful about that. A re­source short­age is about lack of par­tic­u­lar molecules or atoms (or, maybe, cheap enough en­ergy). Long-term cap­i­tal mostly ex­ists as fi­nan­cial in­stru­ments, land, build­ings, and such. As the old say­ing goes, you can’t eat money.

If poor kids are too nu­mer­ous for their par­ents and vol­un­tary char­ity to pay for, but there are still wealthy peo­ple around too, what hap­pens?

The usual. That’s the nor­mal state of be­ing for most of hu­man­ity’s his­tory. It’s hap­pen­ing right now—look at Africa. All his­tor­i­cal les­sons (about the com­par­a­tive util­ity of mar­kets vs di­rect gov­ern­ment in­ter­ven­tion) are fully ap­pli­ca­ble.

• Why don’t you think nor­mal mar­ket mechanisms (the more scarce re­source X is, the higher its price, the larger the in­cen­tives to use less, the more in­tense search for its re­place­ment) will han­dle the prob­lem?

Mar­kets are a mechanism of re­source al­lo­ca­tion. They can be quite effi­cient some­times, and fail spec­tac­u­larly some other times, but in any case they don’t cre­ate new re­sources out of thin air.

Even in our present era of relative abundance, there are many people who die of starvation, epidemic disease, and violent conflict. In a future with scarcer resources and a higher population, how are the markets going to handle a problem that they are unable to handle now?

• there are many peo­ple who die of star­va­tion, epi­demic dis­eases and vi­o­lent conflict

That’s a cute switcheroo.

The original issue was scarcity of resources leading to an "overshoot": population spiking past resource constraints and then crashing back hard. I said that the markets allocate resources pretty well and there doesn't seem to be an obvious reason why they would fail in this particular case.

No one claimed that the mar­kets will mag­i­cally solve “star­va­tion, epi­demic dis­eases and vi­o­lent con­flict”. It’s rather ob­vi­ous that they don’t—but that’s an en­tirely sep­a­rate dis­cus­sion.

• So, just to be clear, your position is that markets will prevent population growth from stopping in the foreseeable future, or that population will gracefully settle at the capacity level without violent oscillations?

• My po­si­tion is that there is no al­ter­na­tive that you can cred­ibly show to be bet­ter than the mar­kets in deal­ing with the is­sue of re­source scarcity.

Mar­kets do not “pre­vent pop­u­la­tion growth from stop­ping”. As to the grace­ful­ness of land­ing, it’s for the gym­nas­tics judges to es­ti­mate. By the way, I do not ex­pect the pop­u­la­tion to reach the “ca­pac­ity level” in the fore­see­able fu­ture.

• Here’s what I ex­pect some­one who se­ri­ously be­lieved that mar­kets will han­dle it would sound like:

“Wow, over­pop­u­la­tion is a threat? Clearly there are in­effi­cien­cies the rest of the mar­ket is too stupid to ex­ploit. Let’s see if I can get rich by figur­ing out where these in­effi­cien­cies are and how to ex­ploit them.”

Whereas “the mar­kets will han­dle it, pe­riod, full stop” is not a be­lief, it’s an ex­cuse.

• Here is what I sound like:

“Wow, over­pop­u­la­tion is a threat? I don’t be­lieve it. Show me.”

• My po­si­tion is that there is no al­ter­na­tive that you can cred­ibly show to be bet­ter than the mar­kets in deal­ing with the is­sue of re­source scarcity.

I don't think anybody (but the most extreme leftists) is proposing a Soviet-style centrally planned economy, but without regulation, market mechanisms alone don't necessarily deal well with resource scarcity. This has been both observed empirically and understood theoretically (tragedy of the commons, negative externalities, etc.).

By the way, I do not ex­pect the pop­u­la­tion to reach the “ca­pac­ity level” in the fore­see­able fu­ture.

Why not? Growth rate is already in de­cline, and AFAIK, most mod­els of world pop­u­la­tion growth pre­dict a peak in this cen­tury.

• with­out regulation

Well, mar­kets don’t ex­ist in a vac­uum, of course, they need a rea­son­able frame­work of law and or­der. Just to start with you need prop­erty rights and the abil­ity to en­force con­tracts.

tragedy of the com­mons, nega­tive ex­ter­nal­ities, etc.

That's a different thing that doesn't have much to do with the markets' ability to deal with resource scarcity.

You keep point­ing out that mar­kets are not Je­sus and they don’t au­tomag­i­cally solve all hu­man­ity’s prob­lems. Yes, yes, of course, but no one is ar­gu­ing that. We’re talk­ing about a fairly spe­cific prob­lem—deal­ing with re­source scarcity—and you keep on bring­ing up how mar­kets don’t solve vi­o­lence and pol­lu­tion...

Why not?

I ex­pect the pop­u­la­tion to reach a plateau and sta­bi­lize at some point. I do not ex­pect that plateau to be the ca­pac­ity level of the planet.

• deal­ing with re­source scarcity—and you keep on bring­ing up how mar­kets don’t solve vi­o­lence and pol­lu­tion...

Well, they should provide a constructive alternative to the former, and the latter is isomorphic to a scarcity of non-polluted air/water/land.

• Does “let the mar­ket han­dle it” ap­ply to ev­ery risk equally?

If not, what dis­t­in­guishes risks to which it ap­plies less? What do we do about those risks?

If it ap­plies equally to all risks, then ei­ther it’s pointless to talk about risks be­cause the mar­ket will han­dle them all the way we would like them to be han­dled, or it’s pointless to say that the mar­ket will han­dle them be­cause that’s already im­plied and the fact that we still con­sider them risks means we’re not com­pletely con­fi­dent that the mar­ket will han­dle them the way we would like.

• Does “let the mar­ket han­dle it” ap­ply to ev­ery risk equally?

Of course not. Don’t erect silly straw­men.

...the fact that we still con­sider them risks

Don’t erect silly straw­men. The mar­ket pro­vides no guaran­tees. There will be win­ners and losers. On oc­ca­sions the mar­ket will be spec­tac­u­larly wrong. So? If you have a prov­ably-bet­ter al­ter­na­tive let’s use that. Do you hap­pen to have one?

• If you have a prov­ably-bet­ter al­ter­na­tive let’s use that. Do you hap­pen to have one?

No, I wish.

This is the stage where I'm hoping to collaboratively identify what the relevant unknowns are and what bounds we can assign to them. The next stage is to brainstorm what solutions might work and see if they cluster into any particular regions of the solution space. Also, to break them up by scale: individual, local, national, global. Then come up with recommendations for actions one can take immediately to implement the first two (i.e. how to make the place where you live more likely to be a beacon of civilization). If some really smart/entrepreneurial LessWronger gets interested, possibly come up with individual/local actions that scale to national/global impact if they catch on with enough people.

This is a big prob­lem and I’m not un­der the illu­sion that it’s go­ing to be solved by this post. But we have to start some­where. And if the best prob­lem-solvers are ig­nor­ing this prob­lem be­cause the moral scolds and lud­dites have pissed all over it, maybe chang­ing that state of af­fairs is a good place to start.

• Are you claiming some­thing differ­ent from the clas­sic pop­u­la­tion-bomb limits-to-growth ar­gu­ments? Be­cause if you do not, there seems lit­tle rea­son to re­visit this well-tram­pled ter­ri­tory.

If I un­der­stand cor­rectly, af­ter forty years, the main pre­dic­tions stated in The Limits to Growth are still sub­stan­tially con­sis­tent with ob­served data.

• the main pre­dic­tions stated in The Limits to Growth are still sub­stan­tially con­sis­tent with ob­served data.

Can you throw some links in my di­rec­tion?

• Thank you.

I've looked at A Comparison of `The Limits to Growth` with Thirty Years of Reality, and I wasn't particularly impressed with how the predictions are faring. Sorry for the offhand dismissal; I don't have much interest in fisking that report...

• I've looked at A Comparison of The Limits to Growth with Thirty Years of Reality, and I wasn't particularly impressed with how the predictions are faring.

From the ab­stract: “The anal­y­sis shows that 30 years of his­tor­i­cal data com­pares fa­vor­ably with key fea­tures of a busi­ness-as-usual sce­nario called the “stan­dard run” sce­nario, which re­sults in col­lapse of the global sys­tem mid­way through the 21st Cen­tury. ”

• I haven’t read the ab­stract, I have skimmed through the pa­per it­self. As I said, I wasn’t par­tic­u­larly im­pressed.

• Have you watched this video, and does it change any of your views? Hans Rosling makes the claim that world population will top out at around 10 billion if we simply continue doing what we do now: educate people and give them access to birth control.

• It's not automatically given that zero-to-negative population growth among post-industrial societies will be sufficient to mitigate the fact that the amount of resources they use per capita can be an order of magnitude higher than in pre-service-economy societies.

Although I do agree that it does seem like get­ting ev­ery­one wealthy enough to snap out of high-birth-rate-mode as fast as pos­si­ble is prob­a­bly the best non-co­er­cive solu­tion for min­i­miz­ing this sort of risk. (Which is ac­tu­ally part of the rea­son I spec­u­late that effec­tive al­tru­ism might be bet­ter placed in global in­ter­ven­tion than in sci­ence re­search, though I re­main un­cer­tain)

• Yes, I have. In my opinion ten billion is too close to over­shoot and even 7 billion is too close. Espe­cially if it is ac­com­panied by in­creased per-cap­ita de­mand for re­sources, which it has been so far. If we’re go­ing to rely mainly on the pop­u­la­tion term of the equa­tion, I think we need to shrink down to about 4 billion be­fore we’re back in the safe zone.

• Yes, I have. In my opinion ten billion is too close to over­shoot and even 7 billion is too close.

Why? We could just halve our resource consumption.

There's a good reason why magic numbers aren't popular among rationalists. Reducing a complex system with multiple variables to single numbers doesn't help you understand it.

• Why? We could just halve our resource consumption.

If you be­lieve we can freely choose to do so on a global ba­sis as a pre­ven­ta­tive mea­sure, you are far more of an op­ti­mist than I am.

If you be­lieve that things will get bad enough that we will be forced to do so, you might be more of a pes­simist than I am.

There's a good reason why magic numbers aren't popular among rationalists. Reducing a complex system with multiple variables to single numbers doesn't help you understand it.

Yes, if you're tempted to use magic numbers you should just use unknowns with clearly stated support ranges and get a general result. I would rather have this discussion at the level of "let f(m,t) be the fraction of earth's maximum capacity 'm' we can exploit at technology level 't', let k(x) be the technology level at year 'x', and let p(x) be population at year 'x'. What properties must f(m,t), k(x), and p(x) have to ensure that p(x) - f(m,k(x)) > 0 for all x > today()?"

I’m plug­ging in magic num­bers be­cause oth­er­wise I’ll be mi­s­un­der­stood even worse. Maybe I’m wrong about that.
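To make the f/k/p framing concrete, here is a minimal sketch with placeholder functional forms. Every curve and constant below is an illustrative assumption, not a forecast:

```python
# Minimal sketch of the formal question. Every functional form and
# constant here is an illustrative assumption:
#   f(m, t): population supportable at tech level t, approaching limit m
#   k(x):    technology level in year x
#   p(x):    population in year x
# A crunch occurs in the first year x with p(x) - f(m, k(x)) > 0.

M = 16e9                       # assumed hard carrying-capacity limit

def k(x):
    return 1.01 ** x           # assumed: tech improves 1% per year

def f(m, t):
    return m * (1 - 0.5 / t)   # assumed: tech asymptotically unlocks m

def p(x):
    return 7e9 * 1.008 ** x    # assumed: population grows 0.8% per year

def first_crunch_year(horizon=1000):
    for x in range(horizon):
        if p(x) - f(M, k(x)) > 0:
            return x
    return None                # no crunch within the horizon

print(first_crunch_year())
```

Note the structural feature: with a finite hard limit M and any sustained positive population growth, the crunch condition is eventually met no matter how fast k(x) grows; faster technology only moves the date.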

• If you be­lieve we can freely choose to do so on a global ba­sis as a pre­ven­ta­tive mea­sure, you are far more of an op­ti­mist than I am.

If you be­lieve that things will get bad enough that we will be forced to do so, you might be more of a pes­simist than I am.

Compared to freely choosing to cut population numbers in half or even further, I think the problem of resource usage seems easier. It's still a hard problem.

But maybe we don't even have to cut energy consumption that much. Solar energy seems to get cheaper by 50% every 7 years. Batteries also seem to be improving steadily.
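Taking the 50%-every-7-years figure at face value (it is an assumption here, not a verified trend), the compounding is easy to check:

```python
# Cost of solar relative to today, assuming (as the comment does) that
# it halves every 7 years. The halving period is the only input.

def relative_cost(years, halving_period=7):
    return 0.5 ** (years / halving_period)

print(relative_cost(21))   # three halvings: 0.125
print(relative_cost(35))   # five halvings: 0.03125
```

If the trend held, solar in 35 years would cost about 3% of today's price, which is why small differences in the assumed halving period matter so much to these arguments.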

I’m plug­ging in magic num­bers be­cause oth­er­wise I’ll be mi­s­un­der­stood even worse. Maybe I’m wrong about that.

The problem with the magic numbers in this case is that the resulting theory doesn't tell us very much about the utility of reducing the human population by 5%. I wrote more about other issues in other posts.

• For the most part, my em­pha­sis is not on limit­ing pop­u­la­tion di­rectly. I do be­lieve that char­i­ta­ble efforts have the re­spon­si­bil­ity to miti­gate the risk of a de­mo­graphic trap in the ar­eas they serve. But I think get­ting any­body who mat­ters to listen is a lost cause.

My emphasis is on being conscious of the fact that the reason we're still alive and prospering is that we are continuously buying ourselves more time with technology, and on using this insight to motivate greater investment in research and development. This seems like an easier sell.

• I do be­lieve that char­i­ta­ble efforts have the re­spon­si­bil­ity to miti­gate the risk of a de­mo­graphic trap in the ar­eas they serve. But I think get­ting any­body who mat­ters to listen is a lost cause.

The Bill and Melinda Gates foun­da­tion ac­counts for a good share of char­ity spend­ing. They started by be­ing very fo­cused on the is­sue of re­duc­ing pop­u­la­tion.

They spent millions on the issue and have seen the empirical effects of their projects. To the extent that they no longer listen to the kind of arguments you are making, it is because they updated in the face of empirical evidence.

• So, cu­ri­ous: am I get­ting down­voted here be­cause I trig­gered your “ugh” field? Brought up some­thing you don’t like to think about?

Be­cause in some of my posts I’ve been kind of snippy, but I can’t find a sin­gle way in which I’m vi­o­lat­ing the rules of ra­tio­nal con­struc­tive dis­course in the above post.

This is not because I care about my score. It's because usually I understand what I did to earn an up-vote or down-vote. Here I'm genuinely curious what specific behavior you could possibly be trying to discourage. I mean, it couldn't possibly be simple disagreement with you, because this is LessWrong. So enlighten me: maybe it's a behavior I'll want to minimize too once I'm aware of it.

• You’re as­sert­ing a highly nonob­vi­ous re­sult (seven billion looks fine from here) as though it were ob­vi­ous fact.

• Thanks, fixed.

• Not sure if you were ad­dress­ing me par­tic­u­larly but in case you did, I didn’t down­vote you. I ac­tu­ally found your claim, that 4 billion is back in the safe zone, to be thought pro­vok­ing be­cause that idea is novel to me per­son­ally, so thanks for that, but I don’t have an opinion on it yet.

• Notice that, after doing my homework and seeing that estimates of carrying capacity fall in the range of 4-16 billion with a median of 10 billion, I revised my own estimate upward from 2 billion. Although being at carrying capacity doesn't sound particularly safe either, just safer.

• he can’t re­ally no­tice it since you ed­ited it away

• You make big claims with no backup.

• Treat­ing num­bers that peo­ple with ob­vi­ous bi­ases pul­led out of their ass as cred­ible. Se­ri­ously, look at the his­tory of car­ry­ing ca­pac­ity es­ti­mates, they’re always just above (or just be­low) what­ever the cur­rent pop­u­la­tion hap­pens to be.

• Right. What’s dis­turb­ing is that peo­ple who don’t share these bi­ases don’t re­spond with es­ti­mates of their own. They re­spond with “too neg­ligible to mat­ter”.

So, what would be a ra­tio­nal way to up­date based on both the de­tailed num­bers pro­vided by sources bi­ased to­ward be­liev­ing that over­pop­u­la­tion is a threat and on vague num­bers pro­vided by sources bi­ased against be­liev­ing that over­pop­u­la­tion is a threat?

What do you think the na­ture of each of these bi­ases might be? Per­haps that might shed some light on how to cor­rect for them.

By the way, how is this any differ­ent from half a cen­tury of pre­dic­tions that AI is just around the cor­ner?

• Hans Rosling makes the claim that world pop­u­la­tion will top out at around 10 billion, by sim­ply con­tin­u­ing to do what we do now, ed­u­cate peo­ple and let them have ac­cess to birth con­trol.

Malthus will be count­ing the ma­chines too.

Hu­man num­bers may de­cline dur­ing a memetic takeover, but ma­chine num­bers prob­a­bly won’t.

• The 10 billion top-out is a very near-term result. Based on the population distribution right now, a top at 10 billion followed by at least a little decline is baked in.

However, this says NOTHING about the population and its growth or shrinkage rate 100 years from now. The people whose decisions will shape that population haven't been born yet.

• So… a ma­jor­ity or at least a vo­cal plu­ral­ity of us be­lieve that tech­nol­ogy is not nec­es­sary for pre­vent­ing pop­u­la­tion from over­shoot­ing the planet’s car­ry­ing ca­pac­ity?

Or are you so vehemently opposed to the very concept of limiting conditions that it discredits any argument it is part of, regardless of the rest of the argument?

• Malthu­sian Crunch: Not Ad­ja­cent to This Com­plete Break­fast.

• It seems like these dis­cus­sions, even when they use biolog­i­cal ter­minol­ogy like “car­ry­ing ca­pac­ity”, never seem to take biol­ogy into ac­count as any­thing but a static force.

Malthus as­sumed that agri­cul­ture only in­creased pro­duc­tion ar­ith­meti­cally, some­thing that the Green Revolu­tion dis­proves as it con­tinues to in­crease crop yields and the per­centage of arable land wor­ld­wide much faster than our pop­u­la­tion has grown. And it’s not ex­actly like we were in dan­ger of hit­ting our up­per limits be­fore; even in the US you can see over­grown fal­low fields with just a short drive out from a city (our tax dol­lars at work, cour­tesy of gen­er­ous farm sub­sidies meant to keep food ex­pen­sive), while most of the world’s farm­ers have been so thor­oughly out-com­peted by food aid that they can­not af­ford the tech­nol­ogy to use their fields effi­ciently. Even the fresh wa­ter crisis is only a tem­po­rary prob­lem; we are even now de­vel­op­ing plants and ir­ri­ga­tion meth­ods which can use salty and even con­tam­i­nated wa­ter as effec­tively as old fresh­wa­ter ir­ri­ga­tion ever did.

Agriculture provides food, raw materials (even plastics), and energy; with modern technology it is almost completely renewable, and our agricultural capability is expanding far faster than our population is. Obviously there are hard limits on how many people the earth can support, but that is a theoretical discussion on the level of how long we have until the sun collapses or the heat death of the universe occurs. The politics of population reduction are not, and never have been, about resource preservation.

• “the Green Revolu­tion dis­proves”

“the tech­nol­ogy to use their fields effi­ciently”

“de­vel­op­ing plants and ir­ri­ga­tion meth­ods”

“with mod­ern tech­nol­ogy it is al­most com­pletely re­new­able”

This illustrates precisely what I'm trying to say. The reason we haven't experienced a Malthusian Crunch is not that the concept itself is impossible or absurd, but that we develop new technologies fast enough to continually postpone it.

This has some implications:

• If technological development is derailed by cultural backlash, prolonged recession, or political lunacy, we may find ourselves having to cope with population overshoot on top of whatever the original problem was.

• Responsible global citizens need to defend and promote technological progress with every bit of the same zeal they currently have for the natural environment.

• Extrapolations of continued technological progress based on past performance are inherently unreliable. So if our confidence that we need not worry about overshoot rests on extrapolations of extrapolations about technological progress, that confidence is itself unreliable, and we cannot afford complacency.
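The race between population p(x) and carrying capacity f(m, k(x)) described above can be sketched as a toy simulation. Every number here (growth rates, starting capacity, the year technology stalls) is an illustrative assumption, not an estimate:

```python
# Toy race between population p(x) and carrying capacity f(m, k(x)).
# All parameters are illustrative assumptions, not estimates.

def simulate(r=0.011, g=0.02, p0=7e9, K0=10e9, t_stall=50, years=200):
    """r: population growth/yr; g: capacity growth/yr while technology
    advances; t_stall: year technological progress is derailed."""
    p, K = p0, K0
    history = []
    for t in range(years):
        if t < t_stall:
            K *= 1 + g       # technology keeps expanding carrying capacity
        if p > K:
            p *= 0.75        # crude overshoot die-off once capacity is exceeded
        else:
            p *= 1 + r       # unconstrained growth below capacity
        history.append((t, p, K))
    return history

hist = simulate()
first_crunch = next((t for t, p, K in hist if p > K), None)
print("first year population exceeds capacity:", first_crunch)
```

With these made-up numbers the crunch arrives roughly a lifetime after technology stalls; if `t_stall` is pushed past the horizon (progress never derailed), it never arrives at all. That is the whole argument in miniature: the outcome hinges entirely on whether capacity growth keeps running.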

• Obviously there are hard limits on how many people the earth can support, but it is a theoretical discussion on the level of how long we have until the sun collapses or the heat death of the universe occurs.

I don't really think this is true. Exponential growth can put up some very high numbers very fast: at 2010 growth rates (roughly 1.1% per year), humanity would number in the quadrillions within about a millennium. In contrast, changes in the sun should not make Earth uninhabitable for millions of years at least.
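That arithmetic is easy to check. Assuming a 2010 population of about 6.9 billion and a growth rate of about 1.1% per year (both rough figures):

```python
import math

# Rough 2010 figures (assumptions): population ~6.9 billion, growth ~1.1%/yr.
p0 = 6.9e9
r = 0.011
quadrillion = 1e15

# Years x until p0 * (1 + r)**x reaches a quadrillion.
years = math.log(quadrillion / p0) / math.log(1 + r)
print(f"~{years:.0f} years to reach a quadrillion at {r:.1%}/yr")
```

About eleven centuries, which is still an eyeblink next to the many millions of years before the sun becomes a problem.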

• Yes, exactly.

Maybe the question to overpopulation deniers should be framed as:

• What upper and lower bounds do you place on the hard limit of how many humans the planet can support indefinitely?

• What upper and lower bounds do you place on the rate at which technological progress pushes the practically achievable limits toward the hard limits above?

• What upper and lower bounds do you place on future world population levels, given that the current number is 7 billion?

From this we can then derive at least a self-consistent probability that overpopulation deniers should assign to a Malthusian Crunch.
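One way to make that derivation concrete is a small Monte Carlo sketch: sample each quantity uniformly from the stated bounds and count how often population outruns achievable capacity. Every bound and starting value below is a placeholder the denier would fill in, not an estimate of mine:

```python
import random

# Monte Carlo over the three sets of bounds asked for above.
# All bounds are placeholders for illustration, not estimates.

def crunch_probability(n=20_000, horizon=500, seed=0):
    rng = random.Random(seed)
    crunches = 0
    for _ in range(n):
        hard_limit = rng.uniform(50e9, 1000e9)  # Q1: indefinitely supportable humans
        tech_rate = rng.uniform(0.0, 0.03)      # Q2: achievable-capacity growth/yr
        pop_rate = rng.uniform(0.0, 0.011)      # Q3: population growth/yr
        p, cap = 7e9, 10e9                      # current population, achievable capacity
        for _ in range(horizon):
            cap = min(cap * (1 + tech_rate), hard_limit)
            p *= 1 + pop_rate
            if p > cap:                         # p(x) > f(m, k(x)): a crunch
                crunches += 1
                break
    return crunches / n

print(f"self-consistent P(crunch within 500 years) = {crunch_probability():.2f}")
```

Whatever bounds a denier actually supplies, running them through something like this yields a probability they are committed to; a confident "it will never happen" requires the sampled crunch fraction to come out near zero.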

• I think the most effective means of population control, historically speaking, have been (in no particular order):

-Increased education (especially of females)

-Improved access to birth control

-Feminism and increased women's rights

-Creating a society where women are allowed and encouraged to work outside the home

-Improved economics; getting out of a third-world economic state is vital

-Lowered childhood mortality rates

-Longer life-spans in general

Top-down population controls (like China's) have much more severe side effects, and are probably less effective in the long run.

• You probably need to look back more than the last 50 years to get any kind of insight into the things that will affect human population over the next few hundred or thousand years.

• I'm talking about things we can do right now to deal with the potential of population growth.

Obviously, if we cure old age, start uploading ourselves to computers, genetically engineer ourselves into something different, or fundamentally change human reproduction technologically, we will be in a completely different situation and will need to come up with new solutions. But I'm not sure we can really plan for that until we actually see how it unfolds; and in any case, in any of those scenarios, we would be better able to deal with the consequences with a world population of 9 billion than with a world population of 12 billion.