Blind Spot: Malthusian Crunch

In an unrelated thread, one thing led to another and we got onto the subject of overpopulation and carrying capacity. I think this topic needs a post of its own.

TLDR mathy version:

let f(m, t) be the population that can be supported using the fraction m of Earth’s theoretical resource limit that we can exploit at technology level t

let t = k(x) be the technology level at year x

let p(x) be the population at year x

What conditions must the constant m and the functions f, k, and p satisfy in order to ensure that f(m, k(x)) − p(x) > 0 for all x > today()? What empirical data are relevant to estimating the probability that these conditions are all satisfied?
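As a toy illustration only, here is a minimal sketch of that condition under invented functional forms for f, k, and p. Every constant and growth rate below is an assumption chosen to make the mechanics concrete, not an estimate of anything real:

```python
# Toy check of the safety condition f(m, k(x)) - p(x) > 0 for future years x.
# All functional forms and parameter values are illustrative assumptions.

THEORETICAL_LIMIT = 1e10  # hypothetical max population at full exploitation, t = 1


def k(x):
    """Assumed technology level: 2% compound growth from year 2020."""
    return 1.02 ** (x - 2020)


def p(x):
    """Assumed population: 0.8% compound growth from 8 billion in 2020."""
    return 8e9 * 1.008 ** (x - 2020)


def f(m, t):
    """Assumed carrying capacity: exploitable fraction m of the resource
    limit, scaled linearly by technology level t."""
    return m * THEORETICAL_LIMIT * t


def first_breach(m, k, p, start=2020, horizon=200):
    """First year (if any) where population meets or exceeds capacity."""
    return next((x for x in range(start, start + horizon)
                 if f(m, k(x)) - p(x) <= 0), None)


print(first_breach(1.0, k, p))              # capacity outpaces population: None
print(first_breach(1.0, lambda x: 1.0, p))  # stagnant technology: 2049
```

The point of the sketch is only that the answer is driven by the *relative* growth rates of k(x) and p(x), not by the absolute resource limit.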

Long version:

Here I would like to explore the evidence for and against the possibility that the following assertions are true:

  1. Without human intervention, the carrying capacity of our environment (broadly defined1) is finite, while there are no *intrinsic* limits on population growth.

  2. Therefore, if the carrying capacity of our environment is not extended fast enough to outpace population growth, and/or population growth does not slow enough for carrying capacity to keep up, carrying capacity will eventually become the limit on population growth.

  3. Abundant data from zoology show that the mechanisms by which carrying capacity limits population growth include starvation, epidemics, and violent competition for resources. If the momentum of population growth carries it past the carrying capacity, an overshoot occurs: the population doesn’t just level off at a sustainable size but plummets drastically, sometimes to the point of extinction.

  4. The above three assertions imply that human intervention (expanding the carrying capacity of our environment in various ways and limiting our birth rates in various ways) is what we have to rely on to prevent the above scenario; let’s call it the Malthusian Crunch.

  5. Just as the Nazis have discredited eugenics, mainstream environmentalists have discredited (at least among rationalists) the concept of finite carrying capacity by giving it a cultish stigma. Moreover, solutions that rely on sweeping, heavy-handed regulation have received so much attention (perhaps because the chain of causality is easier to understand) that to many people they seem like the *only* solutions. Finding these solutions unpalatable, they instead reject the problem itself. And by they, I mean us.

  6. The alternative most environmentalists either ignore or outright oppose is deliberately trying to accelerate the rate of technological advancement to increase the “safety zone” between expansion of carrying capacity and population growth. Moreover, we are close to a level of technology that would allow us to start colonizing the rest of the solar system. Obviously any given niche within the solar system will have its own finite carrying capacity, but it will be many orders of magnitude higher than that of Earth alone. Expanding into those niches won’t prevent die-offs on Earth, but it will at least be a partial hedge against total extinction and a necessary step toward eventual expansion to other star systems.
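The overshoot-and-collapse dynamic in assertion 3 can be sketched with a standard delayed-logistic model, in which reproduction responds to resource levels with a lag, so growth momentum carries the population past carrying capacity K before it crashes back. The parameter values are illustrative, not calibrated to any real species:

```python
# Delayed logistic growth: the growth rate at step n depends on how crowded
# the environment was `lag` steps earlier, so the population overshoots K.
# All parameters are toy values for illustration.

def simulate(K=1000.0, r=0.5, lag=2, p0=100.0, steps=40):
    pop = [p0] * (lag + 1)  # flat initial history
    for n in range(lag, lag + steps):
        growth = 1 + r * (1 - pop[n - lag] / K)  # lagged crowding feedback
        pop.append(pop[n] * growth)
    return pop


trajectory = simulate()
peak = max(trajectory)
print(f"peak = {peak:.0f} (K = 1000), final = {trajectory[-1]:.0f}")
```

With these values the trajectory rises past K, then dips back below it in damped oscillations; larger r or longer lags produce deeper crashes, which is the qualitative pattern the zoological data show.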

Please note: I’m not proposing that the above assertions must be true, only that they have a high enough probability of being correct that they should be taken as seriously as, for example, grey goo:

Predictions about the dangers of nanotech made in the 1980s have shown no signs of coming true. Yet there is no known logical or physical reason why they can’t come true, so we don’t ignore the risk. We calibrate how much effort should be put into mitigating the risks of nanotechnology by asking what observations should make us update the likelihood we assign to a grey-goo scenario. We approach mitigation strategies from an engineering mindset rather than a political one.

Shouldn’t we hold ourselves to the same standard when discussing population growth and overshoot? Substitute in some other existential risks you take seriously. Which of them have an expectation2 of occurring before a Malthusian Crunch? Which of them have an expectation of occurring after?

Footnotes:

1: By carrying capacity, I mean finite resources such as easily extractable ores, water, air, EM spectrum, and land area. Certain very slowly replenishing resources such as fossil fuels and biodiversity also behave like finite resources on a human timescale. I also include non-finite resources that expand or replenish at a finite rate, such as useful plants and animals, potable water, arable land, and breathable air. Technology expands carrying capacity by allowing us to exploit all resources more efficiently (paperless offices, telecommuting, fuel efficiency), open up reserves that were previously not economically feasible to exploit (shale oil, methane clathrates, high-rise buildings, seasteading), and accelerate the renewal of non-finite resources (agriculture, land reclamation projects, toxic waste remediation, desalination plants).

2: This is a hard question. I’m not asking which catastrophe is the most likely to happen ever while holding everything else constant (the possible ones will be tied for 1 and the impossible ones will be tied for 0). I’m asking you to mentally (or physically) draw a set of survival curves, one for each catastrophe, with the x-axis representing time and the y-axis representing the fraction of Everett branches where that catastrophe has not yet occurred. Now, which curves are the upper bound on the curve representing the Malthusian Crunch, and which curves are the lower bound? This is how, in my opinion (as an aging researcher and biostatistician, for whatever that’s worth), you think about hazard functions, including those for existential hazards. Keep in mind that some hazard functions change over time because they are conditioned on other events or because they are cyclic in nature. This means that the thing most likely to wipe us out in the next 50 years is not necessarily the same as the thing most likely to wipe us out in the 50 years after that. I don’t have a formal answer for how to transform that into an optimal allocation of resources between mitigation efforts, but that would be the next step.
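The survival-curve comparison in this footnote can be sketched numerically: a survival curve is S(t) = exp(−cumulative hazard), so different hazard shapes produce curves that can cross. The hazard functions below are invented placeholders, not estimates of any real existential risk:

```python
import math

# Discrete approximation of S(t) = exp(-integral of the hazard up to t).
# All hazard rates below are made-up numbers for illustration.

def survival_curve(hazard, years=200, dt=1.0):
    s, curve = 1.0, []
    for i in range(int(years / dt)):
        s *= math.exp(-hazard(i * dt) * dt)  # survive one more interval
        curve.append(s)
    return curve


constant = survival_curve(lambda t: 0.002)             # flat hazard
increasing = survival_curve(lambda t: 0.00004 * t)     # hazard grows with time
cyclic = survival_curve(lambda t: 0.002 * (1 + math.sin(t / 10)))  # cyclic hazard

# A risk with low early hazard but a rising rate can overtake a constant-hazard
# risk later: the curves cross between year 50 and year 200.
print(constant[49] < increasing[49], constant[199] > increasing[199])  # True True
```

This is exactly why the ordering of catastrophes over the next 50 years need not match the ordering over the 50 years after that: the bounding relationships between curves can change wherever two curves cross.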