Where is my Flying Car?

Link post

Book review: Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall (aka Josh).

If you only read the first 3 chapters, you might imagine that this is the history of just one industry (or the mysterious lack of an industry).

But this book attributes the absence of that industry to a broad set of problems that are keeping us poor. He looks at the post-1970 slowdown in innovation that Cowen describes in The Great Stagnation[1]. The two books agree on many symptoms, but describe the causes differently: where Cowen says we ate the low-hanging fruit, Josh says it's due to someone "spraying paraquat on the low-hanging fruit".

The book is full of mostly good insights. It significantly changed my opinion of the Great Stagnation.

The book jumps back and forth between polemics about the Great Strangulation (with a bit too much outrage porn), and nerdy descriptions of engineering and piloting problems. I found those large shifts in tone somewhat disorienting—it's like the author can't decide whether he's an autistic youth eagerly describing his latest obsession, or an angry old man complaining about how the world is going to hell (I've met the author at Foresight conferences, and got similar but milder impressions there).

Josh's main explanation for the Great Strangulation is the rise of Green fundamentalism[2], but he also describes other cultural / political factors that seem related. But before looking at those, I'll look in some depth at three industries that exemplify the Great Strangulation.


The good old days of Science Fiction

The leading SF writers of the mid 20th century made predictions for today that looked somewhat close to what we got in many areas, with a big set of exceptions in the areas around transportation and space exploration.

The absence of flying cars is used as an argument against futurists' ability to predict technology. This can't be dismissed as just a minor error of some obscure forecasters. It was a widespread vision of leading technologists.

Josh provides a decent argument that we should treat that absence as a clue to why U.S. economic growth slowed in the 1970s, and why growth is still disappointing.

Were those SF writers clueless optimists, making mostly random forecasting errors? No! Josh shows that for the least energy-intensive technologies, their optimism was about right, and the more energy-intensive the technology was, the more reality let them down.

Is it just a coincidence that people started worshiping energy conservation around the start of the Great Stagnation? Josh says no: we developed ergophobia—not the standard meaning of ergophobia; Josh has redefined it to mean fear of using energy.

Did flying cars prove to be technically harder than expected?

The simple answer is: mostly no. The people who predicted flying cars knew a fair amount about the difficulty, and we may have forgotten more than we've learned since then.

Josh describes, in more detail than I wanted, a wide variety of plausible approaches to building flying cars. None of them clearly qualify as low-hanging fruit, but they also don't look farther from our grasp than flying machines did in 1900.

How serious were the technical obstacles?

Air traffic control

Before reading this book, I assumed that there were serious technical problems here. In hindsight, that looks dumb.

Josh calculates that there's room for a million non-pressurized aircraft in the air at one time, under current rules about distance between planes (assuming they're spread out evenly; it doesn't follow that all Tesla employees can land near their office at 9am). And he points out that seagull tornadoes (see this video) provide hints that current rules are many orders of magnitude away from any hard limits.

Regulators' fear of problems looks like an obstacle, but it's unclear whether anyone put much thought into solving them, and it doesn't look like the industry got far enough for this issue to be very important.

Skill

It seems unlikely that anywhere near as many people would learn to fly competently as have learned to drive. So this looks like a large obstacle for the average family, given 20th century technology.

But we didn't get close to the point where that was a large obstacle to further adoption. And 21st century technology is making progress toward convenient ways of connecting competent pilots with people who want to fly, except where it's actively discouraged.

Cost

If the economic growth of 1945-1970 had continued, we'd be approaching wealth levels where people on a UBI … oops, I mean on a national basic income could hope to afford an occasional ride in a flying Uber that comes to their door. At least if there were no political problems that drove up costs.

Weather

Weather will make flying cars a less predictable means of reaching a given destination than ground cars. That seems to explain a modest fraction of people's reluctance to buy flying cars, but it's at most a modest part of the puzzle.

Safety

The leading cause of death among active pilots is … motorcycle accidents.

I wasn't able to verify that, and other sources say that general aviation is roughly as dangerous as motorcycles. Motorcycles are dangerous enough that they'd likely be illegal if they hadn't been around before the Great Strangulation, so whether either of those is considered safe enough seems to depend on accidents of history.

People have irrational fears of risk, but there has also been a rational trend of people demanding more safety, because we can now afford more safety. I expect this is a moderate part of why early SF writers overestimated demand for flying cars.

The liability crisis seems to have hit general aviation harder than it hit most other industries. I'm still unclear why.

One of the more ironic regulatory pathologies that has shaped the world of general aviation is that most of the planes we fly are either 40 years old or homemade—and that we were forced into that position in the name of safety.

If the small aircraft industry hadn't mostly shut down, it's likely that new planes would have more safety features (airbags? whole-airplane parachutes?).

The flying car industry hit a number of speedbumps: WWII diverting talent and resources to other types of aviation, then a key entrepreneur being distracted by a patent dispute, and then liability lawsuits largely shutting the industry down. It seems like progress should have been a bit faster around 1950-1970; I'm confused as to whether the industry did well then.

At any rate, it looks like liability lawsuits were the industry's biggest problem, and they combined with a more hostile culture and expensive energy to stop progress around 1980.

The book shifted my opinion from "those SF writers were confused" to "flying cars should be roughly as widespread as motorcycles". We should be close to having autopilots that eliminate the need for human pilots (and the same for motorcycles?), and then I'd consider it somewhat reasonable for the average family to have a flying car.

Nuclear Power

Josh emphasizes the importance of cheap energy for things such as flying cars, space travel, eradicating poverty, etc., and identifies nuclear power as the main technology that should have made energy increasingly affordable. So it seems important to check his claims about what went wrong with nuclear power.

He cites a study by Peter Lang, with this strange learning curve:

It shows a trend of costs declining with experience, just like a normal industry where there's some competition and where consumers seem to care about price. Then that trend was replaced by a clear example of cost disease[3]. I've previously blogged about the value of learning curves (aka experience curve effects) in forecasting.
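The pattern such charts describe can be sketched in a few lines: under Wright's law, unit cost falls by a fixed fraction each time cumulative output doubles. The 80% progress ratio below is purely an illustrative assumption (not Lang's measured number), just to show the mechanics:

```python
# Wright's-law experience curve: cost(n) = c1 * n ** -b, where a "progress
# ratio" p means each doubling of cumulative output multiplies cost by p.
import math

def unit_cost(first_unit_cost: float, cumulative_units: float, progress_ratio: float) -> float:
    """Cost of the n-th unit under an experience curve with the given progress ratio."""
    b = -math.log2(progress_ratio)  # elasticity implied by the progress ratio
    return first_unit_cost * cumulative_units ** -b

# Illustrative 80% progress ratio: cost drops 20% per doubling of cumulative output.
for n in (1, 2, 4, 8):
    print(n, round(unit_cost(100.0, n, 0.80), 1))  # 100.0, 80.0, 64.0, 51.2
```

Cost disease, in these terms, is the curve bending upward: later units costing *more* than the fitted trend predicts, which is what the post-1970 nuclear data shows.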

This is pretty inconsistent with running out of low-hanging fruit, and is consistent with a broad class of political problems, including the hypothesis of hostile regulation, and also the hypothesis that nuclear markets were once competitive, then switched to having a good deal of monopoly power.

This is a pretty strong case that something avoidable went wrong, but it leaves a good deal of uncertainty about what went wrong, and Josh seemed a little too quick to jump to the obvious conclusion here, so I investigated further[4]. I couldn't find anyone arguing that nuclear power hit technical problems around 1970, but then it's hard to find many people who try to explain nuclear cost trends at all.

This book chapter suggests there was a shift from engineering decisions being mostly made by the companies doing the construction, to mostly being determined by regulators. Since regulators have little incentive to care about cost, the effect seems fairly similar to the industry becoming a monopoly. Cost disease seems fairly normal for monopolies.

That chapter also points out the effects of regulatory delays on costs: "The increase in total construction time … from 7 years in 1971 to 12 years in 1980 roughly doubled the final cost of plants."[5]

In sum, something went wrong with nuclear power. The problems look more political than technical. The resulting high cost of energy slowed economic progress by making some new technologies too expensive, and by diverting talent to energy conservation. And by protecting the fossil fuel industries, it caused millions of deaths, and maybe 174 Gt of unnecessary CO2 emissions (about 31% of all man-made CO2 emissions).

This book convinced me that I'd underestimated how important nuclear power could have been.

Nanotech

So the technology of the Second Atomic Age will be a confluence of two strongly synergistic atomic technologies: nanotech and nuclear.

The book has a chapter on the feasibility of Feynman / Drexler style nanotech, which attempts to find a compromise between Drexler's excruciatingly technical Nanosystems and his science-fiction style Engines of Creation. That compromise will convince a few people who weren't convinced by Drexler, but most people will either find it insufficiently technical, or else hard to follow because it requires a good deal of technical knowledge.

Josh explains some key parts of why the government didn't fund research into the Feynman / Drexler vision of nanotech: centralization and bureaucratization of research funding, plus the Machiavelli Effect:

  • the old order opposes change, and beneficiaries of change "do not readily believe in new things until they have had a long experience of them."

Josh describes the mainstream reaction to nanotech fairly well, but that's not the whole story.

Why didn't the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001. But my fear of a nanotech arms race exceeds my desire to use nanotech.

Many VCs would get confused by top academics who dismissed (straw-man versions of) Drexler's vision. But there are a few VCs, such as Steve Jurvetson, who understand Drexler's ideas well enough to not be confused by that smoke. With those VCs, the explanation is that no entrepreneurs tried a sufficiently incremental path.

Most approaches to nanotech require a long enough series of development steps to achieve a marketable product that VCs won't fund them. That's not a foolish mistake on the VCs' part—they have sensible reasons to think that some other company will get most of the rewards (how much did Xerox get from PARC's UI innovations?). Josh promotes an approach to nanotech that seems more likely to produce intermediate products which will sell. As far as I know, no entrepreneurs attempted to follow that path (maybe because it looked too long and slow?).

The patent system has been marketed as a solution to this kind of problem, but it seems designed for a hedgehog-like model of innovation, when what we ought to be incentivizing is a more fox-like innovation process.

Mostly there isn't a good system for funding technologies that take more than 5 years to generate products.

If government funding got this right during the golden age of SF, the hard questions should focus more on what went right then, than on what is wrong with funding now. But I'm guessing there was no golden age in which basic R&D got appropriate funding, except when we were lucky enough for popular opinion to support the technologies in question.

Problems with these three industries aren't enough to explain the stagnation, but Josh convinced me that the problems which affected these industries are more pervasive, affecting pretty much all energy-intensive technologies.

Culture and politics

Of all the great improvements in know-how expected by the classic science-fiction writers, competent government was the one we got the least.

I'll focus now on the underlying causes of stagnation.

Green fundamentalism and ergophobia are arguably sufficient to explain the hostility to nuclear power and aviation, but it's less clear how they explain the liability crisis or the stagnation in nanotech.

Josh also mentions a variety of other cultural currents, each of which explains some of the problems. I expect these are strongly overlapping effects, but I won't be surprised if they sound as disjointed as they did in the book.

It matters whether we fear an all-seeing god. From the book Big Gods: How Religion Transformed Cooperation and Conflict:

In a civilization where a belief in a Big God is effectively universal, there is a major advantage in the kind of things you can do collectively. In today's America, you can't be trusted to ride on an airliner with a nail file. How could you be trusted driving your own 1000-horsepower flying car? … The green religion, on the other hand, instead of enhancing people's innate conscience, tends to degrade it, in a phenomenon called "licensing." People who virtue-signal by buying organic products are more likely to cheat and steal.[6]

From Peter Turchin: when an empire becomes big enough to stop worrying about external threats to its existence, the cooperative "we're all in the same boat" spirit is replaced by a "winner take all" mentality.

the evolutionary pressures to what we consider moral behavior arise only in non-zero-sum interactions. In a dynamic, growing society, people can interact cooperatively and both come out ahead. In a static no-growth society, pressures toward morality and cooperation vanish;

Self-deception is less valuable on a frontier where you're struggling with nature than it is when most struggles involve social interaction, where self-deception makes virtue signaling easier.

"If your neighbor is Saving the Planet, it seems somehow less valuable merely to keep clean water running".

"Technologies that provoke antipathy and promote discord, such as social networks, are the order of the day; technologies that empower everyone but require a background of mutual trust and cooperation, such as flying cars, are considered amusing anachronisms."

Those were Josh's points. I'll add these thoughts:

It's likely that cultural changes led competent engineers to lose interest in working for regulatory agencies. I don't think Josh said that explicitly, but it seems to follow fairly naturally from what he does say.

Josh refers to Robin Hanson a fair amount, but doesn't mention Robin's suggestion that increasing wealth lets us return to forager values. "Big god" values are clearly farmer values.

Mancur Olson's The Rise and Decline of Nations (listed in the bibliography, without explanation) predicted in 1982 that special interests would be an increasing drag on growth in stable nations. His reasoning differs a fair amount from Josh's, but their conclusions sound fairly similar.

Josh often focuses on Greens as if they're a large part of the problem, but I'm inclined to focus more on the erosion of trust and cooperation, and treat the Greens more as a symptom.

The most destructive aspects of Green fundamentalism can be explained by special interests, such as coal companies and demagogues, who manipulate long-standing prejudices for new purposes. How much of the Great Strangulation was due to special interests such as coal companies? I don't know, but it looks like the coal industry would have died by 2000 (according to Peter Lang) if the pre-1970 trends in nuclear power had continued.

Green religious ideas explain hostility to energy-intensive technologies, but I have doubts about whether that would be translated into effective action. Greens could have caused cultural changes that shifted the best and the brightest away from wealth creation and toward litigation.

That attempt to attribute the stagnation mainly to Greens seems a bit weaker than the special-interests explanation. But I remain very uncertain about whether there's a single cause, or whether it took several independent errors to cause the stagnation.

What now? I don't see how we could just turn on a belief in a big god. The book says we'll likely prosper in spite of the problems discussed here, but it leaves me a bit gloomy about achieving our full potential.

The book could use a better way of labeling environmentalists who aren't Green fundamentalists. Josh clearly understands that there are big differences between Green fundamentalists and people with pragmatic motives for reducing pollution or preserving parks. Even when people adopt Green values mostly for signaling purposes, there are important differences between safe rituals, such as recycling, and signals that protect the coal industry.

Yet standard political terminology makes it sound like attacks on the Greens signal hostility to all of those groups. I wish Josh took more care to signal a narrower focus of hostility.

Ironically for a book that complains about virtue signaling, a fair amount of the book looks like virtue signaling. Maybe that gave him a license to ignore mundane things like publicizing the book (I couldn't find a mention of the book on his flying car blog until 3 months after it was published).

Has the act of writing this review licensed me to forget about being effective? I'm a bit worried.

Miscellaneous comments and complaints

It isn't perhaps realized just how much the war on cars contributed to the great stagnation—or how much flying cars could have helped prolong the boom.

Josh provides a good analysis of the benefits of near-universal car ownership, and why something similar should apply to flying cars. But he misses what I'll guess was the biggest benefit of cars—people applied for jobs for which they couldn't have previously managed to get to an interview. Company towns were significant in the 19th century—with downsides that bore some similarity to slavery, due to large obstacles to finding a job in another town. Better transportation and communications changed that.

He says "a century of climate change in the worst case might cost us as much as liability lawyers do now." He gets his estimate of the worst case from this GAO report. That's misleading about how we should evaluate the actual worst case. I'm not too clear how they got those numbers, but they likely mean something more like this: there's a 95% chance that, according to some model, climate change will do no more damage than lawyers. That still leaves plenty of room for the worst 1% of possible outcomes to be much worse than lawyers according to the model, and there's enough uncertainty in climate science that we should expect more than a 5% chance of the model erring on the optimistic side. Note also that it's not hard to find a somewhat respectable source that says climate change might cost over 20% of global GDP. I see other problems with his climate change comments, but they seem less important than his dismissal of the tail risks.

Josh reports that flying a plane causes him to think in far mode, much like our somewhat biased view of the future.

It's been a long time since I've flown a plane, but I don't recall that effect being significant. I find that a better way to achieve that experience is to hike up a mountain whose summit is above the clouds—although there are relatively few places with an appropriate mountain nearby, and where I live it takes somewhat special timing.

While researching this review, I found this weird litigation story: Disney Sued for Not Building Flying "Star Wars" Car.

I often tend to side with technological determinist views of history, but this book provides some evidence against that. Just compare Uber with "Uber for planes"—it looks like there's a good deal of luck involved in what progress gets allowed.

Josh illustrates the Machiavelli Effect with the example of expert advice that fat is unhealthy, and he complains that the experts ignore Gary Taubes' carbophobic counter-movement. Yet what I see is people on both sides of that debate focusing on interventions that are mostly irrelevant.

Josh points out that we can test the advice, and reports that he lost a good deal of weight after switching to a high-fat diet. Well, I tried a similar switch in 2012 from a low-fat diet to a high-fat diet, and it had no effect on my weight (and a terrible effect on my homocysteine and sdLDL, due to high saturated fat). The dietary changes that had the best effects on my weight were alternate-day calorie restriction, cutting out junk food (mainly via paleo heuristics), and eating less kelp (which was depressing my thyroid via excess iodine).

He cites Scott Alexander in other contexts, but apparently missed this post pointing out serious flaws in Taubes' claims. Note also that Taubes reacted poorly to evidence against his theory.

Miscellaneous questions prompted by the book

The book hints that cultural beliefs have important influences on where smart people apply their talents. This mostly seems hard to analyze. Would Elon Musk be swayed by ergophobia or Green fundamentalism? That seems like the main example I can generate of a competent tech leader whose plans seem somewhat influenced by popular beliefs about where technology should head. Tesla and SolarCity arguably fit a pattern of Musk being influenced by Green visions. But SpaceX looks more like pandering to the visions of ergophiles.

The book left me wondering: where does high modernism fit into this story? I see many similarities between high modernism and this book's notion of who the bad guys are. Yet high modernism started to crumble a bit before the worst parts of the Great Strangulation started (i.e. around 1970). The book hints at a semi-satisfying answer: Christianity and high modernism produced a decent balance of power where each ideology checked the others' excesses, but Green fundamentalism eroded the good aspects of high modernism while strengthening the worst aspects.

Did oil prices rise in the 1970s due to evidence that nuclear prices were rising? I can almost imagine OPEC being prescient enough to see that nuclear regulation saved them from important competition. The timing of OPEC's initial effects on the market seems to closely coincide with the nuclear industry developing cost disease. But I don't quite expect that OPEC leaders were that smart.

Another odd hypothesis: increasing mobility enabled people to move too easily to better jurisdictions. This scared lots of special interests (e.g. local governments, companies with a local monopoly, etc., whose power depended on captive customers), who reacted by advocating policies which reduced mobility (e.g. stifling transportation, encouraging home ownership instead of renting).

Quotes

I've only tried to summarize and analyze the more modest and basic parts of the book here. Some parts of the book are too strange for me to want to review. I will close with some quotes from them:

Hmmm. This might explain some of the book's peculiarities: "ideation recapitulates inebriation!".

"The human of the future will have more and better senses, be stronger and be adaptable to a much wider range of environments, and last but not least have the biosphere atom-rearranging capability built in. The human of the future need not have any ecological footprint at all."

His favorite form of renewable energy is nuclear: "In other words, if we start taking uranium out of seawater and use it for the entire world's energy economy, indeed a robustly growing energy economy, the concentration in seawater will not decline for literally millions of years."

"In the Second Atomic Age, Litvinenko would have gotten a text from his left kidney telling him that it had collected 26.5 micrograms of Polonium-210, and what would he like to do with it?"

He asks us not to call this a greenhouse: "The LEDs emit only the frequencies used by chlorophyll, so they are an apparently whimsical purple. The air is moist, warm, and has a significantly higher fraction of CO2 than natural air … the plants do not need pesticides because insects simply can't get to them. … you get something like 300 times as much lettuce per square foot of ground than the pre-industrial mule-and-plow dirt farmer. All you need is power, to have fresh local strawberries in January in the Yukon or in August in Antarctica."

And he likes tall buildings. I don't want to classify this comment:

A ten-mile tower might have a footprint of a square mile and could house 40 million people. Eight such buildings would house the entire current population of the United States, leaving 2,954,833 square miles of land available for organic lavender farms.

Compared to the skyhook (geostationary orbital tower), which is just barely possible even with the theoretical best material properties, a tower 100 km high is easy. Flawless diamond, with a compressive strength of 50 GPa, does not even need a taper at all for a 100 km tower; a 100-km column of diamond weighs 3.5 billion newtons per square meter but can support 50 billion. Even commercially available polycrystalline synthetic diamond with advertised strengths of 5 GPa would work.
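The column arithmetic in that quote is easy to check: the stress at the base of an untapered column is just density × g × height. A minimal sketch, where the 50 GPa strength and 100 km height come from the quote, and the ~3,510 kg/m³ density of diamond is my assumed value:

```python
# Self-weight stress at the base of an untapered 100 km diamond column,
# compared with the quoted ~50 GPa compressive strength.
density = 3510.0    # kg/m^3 -- assumed density of diamond
g = 9.81            # m/s^2
height = 100_000.0  # m (100 km, from the quote)

base_stress = density * g * height  # Pa, i.e. newtons per square meter of column
print(f"base stress: {base_stress / 1e9:.2f} GPa vs 50 GPa strength")
# ~3.44 GPa, matching the quote's "3.5 billion newtons per square meter";
# even the 5 GPa commercial polycrystalline material clears it.
```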

A Weather Machine could probably double global GDP simply by regional climate control. … You could make land in lots of places, such as Northern Canada and Russia, as valuable as California.

Um, don't forget the military implications, which might offset that.

I used to be sort of comfortable with Reynolds numbers and lift-to-drag ratios, but this claim seems to be beyond my pay grade:

Given the ridiculous wingspan and the virtually infinite Reynolds number, we might get a lift-to-drag ratio of 100; we would need 1 billion pounds of thrust.

He's interested in cold fusion, but admits it's hard:

But we would like a theory in which whenever some mechanism causes miracle 1 to happen, it almost always causes miracles 2 and 3. … It seems at first blush that saying there might be a quantum coupling between phonons and some nuclear degree of freedom is indistinguishable from magic. But if you look closely, it's not completely insane.

I'll take his word on that for now, since "look closely" appears to require way more physics than I'm up for.

Biotech gets approximately one paragraph, including: "Expect Astro the talking dog before 2062. Expect to live long enough to see him."

One of the hardest jobs that humans do, some well and some poorly, is management of other humans. One of the major reasons this is hard is that humans are selfish, unreasonable, fractious, and just plain ornery. … On the other hand, managing robots with human-level competence will be falling-down easy. In the next couple of decades, robots will be climbing up the levels of competence to compete with humans at one job and another. Until they become spectacularly better, though, I suspect that the major effect will be to make management easier—perhaps so easy a robot could do it! Once we build trustworthy IQ 200 machines, only an idiot will trust any human to make any decision that mattered …

What then are we humans supposed to do?

Don't look at me! We already know that only a fool would ask a human such an important question. Ask the machines.

when someone invents a method of turning a Nicaragua into a Norway, extracting only a 1% profit from the improvement, they will become rich beyond the dreams of avarice and the world will become a much better, happier place. Wise incorruptible robots may have something to do with it.

Footnotes

[1] - I haven't read The Great Stagnation, so I'm commenting based on simple summaries of it. Based on what I know of Cowen, the books are of superficially similar quality. Cowen does an unusual amount of broad but shallow research, whereas Josh is less predictable about his research quality, but his research is often much deeper than Cowen's. E.g. for this book, it included learning how to fly, and buying a plane. That research alone likely cost him more money than he'll make from the book (not to mention hundreds of hours of his time), and it's not the only way in which his research is surprisingly deep.

[2] - not in quite the same sense as what people who call themselves Green fundamentalists mean, but pretty close. Both sides seem to agree that a key issue is whether industrial growth is good or bad.

Some of what Josh dislikes about the worst Greens:

My own doubts came when DDT was introduced. In Guyana, within two years, it had almost eliminated malaria. So my chief quarrel with DDT, in hindsight, is that it has greatly added to the population problem.

[3] - at least in many countries. South Korea's nuclear costs have continued to decline. The variation in when cost disease hits suggests something other than engineering problems.

I was concerned about the lack of data from China. I couldn't find comparable Chinese data, so I used financial data from CGN Power Company (2011 data here, first half 2018 data here) to show, if my math is right, that CGN sold power at RMB 0.3695 ($0.0558) / kWh in 2011 versus RMB 0.2966 ($0.0448) / kWh in 2018, a decline of nearly 20%. I.e. no cost disease there.
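The "nearly 20%" figure is just the ratio of the two quoted tariffs; a quick check using the RMB/kWh numbers above:

```python
# Nominal decline in CGN's average power tariff, 2011 vs first-half 2018,
# using the RMB/kWh figures quoted in this footnote.
price_2011 = 0.3695  # RMB per kWh
price_2018 = 0.2966  # RMB per kWh

decline = (price_2011 - price_2018) / price_2011
print(f"nominal decline: {decline:.1%}")  # about 19.7%, i.e. "nearly 20%"
```

(Note this compares nominal RMB tariffs; Chinese inflation over 2011-2018 would make the real decline somewhat larger.)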

Note: I own stock in CGN Power.

Josh claims that the Navy's nuclear power program avoided strangulation. Where can I get data about the cost trends there?

[4] - I've looked for anti-nuke arguments about the cost of nuclear power, and most seem to assume that cost disease is inevitable. A few look for signs that nuclear power has been treated unfairly, and focus on things like subsidies or carbon taxes.

It seems quite plausible that they start with the assumption that most wealth is a gift from Mother Nature, and conclude that most important conflicts are zero-sum struggles over who gets those gifts. They don't see anything that looks like taking resources away from nuclear power, and conclude that nuclear power has been regulated fairly.

Let me suggest an analogy: imagine the early days of the dot-com boom, when the benefits of Google search were not widely understood. Imagine also a coalition of music distributors, and people who are devoted to community-building via promoting social interaction in local libraries. Such a coalition might see Google as a threat, and point to the risks that Google would make porn more abundant. Such a coalition might well promote laws requiring Google to check each search result for porn (e.g. via manual inspection, or by only indexing pages of companies who take responsibility for keeping porn off their sites). It would be obvious that Google needs to charge a moderately high subscription fee for its search—surely the new rules would only increase the subscription fees by a small fraction. [It actually seemed obvious to most hypertext enthusiasts up through about 1995 that a company like Google would need to charge users for its service.] Oh, and Xanadu has some interesting ideas for how to use micropayments to more easily charge for that kind of service—maybe Google can run under Xanadu?

A person who had no personal experience of benefiting from Google might not notice much harm from such a regulation, or might assume it has a negligible effect on Google's costs. And someone who imagines that Mother Nature is the primary source of free lunches is likely to seriously underestimate the benefits of Google.

I've seen occasional hints that people attribute the cost increases to valuable safety measures that had been missing from early reactors, but I haven't found anyone saying that who seems aware of the risks of keeping other energy sources in business. So I'm inclined to treat that the way I treat concerns about the safety of consumers pumping gas, or the dangers of caffeinated driving.

[5] - note that the high inflation of the time complicates that picture. A simplified model would go like this: imagine 0% inflation, and a company that borrows money at an interest rate of 5%. Then a 5-year delay causes the cost of capital to rise by 27.6% (1.05^5 ≈ 1.276). Cost of capital is one of the larger costs of nuclear power, so the delays alone look sufficient to turn nuclear power from quite competitive to fairly uncompetitive.
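The compounding in that footnote can be sketched directly; the 5% interest rate and 0% inflation are the footnote's own simplifying assumptions:

```python
# Borrowed construction capital at a fixed interest rate compounds for each
# extra year of construction delay (the footnote's 0%-inflation model).
def capital_cost_multiplier(rate: float, years: float) -> float:
    """Factor by which borrowed capital grows over `years` at `rate` interest."""
    return (1 + rate) ** years

print(f"5-year delay at 5%: x{capital_cost_multiplier(0.05, 5):.3f}")  # x1.276, i.e. +27.6%
```

The chapter quoted earlier reports construction time growing from 7 to 12 years, i.e. the same 5 extra years of carrying cost.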

I expect that people who are unfamiliar with finance will underestimate the significance of this.

[6] - Josh says this is an example of how science works pretty well: social scientists are likely quite biased against this conclusion, but keep upholding it.