Where is my Flying Car?


Book review: Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall (aka Josh).

If you only read the first 3 chapters, you might imagine that this is the history of just one industry (or the mysterious lack of an industry).

But this book attributes the absence of that industry to a broad set of problems that are keeping us poor. He looks at the post-1970 slowdown in innovation that Cowen describes in The Great Stagnation[1]. The two books agree on many symptoms, but describe the causes differently: where Cowen says we ate the low-hanging fruit, Josh says it’s due to someone “spraying paraquat on the low-hanging fruit”.

The book is full of mostly good insights. It significantly changed my opinion of the Great Stagnation.

The book jumps back and forth between polemics about the Great Strangulation (with a bit too much outrage porn), and nerdy descriptions of engineering and piloting problems. I found those large shifts in tone to be somewhat disorienting—it’s like the author can’t decide whether he’s an autistic youth who is eagerly describing his latest obsession, or an angry old man complaining about how the world is going to hell (I’ve met the author at Foresight conferences, and got similar but milder impressions there).

Josh’s main explanation for the Great Strangulation is the rise of Green fundamentalism[2], but he also describes other cultural/political factors that seem related. But before looking at those, I’ll look in some depth at three industries that exemplify the Great Strangulation.


The good old days of Science Fiction

The leading SF writers of the mid 20th century made predictions for today that looked somewhat close to what we got in many areas, with a big set of exceptions in the areas around transportation and space exploration.

The absence of flying cars is used as an argument against futurists’ ability to predict technology. This can’t be dismissed as just a minor error of some obscure forecasters. It was a widespread vision of leading technologists.

Josh provides a decent argument that we should treat that absence as a clue to why U.S. economic growth slowed in the 1970s, and why growth is still disappointing.

Were those SF writers clueless optimists, making mostly random forecasting errors? No! Josh shows that for the least energy-intensive technologies, their optimism was about right, and the more energy-intensive the technology was, the more reality let them down.

Is it just a coincidence that people started worshiping energy conservation around the start of the Great Stagnation? Josh says no, we developed ergophobia—no, not the standard meaning of ergophobia: Josh has redefined it to mean fear of using energy.

Did flying cars prove to be technically harder than expected?

The simple answer is: mostly no. The people who predicted flying cars knew a fair amount about the difficulty, and we may have forgotten more than we’ve learned since then.

Josh describes, in more detail than I wanted, a wide variety of plausible approaches to building flying cars. None of them clearly qualify as low-hanging fruit, but they also don’t look farther from our grasp than did flying machines in 1900.

How serious were the technical obstacles?

Air traffic control

Before reading this book, I assumed that there were serious technical problems here. In hindsight, that looks dumb.

Josh calculates that there’s room for a million non-pressurized aircraft at one time, under current rules about distance between planes (assuming they’re spread out evenly; that doesn’t mean all Tesla employees can land near their office at 9am). And he points out that seagull tornadoes (see this video) provide hints that current rules are many orders of magnitude away from any hard limits.
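
To give a feel for that kind of calculation, here’s a minimal back-of-envelope sketch in Python. The numbers (contiguous-US area, a 3-mile horizontal separation box, 1,000-foot vertical separation up to roughly the unpressurized ceiling) are my own placeholder assumptions, not Josh’s:

```python
# Back-of-envelope airspace capacity, with my own assumed numbers (not Josh's):
# how many aircraft fit at once if each needs its own separation "box"
# and they are spread out evenly.

CONUS_AREA_SQ_MI = 3.1e6      # rough land area of the contiguous US
HORIZONTAL_SEP_MI = 3.0       # assumed spacing between aircraft
FLOOR_FT = 2_500              # leave low altitudes for takeoff and landing
CEILING_FT = 12_500           # roughly where pressurization becomes necessary
VERTICAL_SEP_FT = 1_000       # assumed vertical spacing between altitude bands

cell_area = HORIZONTAL_SEP_MI ** 2                    # one aircraft per 3 x 3 mi cell
levels = (CEILING_FT - FLOOR_FT) // VERTICAL_SEP_FT   # usable altitude bands
capacity = CONUS_AREA_SQ_MI / cell_area * levels

print(f"{levels} altitude bands, roughly {capacity:,.0f} aircraft at once")
# With these assumptions: 10 bands and ~3.4 million aircraft, comfortably
# over the million the book argues for.
```

The point isn’t the exact number; it’s that uniform spreading under today’s separation rules leaves room for far more small aircraft than actually fly.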

Regulators’ fear of problems looks like an obstacle, but it’s unclear whether anyone put much thought into solving them, and it doesn’t look like the industry got far enough for this issue to be very important.

Skill

It seems unlikely that anywhere near as many people would learn to fly competently as have learned to drive. So this looks like a large obstacle for the average family, given 20th century technology.

But we didn’t get close to the point where that was a large obstacle to further adoption. And 21st century technology is making progress toward convenient ways of connecting competent pilots with people who want to fly, except where it’s actively discouraged.

Cost

If the economic growth of 1945-1970 had continued, we’d be approaching wealth levels where people on a UBI … oops, I mean on a national basic income could hope to afford an occasional ride in a flying Uber that comes to their door. At least if there were no political problems that drove up costs.

Weather

Weather will make flying cars a less predictable way than ground cars to reach a given destination. That seems to explain a modest fraction of people’s reluctance to buy flying cars, but at most a modest part of the puzzle.

Safety

The leading cause of death among active pilots is … motorcycle accidents.

I wasn’t able to verify that, and other sources say that general aviation is roughly as dangerous as motorcycles. Motorcycles are dangerous enough that they’d likely be illegal if they hadn’t been around before the Great Strangulation, so whether either of those is considered safe enough seems to depend on accidents of history.

People have irrational fears of risk, but there has also been a rational trend of people demanding more safety because we can now afford more safety. I expect this is a moderate part of why early SF writers overestimated demand for flying cars.

The liability crisis seems to have hit general aviation harder than it hit most other industries. I’m still unclear why.

One of the more ironic regulatory pathologies that has shaped the world of general aviation is that most of the planes we fly are either 40 years old or homemade—and that we were forced into that position in the name of safety.

If the small aircraft industry hadn’t mostly shut down, it’s likely that new planes would have more safety features (airbags? whole-airplane parachutes?).

The flying car industry hit a number of speedbumps, such as WWII diverting talent and resources to other types of aviation, then a key entrepreneur being distracted by a patent dispute, and then the industry was largely shut down by liability lawsuits. It seems like progress should have been a bit faster around 1950-1970; I’m confused as to whether the industry did well then.

At any rate, it looks like liability lawsuits were the industry’s biggest problem, and they combined with a more hostile culture and expensive energy to stop progress around 1980.

The book shifted my opinion from “those SF writers were confused” to “flying cars should be roughly as widespread as motorcycles”. We should be close to having autopilots which eliminate the need for human pilots (and the same for motorcycles?), and then I’d consider it somewhat reasonable for the average family to have a flying car.

Nuclear Power

Josh emphasizes the importance of cheap energy for things such as flying cars, space travel, eradicating poverty, etc., and identifies nuclear power as the main technology that should have made energy increasingly affordable. So it seems important to check his claims about what went wrong with nuclear power.

He cites a study by Peter Lang, with this strange learning curve:

It shows a trend of costs declining with experience, just like a normal industry where there’s some competition and where consumers seem to care about price. Then that trend was replaced by a clear example of cost disease[3]. I’ve previously blogged about the value of learning curves (aka experience curve effects) in forecasting.
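
For readers who haven’t seen experience curves before, here’s a minimal sketch of the kind of fit involved, with made-up numbers rather than Lang’s data (Wright’s law: cost falls by a roughly fixed percentage with each doubling of cumulative output):

```python
import numpy as np

# Wright's law: cost = a * N**b with b < 0, where N is cumulative output;
# equivalently, cost falls by a fixed fraction with each doubling of N.
# The data below are made up for illustration; they are NOT Lang's numbers.
cumulative_gw = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
cost_per_kw = np.array([1000, 870, 760, 660, 575, 500, 435], dtype=float)

# Fit by linear regression in log-log space.
b, log_a = np.polyfit(np.log(cumulative_gw), np.log(cost_per_kw), 1)
learning_rate = 1 - 2 ** b   # fractional cost decline per doubling

print(f"exponent b = {b:.3f}, learning rate = {learning_rate:.1%}")
# For this fake data, costs drop about 13% per doubling; a forecast is then
# exp(log_a) * N**b for a projected cumulative output N.  The striking thing
# about the nuclear data Josh cites is that a trend like this held until
# about 1970, then reversed.
```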

This is pretty inconsistent with running out of low-hanging fruit, and is consistent with a broad class of political problems, including the hypothesis of hostile regulation, and also the hypothesis that nuclear markets were once competitive, then switched to having a good deal of monopoly power.

This is a pretty strong case that something avoidable went wrong, but leaves a good deal of uncertainty about what went wrong, and Josh seemed a little too quick to jump to the obvious conclusion here, so I investigated further[4]. I couldn’t find anyone arguing that nuclear power hit technical problems around 1970, but then it’s hard to find many people who try to explain nuclear cost trends at all.

This book chapter suggests there was a shift from engineering decisions being mostly made by the companies that were doing the construction, to mostly being determined by regulators. Since regulators have little incentive to care about cost, the effect seems fairly similar to the industry becoming a monopoly. Cost disease seems fairly normal for monopolies.

That chapter also points out the effects of regulatory delays on costs: “The increase in total construction time … from 7 years in 1971 to 12 years in 1980 roughly doubled the final cost of plants.”[5]

In sum, something went wrong with nuclear power. The problems look more political than technical. The resulting high cost of energy slowed economic progress by making some new technologies too expensive, and by diverting talent to energy conservation. And by protecting the fossil fuel industries, it caused millions of deaths, and maybe 174 Gt of unnecessary CO2 emissions (about 31% of all man-made CO2 emissions).

This book convinced me that I’d underestimated how important nuclear power could have been.

Nanotech

So the technology of the Second Atomic Age will be a confluence of two strongly synergistic atomic technologies: nanotech and nuclear.

The book has a chapter on the feasibility of Feynman/Drexler style nanotech, which attempts to find a compromise between Drexler’s excruciatingly technical Nanosystems and his science-fiction style Engines of Creation. That compromise will convince a few people who weren’t convinced by Drexler, but most people will either find it insufficiently technical, or else hard to follow because it requires a good deal of technical knowledge.

Josh explains some key parts of why the government didn’t fund research into the Feynman/Drexler vision of nanotech: centralization and bureaucratization of research funding, plus the Machiavelli Effect:

  • the old order opposes change, and beneficiaries of change “do not readily believe in new things until they have had a long experience of them.”

Josh describes the mainstream reaction to nanotech fairly well, but that’s not the whole story.

Why didn’t the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001. But my fear of a nanotech arms race exceeds my desire to use nanotech.

Many VCs would get confused by top academics who dismissed (straw-man versions of) Drexler’s vision. But there are a few VCs such as Steve Jurvetson who understand Drexler’s ideas well enough to not be confused by that smoke. With those VCs, the explanation is that no entrepreneurs tried a sufficiently incremental path.

Most approaches to nanotech require a long enough series of development steps to achieve a marketable product that VCs won’t fund them. That’s not a foolish mistake on the VCs’ part—they have sensible reasons to think that some other company will get most of the rewards (how much did Xerox get from PARC’s UI innovations?). Josh promotes an approach to nanotech that seems more likely to produce intermediate products which will sell. As far as I know, no entrepreneurs attempted to follow that path (maybe because it looked too long and slow?).

The patent system has been marketed as a solution to this kind of problem, but it seems designed for a hedgehog-like model of innovation, when what we ought to be incentivizing is a more fox-like innovation process.

Mostly there isn’t a good system of funding technologies that take more than 5 years to generate products.

If government funding got this right during the golden age of SF, the hard questions should be focused more on what went right then, than on what is wrong with funding now. But I’m guessing there was no golden age in which basic R&D got appropriate funding, except when we were lucky enough for popular opinion to support the technologies in question.

Problems with these three industries aren’t enough to explain the stagnation, but Josh convinced me that the problems which affected these industries are more pervasive, affecting pretty much all energy-intensive technologies.

Culture and politics

Of all the great improvements in know-how expected by the classic science-fiction writers, competent government was the one we got the least.

I’ll focus now on the underlying causes of stagnation.

Green fundamentalism and ergophobia are arguably sufficient to explain the hostility to nuclear power and aviation, but it’s less clear how they explain the liability crisis or the stagnation in nanotech.

Josh also mentions a variety of other cultural currents, each of which explains some of the problems. I expect these are strongly overlapping effects, but I won’t be surprised if they sound as disjointed as they did in the book.

It matters whether we fear an all-seeing god. From the book Big Gods: How Religion Transformed Cooperation and Conflict:

In a civilization where a belief in a Big God is effectively universal, there is a major advantage in the kind of things you can do collectively. In today’s America, you can’t be trusted to ride on an airliner with a nail file. How could you be trusted driving your own 1000-horsepower flying car? … The green religion, on the other hand, instead of enhancing people’s innate conscience, tends to degrade it, in a phenomenon called “licensing.” People who virtue-signal by buying organic products are more likely to cheat and steal.[6]

From Peter Turchin: when an empire becomes big enough to stop worrying about external threats to its existence, the cooperative “we’re all in the same boat” spirit is replaced by a “winner take all” mentality.

the evolutionary pressures to what we consider moral behavior arise only in non-zero-sum interactions. In a dynamic, growing society, people can interact cooperatively and both come out ahead. In a static no-growth society, pressures toward morality and cooperation vanish;

Self-deception is less valuable on a frontier where you’re struggling with nature than it is when most struggles involve social interaction, where self-deception makes virtue signaling easier.

“If your neighbor is Saving the Planet, it seems somehow less valuable merely to keep clean water running”.

“Technologies that provoke antipathy and promote discord, such as social networks, are the order of the day; technologies that empower everyone but require a background of mutual trust and cooperation, such as flying cars, are considered amusing anachronisms.”

Those were Josh’s points. I’ll add these thoughts:

It’s likely that cultural changes led competent engineers to lose interest in working for regulatory agencies. I don’t think Josh said that explicitly, but it seems to follow fairly naturally from what he does say.

Josh refers to Robin Hanson a fair amount, but doesn’t mention Robin’s suggestion that increasing wealth lets us return to forager values. “Big god” values are clearly farmer values.

Mancur Olson’s The Rise and Decline of Nations (listed in the bibliography, without explanation) predicted in 1982 that special interests would be an increasing drag on growth in stable nations. His reasoning differs a fair amount from Josh’s, but their conclusions sound fairly similar.

Josh often focuses on Greens as if they’re a large part of the problem, but I’m inclined to focus more on the erosion of trust and cooperation, and treat the Greens more as a symptom.

The most destructive aspects of Green fundamentalism can be explained by special interests, such as coal companies and demagogues, who manipulate long-standing prejudices for new purposes. How much of the Great Strangulation was due to special interests such as coal companies? I don’t know, but it looks like the coal industry would have died by 2000 (according to Peter Lang) if the pre-1970 trends in nuclear power had continued.

Green religious ideas explain hostility to energy-intensive technologies, but I have doubts about whether that hostility would have been translated into effective action. Greens could have caused cultural changes that shifted the best and the brightest away from wealth creation and toward litigation.

That attempt to attribute the stagnation mainly to Greens seems a bit weaker than the special interests explanation. But I remain very uncertain about whether there’s a single cause, or whether it took several independent errors to cause the stagnation.

What now? I don’t see how we could just turn on a belief in a big god. The book says we’ll likely prosper in spite of the problems discussed here, but leaves me a bit gloomy about achieving our full potential.

The book could use a better way of labeling environmentalists who aren’t Green fundamentalists. Josh clearly understands that there are big differences between Green fundamentalists and people with pragmatic motives for reducing pollution or preserving parks. Even when people adopt Green values mostly for signaling purposes, there are important differences between safe rituals, such as recycling, and signals that protect the coal industry.

Yet standard political terminology makes it sound like attacks on the Greens signal hostility to all of those groups. I wish Josh took more care to signal a narrower focus of hostility.

Ironically for a book that complains about virtue signaling, a fair amount of the book looks like virtue signaling. Maybe that gave him a license to ignore mundane things like publicizing the book (I couldn’t find a mention of the book on his flying car blog until 3 months after it was published).

Has the act of writing this review licensed me to forget about being effective? I’m a bit worried.

Miscellaneous comments and complaints

It isn’t perhaps realized just how much the war on cars contributed to the great stagnation—or how much flying cars could have helped prolong the boom.

Josh provides a good analysis of the benefits of near-universal car ownership, and why something similar should apply to flying cars. But he misses what I’ll guess was the biggest benefit of cars—people could apply for jobs for which they previously couldn’t have managed to get to an interview. Company towns were significant in the 19th century—with downsides that bore some similarity to slavery, due to large obstacles to finding a job in another town. Better transportation and communications changed that.

He says “a century of climate change in the worst case might cost us as much as liability lawyers do now.” He gets his estimate of the worst case from this GAO report. That’s misleading about how we should evaluate the actual worst case. I’m not too clear how they got those numbers, but they likely mean something more like: there’s a 95% chance that, according to some model, climate change will do no more damage than lawyers. That still leaves plenty of room for the worst 1% of possible outcomes to be much worse than lawyers according to the model, and there’s enough uncertainty in climate science that we should expect more than a 5% chance of the model erring on the optimistic side. Note also that it’s not hard to find a somewhat respectable source that says climate change might cost over 20% of global GDP. I see other problems with his climate change comments, but they seem less important than his dismissal of the tail risks.

Josh reports that flying a plane causes him to think in far mode, much like our somewhat biased view of the future.

It’s been a long time since I’ve flown a plane, but I don’t recall that effect being significant. I find that a better way to achieve that experience is to hike up a mountain whose summit is above the clouds, though there are relatively few places that have an appropriate mountain nearby, and where I live it takes somewhat special timing to do that.

While researching this review, I found this weird litigation story: Disney Sued for Not Building Flying “Star Wars” Car.

I often tend to side with technological determinist views of history, but this book provides some evidence against that. Just compare Uber with “Uber for planes”—it looks like there’s a good deal of luck involved in what progress gets allowed.

Josh illustrates the Machiavelli Effect by an example of expert advice that fat is unhealthy, and he complains that the experts ignore Gary Taubes’ carbophobic counter-movement. Yet what I see is people on both sides of that debate focusing on interventions that are mostly irrelevant.

Josh points out that we can test the advice, and reports that he lost a good deal of weight after switching to a high-fat diet. Well, I tried a similar switch in 2012 from a low-fat diet to a high-fat diet, and it had no effect on my weight (and a terrible effect on my homocysteine and sdLDL, due to high saturated fat). The dietary changes that had the best effects on my weight were alternate day calorie restriction, cutting out junk food (mainly via paleo heuristics), and eating less kelp (which was depressing my thyroid via excess iodine).

He cites Scott Alexander in other contexts, but apparently missed this post pointing out serious flaws in Taubes’ claims. Note also that Taubes reacted poorly to evidence against his theory.

Miscellaneous questions prompted by the book

The book hints that cultural beliefs have important influences on where smart people apply their talents. This mostly seems hard to analyze. Would Elon Musk be swayed by ergophobia or Green fundamentalism? That seems like the main example I can generate of a competent tech leader whose plans seem somewhat influenced by popular beliefs about where technology should head. Tesla and SolarCity arguably fit a pattern of Musk being influenced by Green visions. But SpaceX looks more like pandering to the visions of ergophiles.

The book left me wondering: where does high modernism fit into this story? I see many similarities between high modernism and this book’s notion of who the bad guys are. Yet high modernism started to crumble a bit before the worst parts of the Great Strangulation started (i.e. around 1970). The book hints at a semi-satisfying answer: Christianity and high modernism produced a decent balance of power where each ideology checked the other’s excesses, but Green fundamentalism eroded the good aspects of high modernism while strengthening the worst aspects.

Did oil prices rise in the 1970s due to evidence that nuclear prices were rising? I can almost imagine OPEC being prescient enough to see that nuclear regulation saved them from important competition. The timing of OPEC’s initial effects on the market seems to closely coincide with the nuclear industry developing cost disease. But I don’t quite expect that OPEC leaders were that smart.

Another odd hypothesis: increasing mobility enabled people to move too easily to better jurisdictions. This scared lots of special interests (e.g. local governments, companies with a local monopoly, etc., whose power depended on captive customers), who reacted by advocating policies which reduced mobility (e.g. stifling transportation, encouraging home ownership instead of renting).

Quotes

I’ve only tried to sum­mar­ize and ana­lyze the more mod­est and ba­sic parts of the book here. Some parts of the book are too strange for me to want to re­view. I will close with some quotes from them:

Hmmm. This might ex­plain some of the book’s pe­cu­li­ar­it­ies: “ideation re­capit­u­lates in­ebri­ation!“.

“The human of the future will have more and better senses, be stronger and be adaptable to a much wider range of environments, and last but not least have the biosphere atom-rearranging capability built in. The human of the future need not have any ecological footprint at all.”

His favorite form of renewable energy is nuclear: “In other words, if we start taking uranium out of seawater and use it for the entire world’s energy economy, indeed a robustly growing energy economy, the concentration in seawater will not decline for literally millions of years.”

“In the Second Atomic Age, Litvinenko would have gotten a text from his left kidney telling him that it had collected 26.5 micrograms of Polonium-210, and what would he like to do with it?”

He asks us not to call this a greenhouse: “The LEDs emit only the frequencies used by chlorophyll, so they are an apparently whimsical purple. The air is moist, warm, and has a significantly higher fraction of CO2 than natural air … the plants do not need pesticides because insects simply can’t get to them. … you get something like 300 times as much lettuce per square foot of ground than the pre-industrial mule-and-plow dirt farmer. All you need is power, to have fresh local strawberries in January in the Yukon or in August in Antarctica.”

And he likes tall buildings. I don’t want to classify this comment:

A ten-mile tower might have a footprint of a square mile and could house 40 million people. Eight such buildings would house the entire current population of the United States, leaving 2,954,833 square miles of land available for organic lavender farms.

Compared to the skyhook (geostationary orbital tower), which is just barely possible even with the theoretical best material properties, a tower 100 km high is easy. Flawless diamond, with a compressive strength of 50 GPa, does not even need a taper at all for a 100 km tower; a 100-km column of diamond weighs 3.5 billion newtons per square meter but can support 50 billion. Even commercially available polycrystalline synthetic diamond with advertised strengths of 5 GPa would work.
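
That 3.5-vs-50 comparison is easy to check. A quick sketch of the arithmetic, using standard handbook values for diamond’s density that I’m supplying (not quoted above):

```python
# Self-weight stress at the base of an untapered 100 km column versus the
# material's compressive strength.  Handbook values supplied by me.
rho = 3510.0      # density of diamond, kg/m^3
g = 9.81          # m/s^2
h = 100_000.0     # column height, m
strength = 50e9   # compressive strength of flawless diamond from the quote, Pa

base_stress = rho * g * h   # pressure exerted by the column's own weight
print(f"base stress = {base_stress / 1e9:.2f} GPa")        # ~3.44 GPa, i.e. ~3.5e9 N/m^2
print(f"margin = {strength / base_stress:.1f}x strength")  # ~14.5x
# Consistent with the book's claim that no taper is needed; even a 5 GPa
# material still has some margin (about 1.5x).
```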

A Weather Machine could probably double global GDP simply by regional climate control. … You could make land in lots of places, such as Northern Canada and Russia, as valuable as California.

Um, don’t forget the military implications which might offset that.

I used to be sort of comfortable with Reynolds numbers and lift-to-drag ratios, but this claim seems to be beyond my pay grade:

Given the ridiculous wingspan and the virtually infinite Reynolds number, we might get a lift-to-drag ratio of 100; we would need 1 billion pounds of thrust.

He’s interested in cold fusion, but admits it’s hard:

But we would like a theory in which whenever some mechanism causes miracle 1 to happen, it almost always causes miracles 2 and 3. … It seems at first blush that saying there might be a quantum coupling between phonons and some nuclear degree of freedom is indistinguishable from magic. But if you look closely, it’s not completely insane.

I’ll take his word on that for now, since “look closely” appears to require way more physics than I’m up for.

Biotech gets approximately one paragraph, including: “Expect Astro the talking dog before 2062. Expect to live long enough to see him.”

One of the hardest jobs that humans do, some well and some poorly, is management of other humans. One of the major reasons this is hard is that humans are selfish, unreasonable, fractious, and just plain ornery. … On the other hand, managing robots with human-level competence will be falling-down easy. In the next couple of decades, robots will be climbing up the levels of competence to compete with humans at one job and another. Until they become spectacularly better, though, I suspect that the major effect will be to make management easier—perhaps so easy a robot could do it! Once we build trustworthy IQ 200 machines, only an idiot will trust any human to make any decision that mattered …

What then are we humans supposed to do?

Don’t look at me! We already know that only a fool would ask a human such an important question. Ask the machines.

when someone invents a method of turning a Nicaragua into a Norway, extracting only a 1% profit from the improvement, they will become rich beyond the dreams of avarice and the world will become a much better, happier, place. Wise incorruptible robots may have something to do with it.

Footnotes

[1] - I haven’t read The Great Stag­na­tion, so I’m com­ment­ing based on simple sum­mar­ies of it. Based on what I know of Cowen, the books are of su­per­fi­cially sim­ilar qual­ity. Cowen does an un­usual amount of broad but shal­low re­search, whereas Josh is less pre­dict­able about his re­search qual­ity, but his re­search is of­ten much deeper than Cowen’s. E.g. for this book, it in­cluded learn­ing how to fly, and buy­ing a plane. That re­search alone likely cost him more money than he’ll make from the book (not to men­tion hun­dreds of hours of his time), and it’s not the only way in which his re­search is sur­pris­ingly deep.

[2] - not in quite the same sense as what people who call themselves Green fundamentalists mean, but pretty close. Both sides seem to agree that a key issue is whether industrial growth is good or bad.

Some of what Josh dislikes about the worst Greens:

My own doubts came when DDT was introduced. In Guyana, within two years, it had almost eliminated malaria. So my chief quarrel with DDT, in hindsight, is that it has greatly added to the population problem.

[3] - at least in many countries. South Korea’s nuclear costs have continued to decline. The variation in when cost disease hits suggests something other than engineering problems.

I got concerned about the lack of data from China. I couldn’t find comparable Chinese data, so I used financial data from CGN Power Company (2011 data here, first half 2018 data here) to show, if my math is right, that CGN sold power at RMB0.3695 ($0.0558)/kWh in 2011 versus RMB0.2966 ($0.0448)/kWh in 2018, a decline of nearly 20%. I.e. no cost disease there.
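
For the record, the “nearly 20%” is just the two quoted tariffs compared directly:

```python
# Check of the claimed decline in CGN's average selling price per kWh.
p_2011, p_2018 = 0.3695, 0.2966   # RMB/kWh, the figures cited above
print(f"decline: {(p_2011 - p_2018) / p_2011:.1%}")   # ~19.7%, i.e. "nearly 20%"
```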

Note: I own stock in CGN Power.

Josh claims that the Navy’s nuclear power program avoided strangulation. Where can I get data about the cost trends there?

[4] - I’ve looked for anti-nuke arguments about the cost of nuclear power, and most seem to assume that cost disease is inevitable. A few look for signs that nuclear power has been treated unfairly, and focus on things like subsidies or carbon taxes.

It seems quite plausible that they start with the assumption that most wealth is a gift from Mother Nature, and conclude that most important conflicts are zero-sum struggles over who gets those gifts. They don’t see anything that looks like taking resources away from nuclear power, and conclude that nuclear power has been regulated fairly.

Let me suggest an analogy: imagine the early days of the dot-com boom, when the benefits of Google search were not widely understood. Imagine also a coalition of music distributors, and people who are devoted to community-building via promoting social interaction in local libraries. Such a coalition might see Google as a threat, and point to the risks that Google would make porn more abundant. Such a coalition might well promote laws requiring Google to check each search result for porn (e.g. via manual inspection, or by only indexing pages of companies who take responsibility for keeping porn off their sites). It would be obvious that Google needs to charge a moderately high subscription fee for its search—surely the new rules would only increase the subscription fees by a small fraction. [It actually seemed obvious to most hypertext enthusiasts up through about 1995 that a company like Google would need to charge users for its service.] Oh, and Xanadu has some interesting ideas for how to use micropayments to more easily charge for that kind of service—maybe Google can run under Xanadu?

A person who had no personal experience of benefiting from Google might not notice much harm from such a regulation, or might assume it has a negligible effect on Google’s costs. And someone who imagines that Mother Nature is the primary source of free lunches is likely to seriously underestimate the benefits of Google.

I’ve seen occasional hints that people attribute the cost increases to valuable safety measures that had been missing from early reactors, but I haven’t found anyone saying that who seems aware of the risks of keeping other energy sources in business. So I’m inclined to treat that the way I treat concerns about the safety of consumers pumping gas, or the dangers of caffeinated driving.

[5] - note that the high inflation of the time complicates that picture. A more simplified model would go like this: imagine 0% inflation, and the company borrows money at an interest rate of 5%. Then a 5-year delay causes the cost of capital to rise 27.6% (1.05^5). Cost of capital is one of the larger costs of nuclear power, so the delays alone look sufficient to turn nuclear power from quite competitive to fairly uncompetitive.
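
Spelled out, that simplified calculation is just compound interest over the extra construction years:

```python
# Compound interest on construction capital during a delay, using the
# simplified assumptions above (0% inflation, 5% borrowing rate).
rate = 0.05   # borrowing rate
delay = 5     # extra years of construction (7 years stretching to 12)

multiplier = (1 + rate) ** delay
print(f"capital cost multiplier from the delay: {multiplier:.3f}")   # ~1.276, i.e. +27.6%
# As the footnote notes, the high inflation of the era complicates the real
# picture; this is only the simplified 0%-inflation version.
```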

I expect that people who are unfamiliar with finance will underestimate the significance of this.

[6] - Josh says this is an example of how science works pretty well: social scientists are likely quite biased against this conclusion, but keep upholding it.