Harper’s Magazine article on LW/MIRI/CFAR and Ethereum

Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pp. 26-36 (~8500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I’ll excerpt the LW/MIRI/CFAR-focused middle:

…Blake Masters—the name was too perfect—had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism—that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.

…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different—on different time-scales—ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . . I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine—“rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal—it’s the ‘Hey! Reason works!’—that matters. . . . It’s not really about medicine.” Our whole society was sick—root, branch, and memeplex—and rationality was the only cure. …I asked Vassar about his friend Yudkowsky.
“He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.

One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes—foom! is Yudkowsky’s preferred expression—orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe—and so we, and everything else, end up as office supplies. Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI—and, incidentally, the problem of a universal human ethics—before an indifferent, unfriendly AI escapes into the wild.

Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished. Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me.
“I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) “It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.”

The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7% American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo. Forty-two people, 2.6%, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information. This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine. As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”
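
[For reference, the theorem alluded to above, in its standard form, gives the updated probability of a hypothesis $H$ after observing evidence $E$:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)},$$

where $P(H)$ is the prior degree of belief, $P(E \mid H)$ is how likely the evidence is if the hypothesis holds, and $P(H \mid E)$ is the posterior; “how much to update” is simply the move from prior to posterior.]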

In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI—“There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’”—but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations—the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student disserting on Descartes, and Connection Theory is just the start of a universal renovation.
Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps. It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.” On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves. It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization. For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math. By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. And then, like, keep in touch by email.” The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.

That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING. I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?” I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If—” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe. I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.” I wondered whether Vassar was drunk. He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.

It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.” We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. “But if you have conversations here”—he gestured at the hotel—“people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.” In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”

…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly, rather an unending, self-satisfied parable about rationality and transhumanism, with jokes.

…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.

…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”


Pointer thanks to /u/Vulture.