Bets on an Extreme Future

Betting on the future is a good way to reveal true beliefs.

As one example of such a bet on a key debate about a post-human future, I’d like to announce here that Robin Hanson and I have made the following agreement (see also Robin’s post at Overcoming Bias):

We, Robin Hanson and Joshua Fox, agree to bet on which kind of artificial general intelligence (AGI) will dominate first, once some kind of AGI dominates humans. If the AGI are closely based on or derived from emulations of human brains, Robin wins; otherwise Joshua wins. To be precise, we focus on the first point in time when more computing power (gate-operations-per-second) is (routinely, typically) controlled relatively-directly by non-biological human-level-or-higher general intelligence than by ordinary biological humans. (Human brains have gate-operation equivalents.)

If at that time more of that computing power is controlled by emulation-based AGI, Joshua owes Robin whatever $3000 invested today in S&P500-like funds is worth then. If more is controlled by AGI not closely based on emulations, Robin owes Joshua that amount. The bet is void if the terms of this bet make little sense then, such as if it becomes too hard to say whether capable non-biological intelligence is general or human-level, whether AGI is emulation-based, what devices contain computing power, or what devices control what other devices. But we intend to tolerate modest levels of ambiguity in such things.

[Added Aug. 17:] To judge if “AGI are closely based on or derived from emulations of human brains,” judge which end of the following spectrum is closer to the actual outcome. The two ends are 1) an emulation of the specific cell connections in a particular human brain, and 2) general algorithms of the sort that typically appear in AI journals today.
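As a back-of-the-envelope illustration of the payoff clause, here is a minimal sketch of how the stake grows; the 5% annual real return and the settlement horizons are hypothetical assumptions of mine, not terms of the agreement.

```python
# Sketch of the payoff clause: $3000 compounding in S&P500-like funds until
# the bet settles. The 5% annual real return and the settlement horizons are
# hypothetical assumptions, not part of the agreement.

def payoff(principal=3000.0, annual_return=0.05, years=30):
    """Value of the stake after compounding for `years` years."""
    return principal * (1 + annual_return) ** years

if __name__ == "__main__":
    for years in (10, 30, 50):
        print(f"If settled after {years} years, the loser owes ${payoff(years=years):,.0f}")
```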

It’s a bet on the old question: ems vs. de novo AGI. Kurzweil and Kapor bet on another well-known debate: whether machines will pass the Turing Test. It would be interesting to list some other key debates that we could bet on.

But it’s hard to make a bet when settling it may take place under extreme conditions:

  • after human extinction,

  • in an extreme utopia,

  • in an extreme dystopia, or

  • after the bettors’ minds have been manipulated in ways that redefine their personhood: copied thousands of times, merged with other minds, etc.

MIRI has a “techno-volatile” world-view: we’re not just optimistic or pessimistic about the impact of technology on our future. Instead, we predict that technology will have an extreme impact, good or bad, on the future of humanity. In these extreme futures, the fundamental components of a bet (the bettors and the payment currency) may be missing or altered beyond recognition.

So, how can we calibrate our probability estimates about extreme events? One way is to bet on how people will bet in the future, when they are closer to the events, on the assumption that they’ll know better than we do. Though this is an indirect and imperfect method, it may be the best we have for calibrating our beliefs about extreme futures.

For example, Robin Hanson has suggested a market in tickets to a survival shelter as a way of betting on an apocalypse. However, this is only relevant for futures in which shelters can help, where there is time to reach one while the ticket holder is alive, and where the social norm of honoring tickets still applies.

We could also define bets on the progress of MIRI and similar organizations. Looking back on the years since 2005, when I started tracking this, I would have liked to bet on, or at least discuss, certain milestones before they happened. They served as (albeit weak) arguments from authority or from social proof for the validity of MIRI’s ideas. Some examples of milestones that have already been reached:

  • SIAI’s budget passing $500K per annum

  • SIAI getting 4 full-time-equivalent employees

  • SIAI publishing its fourth peer-reviewed paper

  • The establishment of a university research center in relevant fields

  • The first lecture on the core FAI thesis in an accredited university course

  • The first article on the core FAI thesis in a popular science magazine

  • The first mention of the core FAI thesis (or of SIAI as an organization) in various types of mainstream media, with a focus on the most prestigious (NPR for radio, the New York Times for newspapers)

  • The first (indirect/direct) government funding for SIAI

Looking to the future, we can bet on some other FAI milestones. For example, we could bet on whether each of the following will come true by a certain year:

  • FAI research in general (or: organization X) will have Y dollars in funding per annum (or: Z full-time researchers).

  • Eliezer Yudkowsky will still be working on FAI.

  • The intelligence explosion will be discussed on the floor of Congress (or: in some parliament; or: by a head of state somewhere in the world).

  • The first academic monograph on the core FAI thesis will be published (apparently that will be Nick Bostrom’s).

  • The first master’s thesis or PhD dissertation on the core FAI thesis will be completed.

  • “Bill Gates will read at least one of ‘Our Final Invention’ or ‘Superintelligence’ in the next 2 years.” (This bet already appears on PredictionBazaar.)

(Some of these will need more refinement before we can bet on them.)

Another approach is to bet on technology trends: brain-scanning resolution, prices for computing power, etc. But these bets are about a Kurzweilian Law of Accelerating Returns, which may be quite distinct from the intelligence explosion and the other extreme futures we are interested in.
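To make such a trend bettable, it has to be reduced to a dated, checkable threshold. Here is a minimal sketch of one way to do that: fit an exponential to a price-performance series and read off a proposition one could bet on. The data points and the 1e12 threshold are made-up placeholders, not real figures.

```python
import math

# Hypothetical (year, ops-per-dollar) points; placeholders, not real data.
data = [(2010, 1.0e9), (2012, 2.1e9), (2014, 4.3e9), (2016, 8.5e9)]

# Least-squares fit of log(value) = a + b * year.
n = len(data)
xs = [year for year, _ in data]
ys = [math.log(value) for _, value in data]
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

def projected(year):
    """Extrapolated ops-per-dollar in the given year, assuming the trend holds."""
    return math.exp(a + b * year)

# A dated, checkable proposition: "ops per dollar will exceed 1e12 by 2030."
print(f"Projected ops per dollar in 2030: {projected(2030):.2e}")
print("Trend says YES" if projected(2030) > 1e12 else "Trend says NO")
```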

Many bets only make sense if you believe that a soft takeoff is likely. If you believe that, you could bet on AI events while still allowing the bettors a few years to enjoy their winnings.

You can make a bet on hard vs. soft takeoff simply by setting your discount rate. If you’re 20 years old and think that the economy as we know it will end abruptly in, for example, 2040, then you won’t save for your retirement. (See my article at H+ Magazine.) But such decisions don’t pin down your beliefs very precisely: most people who don’t save for their retirement are simply being improvident. Not saving makes sense if the human race is about to go extinct, but also if we are going to enter an extreme utopia or dystopia in which your savings have no meaning. Likewise, most people save for retirement simply out of old-fashioned prudence, but you might build up your wealth in order to enjoy it pre-Singularity, or in order to take it with you to a post-Singularity world in which “old money” is still valuable.
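As a rough illustration of how a savings decision encodes such a belief, here is a minimal sketch; the return, horizon, and stake are hypothetical numbers of mine, not a claim about anyone’s actual finances.

```python
# Sketch of a savings decision as an implicit bet on the economy surviving
# until retirement. All numbers below are hypothetical assumptions.

def expected_value_of_saving(p_end, amount=1000.0, annual_return=0.05, years=25):
    """Expected payoff of saving `amount`, if with probability `p_end` the
    savings become meaningless (extinction, or an extreme utopia/dystopia)."""
    future_value = amount * (1 + annual_return) ** years
    return (1 - p_end) * future_value

# Saving only "wins" over spending the $1000 today when the expected future
# value exceeds what you value the money at now.
for p in (0.0, 0.5, 0.8, 0.95):
    print(f"P(economy as we know it ends) = {p:.2f}: "
          f"expected value of saving = ${expected_value_of_saving(p):,.0f}")
```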

I’d like to get your opinion: what are the best bets we can use for calibrating our beliefs about the extreme events we are interested in? Can you suggest more of these indirect markers, or a different way of betting?