Bets on an Extreme Future

Betting on the future is a good way to reveal true beliefs.

As one example of such a bet on a key debate about a post-human future, I’d like to announce here that Robin Hanson and I have made the following agreement (see also Robin’s post at Overcoming Bias):

We, Robin Hanson and Joshua Fox, agree to bet on which kind of artificial general intelligence (AGI) will dominate first, once some kind of AGI dominates humans. If the AGI are closely based on or derived from emulations of human brains, Robin wins, otherwise Joshua wins. To be precise, we focus on the first point in time when more computing power (gate-operations-per-second) is (routinely, typically) controlled relatively-directly by non-biological human-level-or-higher general intelligence than by ordinary biological humans. (Human brains have gate-operation equivalents.)

If at that time more of that computing power is controlled by emulation-based AGI, Joshua owes Robin whatever $3000 invested today in S&P500-like funds is worth then. If more is controlled by AGI not closely based on emulations, Robin owes Joshua that amount. The bet is void if the terms of this bet make little sense then, such as if it becomes too hard to say if capable non-biological intelligence is general or human-level, if AGI is emulation-based, what devices contain computing power, or what devices control what other devices. But we intend to tolerate modest levels of ambiguity in such things.

[Added Aug. 17:] To judge if “AGI are closely based on or derived from emulations of human brains,” judge which end of the following spectrum is closer to the actual outcome. The two ends are 1) an emulation of the specific cell connections in a particular human brain, and 2) general algorithms of the sort that typically appear in AI journals today.
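To make the payment term concrete, here is a minimal sketch (in Python) of how the stake grows under steady compound growth. The 7% annual return, the 2013 start year, and the 2045 settlement year are my illustrative assumptions, not terms of the agreement.

```python
# Rough sketch of the payout: $3000 left in S&P500-like funds from the year
# the bet was made until settlement. The return rate, start year, and
# settlement year below are illustrative assumptions only.

def payout(principal: float = 3000.0,
           annual_return: float = 0.07,
           start_year: int = 2013,
           settle_year: int = 2045) -> float:
    """Value of the stake at settlement under steady compound growth."""
    years = settle_year - start_year
    return principal * (1.0 + annual_return) ** years

if __name__ == "__main__":
    print(f"Payout owed at settlement: ${payout():,.0f}")
```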

It’s a bet on the old question: ems vs. de novo AGI. Kurzweil and Kapor have bet on another well-known debate: whether machines will pass the Turing Test. It would be interesting to list some other key debates that we could bet on.

But it’s hard to make a bet when settlement may take place under extreme conditions:

  • after human extinction,

  • in an extreme utopia,

  • in an extreme dystopia, or

  • after the bettors’ minds have been manipulated in ways that redefine their personhood: copied thousands of times, merged with other minds, etc.

MIRI has a “techno-volatile” world-view: We’re not just optimistic or pessimistic about the impact of technology on our future. Instead, we predict that technology will have an extreme impact, good or bad, on the future of humanity. In these extreme futures, the fundamental components of a bet—the bettors and the payment currency—may be missing or altered beyond recognition.

So, how can we calibrate our probability estimates about extreme events? One way is by betting on how people will bet in the future when they are closer to the events, on the assumption that they’ll know better than we do. Though this is an indirect and imperfect method, it might be the best we have for calibrating our beliefs about extreme futures.

For example, Robin Hanson has suggested a market in tickets to a survival shelter as a way of betting on an apocalypse. However, this is only relevant for futures in which shelters can help, there is time for the ticket holder to reach one alive, and the social norm of honoring tickets still applies.

We could also define bets on the progress of MIRI and similar organizations. Looking back on the years since 2005, when I started tracking this, I would have liked to bet on, or at least discuss, certain milestones before they happened. They served as (albeit weak) arguments from authority or from social proof for the validity of MIRI’s ideas. Some examples of milestones that have already been reached:

  • SIAI’s budget passing $500K per annum

  • SIAI getting 4 full-time-equivalent employees

  • SIAI publishing its fourth peer-reviewed paper

  • The establishment of a university research center in relevant fields

  • The first lecture on the core FAI thesis in an accredited university course

  • The first article on the core FAI thesis in a popular science magazine

  • The first mention of the core FAI thesis (or of SIAI as an organization) in various types of mainstream media, with a focus on the most prestigious (NPR for radio, New York Times for newspapers).

  • The first (indirect/direct) government funding for SIAI

Looking to the future, we can bet on some other FAI milestones. For example, we could bet on whether each of the following comes true by a given year:

  • FAI research in general (or: organization X) will have Y dollars in funding per annum (or: Z full-time researchers).

  • Eliezer Yudkowsky will still be working on FAI.

  • The intelligence explosion will be discussed on the floor of Congress (or: in some parliament; or: by a head of state somewhere in the world).

  • The first academic monograph on the core FAI thesis will be published (apparently that will be Nick Bostrom’s).

  • The first master’s thesis/PhD dissertation on the core FAI thesis will be completed.

  • “Bill Gates will read at least one of ‘Our Final Invention’ or ‘Superintelligence’ in the next 2 years.” (This already appears on PredictionBazaar.)

(Some of these will need more refinement before we can bet on them.)

Another approach is to bet on technology trends: brain-scanning resolution, prices for computing power, and so on. But these bets are about a Kurzweilian Law of Accelerating Returns, which may be quite distinct from the Intelligence Explosion and the other extreme futures we are interested in.

Many bets only make sense if you believe that a soft takeoff is likely. In that case, you could bet on AI events while still leaving the bettors a few years to enjoy their winnings.

You can make a bet on hard vs. soft takeoff simply by setting your discount rate. If you’re 20 years old and think that the economy as we know it will end instantly in, for example, 2040, then you won’t save for your retirement. (See my article at H+Magazine.) But such decisions don’t pin down your beliefs very precisely: Most people who don’t save for their retirement are simply being improvident. Not saving makes sense if the human race is about to go extinct, but also if we are going to enter an extreme utopia or dystopia where your savings have no meaning. Likewise, most people save for retirement simply out of old-fashioned prudence, but you might build up your wealth in order to enjoy it pre-Singularity, or in order to take it with you to a post-Singularity world in which “old money” is still valuable.
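To illustrate how such a belief acts like a discount rate, here is a minimal sketch, assuming a single probability that your savings will have no meaning by retirement (because of extinction, extreme utopia, or extreme dystopia). The savings amount, return rate, horizon, and probabilities are hypothetical, not claims from this article.

```python
# Minimal sketch of the discount-rate point: the expected value at
# retirement of money saved today, given some probability that savings will
# be meaningless by then. All parameters below are illustrative assumptions.

def expected_value_of_saving(amount: float = 10_000.0,
                             annual_return: float = 0.05,
                             years_to_retirement: int = 45,
                             p_savings_meaningless: float = 0.5) -> float:
    """Expected value at retirement of money saved today."""
    value_if_normal = amount * (1.0 + annual_return) ** years_to_retirement
    return (1.0 - p_savings_meaningless) * value_if_normal

if __name__ == "__main__":
    for p in (0.0, 0.5, 0.9):
        ev = expected_value_of_saving(p_savings_meaningless=p)
        print(f"P(savings meaningless by retirement) = {p:.1f} -> ${ev:,.0f}")
```

The higher you set that probability, the less saving is worth to you, which is why observed saving behavior only weakly pins down beliefs about takeoff.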

I’d like to get your opinion: What are the best bets we can use for calibrating our beliefs about the extreme events we are interested in? Can you suggest some more of these indirect markers, or a different way of betting?