Failures in technology forecasting? A reply to Ord and Yudkowsky

In The Precipice, Toby Ord writes:

we need to remember how quickly new technologies can be upon us, and to be wary of assertions that they are either impossible or so distant in time that we have no cause for concern. Confident denouncements by eminent scientists should certainly give us reason to be sceptical of a technology, but not to bet our lives against it—their track record just isn’t good enough for that.

I strongly agree with those claims, think they’re very important in relation to estimating existential risk,[1] and appreciate the nuanced way in which they’re stated. (There’s also a lot more nuance around this passage which I haven’t quoted.) I also largely agree with similar claims made in Eliezer Yudkowsky’s earlier essay There’s No Fire Alarm for Artificial General Intelligence.

But both Ord and Yudkowsky provide the same set of three specific historical cases as evidence of the poor track record of such “confident denouncements”. And I think those cases provide less clear evidence than those authors seem to suggest. So in this post, I’ll:

  • Quote Ord’s and/or Yudkowsky’s descriptions of those three cases, as well as one case mentioned by Yudkowsky but not Ord

  • Highlight ways in which those cases may be murkier than Ord and Yudkowsky suggest

  • Discuss how much we could conclude about technology forecasting in general from such a small and likely unrepresentative sample of cases, even if those cases weren’t murky

I should note that I don’t think that these historical cases are necessary to support claims like those Ord and Yudkowsky make. And I suspect there might be better evidence for those claims out there. But those cases were the main evidence Ord provided, and among the main evidence Yudkowsky provided. So those cases are being used as key planks supporting beliefs that are important to many EAs and longtermists. Thus, it seems healthy to prod at each suspicious plank on its own terms, and update incrementally.

Case: Rutherford and atomic energy

Ord writes:

One night in 1933, the world’s pre-eminent expert on atomic science, Ernest Rutherford, declared the idea of harnessing atomic energy to be ‘moonshine’. And the very next morning Leo Szilard discovered the idea of the chain reaction.

Yudkowsky also uses the same case to support similar claims to Ord’s.

However, in a footnote, Ord adds:

[Rutherford’s] prediction was in fact partly self-defeating, as its confident pessimism grated on Szilard, inspiring him to search for a way to achieve what was said to be impossible.

To me, the phrase “the very next morning” in the main text made this sound like an especially clear example of just how astoundingly off the mark an expert’s prediction could be. But it turns out that the discovery wasn’t simply about to happen anyway; instead, there was a direct connection between the prediction and its undoing. In my view, that makes the prediction less “surprisingly” incorrect.[2]

Additionally, in the same footnote, Ord suggests that Szilard’s discovery may not even have come “the very next morning”:

There is some debate over the exact timing of Szilard’s discovery and exactly how much of the puzzle he had solved

Finally, the same footnote states:

There is a fascinating possibility that [Rutherford] was not wrong, but deliberately obscuring what he saw as a potential weapon of mass destruction (Jenkins, 2011). But the point would still stand that confident public assertions of the leading authorities were not to be trusted.

This is a very interesting point, and I appreciate Ord acknowledging it. But I don’t quite agree with his last sentence. I’d instead say:

This possibility may weaken the evidence this case provides for the claim that we should often have limited trust in confident public assertions of the leading authorities. But it may not weaken the evidence this case provides for that claim in situations where it’s plausible that those assertions might be based less on genuine beliefs and more on a desire to e.g. mitigate attention hazards.

To be clear, I do think the Rutherford case provides some evidence for Ord and Yudkowsky’s claims. But I think the evidence is weaker than those authors suggested (especially if we focus only on Ord’s main text, but even when also considering his footnote).

Case: Fermi and chain reactions

Ord writes:

In 1939, Enrico Fermi told Szilard the chain reaction was but a ‘remote possibility’, and four years later Fermi was personally overseeing the world’s first nuclear reaction.

Both Yudkowsky and Stuart Russell also use the same case to support similar claims to Ord’s.

However, in a footnote, Ord writes:

Fermi was asked to clarify the ‘remote possibility’ and ventured ‘ten percent’. Isidor Rabi, who was also present, replied, ‘Ten percent is not a remote possibility if it means that we may die of it. If I have pneumonia and the doctor tells me that there is a remote possibility that I might die, and it’s ten percent, I get excited about it’

I think that footnote itself contains an excellent lesson (and an excellent quote) regarding failures of communication, and regarding the potential value of quantifying estimates (see also). Relatedly, this case seems to support the claim that we should be wary of trusting qualitatively stated technology forecasts (even from experts).

But the footnote also suggests to me that this may not have been a failure of forecasting at all, or may only have been a minor one. Hearing that Fermi called something that ended up happening a mere “remote possibility” seems to suggest he was wildly off the mark. But if he actually thought the chance was 10%, perhaps he was “right” in some sense—e.g., perhaps he was well-calibrated—and this just happened to be one of the roughly 1 in 10 times that a 10%-likely outcome occurs.

To know whether that’s the case, we’d have to see a larger range of Fermi’s predictions, and ensure we’re sampling in an unbiased way, rather than being drawn especially to apparent forecasting failures. I’ll return to these points later.
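
To make that calibration point concrete, here’s a minimal sketch in Python (with entirely made-up numbers; it illustrates the general logic and says nothing about Fermi’s actual track record). A perfectly calibrated forecaster who assigns 10% to many events will still see roughly one in ten of them happen, and a motivated search through history will naturally surface exactly those “misses”:

```python
import random

random.seed(0)

# Simulate a perfectly calibrated forecaster: every event they assign
# 10% probability to really does occur 10% of the time (by construction).
n_forecasts = 1_000
occurred = [random.random() < 0.10 for _ in range(n_forecasts)]

print(f"Fraction of '10%' events that occurred: {sum(occurred) / n_forecasts:.1%}")

# Roughly 100 of the 1,000 unlikely-sounding events still happen. Each of
# those is available, in hindsight, to be quoted as a striking "forecasting
# failure", even though this forecaster is perfectly calibrated by construction.
print(f"Number of quotable 'misses': {sum(occurred)}")
```

The point isn’t that Fermi was in fact well-calibrated (I have no idea), just that a single realised “remote possibility”, selected after the fact, can’t distinguish a well-calibrated forecaster from a badly miscalibrated one.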

Case: Nuclear engineering more broadly

As evidence for claims similar to Ord’s, Yudkowsky also uses the development of nuclear engineering more broadly (i.e., not just the above-mentioned statements by Rutherford and Fermi). For example, Yudkowsky writes:

And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima.

And:

Fermi wasn’t still thinking that net nuclear energy was impossible or decades away by the time he got to 3 months before he built the first pile, because at that point Fermi was looped in on everything and saw how to do it. But anyone not looped in probably still felt like it was fifty years away while the actual pile was fizzing away in a squash court at the University of Chicago

And, in relation to the development of AGI:

I do put a significant chunk of probability mass on “There’s not much sign visible outside a Manhattan Project until Hiroshima,” because that scenario is simple.

And:

I do predict in a very general sense that there will be no fire alarm [roughly, a clear signal of AGI being soon] that is not an actual running AGI—no unmistakable sign before then that everyone knows and agrees on, that lets people act without feeling nervous about whether they’re worrying too early. That’s just not how the history of technology has usually played out in much simpler cases like flight and nuclear engineering, let alone a case like this one where all the signs and models are disputed.

I largely agree with, or at least find plausible, Yudkowsky’s claims that there’ll be no “fire alarm” for AGI, and consider that quite an important insight. And I think the case of the development of nuclear weapons probably lends support to some version of those claims. But, for two reasons, I think Yudkowsky may paint an overly simple and confident picture of how this case supports his claims. (Note that I’m not an expert in this period of history, and most of what follows is based on some quick Googling and skimming.)

Firstly, I believe the development of nuclear weapons was highly militarised and secretive from early on, to a much greater extent than AI development is. My impression is that the general consensus is that non-military labs such as DeepMind and OpenAI truly are leading the field, and that, if anything, AI development is worryingly open rather than highly secretive (see e.g. here). So it seems there are relevant disanalogies between the case of nuclear weapons development and that of AI development (or indeed, most technological development), and that we should be substantially uncertain when trying to infer from the former case to the latter.

Secondly, I believe the group of people who did know about nuclear weapons before the bombing of Hiroshima, or who believed such weapons might be developed soon, was (somewhat) larger than one might think from reading Yudkowsky’s essay. In particular, the British, Germans, and Soviets each had their own nuclear weapons programs, and Soviet leaders knew of both the German and US efforts. And I don’t know of any clear evidence either way regarding whether scientists, policymakers, and members of the public who didn’t know of these programs would’ve assumed nuclear weapons were impossible or many decades away.

That said, it is true that:

  • the group of people who knew of these nuclear weapons programs before Hiroshima was very small

  • even Truman wasn’t told the US was developing nuclear weapons during his short time as Vice President (he was only told when he was sworn in as President)

  • even the vast majority of people working on the Manhattan Project didn’t know its true purpose

So I do think this case provides evidence that technological developments can take a lot of people outside of various “inner circles” by surprise, at least in cases of highly secretive developments during wartime.

Case: The Wrights and flight

Ord writes:

The staggering list of eminent scientists who thought heavier-than-air flight to be impossible or else decades away is so well rehearsed as to be cliché.

I haven’t looked into that claim, and Ord gives no examples or sources. Yudkowsky references this list of “famous people and scientists proclaiming that heavier-than-air flight was impossible”. That list also gives no sources. As a spot check, I googled the first and last of the quotes on that list. The first appears to be substantiated. For the last, the first page of results seemed to consist entirely of other pages using the quote in “inspirational” ways without giving a source. Ultimately, I wouldn’t be surprised if there’s indeed a staggering list of such proclamations, but I also wouldn’t be surprised if a large portion of them are apocryphal (even if “well rehearsed”).

Ord follows the above sentence with:

But fewer know that even Wilbur Wright himself predicted [heavier-than-air flight] was at least fifty years away—just two years before he invented it.

The same claim is also made by Yudkowsky.

But in a footnote, Ord writes:

Wilbur Wright explained to the Aero-club de France in 1908: ‘Scarcely ten years ago, all hope of flying had almost been abandoned; even the most convinced had become doubtful, and I confess that in 1901 I said to my brother Orville that men would not fly for 50 years. Two years later, we ourselves were making flight.’

Thus, it seems our evidence here is a retrospective account, from the inventor himself, of a statement he once made. One possible explanation of Wright’s 1908 comments is that a genuine, failed attempt at forecasting occurred in 1901. Here are three alternative possible explanations:

  1. Wright just made this story up after the fact, because the story makes his achievement sound all the more remarkable and unexpected.

  2. In 1908, Wright did remember making this statement, but this memory resulted from gradual distortions or embellishments of memory over time.

  3. The story is true, but it’s a story of one moment in which Wright said men would not fly for 50 years, as something like an expression of frustration or hyperbole; it’s not a genuine statement of belief or a genuine attempt at prediction.

Furthermore, even if that was a genuine prediction Wright made at the time, it seems to have been made briefly, once, during many years of working on the problem, and not to have been communicated publicly. Thus, even if it was a genuine prediction, it may have little bearing on the general trustworthiness of publicly made forecasts about technological developments.

Sample size and representativeness

Let’s imagine that all of my above points turn out to be unfounded or unimportant, and that the above cases turn out to all be clear-cut examples of failed technology forecasts by relevant experts. What, then, could we conclude from that?

Most importantly, that’d provide very strong evidence that experts saying a technological development is impossible or far away doesn’t guarantee that that’s the case. And it would provide some evidence that such forecasts may often be mistaken. In places, this is all Ord and Yudkowsky are claiming, and it might be sufficient to support some of their broader conclusions (e.g., that it makes sense to work on AI safety now). And in any case, those broader conclusions can also be supported by other arguments.

But it’s worth considering that these are just four cases, out of the entire history of predictions made about technological developments. That’s a very small sample.

That said, as noted in this post and this comment, we can often learn a lot about what’s typical of some population (e.g., all expert technology forecasts) using just a small sample from that population. What perhaps matters more is whether the sample is representative of that population. So it’s worth thinking about how one’s sample was drawn from the population. I’d guess that the sampling process for these historical cases wasn’t random, but instead looked more like one of the following scenarios:

  1. Ord and Yudkowsky had particular points to make, and went looking for past forecasts that supported those points.

  2. When they came to make their points, they already happened to know of those forecasts due to prior searches motivated by similar goals, either by themselves or by others in their communities.

    • E.g., I’d guess that Ord was influenced by Yudkowsky’s piece.

  3. They already happened to know of many past technology forecasts, and mentioned the subset that suited their points.

If so, then this was a biased rather than representative sample.[3] We should therefore be very careful in drawing conclusions from it about what is standard, rather than about the plausibility of such failures occurring on occasion.[4]
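
Footnote 4 spells out the Bayesian version of this worry. As a rough illustration (the total number of historical expert technology forecasts, and the two candidate accuracy rates, are purely hypothetical numbers chosen for the sketch), here’s why “a motivated search turned up four failures” barely discriminates between very different hypotheses about how accurate experts are:

```python
from math import comb

def p_at_least_k_failures(n_forecasts: int, failure_rate: float, k: int = 4) -> float:
    """P(at least k failed forecasts exist for a motivated searcher to find),
    treating forecasts as independent with a fixed per-forecast failure rate."""
    p_fewer_than_k = sum(
        comb(n_forecasts, i)
        * failure_rate**i
        * (1 - failure_rate) ** (n_forecasts - i)
        for i in range(k)
    )
    return 1 - p_fewer_than_k

n = 10_000  # hypothetical number of expert technology forecasts in history
for rate in (0.01, 0.99):  # "experts right 99% of the time" vs "right 1% of the time"
    print(f"failure rate {rate:.0%}: "
          f"P(at least 4 failures exist) = {p_at_least_k_failures(n, rate):.6f}")

# Both probabilities are essentially 1, so the likelihood ratio is ~1: learning
# that four failure cases *could be found* provides almost no update between the
# two hypotheses. Four *randomly sampled* forecasts that all turned out to be
# failures would be a very different (and much more informative) observation.
```

The binomial model is of course a caricature of the real search process, but the qualitative point stands: with selection effects this strong, the mere existence of a handful of quotable failures tells us little about the base rate.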

It’s also interesting to note that Ord, Yudkowsky, and Russell all wished to make similar points, and all drew from the same small set of cases. I would guess that this is purely because those authors were influenced by each other (or by some other shared source). But it may also be because those cases are among the cases that most clearly support their points. And, above, I argued that each case provides less clear evidence for their points than one might think. So it seems possible that the repeated reaching for these murky examples is actually weak evidence that it’s hard to find clear-cut examples of egregious technology forecasting failures by relevant experts. (But it’d be better to swap my speculation for an active search for such evidence, which I haven’t taken the time to do.)

Conclusion

Both Ord’s and Yudkowsky’s discussions of technology forecasting are much more nuanced than saying long-range forecasting is impossible or that we should pay no attention at all to experts’ technology forecasts. They also both cover more arguments and evidence than just the handful of cases discussed here. And as I said earlier, I largely agree with their claims, and overall see them as very important.

But both authors do prominently feature this small set of cases, and, in my opinion, imply these cases support their claims more clearly than they do. And that seems worth knowing, even if the same or similar claims could be supported using other evidence. (If you know of other relevant evidence, please mention it in the comments!)

Overall, I find myself mostly just very uncertain about the trustworthiness of experts’ forecasts about technological developments, as well as about how trustworthy such forecasts could be given better conditions (e.g., better incentives, calibration training). And I don’t think we should update much based on the cases described by Ord and Yudkowsky, unless our starting position was “Experts are almost certainly right” (which, to be fair, may indeed be many people’s implicit starting position, and is at times the key notion Ord and Yudkowsky are very valuably countering).

Note that this post is far from a comprehensive discussion of the efficacy, pros, cons, and best practices of long-range or technology-focused forecasting. For something closer to that, see Muehlhauser,[5] who writes, relevantly:

Most arguments I’ve seen about the feasibility of long-range forecasting are purely anecdotal. If arguing that long-range forecasting is feasible, the author lists a few example historical forecasts that look prescient in hindsight. But if arguing that long-range forecasting is difficult or impossible, the author lists a few examples of historical forecasts that failed badly. How can we do better?

I also discuss similar topics, and link to other sources, in my post introducing a database of existential risk estimates.

This is one of a series of posts I plan to write that summarise, comment on, or take inspiration from parts of The Precipice. You can find a list of all such posts here.

This post is related to my work with Convergence Analysis, but the views I expressed in it are my own. My thanks to David Kristoffersson and Justin Shovelain for useful comments on an earlier draft.


  1. ↩︎

    A related topic for which these claims are relevant is the likely timing and discontinuity of AI developments. This post will not directly focus on that topic. Some sources relevant to that topic are listed here.

  2. ↩︎

    This may not reduce the strength of the evidence this case provides for certain claims. One such claim would be that we should put little trust in experts’ forecasts that AGI is definitely a long way off, specifically because such forecasts may themselves annoy other researchers and spur them to develop AGI faster. But Ord and Yudkowsky didn’t seem to be explicitly making claims like that.

  3. ↩︎

    I don’t mean “biased” as a loaded term, and I’m not applying that term to Ord or Yudkowsky, just to their samples of historical cases.

  4. ↩︎

    Basically, I’d guess that the evidence we have is “people who were looking for examples of experts making technology forecasting mistakes were able to find four cases as clear-cut as the cases Yudkowsky gives”. This evidence seems almost as likely conditional on “experts’ technology forecasts are right 99% of the time” as conditional on “experts’ technology forecasts are right 1% of the time” (to take two examples of hypotheses we might hold). Thus, this observation provides little Bayesian evidence about which of those hypotheses is more likely. I wouldn’t say the same if our evidence was instead “all four of the cases we randomly sampled were as clear-cut as the cases Yudkowsky gives”.

  5. ↩︎

    Stuart Armstrong’s recent post Assessing Kurzweil predictions about 2019: the results is also somewhat relevant.