Counterfactual resiliency test for non-causal models

Non-causal models

Non-causal models are quite common in many fields, and can be quite accurate. Here, predictions are made based on (a particular selection of) past trends, and it is assumed that these trends will continue in future. There is no causal explanation offered for the trends under consideration: it’s just assumed they will go on as before. Non-causal models are thus particularly useful when the underlying causality is uncertain or contentious. To illustrate the idea, here are three non-causal models in computer development:

  1. Moore’s laws about the regular doubling of processing speed/hard disk size/other computer-related parameters.

  2. Robin Hanson’s model, in which the development of human brains, hunting, agriculture and the industrial revolution are seen as related stages of acceleration in the underlying rate of economic growth, leading to the conclusion that there will be another growth surge during the next century (likely caused by whole brain emulations or AI).

  3. Ray Kurzweil’s law of time and chaos, leading to his law of accelerating returns. Here the inputs are the accelerating evolution of life on Earth, the accelerating ‘evolution’ of technology, and the accelerating growth in the power of computing across many different substrates. The conclusion is a ‘singularity’, an explosion of growth, at some point over the coming century.
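
To make the purely trend-driven character of such models concrete, here is a minimal sketch (in Python, with made-up data points) of this kind of extrapolation for a Moore-style doubling trend: fit the past, project it forward, and attach no causal story.

```python
import math

# Hypothetical (year, value) data points standing in for some doubling
# quantity, e.g. a transistor count index. These numbers are made up.
past = [(2000, 1.0), (2002, 2.1), (2004, 3.9), (2006, 8.2)]

# Least-squares fit of log2(value) against year: the slope gives the
# number of doublings per year, with no causal content whatsoever.
n = len(past)
xs = [year for year, _ in past]
ys = [math.log2(value) for _, value in past]
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

def predict(year):
    """Project the fitted exponential trend to a given year."""
    return 2 ** (intercept + slope * year)

print(f"doubling time: {1 / slope:.1f} years")
print(f"projected value in 2010: {predict(2010):.1f}")
```

Nothing in the fit says why the doubling should continue; that absence of a causal story is exactly what makes the model non-causal.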

Before anything else, I should thank Moore, Hanson and Kurzweil for having the courage to publish their models and put them out there where they can be critiqued, mocked or praised. This is a brave step, and puts them a cut above most of us.

That said, though I find the first argument quite convincing, I have to say I find the other two dubious. Now, I’m not going to claim they’re misusing the outside view: if you accuse them of shoving together unrelated processes into a single model, they can equally well accuse you of ignoring the commonalities they have highlighted between these processes. Can we do better than that? There has to be a better guide to the truth than just our own private impressions.

Counterfactual resilience

One thing I’d like to do is test the resilience of these models: how robust are they to change? If model M makes prediction P from trends T, and the real outcome will be O, we can test resiliency in two ways. First, we can change the world so that T (and hence P) changes, without changing O; second, we can change the world so that O changes, without changing T (and hence P). If we can do either or both, this is a strong indication that the model doesn’t work.
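
To pin the idea down, here is a minimal sketch of the two tests, under the hypothetical framing that a world is just a pair (T, O) of observed trends and eventual outcome, and a model is any function from trends to a prediction:

```python
# Hypothetical scaffolding: a "world" is a pair (T, O) of observed trends
# and eventual outcome; a "model" is any function from trends to a prediction.

def resiliency_failures(model, world, counterfactual_worlds):
    """Return the counterfactuals that suggest the model is not resilient.

    Test 1: a counterfactual changes T (and hence the prediction P)
            while leaving the outcome O unchanged.
    Test 2: a counterfactual changes O while leaving T (and hence P)
            unchanged.
    """
    T, O = world
    P = model(T)
    failures = []
    for T_cf, O_cf in counterfactual_worlds:
        P_cf = model(T_cf)
        if P_cf != P and O_cf == O:
            failures.append(("prediction moved, outcome did not", (T_cf, O_cf)))
        if P_cf == P and O_cf != O:
            failures.append(("outcome moved, prediction did not", (T_cf, O_cf)))
    return failures
```

In the historical cases below, the ‘worlds’ are of course informal thought experiments rather than data structures; the sketch just makes explicit what has to change and what has to stay fixed for each test.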

This all sounds highly dubious—how can we “change the world” in that way? I’m talking about considering counterfactuals: alternate worlds whose history embodies the best of our knowledge as to how the real world works. To pick an extremely trivial example, imagine someone who maintains that the West’s global domination was inevitable four centuries after Luther’s 95 theses in 1517, no matter what else happened outside Europe. Then we can imagine counterfactually diverting huge asteroids to land in the Channel, or importing hyper-virulent forms of bird flu from Asiatic Russia. According to everything we know about asteroid impacts, epidemiology and economics, this would not have led to a dominant West for many centuries afterwards.

That was an example of keeping T and P, and changing the outcome O. It is legitimate: we have preserved everything that went into the initial model, and made the prediction wrong. We could take the reverse approach: changing T and P while preserving the outcome O. To do so, we could imagine moving Luther (or some Luther-like character) to 1217, without changing the rest of European history much. To move Luther back in time, we could easily imagine that the Catholic church had started selling and abusing indulgences much earlier than it did—corrupt clerics were hardly an impossible idea in the middle ages. It requires a few religious and social changes to have the 95 theses make sense in the thirteenth century, but not all that many. Then we could imagine that Luther-like character being ignored or burnt, and the rest of Western history happening as usual, without Western world dominance happening four centuries after that non-event (which is what M would have predicted). Notice that in both these cases, considering counterfactuals allows us to bring our knowledge or theories about other facts of the world to bear on assessing the model—we are no longer limited to simply debating the assumptions of the model itself.

“Objection!” shouts my original strawman, at both my resiliency tests. “Of course I didn’t specify ‘unless a meteor impacts’; that was implicit and obvious! When you say ‘let’s meet tomorrow’, you don’t generally add ‘unless there’s a nuclear war’! Also, I object to your moving Luther three centuries earlier and saying my model would predict the same thing in 1217. I was referring to Luther nailing up his theses in the context of an educated, literate population, with printing presses and a political system that was willing to stand up to the Catholic church. Also, I don’t believe you when you say there wouldn’t need to be ‘all that many’ religious and social changes for an early Luther to exist. You’d have to change so much that there’s no way you could put history back on the ‘normal’ track afterwards.”

Notice that the conversation has moved on from ‘outside view’ arguments to making implicit assumptions explicit, extending the model, and arguing about our understanding of causality. Thus even if these counterfactual resiliency tests don’t break a model, they’re likely to improve it, our understanding, and the debate.

The resilience of these models

So let’s apply this to Robin Hanson’s and Ray Kurzweil’s models. I’ll start with Robin’s, as it’s much more detailed. The key inputs of Robin’s model are the time differences between the different revolutions (brains, hunting, agriculture, industry), and the growth rates after these revolutions. The prediction is that there is another revolution coming about three centuries after the industrial revolution, and that after this the economy will double every 1-2 weeks. He then makes the point that the only plausible way for this to happen is through the creation of brain emulations or AIs—copyable human capital. I’ll also grant the implicit "no disaster" assumption: no meteor strikes, no world governments bent on banning AI research. How does this model fare under counterfactuals?
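
To show the mechanics of an extrapolation of this shape, here is an illustrative sketch in which each growth mode has a doubling time and the ratio between successive doubling times is assumed to carry over to the next mode (the numbers are placeholders of my own, not Robin’s published figures; the same trick applied to the gaps between revolutions would date the next transition):

```python
# Illustrative doubling times for the world economy in each growth mode,
# in years. These particular numbers are placeholders, not Hanson's figures.
doubling_times = {
    "hunting":     220_000,
    "agriculture":     900,
    "industry":         15,
}

modes = list(doubling_times.values())
# Ratios between successive doubling times: how much faster each mode is.
ratios = [earlier / later for earlier, later in zip(modes, modes[1:])]

# If the next mode speeds things up by a similar factor, its doubling time
# falls somewhere in this range.
low, high = min(ratios), max(ratios)
print(f"next-mode doubling time: roughly "
      f"{modes[-1] / high * 365:.0f} to {modes[-1] / low * 365:.0f} days")
```

The counterfactual tests below attack exactly these inputs: the gaps between the revolutions and the growth rates within each mode.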

It seems rather easy to mess with the inputs T. Weather conditions or continental drift could confine pre-agricultural humans to hunting essentially indefinitely, followed by a slow evolution to agriculture when the climate improved or more land became available. Conversely, we could imagine incredibly nutritious crops that were easy to cultivate, and hundreds of domesticable species, rather than the 30-40 we actually had. Combine this with a mass die-off of game and some strong evolutionary pressure, and we could end up with agriculture starting much more rapidly.

This sounds unfair—am I not positing huge transformations to the human world and the natural world here? Indeed I am, but Robin’s model is that these differential growth rates have predictive ability, not that these differential growth rates combined with a detailed historical analysis of many contingent factors have predictive ability. If the model were to claim that the vagaries of plate tectonics and the number of easily domesticated species in early human development have relevance to how long after the industrial revolution brain emulations will be developed, then something has gone wrong with it.

Continuing in this vein, we can certainly move the industrial revolution back in time. The ancient Greek world, with its steam engines, philosophers and mathematicians, seems an ideal location for a counterfactual. Any philosophical, social or initial technological development that we could label as essential to industrialisation could at least plausibly have arisen in a Greek city or colony—possibly over a longer period of time.

We can also tweak the speed of economic growth. The yield from hunting could be changed through the availability or absence of convenient prey animals. During the agricultural era, we could posit high-yield crops and an enlightened despot who put in place some understandable-to-ancient-people elements of the green revolution—or conversely, poor-yield crops suffering from frequent blight. Easy or difficult access to coal would affect growth during the industrial era, or we could jump ahead by having the internal combustion engine, not the steam engine, as the initial prime driver of industrialisation. The computer era could be brought forward by having Babbage complete his machines for the British government, or pushed back by removing Turing from the equation and assuming the Second World War didn’t happen.

You may disagree with some of these ideas, but it seems to me that there are just too many contingent factors that can mess up the inputs to the model, leading some putative parallel-universe Robin Hanson to give completely different timelines for brain emulations. This suggests the model is not very resilient.

Or we can look at the reverse: making whole brain emulations much easier, or much harder, than they are now, without touching the inputs to the model at all (and hence its predictions). For instance, if humans were descended from a hibernating species, it’s perfectly conceivable that we could have brains that would be easy to fix and slice up for building emulations. Other changes to our brain design could also make this easier. It might be that our brains had a different architecture, one where it was much simpler to isolate a small "consciousness module" or "decision making module". Under these assumptions, we could conceivably have had adequate emulations back in the 60s or 70s! Again, these assumptions are false—life didn’t happen like that, and it may be impossible for life to happen like that—but knowing that these assumptions are false requires knowledge that is neither explicitly nor implicitly in the model. And of course there are converses: brain architectures too gnarly and delicate to fix and slice. Early or late neuroscience breakthroughs (and greater or lesser technological or medical returns on these breakthroughs). Greater or lesser popular interest in brain architecture.

For these reasons, it seems to me that Robin Hanson’s model fails the counterfactual resiliency test. Ray Kurzweil’s model suffers similarly—since Kurzweil’s model includes the whole of evolutionary history (including disasters), we can play around with climate, asteroid collisions and tectonics to make evolution happen at very different rates (one easy change is to kill off all humans in the Toba catastrophe). Shifting around the dates of technological breakthroughs and of the first computers still messes up the model, and backdating important insights allows us to imagine much earlier AIs.

And then there’s Moore’s law, starting with Moore’s 1965 paper… The difference is immediately obvious, as we start trying to apply the same tricks to Moore’s law. Where even to start? Maybe certain transistor designs are not available? Maybe silicon is hard to get hold of rather than being ubiquitous? Maybe Intel went bust at an early stage? Maybe no-one discovered photolithography? Maybe some specific use of computers wasn’t thought of, so demand was reduced? Maybe some special new chip design was imagined ahead of time?

None of these seem to clearly lead to situations where Moore’s law would fail. We don’t really know what causes Moore’s law, but it has remained robust through moves to very different technologies, and has spanned cultural transformations and changes in the purpose and uses of computers. It seems to lie at the interaction between market demand, technological development, and implementation. Some trivial change could conceivably throw it off its rails—but we just don’t know what, which means we can’t bring our knowledge about other facts in the world to bear.

In conclusion: more work needed

It was the comparative ease with which we could change the components of the other two models that revealed their lack of resilience; it is the difficulty of doing so with Moore’s law that shows it is resilient.

I’ve never seen this approach used before; most resilience tests only involve changing numerical parameters inside the model. Certainly the approach needs to be improved: it feels very informal and subjective for the moment. Nevertheless, I feel that it has afforded me some genuine insights, and I’m hoping to improve and formalise it in future—with any feedback I get here, of course.