The Outside View isn’t magic

Crossposted at Less Wrong 2.0.

The planning fallacy is an almost perfect example of the strength of using the outside view. When asked to predict the time taken for a project that they are involved in, people tend to underestimate the time needed (in fact, they tend to predict as if the question were how long things would take if everything went perfectly).

Simply telling people about the planning fallacy doesn’t seem to make it go away. So the outside view argument is that you need to put your project into the “reference class” of other projects, and expect time overruns as compared to your usual “inside view” estimates (which focus on the details you know about the project).

So, for the outside view, what is the best way of estimating the time of a project? Well, to find the right reference class for it: the right category of projects to compare it with. You can compare the project with others that have similar features (number of people, budget, objective desired, incentive structure, inside view estimate of time taken, and so on) and then derive a time estimate for the project that way.

That’s the outside view. But to me, it looks a lot like… induction. In fact, it looks a lot like the elements of a linear (or non-linear) regression. We can put those features (at least the quantifiable ones) into a linear regression with a lot of data about projects, shake it all about, and come up with regression coefficients.
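To make this concrete, here is a minimal sketch of what such a regression could look like. All the feature names, project data, and numbers below are invented for illustration; they are not taken from any real dataset.

```python
# Minimal sketch of the "outside view as regression" idea.
# All feature values and project data are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Each row is a past project:
# [team size, budget in $k, inside view estimate in weeks]
X = np.array([
    [3, 50, 10],
    [5, 120, 16],
    [8, 300, 30],
    [2, 20, 6],
])
# How long each project actually took, in weeks
y = np.array([14, 25, 52, 8])

model = LinearRegression().fit(X, y)

# The "outside view" estimate for a new project, given its features,
# including our own (usually optimistic) inside view estimate.
new_project = np.array([[4, 80, 12]])
print(model.predict(new_project))
```

The point is not this particular model, but that the “outside view” prediction is just what you get by fitting past project outcomes to observable features.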

At that point, we are left with a decent project timeline prediction model, and another example of human bias. The fact that humans often perform badly in prediction tasks is not exactly new; see, for instance, my short review of the academic research on expertise.

So what exactly is the outside view doing in all this?

The role of the outside view: incomplete models and human biases

The main use of the outside view, for humans, seems to be to point out either an incompleteness in the model or a human bias. The planning fallacy has both of these: if you did a linear regression comparing your project with all projects with similar features, you’d notice your inside estimate was more optimistic than the regression—your inside model is incomplete. And if you also compared each person’s initial estimate with the ultimate duration of their project, you’d notice a systematically optimistic bias—you’d notice the planning fallacy.

The first type of error tends to go away with time, if the situation is encountered regularly, as people refine models, add variables, and test them on the data. But the second type remains, as human biases are rarely cleared by mere data.
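As a concrete sketch of the second check, with invented numbers: compare people’s initial estimates to how long their projects actually took.

```python
# Sketch of the bias diagnostic: systematic optimism in initial estimates.
# Numbers are invented for illustration.
import numpy as np

inside_estimates = np.array([10, 16, 30, 6])   # initial predictions (weeks)
actual_durations = np.array([14, 25, 52, 8])   # eventual durations (weeks)

# If this ratio is reliably above 1 across many projects, initial
# estimates are systematically optimistic: the planning fallacy.
ratio = np.mean(actual_durations / inside_estimates)
print(f"mean actual/estimate ratio: {ratio:.2f}")
```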

Reference class tennis

If use of the outside view is disputed, it often develops into a case of reference class tennis, where people on opposing sides insist or deny that a certain example belongs in the reference class (similarly to how, in politics, anything positive is claimed for your side and anything negative assigned to the other side).

But once the phenomenon you’re addressing has an explanatory model, there is no more room for reference class tennis. Consider for instance Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure”. A law that should be remembered by any minister of education wanting to reward schools according to improvements in their test scores.

This is a typical use of the outside view: if you’d just thought about the system in terms of inside facts—tests are correlated with child performance; schools can improve child performance; we can mandate that test results go up—then you’d have missed several crucial facts.

But notice that nothing mysterious is going on. We understand exactly what’s happening here: schools have ways of upping test scores without upping child performance, and so they decided to do that, weakening the correlation between score and performance. Similar things happen in the failures of command economies; but again, once our model is broad enough to encompass enough factors, we get decent explanations, and there’s no need for further outside views.

In fact, we know enough that we can show when Goodhart’s law fails: when no-one with incentives to game the measure has control of the measure. This is one of the reasons central bank interest rate setting has been so successful. If you order a thousand factories to produce shoes, and reward the managers of each factory for the number of shoes produced, you’re heading for disaster. But consider GDP. Say the central bank wants to increase GDP by a certain amount, by fiddling with interest rates. Now, as a shoe factory manager, I might have preferences about the direction of interest rates, and my sales are a contributor to GDP. But they are a tiny contributor. It is not in my interest to manipulate my sales figures in the vague hope that, aggregated across the economy, this will falsify GDP and change the central bank’s policy. The reward is too diluted, and manipulation would require coordination with many other agents (and coordination is hard).
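A back-of-envelope sketch of that dilution, with every number invented purely for illustration:

```python
# Back-of-envelope version of the dilution argument.
# Every number here is invented purely for illustration.
factory_sales = 10e6   # one factory's annual sales: $10 million
gdp = 20e12            # total output of the economy: $20 trillion

share = factory_sales / gdp
# Even a brazen 10% fudge of the sales figures moves the aggregate
# measure by a vanishingly small amount:
print(f"factory's share of GDP: {share:.0e}")                 # 5e-07
print(f"GDP distortion from a 10% fudge: {0.1 * share:.0e}")  # 5e-08
```

No plausible change in central bank policy is worth purchasing at that exchange rate, which is why the measure stays honest.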

Thus if you’re engaging in reference class tennis, remember the objective is to find a model with enough variables, and enough data, so that there is no more room for the outside view—a fully understood Goodhart’s law rather than just a law.

In the absence of a successful model

Sometimes you can have a strong trend without a compelling model. Take Moore’s law, for instance. It is extremely strong, going back decades, and surviving multiple changes in chip technology. But it has no clear cause.

A few explanations have been proposed. Maybe it’s a consequence of its own success, with chip companies using it to set their goals. Maybe there’s some natural exponential rate of improvement in any low-friction feature of a market economy. Exponential-type growth in the short term is no surprise (it just means growth is proportional to investment), so maybe it was an amalgamation of various short-term trends.
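To spell out that middle step (a sketch, assuming improvement is proportional to investment and investment is a fixed fraction of current output):

$$\frac{dX}{dt} = kX \quad\Longrightarrow\quad X(t) = X_0 e^{kt},$$

exponential growth for as long as those proportionalities hold. The surprising part of Moore’s law is not the exponential shape but the constancy of the rate over decades.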

Do those explanations sound unlikely? Possibly, but there is a huge trend in computer chips, going back decades, that needs to be explained. The explanations are unlikely, but their unlikeliness has to be weighed against the unlikeliness of the situation itself. The most plausible explanation is a combination of the above, plus maybe some factors we haven’t thought of yet.

But here’s an explanation that is implausible: little time-travelling angels modify the chips so that they follow Moore’s law. It’s a silly example, but it shows that not all explanations are created equal, even for phenomena that are not fully understood. In fact there are four broad categories of explanations for putative phenomena that don’t have a compelling model:

  1. Unlikely but somewhat plausible explanations.

  2. We don’t have an explanation yet, but we think it’s likely that there is an explanation.

  3. The phenomenon is a coincidence.

  4. Any explanation would go against stuff that we do know, and would be less likely than coincidence.

The explanations I’ve presented for Moore’s law fall into category 1. Even if we hadn’t thought of those explanations, Moore’s law would fall into category 2, because of the depth of evidence for it, and because a “medium-length regular technology trend within a broad but specific category” is intrinsically likely to have an explanation.

Compare with Kurzweil’s “law of time and chaos” (a generalisation of his “law of accelerating returns”) and Robin Hanson’s model where the development of human brains, hunting, agriculture and the industrial revolution are all points on a trend leading to uploads. I discussed these in a previous post, but I can now better articulate the problem with them.

Firstly, they rely on very few data points (the more recent part of Kurzweil’s law, the part about recent technological trends, has a lot of data, but the earlier part does not). This raises the probability that they are a mere coincidence (we should also consider selection bias in choosing the data points, which increases the probability of coincidence). Secondly, we have strong reasons to suspect that there won’t be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These phenomena have decent local explanations that we already roughly understand (local in time and space to the phenomena described), and these run counter to any explanation that would tie them together.

Human biases and predictions

There is one area where the outside view can still function for multiple phenomena across different eras: when it comes to pointing out human biases. For example, we know that doctors have been authoritative, educated, informed, and useless for most of human history (or possibly much worse than useless). Hence authoritative, educated, and informed statements or people are not to be considered of any value, unless there is some evidence that the statement or person is truth-tracking. We now have things like expertise research, some primitive betting markets, and track records to try to estimate this; these can provide good “outside views”.

And the authors of the models of the previous section have some valid points where bias is concerned. Kurzweil’s point that (paraphrasing) “things can happen a lot faster than some people think” is valid: we can compare predictions with outcomes. Robin has similar valid points in defense of the possibility of the em scenario.

The reason these explanations are more likely to be valid is that they have a very probable underlying model/explanation: humans are biased.

Conclusions

  • The outside view is a good reminder for anyone who may be using too narrow a model.

  • If the model explains the data well, then there is no need for further outside views.

  • If there is a phenomenon with data but no convincing model, we need to decide whether it’s a coincidence or whether there is an underlying explanation.

  • Some phenomena have features that make it likely that there is an explanation, even if we haven’t found it yet.

  • Some phenomena have features that make it unlikely that there is an explanation, no matter how much we look.

  • Outside view arguments that point at human prediction biases, however, can be generally valid, as they only require the explanation that humans are biased in that particular way.