Assessing Kurzweil: the results

Predictions of the future rely, to a much greater extent than in most fields, on the personal judgement of the expert making them. Just one problem—personal expert judgement generally sucks, especially when the experts don’t receive immediate feedback on their hits and misses. Formal models perform better than experts, but when talking about unprecedented future events such as nanotechnology or AI, the choice of the model is also dependent on expert judgement.

Ray Kurzweil has a model of technological intelligence development where, broadly speaking, evolution, pre-computer technological development, post-computer technological development and future AIs all fit into the same exponential increase. When assessing the validity of that model, we could look at Kurzweil’s credentials, and maybe compare them with those of his critics—but Kurzweil has given us something even better than credentials, and that’s a track record. In various books, he’s made predictions about what would happen in 2009, and we’re now in a position to judge their accuracy. I haven’t been satisfied by the various accuracy ratings I’ve found online, so I decided to do my own assessments.

I first selected ten of Kurzweil’s predictions at random and gave my own estimation of their accuracy. I found that five were to some extent true, four were to some extent false, and one was unclassifiable.

But of course, a single assessor is unreliable, especially when some of the judgements are subjective. So I put out a call for volunteer assessors. Meanwhile, Malo Bourgon set up a separate assessment on Youtopia, harnessing the awesome power of altruists chasing after points.

The results are now in, and they are fascinating. They are...

Oops, you thought you’d get the results right away? No, before that, as on Oscar night, I first want to thank assessors William Naaktgeboren, Eric Herboso, Michael Dickens, Ben Sterrett, Mao Shan, quinox, Olivia Schaefer, David Sønstebø, and one who wishes to remain anonymous. I also want to thank Malo, as well as Ethan Dickinson and all the other volunteers from Youtopia (if you’re one of them and want to be thanked by name, let me know and I’ll add you).

It was difficult deciding on the MVP. No, actually, it wasn’t: that title and many thanks go to Olivia Schaefer, who decided to assess every single one of Kurzweil’s predictions, because that’s just the sort of gal she is.

The exact details of the methodology, and the raw data, can be accessed here. But in summary, volunteers were asked to assess the 172 predictions (from “The Age of Spiritual Machines”) on a five-point scale: 1=True, 2=Weakly True, 3=Cannot decide, 4=Weakly False, 5=False. If we total up all the assessments made by my direct volunteers, we have:

As can be seen, most assessments were rather emphatic: fully 59% were either clearly true or clearly false. Overall, 46% of the assessments were false or weakly false, and 42% were true or weakly true.
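For concreteness, here is a minimal Python sketch of how such pooled percentages could be tallied. The `assessments` list and its layout are hypothetical stand-ins for the raw data linked above, not the actual dataset.

```python
from collections import Counter

# Hypothetical raw data: one (assessor, prediction_id, score) triple per
# assessment, using the five-point scale above
# (1=True, 2=Weakly True, 3=Cannot decide, 4=Weakly False, 5=False).
assessments = [
    ("Olivia", 1, 1),
    ("Olivia", 2, 5),
    ("Eric", 1, 2),
    # ... one row per assessment ...
]

scores = [score for _, _, score in assessments]
counts = Counter(scores)
total = len(scores)

emphatic = (counts[1] + counts[5]) / total   # clearly true or clearly false
true_ish = (counts[1] + counts[2]) / total   # true or weakly true
false_ish = (counts[4] + counts[5]) / total  # false or weakly false
undecided = counts[3] / total                # cannot decide

print(f"emphatic: {emphatic:.0%}, true: {true_ish:.0%}, "
      f"false: {false_ish:.0%}, undecided: {undecided:.0%}")
```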

But what happens if, instead of averaging across all assessments (which lets assessors who have worked on a lot of predictions dominate), we instead average across the nine assessors? Reassuringly, this makes very little difference:
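In code, the per-assessor version of the tally might look roughly like this. Again, this is only a sketch, reusing the hypothetical `assessments` list from the previous snippet: group scores by assessor, compute each assessor’s own distribution, then average those distributions with equal weight.

```python
from collections import defaultdict

# Group scores by assessor, so each volunteer contributes one distribution.
by_assessor = defaultdict(list)
for assessor, _, score in assessments:
    by_assessor[assessor].append(score)

def distribution(scores):
    """Fraction of a score list falling in each of the five categories."""
    return [scores.count(cat) / len(scores) for cat in (1, 2, 3, 4, 5)]

# Average the per-assessor distributions with equal weight, so prolific
# assessors no longer dominate the result.
per_assessor = [distribution(scores) for scores in by_assessor.values()]
averaged = [sum(dist[i] for dist in per_assessor) / len(per_assessor)
            for i in range(5)]
```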

What about the Youtopia volunteers? Well, they have a decidedly different picture of Kurzweil’s accuracy:

This gives a combined true score of 30% and a combined false score of 57%! If my own personal assessment was the most positive towards Kurzweil’s predictions, then Youtopia’s was the most negative.

Putting this all together, Kurzweil certainly can’t claim an accuracy above 50%, a far cry from his own self-assessment of either 102 out of 108 or 127 out of 147 correct (with the caveat that “even the predictions that were considered ‘wrong’ in this report were not all wrong”). And consistently, slightly more than 10% of his predictions are judged “impossible to decide”.

As I’ve said before, these were not binary yes/no predictions but specific claims about the future, which are unlikely to come true by luck; even a true rate of 30% is much higher than chance. So Kurzweil remains an acceptable prognosticator, but one with very poor self-assessment.