The only noticeable difference is that amateurs lacked the upswing at 50 years, and were relatively more likely to push their predictions beyond 75 years. This does not look like good news for the experts: if their performance can't be distinguished from that of amateurs, what contribution is their expertise making?
I believe you can put your case even a bit more strongly than this. With this amount of data, the differences you point out are clearly within the range of random fluctuations; the human eye picks them out, but does not see the huge reference class of similarly “different” distributions. I predict with confidence over 95% that a formal statistical analysis would find no difference between the “expert” and “amateur” distributions.
I agree. I didn’t do a formal statistical analysis, simply because with so little data, and given the potential biases, it would only give us a spurious feeling of certainty.
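For what it’s worth, the kind of formal check being discussed here could be as simple as a two-sample permutation test: pool both groups’ predictions, repeatedly shuffle the “expert”/“amateur” labels, and see how often a shuffled split produces a difference at least as large as the observed one. A minimal sketch, using made-up illustrative numbers rather than the actual survey data:

```python
import random

def perm_test(a, b, n_iter=10_000, seed=0):
    """Two-sample permutation test on the difference in means.

    Returns the fraction of random label shufflings whose absolute
    mean difference is at least as large as the observed one --
    an approximate p-value for "these two samples differ".
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            count += 1
    return count / n_iter

# Hypothetical "years until AI" predictions -- illustrative only,
# not the dataset analyzed in the post.
experts = [20, 25, 30, 40, 50, 60, 75, 100]
amateurs = [10, 20, 30, 35, 50, 80, 90, 120]
p = perm_test(experts, amateurs)
```

With samples this small and this similar, the p-value comes out large, which is exactly the point: the test has essentially no power to distinguish the groups, and a non-significant result would say little beyond what eyeballing already suggests.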
Perhaps their contribution is in influencing the non-experts? It is very likely that the non-experts base their estimates on whatever predictions respected experts have made.
Seems pretty unlikely—because you’d then expect the non-experts to have the same predicted dates as the experts, but not the same distribution of time to AI.
Also, the examples I saw were mainly of non-experts saying “AI will happen around here because, well, I say so” (sometimes spiced with Moore’s law).
It seems quite likely to me a priori that “experts” would be driven to make fewer extreme predictions, because they’re more interested in defending their status by adopting a moderate position, and also more able to do so.
Is that really a priori? I.e., did you come up with that idea before seeing this post?
Did I? No.
Would I have? I’m pretty sure.
Then we’ll never know—hindsight bias is the bitchiest of bitches.
For the record, I did expect this prior to reading your analysis of the data. But I also expected the data to be more in line with the Maes-Garreau law.
I’m trying to use the outside view to combat it. It is hard for me to think up examples of experts making more extreme-sounding claims than interested amateurs. The only argument the other way that I can think of is that AI itself is so crazy that seeing it occur in less than 100 years is the extreme position, and the other way around is moderate, but I don’t find that very convincing.
In addition, I don’t see reason to believe I’m different from lukeprog or handoflixue.
Philosophy experts are very fond of saying AI is impossible, neuroscientist experts seem to often proclaim it’ll take centuries… By the time you break it down into categories and consider the different audiences and expert cultures, I think we have too little data to say much.
I would a priori assume that “experts” with quote marks are mainly interested in attention, and extreme predictions here are unlikely to get positive attention (saying AI will happen in 75+ years is boring; saying it will happen tomorrow kills your credibility).
So, for me at least, yes.