The database does not include the ages of the predictors, unfortunately, but the results seem to contradict the Maes-Garreau law. Estimating that most predictors were likely not yet in their fifties or sixties, it seems that the majority predicted AI would likely happen some time before their expected demise.
Actually, this was a miscommunication—the database does include them, but they were in a file Stuart wasn’t looking at. Here’s the analysis.
Of the predictions that could be construed to be giving timelines for the creation of human-level AI, 65 predictions either had ages on record, or were late enough that the predictor would obviously be dead by then. I assumed (via gwern's suggestion) everyone's life expectancy to be 80 and then simply checked whether the predicted date would be before their expected date of death. This was true for 31 of the predictions and false for 34 of them.
Those 65 predictions included several cases where somebody had made multiple predictions over their lifetime, so I also made a comparison where I only picked the earliest prediction of everyone who had made one. This brought the number of predictions down to 46, out of which 19 had AI showing up during the predictor's lifetime and 27 did not.
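For concreteness, the lifetime check described above can be sketched in Python; the field layout and the two example rows are made-up placeholders, not the database's actual format:

```python
# Hypothetical (birth_year, year_of_prediction, predicted_ai_year) rows;
# the real database's fields differ.
predictions = [
    (1930, 1965, 1990),
    (1950, 2000, 2060),
]

LIFE_EXPECTANCY = 80  # the flat assumption suggested by gwern

def within_lifetime(birth_year, predicted_ai_year):
    """True if the predicted date falls before the expected date of death."""
    return predicted_ai_year < birth_year + LIFE_EXPECTANCY

results = [within_lifetime(b, ai) for b, made, ai in predictions]
# For the placeholder rows above: [True, False]
```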
Can you tell how many were close to the limit? That's what the "law" is mainly about.
Okay, so here I took the predicted date for AI, and from that I subtracted the person's expected year of death. So if they predict that AI will be created 20 years before their death, this comes out as −20, and if they say it will be created 20 years after their death, as 20.
This had the minor issue that I was assuming everyone's life expectancy to be 80, but some people lived to make predictions after that age. That wasn't an issue when just calculating true/false values for "will this event happen during one's lifetime", but here it was. So I redefined life expectancy to be 80 years if the person is at most 80 years old, or X years if the person is X > 80 years old. That's somewhat ugly, but aside from actually looking up actuarial statistics for each age and year separately, I don't know of a better solution.
These are the values of that calculation. I used only the data with multiple predictions by the same people eliminated, as doing otherwise would give undue emphasis to a very small number of individuals, and the dataset is small enough as it is:
−41, −41, −39, −28, −26, −24, −20, −18, −12, −10, −10, −9, −8, −8, −7, −5, 0, 0, 2, 3, 3, 8, 9, 11, 16, 19, 20, 30, 34, 51, 51, 52, 59, 75, 82, 96, 184.
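Under those assumptions, the adjusted calculation amounts to something like this (the example dates are hypothetical, chosen just to illustrate the sign convention):

```python
def expected_death_year(birth_year, prediction_year):
    # Life expectancy is 80, except when the predictor was already past 80
    # at the time of the prediction; then it equals their age at that time.
    age_at_prediction = prediction_year - birth_year
    return birth_year + max(80, age_at_prediction)

def prediction_minus_lifetime(birth_year, prediction_year, predicted_ai_year):
    # Negative: AI predicted within the lifetime; positive: after expected death.
    return predicted_ai_year - expected_death_year(birth_year, prediction_year)

# Someone born in 1920, predicting in 1960 that AI arrives in 1980:
# expected death year 2000, so the value is -20.
print(prediction_minus_lifetime(1920, 1960, 1980))  # -20
```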
Eyeballing that, it looks pretty evenly distributed to me. Also, here's a scatterplot of age of predictor vs. time to AI: http://kajsotala.fi/Random/ScatterAgeToAI.jpg
And here’s age of predictor vs. the (prediction-lifetime) figure, showing that younger people are more likely to predict AI within their lifetimes, which makes sense: http://kajsotala.fi/Random/ScatterAgeToPredictionLifetime.jpg
Updated the main post with your new information, thanks!
You’re quite welcome. :-)
I’ll give it a look.
Is that more or less than what we’d expect if there was no relationship between age and predictions? If you randomly paired predictions with ages (sampling separately from the two distributions), what proportion would be within the “lifetime”?
Looks roughly like it—see this comment.
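The random-pairing check proposed above could be sketched along these lines (the function and its interface are illustrative, not taken from the actual analysis; it samples independently from the two distributions, as described):

```python
import random

def shuffled_lifetime_fraction(death_years, predicted_years,
                               n_trials=10_000, seed=0):
    """Estimate what fraction of random (expected death year, predicted AI
    year) pairings would put the prediction within the predictor's lifetime,
    sampling the two distributions separately."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        death = rng.choice(death_years)
        predicted = rng.choice(predicted_years)
        if predicted < death:  # same "before expected death" criterion as above
            hits += 1
    return hits / n_trials
```

Comparing that baseline fraction against the observed 19/46 would show whether the observed rate differs from what chance pairing alone produces.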