I think it’s also worth keeping in mind that the overall state of the field of “people who make and publish reports forecasting the future of emerging technologies” (which was my field for over a decade) is usually really, really bad (this includes the kinds of reports executives and investors will pay $5k a pop for to help them make big decisions). When I read AI 2027 and the accompanying documents, it was very easily within the top 1% of such reports I’ve seen in quality, thoughtfulness, thoroughness, and reasonableness-of-assumptions-made.
I’d also add that the value of such reports is usually not in believing the conclusion; people who use them that way tend to have a bad time. The value is in being able to explore the space of possible conclusions, by having a toy model good enough to play with at all, and in being able to investigate the knowns, the unknowns, the assumptions different parties think are plausible, and the like. And in that regard, tying the model more closely to available data can sometimes be actively counterproductive compared to making a good set of clearly-marked assumptions.
Fundamentally, that kind of approach really shouldn’t be too unfamiliar around here. For example:
“Where are you getting all these numbers, Potter?”
“That is the admitted weakness of the method,” Harry said readily. (HPMOR Ch 86)