Is this topic (learning Econ creates AI forecasting blindspots) perhaps a narrow view of a larger problem; some sort of Dunning-Kruger “peak of Mt. Stupid,” where econ classes touch on AI forecasting just enough that students gain overconfidence?
I’d predict that any curriculum that touches on AI forecasting, but doesn’t have it as a primary focus, ends up with the same forecasting blindspots/problems. As an anecdote, I’ve seen lots of YouTube programmers use their authority as excellent human coders to overconfidently declare that their jobs are totally secure, and that their deep knowledge of programming practice lets them treat AI-powered coding as a paradigm shift only on the scale of a new IDE or code library.
It’s interesting to see the specific breakdown of how it happens to Econ. If anyone has relevant examples from other fields (Law, maybe?), I’d be curious to see whether they fall prey to the same problems.
(That drawing of the Dunning-Kruger Effect is a popular misconception—there was a post last week on that, see also here.)
I think there’s “if you have a hammer, everything looks like a nail” stuff going on. Economists spend a lot of time thinking about labor automation, so they often treat AGI as if it will be just another form of labor automation. LLM & CS people spend a lot of time thinking about the LLMs of 2025, so they often treat AGI as if it will be just like the LLMs of 2025. Military people spend a lot of time thinking about weapons, so they often treat AGI as if it will be just another weapon. Etc.
So yeah, this post happens to be targeted at economists, but that’s not because economists are uniquely blameworthy, or anything like that.