I get a bit sad reading this post. I do agree that a lot of economists sort of “miss the point” when it comes to AI, but I don’t think they are more “incorrect” than, say, the “AI as Normal Technology” folks. I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so. This is the mainstream position in economics, but isn’t it also the mainstream position basically everywhere else in academia? I don’t think it’s “learning econ” that makes people “dumber”, although I do think economists have a (generally healthy) strong skepticism towards grandiose claims (which makes them more correct on average).
Another reason I’m sad is that there is a growing group of economists who do take “transformative” AI seriously, and the TAI field has been growing and producing what I think is pretty cool work. For example, there’s an economics of transformative AI class at Stanford this summer, designed mostly for grad students, and BlueDot has also run an economics of transformative AI course.
Overall I think this post is unnecessarily uncharitable.
I might have overdone it on the sass, sorry. This is much sassier than my default (“scrupulously nuanced and unobjectionable and boring”)…
…partly because I’m usually writing for lesswrong and cross-posting on X/Twitter, whereas this one was vice versa, and X is a medium that seems to call for more sass;
…partly as an amateurish, ham-fisted attempt at clickbait (note also: the listicle format!) because this is a message that I really want to put out there;
…and yes, partly because I do sometimes feel really frustrated talking to economists (#NotAllEconomists), and I think they can and should do better, and the sass is reflecting a real feeling that I feel.
But I think next time I would dial it back slightly, e.g. by replacing “DUMBER” with “WORSE” in the first sentence. I’m open to feedback; I don’t know what I’m doing. ¯\_(ツ)_/¯
Yeah, I agree that lots of CS professors are deeply mistaken about the consequences of AGI, and ditto with the neuroscientists, and ditto with many other fields, including even many of the people trying to build AGI right now. I don’t think that economists are more blameworthy than other groups, it just so happens that this one particular post is aimed at them.
> I think the crux more or less comes down to skepticism about the plausibility of superintelligence in the next decade or so.
I think you’re being overly generous. “Decade or so” is not the crux. In climate change, people routinely talk about bad things that might happen in 2050, and even in 2100, or further out! People also routinely talk 30 years out or more in the context of science, government, infrastructure, institution-building, life-planning, etc. People talk about their grandkids and great-grandkids growing up, etc.
If someone expected superintelligence in the next 50 years but not the next 20—like if they really expected that, viscerally, with a full understanding of its implications—then that belief would be a massive, central influence on their life and worldview. That’s not what’s going on in the heads of the many (most?) people in academia who don’t take superintelligence seriously. Right?
Ok yeah that’s fair.
Proud economist here but: I really second the OP!
What I find sad, instead, is how reliably the economists around me (overall smart and interested people) are less able to grasp the potential consequences of A(G)I than, I think, a more or less random person would be. We really are brainwashed into thinking that capital just creates more productive employment possibilities for labor; it really is a thing. Even sadder, imho, is that the most rubbish arguments in that direction are made by many of the most famous people in our profession, and get traction, much as the OP points out.
I think the post doesn’t quite nail the explanation, as I might try to elaborate below or elsewhere, but it really is onto something.
Tone is of course up for debate, and you’re of course right to point out that there are many exceptions, and indeed increasing numbers of them. That we will have been surprisingly slow will remain undeniable, though :).