It’s true that Acemoglu generally avoids dealing with extreme AI capabilities, and maybe he should be more explicit about what is in and out of scope when he talks about AI. But the criticism I would lay at his feet is that a lot of what he does is explain, to people who’ve been “dumbed down” by learning economics, how economics gets it all wrong, without acknowledging that the intuitive, never-took-econ perspective doesn’t need his corrections. He’s sort of a “man on the inside,” and it’s not clear the effort is worth it.
A better example for your argument would be Korinek and Suh’s “Scenarios for the Transition to AGI,” which (as the title says) takes AGI seriously and arrives at scenarios where, for instance, wages collapse, but then completely ignores how that would violate the unstated assumptions of the economic models themselves, such as wages staying above subsistence level so that society doesn’t break down entirely.
Nassim Nicholas Taleb, who loves taking down economists, is always worth a read here, since his critique generalizes beyond just AI. The generalization is roughly: “human behavior is (correctly) tuned to avoid being wiped out by power asymmetries and the unexpected, not to maximize expected returns under friendly conditions where a nation-state will save you from devastating losses.”