Two illustrative examples given (in a footnote) are:
- Daron Acemoglu, The Simple Macroeconomics of AI (2024)
- Philip Trammell, Capital in the 22nd Century
I didn’t want the focus of attention to be on dissecting individual pieces; it is relatively easy, and applying the frame to a piece of econ writing is something AIs are perfectly capable of. For the case studies:
- Opus 4.6 analysing Capital in the 22nd Century; Opus’s analysis is basically correct and I completely endorse points 1, 2, 3, 4, 6, and 9. Most of this was also independently covered by Zvi.
- Opus 4.6 analysing The Simple Macroeconomics of AI
(The problem in both cases is that the central assumptions are implicit and unlikely on the default trajectory; in my view at least at the 10^-3 level in the case of Acemoglu and 10^-2 in the case of Trammell.)
The problem is prevalent in almost all academic econ writing; it is easier to point to the people who are not making these mistakes, with the central example being Anton Korinek.
thanks, i read the Phil Trammell critique and it helped me understand your position.
My summary is that the points 1,2,3,4,6,9 were basically saying “AI might be misaligned and agentic and that could be bad for humans” and “maybe the institutions of law / democracy / markets will break down”.
I get that if you think these things are pretty likely, the analysis is less interesting and you want the assumptions flagged.
So overall i agree Phil should have a disclaimer like “i’m assuming we’ll get a great solution to alignment and that the current institutions of law + markets survive”, but i don’t think he needs to list out like 10 assumptions
I don’t agree that “AI might be misaligned and agentic and that could be bad for humans” and “maybe the institutions of law / democracy / markets will break down” is a sufficient summary. When a large part of the debate is about distribution, it becomes important to discuss aligned to what and to whom.
E.g. in Phil’s accounting, any concentration of capital + intelligence which humans originally set in motion counts as human-aligned. Consider the option of an “Ultimate Foundation for Scientific Progress”: a wealthy philanthropist sets up an entity governed by AIs, aligned to the mission “make scientific progress”, which keeps the resulting intellectual property and reinvests it. Because humans would be less reliable guardians, they are out of the loop. Maybe he changes his mind afterwards, but by design he can no longer intervene. From the perspective of humans, the income of the entity is not distributed to humans, nor influenced by them. At the same time it is not “misaligned AI” in the most commonly used sense.
I’d guess what’s going on is that if you use “aligned” in some extremely broad sense of “AIs taken in total are aligned to the good of collective humanity”, then yes, many problems go away (but then likely so do the problems in the essay). If “aligned” means something narrower, you can have humans owning ~0% of capital without any local alignment constraint being violated (btw, on this topic I highly recommend Beren’s post on the difficulty of indexing the AI economy).
Can you elaborate on what you think Anton Korinek does differently?