When economists think and write about the post-AGI world, they often rely on the implicit assumption that parameters may change but that, structurally, not much happens. And if something does change, it’s maybe one or two empirical facts, nothing too fundamental.
This mostly worked for all sorts of other technologies, where technologists predicted society would be radically transformed, e.g. by everyone having most of humanity’s knowledge available for free all the time, or by everyone being able to instantly communicate with almost anyone else. [1]
But it will not work for AGI, and as a result, most of the econ modelling of the post-AGI world is irrelevant or actively misleading [2], making people who rely on it more confused than if they just thought “this is hard to think about so I don’t know”.
Econ reasoning from a high-level perspective
Econ reasoning is trying to do something like projecting the extremely high dimensional reality into something like 10 real numbers and a few differential equations. All the hard cognitive work is in the projection. Solving a bunch of differential equations impresses the general audience, and historically may have worked as some sort of proof of intelligence, but is relatively trivial.
How the projection works is usually specified by some combination of assumptions, models and concepts used, where the concepts themselves usually imply many assumptions and simplifications.
In the best case of economic reasoning, the projections capture something important, and the math leads us to some new insights.[3] In cases which are in my view quite common, non-mathematical, often intuitive reasoning by the economist leads to some interesting insight, and then the formalisation, assumptions and models are selected so that the math leads to the same conclusions. The resulting epistemic situation may be somewhat tricky: the conclusions may be true, the assumptions sensible, but the math is less relevant than it seems. Given the extremely large space of economic models, had the economist had different intuitions, they could have found different math leading to different conclusions.
Unfortunately, there are many other ways the economist can reason. For example, they can be driven to reach some counter-intuitive conclusion, incentivized by academic drive for novelty. Or they may want to use some piece of math they like.[4] Or, they can have intuitive policy opinions, and the model could be selected so it supports some policy direction—this process is usually implicit and subconscious.
The bottom line is that if we are interested in claims and predictions about reality, the main part of an economic paper is its assumptions and the concepts used. The math is usually right. [5]
Econ reasoning applied to post-AGI situations
The basic problem with applying standard economic reasoning to post-AGI situations is that sufficiently advanced AI may violate many assumptions which make perfect sense in a human economy but may not generalize. Often the assumptions are so basic that they are implicit, assumed in most econ papers, and out of sight in the usual “examining the assumptions” step. Advanced AI may also break some of the intuitions about how the world works, undermining the intuitive process upstream of formal arguments.
What complicates the matter is that these assumptions often interact with considerations and disciplines outside the core of economic discourse, and are better understood and examined using frameworks from other disciplines.
To give two examples:
AI consumers
Consumption has so far been driven by human decisions and utility. Standard economic models ultimately ground value in human preferences and utility. Humans consume, humans experience satisfaction, and the whole apparatus of welfare economics and policy evaluation flows from this. Firms are modeled as profit-maximizing, but profit is instrumental: it flows to human owners and workers who then consume.
If AIs own capital and have preferences or goals of their own, this assumption breaks down. If such AIs spend resources, this should likely count as consumption in the economic sense.
Preferences
The usual assumption in most econ thinking is that humans have preferences which are somewhat stable, somewhat self-interested, and that what these preferences are is a question mostly outside of economics. [6] There are whole successful branches of economics studying to what extent human preferences deviate from VNM rationality, how human decision making suffers from cognitive limitations, or how preferences form, but these are not at the center of attention of mainstream macroeconomics. [7] Qualitative predictions in the case of humans are often similar either way, so the topic is not so important.
When analyzing the current world, we find that human preferences come from diverse sources, like biological needs, learned tastes, and culture. A large component seems to be ultimately selected for by cultural evolution.
Post-AGI, the standard econ assumptions may fail, or need to be substantially modified. Why?
One consideration is that the differences in cognitive abilities between AGIs and humans may make human preferences easily changeable by AGIs. As an intuition pump, consider a system composed of a five-year-old child and her parents. The child obviously has some preferences, but the parents can usually change them. Sometimes by coercion or manipulation, but often just by pointing out consequences, extrapolating the child’s wants, or exposing her to novel situations.
Preferences are also relative to a world model; the standard econ way of modelling differences in world models is “information asymmetries”. The kid does not have as good an understanding of the world, and could easily be exploited by adults.
Because children’s preferences are not as stable and self-interested as adults’, and kids suffer from information asymmetries, they are partially protected by law. The result is a patchwork of regulation where, for example, it is legal to try to modify children’s food preferences, but adults are prohibited from trying to change a child’s sexual preferences for their own advantage.
Another “so obvious it is easy to overlook” effect is children’s dependence on their parents’ culture: if the parents are Christians, it is quite likely their five-year-old will believe in God. If the parents are patriots, the kid will also likely have some positive ideas about their country. [8]
When interacting with cognitive systems far more capable than us, we may find ourselves in a situation somewhat similar to that of kids: our preferences may be easily influenced, and not particularly self-interested. The ideologies we adopt may be driven by non-human systems. Our world models may be weak, resulting in massive information asymmetries.
There is even a strand of economic literature that explicitly models parent-child interactions, families, and the formation of preferences. [9] This body of work may provide useful insights I’d be curious about. Is anyone looking there?
The solution may be analogous: some form of paternalism, where human minds are massively protected by law from some types of interference. This may or may not work, but once it is the case, you basically cannot start from classical liberal and libertarian assumptions. As an intuition pump, imagine someone trying to do “the macroeconomics of ten-year-olds and younger” in the current world.
Other core concepts
We could examine other typical econ assumptions and concepts in a similar way, and each would deserve a paper-length treatment. This post tries to stay mostly a bit more meta, so here are just some pointers.
Property rights. Most economic models take property rights as exogenous—“assume well-defined and enforced property rights.” If you look into how most property rights are actually connected to physical reality, they often amount to a row in a database run by the state or a corporation. Enforcement ultimately rests on the state’s monopoly on violence, its monitoring capacity, and its will to act as an independent enforcer. As all sorts of totalitarian, communist, colonial or despotic regimes illustrate, even in purely human systems, private property depends on power. If you assume property is stable, you are assuming things about governance and power.
Transaction costs and firm boundaries. Coase’s theory [10] explains why firms exist: it is sometimes cheaper to coordinate internally via hierarchy than externally via markets. The boundary of the firm sits where transaction costs of market exchange equal the costs of internal coordination. AI may radically reduce both—making market transactions nearly frictionless while also making large-scale coordination easy. The equilibrium size and structure of firms could shift in unpredictable directions, or the concept of a “firm” might become less coherent.
Discrete agents and competition. Market models assume distinct agents that cooperate and compete with each other. Market and competition models usually presuppose you can count the players. AGI systems can potentially be copied, forked, merged, or run as many instances, and what their natural boundaries are is an open problem.
Capital vs. Labour. Basic 101 economic models typically include capital and labour as concepts: factors in a production function, Total Factor Productivity, Cobb-Douglas, etc. Capital is produced, owned, accumulated, traded, and earns returns for its owners. Labour is what humans do, and cannot be owned. This makes a lot of sense in modern economies, where there is a mostly clear distinction between “things” and “people”. It is more ambiguous if you look back in time: in slave economies, do slaves count as labour or capital? It is also a bit more nuanced today, for example with “human capital”.
When analyzing the current world, there are multiple reasons why the “things” and “people” distinction makes sense. “Things” are often tools. These amplify human effort, but are not agents. A tractor makes a farmer more productive, but does not make many decisions. Farmers can learn new tasks; tractors cannot. Another distinction is that the supply of humans is somewhat fixed: you cannot easily and quickly increase or decrease their numbers.
Post-AGI, this separation may stop making sense. AIs may reproduce like capital, be agents like labour, learn fast, and produce innovation like humans. Humans may own them like normal capital, or more like slaves, or AIs may be self-owned.
Better and worse ways to reason about post-AGI situations
There are two epistemically sound ways to deal with problems in generalizing economic assumptions: broaden the view, or narrow the view. There are also many epistemically problematic moves people make.
Broadening the view means we try to incorporate all crucial considerations. If assumptions about private property lead us to think about post-AGI governance, we follow. If thinking about governance leads to the need to think about violence and military technology, we follow. In the best case, we think about everything in terms of probability distributions, and more or less likely effects. This is hard, interdisciplinary, and necessary, if we are interested in forecasts or policy recommendations.
Narrowing the view means focusing on some local domain, trying to make a locally valid model and clearly marking all the assumptions. This is often locally useful, may build intuitions for some dynamic, and fine as long as a lot of effort is spent on delineating where the model may apply and where clearly not.
What may be memetically successful and get a lot of attention, but is bad overall, is doing the second kind of analysis and presenting it as the first type. A crucial consideration is one which can flip the result. If an analysis ignores or assumes away ten of these, the results have basically no practical relevance: imagine that for each crucial consideration, there is a 60% chance the modal view is right and a 40% chance it is not. Assume or imply the modal view is right ten times, and your analysis holds in only 0.6% of worlds.
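The arithmetic above is worth making explicit. A minimal sketch (the 60/40 figures are the illustrative numbers from the paragraph, not empirical estimates, and `survival_probability` is a hypothetical helper):

```python
# Probability that an analysis survives when each of n crucial
# considerations independently resolves the assumed way with probability p.
# Independence is itself an assumption; correlated considerations would
# change the number, but not the qualitative point.
def survival_probability(p: float, n: int) -> float:
    return p ** n

# Ten crucial considerations, each 60% likely to go the modal way:
print(f"{survival_probability(0.6, 10):.4f}")  # prints 0.0060, i.e. ~0.6% of worlds
```

The point of the sketch is only that confidence decays multiplicatively: each additional assumed-away consideration shrinks the set of worlds where the analysis applies.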
In practice, this is usually not done explicitly (almost no one claims their analysis considers all important factors) but as a form of motte-and-bailey fallacy. The motte is the math in the paper, which follows from the assumptions, of which there are many. The bailey is the broad-stroke arguments, blog-post summaries, tweets and shorthand references, which spread much further, without the hedging.
In the worst cases, the various assumptions made are contradictory or at least anticorrelated. For example: some economists assume comparative advantage generally preserves the relevance of human labour, while AIs are just a form of capital which can be bought and replicated. However, comparative advantage depends on opportunity costs: if you do X, you cannot do Y at the same time. The implicit assumption is that you cannot just boot up a copy of yourself. If you can, the “opportunity cost” is not something like the cost of your labour, but the cost of booting up another copy. If you assume future AGIs are as efficient a substitute for human labour as current AIs are for moderately boring copywriting, the basic “comparative advantage” model is consistent with the price of labour dropping 10000x below minimum wage. While the comparative advantage model is still literally true, it does not have the same practical implications. Also, while in the human case the comparative advantage model is usually not destroyed by frictions, if your labour is of sufficiently low value, the effective price of human labour can be 0. For a human example: five-year-olds, or people with severe mental disabilities who are unable to read, are not actually employable in the modern economy. In the post-AGI economy, it is easy to predict frictions like humans being unable to operate at machine speeds or to understand directly communicated neural representations.
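The copyable-labour point can be sketched numerically. This is a toy illustration under strong assumptions (marginal-cost pricing, perfect substitution); all the numbers are invented, and `max_human_wage` is a hypothetical helper, not a model from the literature:

```python
# When an AI copy can be booted at will, competition drives the price of
# output down toward the AI's marginal cost, which caps the human wage:
# the relevant opportunity cost is booting another copy, not scarce time.
def max_human_wage(ai_output_per_hour: float,
                   ai_cost_per_hour: float,
                   human_output_per_hour: float) -> float:
    price_per_unit = ai_cost_per_hour / ai_output_per_hour  # marginal-cost pricing
    return human_output_per_hour * price_per_unit

# Assumed numbers: an AI produces 1000 units/h at $2/h of compute; a human, 10 units/h.
print(max_human_wage(1000, 2.0, 10))  # prints 0.02 ($/hour)
```

Comparative advantage still formally holds in this toy world; the sketch just shows how the wage it implies can land orders of magnitude below any minimum wage.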
What to do
To return to the opening metaphor: economic reasoning projects high-dimensional reality into a low-dimensional model. The hard work is choosing the projection. Post-AGI, we face a situation where the reality we are projecting may be different enough that projections calibrated on human economies systematically fail. The solution is usually to step back and bring more variables into the model. Sometimes this involves venturing outside the core of econ thinking, bringing in political economy, evolution, computational complexity, or even physics and philosophy. Or maybe just looking at other parts of economic thinking which may be unexpectedly relevant. This essay is not a literature review. I’m not claiming that no economist has ever thought about these issues, just that the most common approach is wrong.
On a bit of a personal note: I would love it if there were more than 5-10 economists working seriously on post-AGI questions, and engaging seriously with the debate. If you are an economist… I do understand that you are used to interacting with an often ignorant public, worried about jobs and not familiar with the standard arguments and effects like Baumol, Jevons, the lump of labour fallacy, gains from trade, etc. Fair enough, but the critique here is different: you’re assuming answers to questions you haven’t asked. If you are modelling the future using econ tools, I would like to know your answers/assumptions to “are AIs agents?”, “how are you modelling AI consumption?”, “in your model, do AIs own capital?” or “what system of governance is compatible with the economic system you are picturing?”
Thanks to Marek Hudík, Duncan Mcclements and David Duvenaud for helpful comments on a draft version of this text. Mistakes and views are my own. Also thanks to Claude Opus 4.5 for extensive help with the text.
- ^
Gordon, Robert J. The Rise and Fall of American Growth.
- ^
Examples of what I’m criticizing range from texts by Nobel laureates, e.g. Daron Acemoglu’s The Simple Macroeconomics of AI (2024), to posts by rising stars of thinking about the post-AGI economy, like Philip Trammell’s Capital in the 22nd Century.
- ^
Sane economists are perfectly aware of the nature of the discipline. For a longer discussion: Rodrik, Dani. Economics Rules: The Rights and Wrongs of the Dismal Science. W.W. Norton, 2015.
- ^
Also, this is not a novel criticism: Romer, Paul. “The Trouble with Macroeconomics.” The American Economist 61, no. 1 (2016): 31-44.
- ^
“So, math plays a purely instrumental role in economic models. In principle, models do not require math, and it is not the math that makes the models useful or scientific.” Rodrik (2015)
- ^
The classic text by Robbins (1932) defines preferences as out of scope: “Economics is the science which studies human behavior as a relationship between given ends and scarce means which have alternative uses.” Another classic text on the topic is Stigler & Becker (1977), “De Gustibus Non Est Disputandum.” As with almost any claim in this text: yes, there are parts of the econ literature about preference formation, but these usually do not influence the post-AGI macroeconomics papers.
- ^
De Grauwe, Paul, and Yuemei Ji. “Behavioural Economics is Also Useful in Macroeconomics.” VoxEU, January 2018.
Driscoll, John C., and Steinar Holden. “Behavioral Economics and Macroeconomic Models.” Journal of Macroeconomics 41 (2014): 133-147.
- ^
Bisin, Alberto, and Thierry Verdier. “The Economics of Cultural Transmission and the Dynamics of Preferences.”
- ^
Becker, Gary S. A Treatise on the Family.
- ^
Coase, Ronald H. “The Nature of the Firm.” Economica 4, no. 16 (1937): 386-405.
Thank you for this post. As an economist trying to engage seriously with the topic of AGI/TAI/ASI, I generally agree with you. I confirm that it is very hard to challenge any of the implicit assumptions and century-long traditions in (macro)economics. Reviewers hate unfamiliar setups, journal editors prefer to play it safe, and because of that publishing academic papers on transformative AI is a nightmare. By contrast, if you go the “safe” way of incremental modifications over the established literature, your work can be published much more easily, but then it probably ends up being “irrelevant or actively misleading”.
One of the main ideas I arrived at while trying not to be “irrelevant or actively misleading” is The Hardware-Software Framework: A New Perspective on Economic Growth with AI. It challenges the “capital vs. labor” distinction, pretty much along the lines you mentioned.
I just want to say[1] that I agree with ~everything in this post. I think the most exciting topics in AGI & econ are stuff that tries to relax some of the standard assumptions, and also stuff that attempts to map out the political economy of societies with powerful AIs.
As someone who has a background in econ and currently does technical AIS research.
Curious what you think would help more talented economists engage?
Honestly I don’t know. I’ve always been a big fan of “weird” econ papers like studying why people pray for rain, folklore, and political economy of alternative realities. I think there is a space in economics for work like the ones you outlined in the post.
Maybe someone just has to write the first paper that challenges these assumptions? You could also try having someone run a workshop on this, but I don’t expect that to work very well.
Probably the best way to reach these economists is by producing compelling outputs targeted at a broad general audience. I think Matt Yglesias has pretty good AI takes,[1] and I don’t think he required any targeted outreach to come around.
His recent post takes Musk’s space datacenter buildout quite seriously, despite the fact that people in the comment section are mostly not buying it.
Curated! There’s a difficult-to-bridge divide between the intuitions of people who think everything is going to get really crazy with AGI and those who think a kind of normality will be maintained. This post seems to do an uncommonly good job of piercing the divide by arguing in detail and mechanistically for why the current picture doesn’t obviously continue. More generally, it argues for a better epistemic approach.
I struggle when encountering people who predict reality being not-that-different in the coming decades: it feels crazy to me, but that reaction makes it harder to discuss. I think the contents and example of this post point at where discussion can be had, suggesting both object- and meta-level places to explore cruxes. It’s a valuable contribution. Kudos!
I half believe this. I notice, though, that many modern societies have some paternalistic bits and pieces, often around addiction and other preference-hijacking activities… but are also often, on the whole, liberal+libertarian. It may be that more enclaves and ‘protection’ are needed at various layers, and then liberalism can be maintained within those boundaries.
Is this true? I’ve never heard of any laws about not trying to change a child’s sexual preferences. Or really any laws against trying to change people’s preferences for your benefit.
I suppose statutory rape laws could be construed as something like this, but it really seems a stretch.
Many countries have laws against “grooming” children
Comprehensive review for OECD countries by Claude
Summary in response to your question:
OECD countries don’t typically have laws phrased as “you can’t change a child’s sexual preferences,” but they do have laws that effectively prohibit adults from steering children’s sexual attitudes or behavior for the adult’s benefit. The most direct examples are grooming laws (now in 34 of 38 OECD countries), which criminalize adults systematically building trust with children to manipulate them toward sexual compliance — this is literally changing a child’s sexual boundaries/preferences for the adult’s advantage. Beyond that, corruption of minors statutes like France’s Art. 227-22 (corruption de mineur, 5–7 years), Italy’s Art. 609-quinquies (corruzione di minorenne), and the Czech §201 explicitly criminalize adults who steer children toward sexual behavior or expose them to sexual content in ways that distort their development. And more broadly, laws like Pennsylvania’s §6301 (“corruption of minors”) and Mexico’s Art. 201 (“corrupción de menores”) criminalize adults who manipulate children’s preferences and behavior across a range of domains — not just sexual ones, but also toward crime, substance use, begging, etc. So while no law literally says “don’t change a child’s preferences,” the underlying legal principle — that adults must not exploit their power asymmetry to reshape children’s attitudes for the adult’s benefit — is well-established across multiple legal traditions.
Some countries have laws against (anti-gay) conversion therapy.
Well, we don’t have good ways to track “changing preferences,” so instead we ban the result—adults are not allowed to have relationships with minors. This stops people from trying to change the preferences of children (e.g., person X seducing children is changing their preferences to include the person X), but also other types of abuse. Having a policy focused solely on “preference changing” would be too narrow, but it is effectively included in the current law.
Mild nitpick: there are a few five year olds that are employed as models, child actors, or similar activities for which “look like a five-year-old” is a job requirement, but those are rare exceptions that would remain rare even in the absence of child labor laws.
I think it’s worth considering the ways in which adults could be employable in a world with AGI. I can think of a few examples of adults being paid to be humans:
Marketing research studies
Figure modelling
Medical challenge studies
Athletic competitions*
Food criticism*
Also, I expect that a few careers will remain extremely resistant to AI adoption, regardless of how sophisticated the AI becomes, due to taboos against AIs being in positions of great authority over humans:
Pastors, priests, etc.
Politicians
Primary childcare-giver (i.e. parenting)
Police officers (maybe)
I’d be happy for someone to expand these lists.
*Added during an edit.
Figure modeling for what, exactly? AIs can already produce really good images of realistic people that don’t actually exist.
Many artists prefer to use live models, instead of images, as their references. If that wasn’t true, then live modelling would have died with the advent of the Internet—if not the camera—but it hasn’t. I’m not sure why artists have this preference, but they demonstrably do.
I’m not sure how much this matters and I’m not 100% sure this effect is real (it’s the sort of thing I could have just psy-op’d myself into believing). But, as an artist: the difference between live models and pictures is that live models force a subtle skill of… like, converting the 3D shape into a 2D one.
This is sort of like the difference between lifting free weights vs lifting weights at a machine. There’s a bunch of little subtle micromovements you have to make when free-weight-lifting whereas the machine isolates a given muscle. When you are looking at a real person, your head rocks back and forth, choosing exactly which 2D plane you are copying is subtly more difficult than when copying from a picture.
If you never intend to draw from life, then, I wouldn’t argue this matters that much. A lot of it might be a particular culture of art. But, I’m like 75% there’s a difference.
Ooh, I think that the weight-lifting analogy is very apt. Thanks for the insight.
If anything, the transaction cost is going to increase significantly because it will be much easier to scam. It doesn’t even require AGI, just open-weight agentic models as capable as Opus 4.6 with a little bit of finetuning by malicious actors (quite likely to happen by the end of this year, it seems), see thread: https://x.com/andonlabs/status/2019467232586121701
Thanks for writing this.
I’m sympathetic with the high-level claim that it’s very easy for someone to write down a rigorous but ultimately v misguided model in economics, and that a lot of the action is thinking about what the correct assumptions should be.
I was a bit disappointed not to see (imo) clear examples where someone proposes a model but actually the conclusions don’t follow because of some distribution shift that they didn’t anticipate.
Like, what are examples of models whose conclusions don’t follow once you account for the fact that there may be some AI systems that are consumers? Or once you account for the fact that AI systems may significantly influence the preferences of humans? E.g. to me it seems like the models predicting explosive growth in GDP or shrinking of the human labour share of the economy aren’t undercut by these things. I’m not sure what kinds of conclusions and reasoning you’re targeting here for critique.
Two illustrative examples given (in a footnote) are—
Daron Acemoglu The Simple Macroeconomics of AI (2024)
and—
Philip Trammell’s Capital in the 22nd Century
I didn’t want the focus of attention to be dissecting individual pieces; it is relatively easy, and applying the frame to a piece of econ writing is something AIs are perfectly capable of. For the case studies:
- Opus 4.6 analysing Capital in the 22nd Century; the Opus analysis is basically correct and I completely endorse points 1, 2, 3, 4, 6 and 9. Most of this was also independently covered by Zvi
- Opus 4.6 analysing The Simple Macroeconomics of AI
(The problem in both cases is that the central assumptions are implicit, and unlikely on the default trajectory; in my view at around 10^-3 in the case of Acemoglu and 10^-2 in the case of Trammell.)
The problem is prevalent in almost all academic econ writing; it’s easier to point to people who are not making these mistakes, the central example being Anton Korinek.
Can you elaborate on what you think Anton Korinek does differently?
The author did use comparative advantage as an example, although I do think that the example’s implications should have been more explicit.
Agree with most of the points in this post. I would add “market power” to your list of important assumptions that often go unaddressed. Most economic models have a background assumption of perfect or near-perfect competition, unless they are explicitly trying to model monopolies/oligopolies or monopsonies. Yet I expect AI is far more likely than previous technologies to concentrate market power (especially on the hardware side).
Excellent post, strong upvote. You’ve done a great job articulating what I felt as basically just a twisting of the guts whenever I read economic analysis of the idea of AGI. Tackling the problem head-on:
Model AI as a firm directly: I believe AI straightforwardly breaks the usefulness of the capital-labor distinction. The central crux for me is the extent to which the AI could perform the knowledge work of corporate management; I claim it doesn’t matter for economic purposes that the source of the decisions is an abstract machine the company owns (or rents), what matters for economic purposes is the level at which the decisions are made. If AI makes the management decisions for the firm, to the rest of the economy they are indistinguishable.
For modelling AI-as-a-firm:
Information Asymmetry: I predict the AI will have an information advantage over most other actors in the economy, and once we cross the AGI threshold over all non-AI actors in the economy, at least eventually. This might be a reasonable economics-view-definition of AGI: the threshold at which it achieves local information asymmetry on all transactions.
Transaction Costs: I expect transaction costs to be systematically lower for AI firms through time-cost decisions. A concrete analogy is high-frequency trading, where a fast trading algorithm can see a new buy order, go purchase the better orders on the market, and return to sell to the original order.
Some second-order items I would like to see:
Principal-Agent problems: This is how economics tackles alignment problems. Currently we model OpenAI/Anthropic as owning ChatGPT/Claude respectively, under capital; if the AIs were instead modeled as firms independently and viewed as subcontractors (albeit with contracts strongly favoring OpenAI/Anthropic) and apply the information asymmetry and transaction cost modifications above, what does a principal-agent model predict?
EMH-breaking threshold: My intuition is that the information asymmetry and transaction cost advantages are mutually reinforcing, but the idea I think is more important is that doing a transaction provides much more detailed information than a price signal. A systematic advantage in completing transactions means a systematic accumulation of higher dimensional information than prices; because the EMH works on price signals, I expect it will be defeated if it is possible to aggregate higher-dimensional signals than price.
Well, as I put it, it’s very similar to the horsecarts to automobiles analogy… we are the horses.