Ph.D. in applied microeconomics, periodically thinking seriously about the impact of AI on employment and wages since Move 37.
Tim H
Why do you say this scenario is “without dignity”?
Money is fungible. It’s kind of stupid that we have an ‘income tax rate’ and then a ‘medicare tax’ on top of it that we pretend isn’t part of the income tax. And it’s a nice little fiction that payroll taxes pay for social security benefits. Yes, technically this could make the Social Security fund ‘insolvent’ or whatever, but then you ignore that and write the checks anyway and nothing happens.
No, Yglesias’s point is not invalidated by the fungibility of money. (It’s generally a good idea to think twice before concluding that an economics-y writer is making such a basic mistake.) Payroll taxes make up 35% of all federal revenue. The point is that a large dip in payrolls has a major impact on overall revenue. If it gets large enough, even increasing rates on current taxes would probably not be enough to make up for it. We may need to add a VAT or other new revenue sources. Try having a discussion with Opus about it. Here’s one I had recently: https://claude.ai/share/7f0707e6-52b2-423e-9587-07cbeee86df0
Our system is basically set up to not tax capital, specifically because we don’t want to discourage investment (or encourage capital flight). When income is diverted from workers to OpenAI, it may not be taxed at all for the foreseeable future—until their costs of building and operating ever more compute stop outstripping their revenues. So a new form of tax is needed, and what happens in the interim while such a thing gets passed through Congress and implemented?
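To make the revenue arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The ~35% payroll-tax share is the figure cited above; every other number (total revenue, the size of the payroll decline, the income-tax shares) is an illustrative placeholder I picked, not data.

```python
# Back-of-the-envelope: how much federal revenue disappears if aggregate payrolls fall.
# The ~35% payroll-tax share comes from the comment above; every other number below
# is an illustrative placeholder, not data.

def revenue_shortfall(total_revenue, payroll_decline,
                      payroll_tax_share=0.35,          # share of federal revenue from payroll taxes (cited above)
                      income_tax_share=0.50,           # placeholder: share from individual income taxes
                      wage_share_of_income_tax=0.80):  # placeholder: portion of income-tax receipts tied to wages
    """Revenue lost if aggregate payrolls fall by `payroll_decline` (a fraction),
    assuming payroll-tax receipts and the wage-based slice of income-tax receipts
    both fall proportionally."""
    payroll_loss = total_revenue * payroll_tax_share * payroll_decline
    income_loss = total_revenue * income_tax_share * wage_share_of_income_tax * payroll_decline
    return payroll_loss + income_loss

# Hypothetical scenario: $5 trillion of annual revenue, payrolls down 20%.
loss = revenue_shortfall(total_revenue=5.0e12, payroll_decline=0.20)
print(f"Rough shortfall: ${loss / 1e12:.2f} trillion per year")
```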
From the Amish perspective, is the broader US society something of an aligned superintelligence? Maybe “super” is too strong, but I think of the Amish position as a model for how I hope ASI treats humans.
Thinking out loud, the thing is that there are analogs to the non-feed upkeep costs (shelter and farmer/veterinarian labor) for humans. Though some work, like composing poems, requires little more than physical sustenance to be performed well, most human production requires complementary inputs, principally various equipment or machinery. The question then comes down to whether you want to invest in such human-augmenting equipment as opposed to a fully automated solution.
For example, suppose total production is $Y = K_H^{\alpha}(BL)^{1-\alpha} + A K_R$, where $K_H$ is human-augmenting capital, $K_R$ is robot capital, $B$ is human-in-the-loop productivity, and $A$ is robot productivity. Then optimal capital allocation means $K_H = BL\,(\alpha/A)^{1/(1-\alpha)}$, so that human-augmenting capital falls as robot productivity increases relative to human-in-the-loop productivity. Then the real wage, the marginal product of labor, is proportional to $B\,A^{-\alpha/(1-\alpha)}$. If both $A$ and $B$ grow exponentially, the condition for the wage to remain constant is that $B$ grow at a rate $\alpha/(1-\alpha)$ times the rate at which $A$ grows (where the capital share $\alpha$ is traditionally taken to be $1/3$). (Google Sheet simulation)
In this toy model, it is conceptually possible that human-augmenting technology, $B$, advances sufficiently quickly relative to full automation, $A$, to keep humans fully employed (at above-subsistence wages) indefinitely. (And sufficiently deliberate policy could help.) But if, instead, $B$ continues growing at 1-2% annually while $A$ takes off at 10%+ rates of growth, human labor eventually becomes obsolete (in this toy model).
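For anyone who prefers code to the Google Sheet, here is a minimal Python sketch of the same toy model, using the notation above ($B$ = human-augmenting technology, $A$ = robot productivity, $\alpha$ = capital share); the growth rates are just the two scenarios described, not forecasts.

```python
# Toy model sketch: Y = K_H^alpha * (B*L)^(1-alpha) + A*K_R, with capital allocated so
# the marginal product of human-augmenting capital equals A. The wage (marginal product
# of labor) is then proportional to B * A**(-alpha/(1-alpha)).

alpha = 1 / 3    # capital share, traditionally ~1/3
g_B = 0.02       # human-augmenting technology growing 2% per year (the 1-2% scenario)
g_A = 0.10       # robot productivity growing 10% per year (the takeoff scenario)

B, A = 1.0, 1.0
for year in range(51):
    wage_index = B * A ** (-alpha / (1 - alpha))   # real wage, normalized to 1 at year 0
    if year % 10 == 0:
        print(f"year {year:2d}: wage index = {wage_index:.3f}")
    B *= 1 + g_B
    A *= 1 + g_A

# Wages stay flat only if B grows at alpha/(1-alpha) = 0.5 times the growth rate of A;
# at 2% vs 10%, the wage index declines by roughly 3% per year.
```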
This seems like a whole other essay, rather than an edit to this one, though. I’m guessing the analog of $B$ for horses was relatively fixed during 1910-1960.
Here’s Olmstead and Rhode:
The early gasoline tractors of the 1900s were behemoths, patterned after the giant steam plows that preceded them. They were useful for plowing, harrowing, and belt work but not for cultivating fields of growing crops nor powering farm equipment in tow. Innovative efforts between 1910 and 1940 vastly improved the machine’s versatility and reduced its size, making it suited to a wider range of farms and tasks. …
...the revolutionary McCormick-Deering Farmall (1924) was the first general-purpose tractor capable of cultivating amongst growing row crops. The latter machine was also among the first to incorporate a power-takeoff, enabling it to transfer power directly to implements under tow. A host of allied innovations such as improved air and oil filters, stronger implements, pneumatic tires, and the Ferguson three-point hitch and hydraulic system greatly increased the tractor’s life span and usefulness. Seemingly small changes often yielded enormous returns in terms of cost, durability, and performance. As an example, rubber tires reduced vibrations thereby extending machine life, enhanced the tractor’s usefulness in hauling (a task previously done by horses)… The greater mobility afforded by rubber tires also allowed farmers to use a tractor on widely separated fields.
The broader point is that, analogously, AI is only a suitable substitute for humans in narrow tasks today. But that should not be taken to preclude the possibility of total replacement later (except where, like with horse racing, literal humans are explicitly required).
Check out the Olmstead-Rhode paper cited in footnote 14. That was my main source for such specifics. I only have a minute at the moment or I would look myself and offer a better answer—I hope to come back to this. (My recollection is that they initially had hard tires and were difficult to maneuver?)
Yes! The insights from this analogy keep coming.
Interesting, this is the first I’ve heard of them. Thanks, I’ll check this out: https://www.rethinkx.com/labor
I’m pondering editing the post to discuss this explicitly and acknowledge that it does weaken the analogy with AGI and humans. The issue is that AGI could bring down costs more broadly than tractors did, which raises the question: why would the marginal revenue product of labor fall more quickly than the subsistence wage?
Hmm, before you forced me to think harder about it, I was also thinking it basically comes down to the market value of the horse’s labor versus the price of feed. But you’re right that it’s not clear why tractors should decrease the market value of horse labor more than they reduce the cost of feed. To the contrary, by replacing horses they directly reduce demand for feed (lowering short-run prices, at least), in addition to the broader boost they give to agricultural efficiency overall.
Now I’m thinking the thing missing from that analysis is the other costs to keeping a horse. I am guessing those remained relatively stable (like horse productivity), but perhaps that stability is the issue. In contrast to the feed, other horse upkeep costs were not brought down by the tractor: shelter, veterinary help, and the farmer’s time required for horse care. So maybe the main story is the horse’s revenue product falling below those types of horse costs. (Tractors also have analogous costs, but presumably those are lower, proportionally.)
edit: Or maybe the answer is more to do with the fixed upfront costs of purchasing a horse (versus a tractor)? If so, that would seem to complicate the analogy with human employment.
edit2: Sonnet 4 is telling me lifetime horse upkeep costs (circa 1920) were 40x the purchase price, and feed made up half or less of the daily upkeep. So I guess it’s mostly about the non-feed upkeep costs. (The upfront cost is a bigger deal for tractors. In terms of operating costs, tractors have the advantage of very low costs when idle.)
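Here is a quick sketch of that logic in Python. Only the qualitative inputs come from the discussion above (feed is at most about half of daily upkeep; the non-feed costs are relatively rigid); the dollar figures and decline rates are hypothetical, chosen only to show how the horse's revenue product can sink below total upkeep even while feed gets cheaper.

```python
# Illustrative only: hypothetical daily figures for one working horse during the
# tractor transition. The qualitative assumptions (feed <= ~half of upkeep,
# non-feed upkeep roughly fixed) come from the discussion above.

feed_cost = 0.50        # daily feed cost (placeholder units)
other_upkeep = 0.50     # shelter, vet care, farmer time -- assumed rigid
revenue_product = 1.50  # daily value of the horse's work before tractors spread

for year in range(12):
    total_upkeep = feed_cost + other_upkeep
    worth_keeping = revenue_product >= total_upkeep
    print(f"year {year:2d}: revenue {revenue_product:.2f} vs upkeep {total_upkeep:.2f} "
          f"-> keep horse: {worth_keeping}")
    revenue_product *= 0.90  # tractors erode the value of horse labor
    feed_cost *= 0.97        # feed gets somewhat cheaper too
    # other_upkeep stays put -- and that is what eventually tips the decision
```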
Never mind. I’m the inaccurate one here. What I said is true of the GATE model, but I now see that your paragraph was about a separate piece of Epoch commentary that was not based on the GATE model. And that separate piece definitely is talking specifically about formal R&D.
It’s a separate question whether the Epoch commentary is accurately representing the papers it is citing—and whether your response applies—but I haven’t delved into that.
A lot of this likely comes from the conflation of ‘formalized R&D’ as it is currently practiced, and ‘generally figuring things out’ which goes well beyond formalized R&D and will be turbocharged across the board. Yes, only ~20% of productivity gains look like they come from ‘R&D’ but that’s because we’re only counting formal R&D, and plausibly ignoring compounding effects.
No, the way they model R&D is meant to be quite general: just any dedication of resources toward improving software or hardware. They abstract away details by measuring that “dedication of resources” in real dollars, but you should think of it as representing researcher time, compute resources devoted to improvements, etc. And compounding is built in, both indirectly (improvements in software and hardware increase the resources available to invest) and directly (via the \phi_S and \phi_H parameters).
I haven’t yet dug into the ~20% result—decomposition can be complicated—but yours is not an accurate explanation of it.
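To illustrate the compounding point, and only that point, here is a stylized toy loop. It is emphatically not the GATE model's actual equations; the exponents below are my own placeholders standing in for the roles \phi_S and \phi_H play, and the reinvestment share is arbitrary.

```python
# Stylized compounding loop -- NOT the GATE model's equations, just an illustration of
# the two channels described above: (1) better software/hardware raise output, which
# raises the resources available to reinvest; (2) exponents (standing in for \phi_S,
# \phi_H) govern the direct returns to those invested resources.

phi_S = 0.5      # placeholder returns-to-software-R&D exponent
phi_H = 0.3      # placeholder returns-to-hardware-R&D exponent
reinvest = 0.2   # placeholder share of output devoted to improvement efforts

software, hardware, output = 1.0, 1.0, 1.0
for year in range(10):
    rnd_resources = reinvest * output          # "dedication of resources," in output units
    software *= (1 + rnd_resources) ** phi_S   # direct channel
    hardware *= (1 + rnd_resources) ** phi_H   # direct channel
    output = software * hardware               # improvements expand future resources (indirect channel)
    print(f"year {year}: output index = {output:.2f}")
```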
I was surprised to not see much consideration, either here or in the original GD and IC essays, of the brute force approach of “ban development of certain forms of AI,” such as Anthony Aguirre proposes. Is that more (a) because it would be too difficult to enforce such a ban or (b) because those forms of AI are considered net positive despite the risk of human disempowerment?