Another potential crux[1] is that Ege’s worldview seemingly doesn’t depend at all on AIs that are much faster and smarter than any human. As far as I can tell, it doesn’t enter into his modeling of takeoff (or timelines to full automation of remote work, which partially depend on something more like takeoff).
On my view this makes a huge difference, because a large number of domains would go much faster with much more (serially faster and smarter) intelligence. My sense is that a civilization where the smartest human was today’s median human, and where everyone’s brain operated 50x slower[2], would in fact make technological progress much more slowly. Similarly, if AIs were as much smarter than the smartest humans as the smartest human is smarter than the median human, and also ran 50x faster than humans (and operated at greater scale than the smartest humans, with hundreds of thousands of copies all at 50x speed for over 10 million parallel-worker equivalents, putting aside the advantages of serial work and intelligence), then we’d see lots of sectors go much faster.
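(For concreteness, here’s the arithmetic behind the “over 10 million parallel worker equivalents” figure, as a minimal sketch; the 200,000-copy count is an assumed stand-in for “hundreds of thousands”:)

```python
# Back-of-the-envelope check on the scale claim above. The copy count is an
# assumed illustration of "hundreds of thousands"; the 50x speedup is from
# the text.
n_copies = 200_000  # assumed number of parallel AI copies
speedup = 50        # serial speed multiplier relative to a human

worker_equivalents = n_copies * speedup
print(f"{worker_equivalents:,} parallel-worker equivalents")
# -> 10,000,000 -- matching "over 10 million", before counting any extra
#    value from serial depth or higher intelligence.
```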
My sense is that Ege bites the bullet on this and thinks that slowing everyone down wouldn’t make a big difference, but I find this surprising. Or maybe his view is that parallelism is nearly as good as speed and intelligence, and that sectors naturally scale up parallel-worker equivalents to match other inputs, so we’re bottlenecked on some other inputs in the important cases.
FWIW, that’s not the impression I get from the post / I would bet that Ege doesn’t “bite the bullet” on those claims. (If I’m understanding the claims right, it seems like it’d be super crazy to bite the bullet? If you don’t think human speed impacts the rate of technological progress, then what does? Literal calendar time? What would be the mechanism for that?)
The post does refer to how much compute AIs need to match human workers, in several places. If AIs were way smarter or faster, I think that would translate into better compute efficiency. So the impression I get from the post is just that Ege doesn’t expect AIs to be much smarter or faster than humans at the time when they first automate remote work. (And the post doesn’t talk much about what happens afterwards.)
Example claims from the post:
My expectation is that these systems will initially either be on par with or worse than the human brain at turning compute into economic value at scale, and I also don’t expect them to be much faster than humans at performing most relevant work tasks.
...
Given that AI models still remain less sample efficient than humans, these two points lead me to believe that for AI models to automate all remote work, they will initially need at least as much inference compute as the humans who currently do these remote work tasks are using.
...
These are certainly reasons to expect AI workers to become more productive than humans per FLOP spent in the long run, perhaps after most of the economy has already been automated. However, in the short run the picture looks quite different: while these advantages already exist today, they are not resulting in AI systems being far more productive than humans on a revenue generated per FLOP spent basis.
If I’m understanding the claims right, it seems like it’d be super crazy to bite the bullet? If you don’t think human speed impacts the rate of technological progress, then what does? Literal calendar time? What would be the mechanism for that?
Physical bottlenecks, compute bottlenecks, etc.
The claim that you can only speed up algorithmic progress (given a fixed amount of compute) by a moderate amount, even with an arbitrarily fast and smart superintelligence, reduces to something like this.
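One way to make “reduces to something like this” concrete (my gloss, not necessarily the model anyone in this thread has in mind) is an Amdahl’s-law-style bound: if some fraction of algorithmic progress is bottlenecked on compute/experiments, that fraction caps the speedup achievable from arbitrarily fast and smart labor.

```python
# Amdahl's-law gloss on bounded speedup from faster labor. The bottleneck
# fraction is an assumed illustrative parameter, not a number from the thread.
def overall_speedup(labor_speedup: float, bottleneck_frac: float) -> float:
    """Speedup of algorithmic progress when only the labor share accelerates."""
    fixed = bottleneck_frac          # compute/experiment-bound; can't be sped up
    labor = 1.0 - bottleneck_frac    # scales with labor speed
    return 1.0 / (fixed + labor / labor_speedup)

# With a 30% compute bottleneck, even infinitely fast labor caps out at ~3.3x.
for s in (10, 100, float("inf")):
    print(f"labor {s}x faster -> progress {overall_speedup(s, 0.3):.2f}x faster")
```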
So the impression I get from the post is just that Ege doesn’t expect AIs to be much smarter or faster than humans at the time when they first automate remote work. (And the post doesn’t talk much about what happens afterwards.)
Yes, but if you can (e.g.) spend extra compute to massively accelerate AI R&D or a small number of other key sectors that might be bottlenecked on fast labor, then doing this might be much more useful than very broadly automating remote work. I think it’s somewhat hard to end up with a view where generally automating remote work across the whole economy, using a form factor pretty similar to a human worker (in speed and smarts), is plausible, unless you also don’t think there are huge returns to accelerated speed, given that things are so likely to funge and be variable.
(E.g., right now it is possible to spend more money to run AIs >10x faster. So, given limited fab capacity, it will probably be possible to run AIs 10x faster than otherwise, at substantially higher cost, before the point when you would otherwise have been able to automate all remote work. This implies that if there are high returns to speed, then you’d deploy these fast AIs for these tasks.)
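(A toy illustration of the deployment logic in the parenthetical above; every number here is hypothetical, and the only premise taken from the comment is that serial speed can be bought at a compute premium:)

```python
# Toy model: a fixed compute budget buys many slow agents or fewer fast ones.
# Assume (hypothetically) that a 10x-faster agent costs 20x as much per hour,
# i.e. speed is bought at a 2x premium per unit of work.
budget = 100.0                    # arbitrary compute units per hour
slow_cost, fast_cost = 1.0, 20.0  # assumed cost per agent-hour
fast_speed = 10.0                 # serial speedup of the expensive agents

n_slow = budget / slow_cost       # 100 slow agents
n_fast = budget / fast_cost       # 5 fast agents

# Perfectly parallel task: headcount x speed wins, so slow agents are better.
print("parallel throughput:", n_slow * 1.0, "vs", n_fast * fast_speed)  # 100 vs 50
# Serially bottlenecked task: progress tracks the fastest agent, so the
# fast agents win 10x despite the premium -- high returns to speed.
print("serial progress:", 1.0, "vs", fast_speed)
```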
You can interpret my argument here as claiming that in some important sectors/tasks, AIs will be vastly more productive than typical humans per FLOP spent, due to higher smarts (even if the AIs aren’t superhuman, the best humans are very scarce) and serial speed. By the time you’ve gotten around to automating everything, quite likely the AIs are very superhuman, because you drilled down on narrower parts of the economy first. (Then there is the question of whether these parts of the economy bottleneck without growing everything in parallel, which is a generalization of the software-only singularity question.)
Separately:
while these advantages already exist today, they are not resulting in AI systems being far more productive than humans on a revenue generated per FLOP spent basis.
I must be misunderstanding what Ege means by this, because isn’t this trivially false on a task-by-task basis? If you tried to use a human in Cursor, it would be much less useful in many respects due to insufficient serial speed.
Maybe Ege means “the current marginal revenue from using 1e15 FLOP/s isn’t much higher than the revenue from a reasonably capable human”, but isn’t this just an extremely trivial implication of there being a market for compute and the cost of compute being below the cost of labor? (A human in the US costs $7–100/hour, while human-equivalent FLOP (1e15 FLOP/s) costs around $2/hour.) I think this can’t possibly be right, because this claim was trivially false 30 years ago, when chips were worse. I certainly agree that compute prices are likely to rise once AIs are more capable.
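(To reproduce the dollar figures above: a rough sketch, where the accelerator throughput and rental price are my assumptions chosen to land on the ~$2/hour figure, not numbers from the thread:)

```python
# Rough cost of "human-equivalent" compute, taking 1e15 FLOP/s as the
# human-equivalent rate used in the comment.
human_equiv_rate = 1e15  # FLOP/s, figure from the comment
gpu_rate = 1e15          # assumed useful FLOP/s from one rented accelerator
gpu_rental = 2.00        # assumed $/hour for that accelerator

cost_per_hour = (human_equiv_rate / gpu_rate) * gpu_rental
print(f"~${cost_per_hour:.2f}/hour for 1e15 FLOP/s")  # ~$2/hour

# Against US labor at $7-100/hour (range from the comment), compute is
# roughly 3.5x-50x cheaper on this accounting.
print(f"labor/compute ratio: {7 / cost_per_hour:.1f}x to {100 / cost_per_hour:.0f}x")
```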
Compute would also be reduced within a couple of years, though, as workers at TSMC, NVIDIA, ASML and their suppliers all became much slower and less effective. (Ege does in fact think that explosive growth is likely once AIs are broadly automating human work! So he does think that more, smarter, faster labor can eventually speed up tech progress; and presumably would also expect slower humans to slow down tech progress.)
So I think the counterfactual you want to consider is one where only people doing AI R&D in particular are slowed down & made dumber. That gets at the disagreement about the importance of AI R&D, specifically, and how much labor vs. compute is contributing there.
For that question, I’m less confident about what Ege and the other Mechanize people would think.
(They might say something like: “We’re only asserting that labor and compute are complementary. That means it’s totally possible that slowing down humans would slow progress a lot, but that speeding up humans wouldn’t increase the speed by a lot.” But that just raises the question of why we should think our current labor<>compute ratio is so close to the edge of where further labor speed-ups stop helping. Maybe the answer there is that they think parallel work is really good, so in the world where people were 50x slower, the AI companies would just hire 100x more people and not be too much worse off. Though I think that would massively blow up their spending on labor relative to capital, and so it’d be a weird coincidence that their current spending on labor and capital is so close to 50/50.)
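The hypothetical reply above can be made concrete with a CES production function (a minimal sketch; the substitution parameter and input shares are assumed for illustration, not fitted to anything):

```python
# CES production with labor L and compute K as strong complements:
# Y = (0.5 * L**rho + 0.5 * K**rho) ** (1 / rho), with rho < 0.
# rho = -1 corresponds to an elasticity of substitution of 0.5.
def ces(L: float, K: float, rho: float = -1.0) -> float:
    return (0.5 * L**rho + 0.5 * K**rho) ** (1.0 / rho)

baseline = ces(1.0, 1.0)
# Slowing labor 50x (compute fixed) cuts output ~25x...
print(f"50x slower labor: {ces(1 / 50, 1.0) / baseline:.3f}x output")  # ~0.039
# ...but speeding labor 50x yields under 2x -- the asymmetry in the quoted reply.
print(f"50x faster labor: {ces(50.0, 1.0) / baseline:.2f}x output")    # ~1.96
```

Whether this asymmetry actually applies depends on the current labor<>compute ratio sitting near that bottleneck edge, which is the coincidence being questioned above.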
Re your response to “Ege doesn’t expect AIs to be much smarter or faster than humans”: I’m mostly sympathetic. I see various places where I could speculate about what Ege’s objections might be. But I’m not sure how productive it is for me to try to speculate about his exact views when I don’t really buy them myself. I guess I just think that the argument you presented in this comment is somewhat complex, and I’d predict a higher probability that people object to (or haven’t thought about) some part of this argument than that they bite the crazy “universal human slow-down wouldn’t matter” bullet.
Yeah, I agree with this, and it doesn’t seem that productive to speculate about people’s views when I don’t fully understand them.
They might say something like: “We’re only asserting that labor and compute are complementary. That means it’s totally possible that slowing down humans would slow progress a lot, but that speeding up humans wouldn’t increase the speed by a lot.” But that just raises the question of why we should think our current labor<>compute ratio is so close to the edge of where further labor speed-ups stop helping.
I discuss this sort of thing in this comment and in a draft post I’ve DM’d you.
[1] This is only somewhat related to this post.
[2] Putting aside cases like construction etc., where human reaction time being close enough to nature is important.