If I’m understanding the claims right, it seems like it’d be super crazy to bite the bullet? If you don’t think human speed impacts the rate of technological progress, then what does? Literal calendar time? What would be the mechanism for that?
Physical bottlenecks, compute bottlenecks, etc.
The claim that you can only speed up algorithmic progress (given a fixed amount of compute) by a moderate amount, even with an arbitrarily fast and smart superintelligence, reduces to something like this.
So the impression I get from the post is just that Ege doesn’t expect AIs to be much smarter or faster than humans at the time when they first automate remote work. (And the post doesn’t talk much about what happens afterwards.)
Yes, but if you can (e.g.) spend extra compute to massively accelerate AI R&D or a smaller number of other key sectors which might be bottlenecked on fast labor, then doing this might be much more useful than very broadly automating remote work. I think it’s hard to end up with a view where it’s plausible that remote work gets broadly automated across the whole economy using a form factor pretty similar to a human worker (in speed and smarts), unless you also think there aren’t huge returns to accelerated speed, given that things are so likely to funge and be variable.
(E.g., right now it is possible to spend more money to run AIs >10x faster. So, given limited fab capacity, it will probably be possible to run AIs 10x faster than otherwise at substantially higher cost, before the point when you would otherwise have been able to automate all remote work. This implies that if there are high returns to speed, then you’d deploy these fast AIs for these tasks.)
You can interpret my argument here as claiming that in some important sectors/tasks, AIs will be vastly more productive than typical humans per FLOP spent due to higher smarts (even if the AIs aren’t superhuman, the best humans are very scarce) and serial speed. By the time you’ve gotten around to automating everything, quite likely the AIs are very superhuman, because you drilled down on narrower parts of the economy first. (Then there is the question of whether these parts of the economy bottleneck without growing everything in parallel, which is a generalization of the software-only singularity question.)
Separately:
while these advantages already exist today, they are not resulting in AI systems being far more productive than humans on a revenue generated per FLOP spent basis.
I must misunderstand what Ege means by this, because isn’t this trivially false on a task-by-task basis? If you tried to use a human in Cursor, they would be much less useful in many respects due to insufficient serial speed.
Maybe Ege means “the current marginal revenue from using 1e15 FLOP/s isn’t much higher than the revenue from a reasonably capable human”, but isn’t this just a trivial implication of there being a market for compute and the cost of compute being below the cost of labor? (A human in the US costs $7-100/hour, while human-equivalent compute (1e15 FLOP/s) costs around $2/hour.) I think this can’t possibly be the right interpretation, because this claim was trivially false 30 years ago when chips were worse. I certainly agree that compute prices are likely to rise once AIs are more capable.
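(For concreteness, here’s the arithmetic behind those numbers as a minimal sketch. The H100 throughput and rental price are my own rough assumptions; the 1e15 FLOP/s human-equivalent figure is the one used above.)

```python
# Back-of-the-envelope check of the ~$2/hour figure. Assumptions (mine):
# an H100 delivers very roughly 1e15 FLOP/s of dense BF16 throughput and
# rents for roughly $2/hour at the low end of cloud pricing.
human_equiv_flops = 1e15   # FLOP/s, the "human-equivalent" rate used above
h100_flops = 1e15          # FLOP/s, rough dense BF16 throughput of an H100
h100_rent_per_hour = 2.0   # USD/hour, rough low-end rental price

cost_per_human_equiv_hour = (human_equiv_flops / h100_flops) * h100_rent_per_hour
print(f"~${cost_per_human_equiv_hour:.2f}/hour for 1e15 FLOP/s")  # ~$2.00/hour

# Compare against US labor costs of roughly $7-100/hour:
print(f"labor is {7 / cost_per_human_equiv_hour:.1f}x to "
      f"{100 / cost_per_human_equiv_hour:.1f}x the cost of compute")  # ~3.5x to ~50x
```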
Compute supply would also be reduced within a couple of years, though, as workers at TSMC, NVIDIA, ASML, and their suppliers all became much slower and less effective. (Ege does in fact think that explosive growth is likely once AIs are broadly automating human work! So he does think that more, smarter, faster labor can eventually speed up tech progress; and presumably would also expect slower humans to slow down tech progress.)
So I think the counterfactual you want to consider is one where only people doing AI R&D in particular are slowed down & made dumber. That gets at the disagreement about the importance of AI R&D, specifically, and how much labor vs. compute is contributing there.
For that question, I’m less confident about what Ege and the other Mechanize people would think.
(They might say something like: “We’re only asserting that labor and compute are complementary. That means it’s totally possible that slowing down humans would slow progress a lot, but that speeding up humans wouldn’t increase the speed by a lot.” But that just raises the question of why we should think our current labor<>compute ratio is so close to the edge of where further labor speed-ups stop helping. Maybe the answer there is that they think parallel work is really good, so in the world where people were 50x slower, the AI companies would just hire 100x more people and not be too much worse off. Though I think that would massively blow up their spending on labor relative to capital, and so maybe it’d make it a weird coincidence that their current spending on labor and capital is so close to 50/50.)
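(Here’s a toy sketch of that asymmetry, using a CES production function with labor and compute as strong complements. All parameter values are illustrative assumptions, not anyone’s estimates.)

```python
# Toy CES model of the complementarity argument: with strong complements,
# slowing labor down bites hard, but speeding labor up saturates quickly
# once compute becomes the binding factor.
def ces_output(labor, compute, rho=-2.0, alpha=0.5):
    """CES production. rho < 0 means labor and compute are strong
    complements (elasticity of substitution = 1/(1-rho) < 1)."""
    return (alpha * labor**rho + (1 - alpha) * compute**rho) ** (1 / rho)

baseline = ces_output(1.0, 1.0)

# Vary labor speed while holding compute fixed:
for speed in [0.02, 0.5, 1.0, 2.0, 50.0]:  # 50x slower ... 50x faster
    y = ces_output(speed, 1.0)
    print(f"labor speed {speed:>5}x -> output {y / baseline:.3f}x baseline")
# labor speed  0.02x -> output ~0.028x  (a 50x slowdown is catastrophic)
# labor speed  50.0x -> output ~1.414x  (a 50x speedup gains only ~41%)
```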
Re your response to “Ege doesn’t expect AIs to be much smarter or faster than humans”: I’m mostly sympathetic. I see various places where I could speculate about what Ege’s objections might be. But I’m not sure how productive it is for me to try to speculate about his exact views when I don’t really buy them myself. I guess I just think that the argument you presented in this comment is somewhat complex, and I’d predict higher probability that people object to (or haven’t thought about) some part of this argument than that they bite the crazy “universal human slow-down wouldn’t matter” bullet.
Yeah, I agree with this, and it doesn’t seem that productive to speculate about people’s views when I don’t fully understand them.
They might say something like: “We’re only asserting that labor and compute are complementary. That means it’s totally possible that slowing down humans would slow progress a lot, but that speeding up humans wouldn’t increase the speed by a lot.” But that just raises the question of why we should think our current labor<>compute ratio is so close to the edge of where further labor speed-ups stop helping.
I discuss this sort of thing in this comment and in a draft post I’ve DM’d you.