Wow, I’m really glad that you stuck with me here, and am surprised that we managed to clear so much up. It does feel to me now like we’re on the same page and can dig in on the object level disagreement / clarify the dread-inducing long timelines picture.
When I’m thinking about worlds where AGI takes 20+ years to arrive, it’s not necessarily accompanied by a general slowing of progress. It’s usually just “that underspecified goal out there on the horizon is further away than you think it is.” I don’t at all dispute that contemporary systems are powerful, or that progress is very fast, and I don’t actually expect legislation, economic blowback, or public opinion to slow things down (I’d like it if they did and am trying to make that happen! But it doesn’t feel especially likely). Rather, conditional on very powerful systems taking a while to arrive, I imagine it would be because of a discontinuity in the requirements, and an inadequacy of our existing metrics (plus the incessant gaming of those metrics).
Given the incentives, lack of feedback loops, and general inscrutability of the technology, I’d be pretty unsurprised if it turns out we’re just totally wrong about what a multi-day 80 percent task completion time horizon on the METR eval means for the capabilities of that model once it’s deployed in the world. I also wouldn’t be that shocked if it turns out the capabilities requirements for a system that gave multiple OOMs of speedup to existing progress (a la ‘superhuman coder’ in AI2027) were further off than many expect.
However, even in these worlds, I’m pretty worried about gradual disempowerment and prosaic harms. AGI won’t take 20 years because we’re wrong about the capabilities of systems available in 2026; it may take 20 years because we’re wrong about the delta between current systems and the machine god.
Current systems are indeed very powerful, and will simply take time to diffuse through the economy. However, once this process begins in earnest (which it may have already), we’ll be, as Seth said in his comment, in the painful part of economic expansion, where average quality of life actually goes down before going back up, which can last a very long time! If you couple this picture with the idea that progress isn’t slowed (the target is just further away), you end up in a new industrial revolution every time a SOTA model is released. Then you’re stuck in the painful investment part indefinitely, since the rewards of the last boom were never felt, and were instead immediately invested in the next boom (with its corresponding 10x payoff).
Something like this is already happening locally at the frontier labs. Here’s Dario describing it:
“There’s two different ways you could describe what’s happening in the model business right now. So, let’s say in 2023, you train a model that costs $100 million, and then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs $1 billion. And then in 2025, you get $2 billion of revenue from that $1 billion, and you’ve spent $10 billion to train the model.
So, if you look in a conventional way at the profit and loss of the company, you’ve lost $100 million the first year, you’ve lost $800 million the second year, and you’ve lost $8 billion in the third year — it looks like it’s getting worse and worse.”
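The arithmetic in the quote can be sketched as a tiny loop. (The 10x-cost / 2x-revenue multipliers are read off Dario’s example figures; whether they hold going forward is exactly what’s in dispute.)

```python
# Reinvestment P&L from the quote: each model costs 10x the previous one
# to train, and each deployed model returns 2x its own training cost in
# revenue the following year.
train_cost = 100e6   # first model: $100M (2023, from the quote)
prev_revenue = 0.0   # nothing deployed yet in year one

for year in (2023, 2024, 2025):
    # Conventional P&L: last year's model's revenue minus this year's training spend.
    pnl = prev_revenue - train_cost
    print(f"{year}: P&L ${pnl / 1e6:,.0f}M")
    prev_revenue = 2 * train_cost  # the model trained now earns 2x next year
    train_cost *= 10               # the next model is 10x more expensive

# 2023: P&L $-100M
# 2024: P&L $-800M
# 2025: P&L $-8,000M
```

Each individual model is profitable on its own terms, yet the company-level loss grows 10x a year for as long as the scaling pattern holds, which is the dynamic the next paragraph generalizes to the whole economy.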
Imagine an entire economy operating on that model, where the only way material benefits of the technology are realized is if someone reaches escape velocity and brings about the machine god, since anything that isn’t the machine god is simply viewed as a stepping stone to the machine god, and all of its positive externalities immediately sacrificed on the altar of progress rather than circulating through the economy. On my view, some double-digit percentage of American financial resources are already being used in approximately this way. Either that continues, or the economy collapses in a tech bubble burst, plausibly wiping as much as 60 percent of the value off the S&P ~overnight. (A bubble burst would also accelerate automation adoption as companies look for ways to cut costs and AI infrastructure plummets in value, permitting entrenched giants to snap it up cheaply.)
To be clear, I’m not especially economically savvy, and wouldn’t be surprised if parts of my picture here are wrong, but this is the thing that young people see when they think about AI: Either we build the machine god, or we permanently mortgage our collective future trying. This is why it’s uninteresting to me to talk about ‘benefits’ of AI systems in longer timeline scenarios. (Of course there will be benefits! We’re just not going to be in a scenario that permits most people to experience them, much less so than with other technologies.)
Thank you. I am not an economist, but I think it is unlikely for the entire economy to operate on the model of an AI lab, whereby every year you keep pumping all gains back into AI. Both investors and the general public have limited patience, and they will want to see some benefits. While our democracy is not perfect, public opinion has much more impact today than the opinions of factory workers in England in the 1700s, and so I do hope that we won’t see the pattern where things get worse before they get better. But I agree that it is not a sure thing by any means.
However, if AI does indeed keep growing in capability and economic growth rises significantly above the 2% per capita it has been stuck at for the last ~120 years, it would be a very big deal and would open up new options for increasing the social safety net. Many of the dilemmas (e.g., how do we reduce the deficit without slashing benefits?) will just disappear with that level of growth. So at least economically, it would be possible for the U.S. to have Scandinavian levels of social services. (Whether the U.S. political system will deliver that is another matter, but at least from the last few years it seems that even the Republican party is not shy about big spending.)
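To illustrate why even a modest bump over the historical rate would be such a big deal, compounding does most of the work. (The 5% and 10% figures below are my own hypothetical comparison points, not claims from the discussion.)

```python
# Per-capita output multiple after 30 years of sustained growth at
# different annual rates, via compound growth: (1 + rate) ** years.
YEARS = 30
for rate in (0.02, 0.05, 0.10):
    multiple = (1 + rate) ** YEARS
    print(f"{rate:.0%} growth -> {multiple:.1f}x after {YEARS} years")

# 2% growth -> 1.8x after 30 years
# 5% growth -> 4.3x after 30 years
# 10% growth -> 17.4x after 30 years
```

At the historical 2% rate, output less than doubles in a generation; at 10%, it grows more than seventeenfold, which is the scale at which deficit-vs-benefits trade-offs stop binding.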
This actually goes to my bottom line, which is that I think how AI ends up playing out will depend not so much on economic as on political factors, which is part of what I wrote about in “Machines of Faithful Obedience”. If AI enables authoritarian government, then we could have a scenario with very few winners and a vast majority of losers. But if we keep (and hopefully strengthen) our democracy, then I am much more optimistic about how the benefits from AI will be spread.
I don’t think there is something fundamental about AI that makes it obvious in which way it will shift the balance of power between governments and individuals. Sometimes the same technology can have either impact. For example, the printing press reduced state power in Europe and increased it in China. So I think it’s still up in the air how it will play out. Actually, this is one of the reasons I am happy that so far AI’s development has happened in the private sector, aimed at making money and marketing to consumers, rather than in government, focused on military applications, as it well could have been in another timeline.
This feels like a natural stopping point where we’ve surfaced a bunch of background disagreements. Short version is: I am much more pessimistic about the behavior of governments, citizens, and corporations than you appear to be, and I expect further advances in AI to make this situation worse, rather than better, for concentration of power reasons.
Thanks again!