Thank you for engaging. I now understand your point better. The set of things people worry about with AI is very large, and I agree that I addressed only part of it, and maybe not the most important part. I also agree that “experts” disagree with each other, so you can’t simply trust the experts. I can offer my thoughts on how to think about AI, and maybe they will make sense to some people, but they should make their own judgment and not take things on faith.
If I understand correctly, you want, for the sake of discussion, to consider a world where AGI takes 20+ years to achieve. People have different definitions of AGI, but it seems safe to say this would be a world where progress significantly undershoots the expectations of many people in the AI space and at AI companies. There is a kind of positive feedback loop—I imagine that if AI undershoots expectations then funding will also be squeezed, which could lead to even more slowdown—and so in such a world it’s possible that over the next 20 years AI’s impact, for both good and bad, will just not be extremely significant.
If we talk about “prosaic harms” we should also talk about “prosaic benefits”. If we take the view of AI as a “normal technology”, then our past experience with technologies is that overall the benefits are larger than the harms. Over the long run, we have seen a pretty smooth and consistent increase in life expectancy and other metrics of wellbeing. So if AI does not radically reshape society, the baseline expectation should be that it has an overall positive impact. AI may well have a positive impact even if it does radically reshape humanity (I happen to believe it will), but we have less prior data to rely on in that case.
“We’ll replace tons of jobs really fast and it will probably be good for anyone who’s smart and cares” is counterintuitive, for good reasons. I’m a good libertarian capitalist like most folks here, but markets embedded in societies aren’t magic.
New technologies have been net beneficial over the long run, not the short run. By some credible arguments, past job disruptions have taken up to a hundred years for average wages to recover to their previous level. I believe that claim has been made for industrial looms and the steam engine; but there is a credible case that the average recovery time has been very long. And those technologies didn’t disrupt markets nearly as quickly as drop-in replacements for intellectual labor would.
Assuming, without further argument, that the upsides of even relatively slow, aligned AI progress are likely to outweigh the negatives seems like pure optimism.
AI will certainly have prosaic benefits. They seem pretty unlikely to outweigh the harms.
Civilizations have not typically reacted well enough to massive disruptions to be optimistic about the unknowns here. Spreading the advantages of AI as broadly as the pains of job losses seems like threading a needle that nobody has even aimed at yet.
I am an optimist by nature. The more closely I think about AI impacts, the less optimistic I feel.
I don’t know what to say to young people, because uncertainty is historically really bad, and the objective situation seems to be mostly about massive uncertainty.
Wow, I’m really glad that you stuck with me here, and am surprised that we managed to clear so much up. It does feel to me now like we’re on the same page and can dig in on the object level disagreement / clarify the dread-inducing long timelines picture.
When I’m thinking about worlds where AGI takes 20+ years to arrive, it’s not necessarily accompanied by a general slowing of progress. It’s usually just “that underspecified goal out there on the horizon is further away than you think it is.” I don’t at all dispute that contemporary systems are powerful, or that progress is very fast, and I don’t actually expect legislation, economic blowback, or public opinion to slow things down (I’d like it if they did and am trying to make that happen! But it doesn’t feel especially likely). Rather, conditional on very powerful systems taking a while to arrive, I imagine it would be because of a discontinuity in the requirements, and an inadequacy of our existing metrics (plus the incessant gaming of those metrics).
Given the incentives, lack of feedback loops, and general inscrutability of the technology, I’d be pretty unsurprised if it turns out we’re just totally wrong about what a multi-day 80 percent task completion time horizon on the METR eval means for the capabilities of that model once it’s deployed in the world. I also wouldn’t be that shocked if it turns out the capabilities requirements for a system that gave multiple OOMs of speedup to existing progress (a la ‘superhuman coder’ in AI2027) were further off than many expect.
However, even in these worlds, I’m pretty worried about gradual disempowerment and prosaic harms. AGI won’t take 20 years because we are wrong about the capabilities of systems available in 2026, but it may take 20 years because we are wrong about the delta between current systems and the machine god.
Current systems are indeed very powerful, and will simply take time to diffuse through the economy. However, once this process begins in earnest (which it may have already), we’ll be, as Seth said in his comment, in the painful part of economic expansion, where average quality of life actually goes down before going back up, and which can last a very long time! If you couple this picture with the idea that progress isn’t slowed (the target is just further away), you end up in a new industrial revolution every time a SOTA model is released. Then you’re stuck in the painful investment phase indefinitely, since the rewards of the last boom were never felt, but were instead immediately invested in the next boom (with its corresponding 10x payoff).
Something like this is already happening locally at the frontier labs. Here’s Dario talking about it:
“There’s two different ways you could describe what’s happening in the model business right now. So, let’s say in 2023, you train a model that costs $100 million, and then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs $1 billion. And then in 2025, you get $2 billion of revenue from that $1 billion, and you’ve spent $10 billion to train the model.
So, if you look in a conventional way at the profit and loss of the company, you’ve lost $100 million the first year, you’ve lost $800 million the second year, and you’ve lost $8 billion in the third year — it looks like it’s getting worse and worse.”
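The arithmetic in Dario’s example can be sketched in a few lines. This is a toy model using only the illustrative numbers from the quote, not a claim about any lab’s actual finances:

```python
def yearly_pnl(first_cost=100, cost_growth=10, revenue_multiple=2, years=3):
    """Per-year profit/loss in $M under the toy model: each year's model
    costs `cost_growth` times the last, and last year's model earns
    `revenue_multiple` times its own training cost in revenue this year."""
    pnl = []
    prev_cost = None          # cost of last year's model (earning revenue now)
    train_cost = first_cost   # cost of the model being trained this year
    for _ in range(years):
        revenue = revenue_multiple * prev_cost if prev_cost else 0
        pnl.append(revenue - train_cost)
        prev_cost, train_cost = train_cost, train_cost * cost_growth
    return pnl

print(yearly_pnl())  # [-100, -800, -8000]: -$100M, -$800M, -$8B, as in the quote
```

Each model is individually profitable (it earns twice what it cost), yet the company’s yearly loss grows 10x per year, which is exactly the tension Dario is pointing at.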
Imagine an entire economy operating on that model, where the only way material benefits of the technology are realized is if someone reaches escape velocity and brings about the machine god, since anything that isn’t the machine god is viewed simply as a stepping stone to it, with all of its positive externalities immediately sacrificed on the altar of progress rather than circulating through the economy. On my view, some double-digit percentage of American financial resources are already being used in approximately this way. Either that continues, or the economy collapses in a tech bubble burst, plausibly wiping as much as 60 percent of the value off the S&P ~overnight. (A bubble burst would also accelerate automation adoption as companies look for ways to cut costs, while AI infrastructure plummets in value, permitting entrenched giants to snap it up cheaply.)
To be clear, I’m not especially economically savvy, and wouldn’t be surprised if parts of my picture here are wrong, but this is the thing that young people see when they think about AI: either we build the machine god, or we permanently mortgage our collective future trying. This is why it’s uninteresting to me to talk about the ‘benefits’ of AI systems in longer timeline scenarios. (Of course there will be benefits! We’re just not going to be in a scenario that permits most people to experience them, much less so than with other technologies.)
Thank you. I am not an economist, but I think it is unlikely for the entire economy to operate on the model of an AI lab, whereby every year you keep pumping all gains back into AI. Both investors and the general public have limited patience, and they will want to see some benefits. While our democracy is not perfect, public opinion has much more impact today than the opinions of factory workers in England in the 1700s, and so I do hope that we won’t see the pattern where things become worse before they get better. But I agree that it is not a sure thing by any means.
However, if AI does indeed keep growing in capability and economic growth rises significantly above the 2% per capita it has been stuck at for the last ~120 years, it would be a very big deal and would open up new options for increasing the social safety net. Many of the dilemmas—e.g., how do we reduce the deficit without slashing benefits—will just disappear with that level of growth. So at least economically, it would be possible for the U.S. to have Scandinavian levels of social services. (Whether the U.S. political system will deliver that is another matter, but at least from the last few years it seems that even the Republican party is not shy about big spending.)
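To see why sustained growth above that 2% trend would be such a big deal, it helps to compound it out. The rates below are my own illustrative picks, not figures from the discussion:

```python
def income_multiple(years, rate):
    """Relative per-capita income after `years` of constant annual growth `rate`."""
    return (1 + rate) ** years

# Small differences in annual growth compound into large gaps over decades:
for rate in (0.02, 0.05, 0.10):
    print(f"{rate:.0%} growth for 30 years -> {income_multiple(30, rate):.1f}x income")
```

At the historical 2% rate, incomes roughly 1.8x over 30 years; at 5% they more than quadruple, and at 10% they grow over 17-fold, which is the scale at which deficit-versus-benefits trade-offs could plausibly dissolve.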
This actually gets to my bottom line, which is that I think how AI ends up playing out will depend not so much on economic factors as on political ones, which is part of what I wrote about in “Machines of Faithful Obedience”. If AI enables authoritarian government then we could have a scenario with very few winners and a vast majority of losers. But if we keep (and hopefully strengthen) our democracy, then I am much more optimistic about how the benefits from AI will be spread.
I don’t think there is something fundamental about AI that makes it obvious which way it will shift the balance of power between governments and individuals. Sometimes the same technology can have either impact. For example, the printing press had the effect of reducing state power in Europe and increasing it in China. So I think it’s still up in the air how it will play out. Actually, this is one of the reasons I am happy that so far AI’s development has happened in the private sector, aimed at making money and marketing to consumers, rather than in government, focused on military applications, as it well could have been in another timeline.
This feels like a natural stopping point where we’ve surfaced a bunch of background disagreements. Short version is: I am much more pessimistic about the behavior of governments, citizens, and corporations than you appear to be, and I expect further advances in AI to make this situation worse, rather than better, for concentration of power reasons.
Thanks again!