I think people in these parts are not taking sufficiently seriously the idea that we might be in an AI bubble. this doesn’t necessarily mean that AI isn’t going to be a huge deal—just because there was a dot com bubble doesn’t mean the Internet died—but it does very substantially affect the strategic calculus in many ways.
I would be utterly unsurprised to see an AI crash in the next 24 months, leading to another AI Winter. I lived through 1999, Pets.com, and the pop of the Internet bubble. And I can pattern-match.
But the Internet crash didn’t last long. Google and Amazon survived just fine, Ruby on Rails was big within half a decade, and soon enough we were doing Web 2.0 and AJAX and all that fun stuff.
It’s possible that current generation LLMs might hit a wall soon, for various architectural reasons that are obvious to many people but that I’m superstitiously averse to amplifying. If they do, that increases the chance of an AI Winter until the underlying research gets done.
But I have trouble imagining any series of events that buys us 10 more years. Bubble pops in tech are usually an early correction that wipes out a Cambrian Explosion of dumb money and ultimately concentrates resources into a few successful players.
I guess figuring out whether we’re “in a bubble” just hasn’t seemed very important to me, relative to how hard it seems to determine? What effects on the strategic calculus do you think it has?
E.g. my current best guess is that I personally should just do what I can to help build the science of interpretability and learning as fast as possible, so we can get to a point where we can start doing proper alignment research and reason more legibly about why alignment might be very hard and what could go wrong. Whether we’re in a bubble or not mostly matters for that only insofar as it’s one factor influencing how much time we have left to do that research.
But I’m already going about as fast as I can anyway, so having a better estimate of timelines isn’t very action-relevant for me. And “bubble vs. no bubble” doesn’t even seem like a leading-order term in timeline uncertainty anyway.
some reasons why it matters
all effects that route through longer timelines (allocating more to upskilling oneself and others, longer term bets, not expecting agi to look like current models, aggressiveness of distributing funds to alignment, etc)
whether to pursue an aggressive (stock-heavy) or conservative (bond-heavy) investment strategy. if there is an ai bubble pop, it will likely bring the entire economy into a recession.
how much money to save as runway; should you be taking advantage of the bubble to grab as much cash as possible before the music stops, or should you be trying to dispose of all of your money before the singularity makes it worthless?
for lab employees: how much lab equity to sell/hold?
how much to emphasize “agi soon” in public comms, or in conversations with policymakers? (during a bubble pop, having predicted agi soon will probably be even more negatively viewed than merely having been wrong about timelines with no pop)
if there is a bubble and it pops, sentiment around agi will flip from inevitability to impossibility. many people will not be epistemically strong enough to resist the urge to conform. being aware of the hype cycle can help free yourself from it and avoid both over- and under-exuberance.
This is my biggest disagreement at the moment. The reason: unlike 2008 or 2020, there's no supply squeeze or financial stress severe enough that banks start to fail, so I expect an AI bubble pop to look more like the 2000 crash than the 2008 or 2020 crises.
That said, AI stocks would fall hard and GPUs would become way, way cheaper.
“should you be trying to dispose of all of your money before the singularity makes it worthless”

This is pretty different from my model of what would happen? Though I admittedly haven't spent a ton of time thinking it through. I just don't see why money would lose value; I expect some goods would remain scarce, positional, etc. (land in high-demand cities being a strong example), which seems to cut against that happening.
to be more precise, I mean worthless for decreasing p(doom)
I'm actually uncertain about whether an AI bubble pop would trigger a recession (a period of broad decline in economic activity). What I have seen reported is that the economy is generally stagnant now, except for AI, and that American jobs may have been declining for months already, contrary to published statistics, motivating a recent rate cut by the Fed. If true, would an AI bubble pop really have substantial ripple effects outside the AI sector, and would those ripples even necessarily be negative?
In particular, I have been suspecting that the biotech winter that’s been going on for a few years is partly due to routing of investor money into the AI craze. Maybe that money just ends up funding other economic activity instead? I’m really unsure of how to think about this and am quite curious.
my mental model of how a pop triggers a broader crash is something like: a lot of people are taking money and investing it into AI stuff, directly (by investing in openai, nvidia, tsmc, etc) or indirectly (by investing in literally anything else; like, cement companies that make a lot of money by selling cement to build datacenters or whatever). this includes VCs, sovereign wealth funds, banks, etc. if it suddenly turned out that the datacenters and IP were worth a lot less than everyone thought, these institutions' equity (or debt) stakes are suddenly worth a lot less too, and they may become insolvent. and lots of financial institutions becoming insolvent is pretty bad.
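The insolvency mechanism above is just balance-sheet arithmetic, and a toy sketch makes it concrete. The numbers below are entirely hypothetical, chosen only to show that the more leveraged an institution is, the smaller the mark-down on its AI-linked assets needed to wipe out its equity:

```python
# Toy balance-sheet sketch (hypothetical numbers): a mark-down on AI-linked
# assets can push a leveraged institution from solvency into insolvency.

def equity_after_markdown(ai_assets, other_assets, debt, ai_markdown):
    """Equity remaining after AI-linked assets lose `ai_markdown` (0..1) of value."""
    assets = ai_assets * (1 - ai_markdown) + other_assets
    return assets - debt

# A fund holding $40 of AI exposure and $60 of other assets, financed with
# $90 of debt, i.e. ~10x leverage on $10 of equity:
print(equity_after_markdown(40, 60, 90, 0.0))  # 10.0 -> solvent
print(equity_after_markdown(40, 60, 90, 0.5))  # -10.0 -> insolvent

# The same mark-down barely dents an unleveraged holder ($0 of debt):
print(equity_after_markdown(40, 60, 0, 0.5))   # 80.0 -> still solvent
```

The point of the contrast is that the AI mark-down is identical in both cases; only the debt load determines whether it merely hurts or actually breaks the institution, which is why the leverage question raised below matters so much.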
Hm. It seems like the extent to which there is an increased risk of insolvency due to a popped AI bubble would partly depend on the extent to which these institutions had sold other assets or used leverage to pay for equity in or lend to AI companies and the suppliers that are most dependent on AI company business.
My understanding is that the great financial crisis resulted from extremely leveraged investments in mortgages due to lenient rules and a perception that American mortgages were extremely reliably paid. I don’t know to what extent important institutions may be overleveraged or overweighted in their investments in AI.
But my modal prediction is that a pop would cause hedged AI investors to lose value without becoming insolvent, a bunch of distressed assets to be bought at low, low prices by those who kept their powder dry, and a bunch of cancelled orders and perhaps layoffs and restructuring among suppliers who expanded to meet the temporary surge in demand from AI companies. That could cause turmoil, but I really don't have a sense of how far the American or global economy has reshaped itself around the AI buildout. It's especially hard to know because Trump's tariffs have caused so much coincident market turmoil that it's hard to tell how much is AI and how much is tariffs/the end of ZIRP (as others have pointed out before).
Tangent:
Is this literally true? You don’t think there are realistic ways for you to make faster progress?
I just meant that if an oracle told me ASI was coming in two years, I probably couldn’t spend down energy reserves to get more done within that timeframe compared to being told it’ll take ten years. I might feel a greater sense of urgency than I already am and perhaps end up working longer hours as a result of that, but if so that’d probably be an unendorsed emotional response I couldn’t help more than a considered plan. I kind of doubt I’d actually get more done that way. Some slack for curiosity and play is required for me to do my job well.
The stakes are already so high and the time so short that varying either by an order of magnitude up or down really doesn't change things all that much.
The evidence just seems to keep pointing towards this not being a bubble.
Then what does it mean, in concrete terms? Can you give some probabilities about what you think will happen to the valuations of what companies over what time frame?