Reality itself doesn’t know whether AI is a bubble. Or, to be more precise: whether a “burst-like event”[1] will happen or not is—in all likelihood, as far as I’m concerned—not entirely determined at this point in time. If we were to “re-run reality” a million times starting today, we’d probably find something that looks like a bursting bubble in some percentage of these and nothing that looks like a bursting bubble in some other percentage—and the rest would be cases where people disagree even in hindsight whether a bubble did burst or not.[2]
When people discuss whether AI is a bubble, they often frame this (whether deliberately or not) as a question about the current state of reality. As if you could just go out into the world and do some measurements, and if you find out “yep, it’s a bubble”, then you know for sure that this bubble must pop eventually.[3] And while there certainly are ways to measure the bubbliness of different parts of the economy, what looks like a bubble today may slowly “deflate” rather than burst, or reality around it may eventually catch up, justifying the previously high valuations.
Uncertainty is sometimes conceptually split into two parts: epistemic (our limited knowledge) and aleatory (fundamental uncertainty in reality itself). My claim here is basically just that, when it comes to bubbles bursting in the future, the aleatory component is not 0, and we shouldn’t treat it as such. In other words, there is an upper limit on how certain a rational person can become, at any point in time, about whether an AI bubble burst event will occur. Sadly, where that limit lies is itself uncertain, which makes all of this not very actionable. Still, it seems important[4] to acknowledge that no amount of research done today can be expected to yield certainty on such questions, as reality itself probably isn’t fully settled on the question at hand.
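The epistemic/aleatory split can be made concrete with a toy Monte Carlo sketch. Suppose perfect research could pin down the world’s “true burst propensity” exactly (the 0.6 below is a made-up illustrative number, not an estimate). Even then, re-running reality many times still produces a mix of outcomes — the aleatory component doesn’t go away:

```python
import random

random.seed(0)

def rerun_reality(p_burst, n_runs=100_000):
    """Re-run 'reality' n_runs times; each run independently ends in a
    burst with probability p_burst (the aleatory component)."""
    bursts = sum(random.random() < p_burst for _ in range(n_runs))
    return bursts / n_runs

# Epistemic uncertainty eliminated: we "know" p_burst = 0.6 exactly.
# The observed frequency converges toward 0.6 -- but any single run
# is still a coin flip, never a foregone conclusion.
freq = rerun_reality(0.6)
print(round(freq, 2))
```

Research can shrink the epistemic part (our uncertainty about `p_burst`), but only drives total uncertainty down to the aleatory floor, not to zero.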
Ultimately, whether any burst-like event will eventually occur depends on a complex interplay of market participants’ expectations. Any current bubble-like properties of the AI sector definitely play a big role in shaping these expectations and thereby the outcome—but even then, these expectations are highly path-dependent, and I find it very unlikely that the current state of the world fully determines how they will, in fact, develop.
Of course, you can distinguish between “X has bubble-like properties right now” and “The X bubble will eventually burst”. You could believe something “is a bubble” in some sense without having to also believe that this bubble will burst. In public discourse though, “X is a bubble” is often, whether explicitly or implicitly, equated with “the X bubble will burst”. My take here mostly focuses on predictions of future bursts rather than claims about present bubble-like properties.
I make no claims about the magnitude of these different probabilities; this is rather a meta-argument about how these discussions are often framed, and how that framing can be misleading. It could of course still be true that reality is determined to a degree that the probability of a future bubble burst event gets ~arbitrarily close to 0% or 100% (even though I’d be surprised if that were currently the case).
Not everyone discusses it like that or has this model of the world, but it’s very easy to walk away with this impression when following the public discourse around the topic.
Is it actually important? I’m not sure. Perhaps even epistemic uncertainty is “enough” if you take it seriously. Maybe the idea of aleatory uncertainty in this context is just a useful intuition pump to resist the urge to become highly confident in one’s judgment about the outcome of a complex process. 🤷
It may be unknown, or even unknowable by any real-world agent. It’s still not necessarily undetermined by the universe—I find it pretty likely that the universe is, in fact, deterministic.
Your underlying point is correct, though. Because human behavior is anti-inductive (people change their behavior based on their predictions of others’ predictions), a lot of these kinds of questions are chaotic (in the fractal / James Gleick sense).
So far as I can tell, the most plausible way for the universe to be deterministic is something along the lines of “many worlds”, where Reality is a vast superposition of what-look-to-us-like-realities. If the future of AI is determined, what that means is more like “15% of the future has AI destroying all human value, 10% has AI ushering in a utopia for humans, 20% has it producing a mundane dystopia where all the power and wealth is in a few not-very-benevolent hands, 20% has it improving the world in mundane ways, and 35% has it fizzling out and never making much more change than it already has done” than like “it’s already determined that AI will/won’t kill us all”.
(For the avoidance of doubt, those percentages are not serious attempts at estimating the probabilities. Maybe some of them are more like 0.01% or 99.99%.)