There’s a meme that “nothing ever happens” that’s popular among prediction market traders, the idea being that the status quo changes less often, and less in line with anyone’s specific priors, than traders first-order expect. I think there’s a similar principle that applies to reasoning about AI development and takeoff speeds: nothing ever happens (prior to the development of superintelligence). In non-meme form: I think people closely following AI development tend to systematically overestimate the likelihood and impact of any particular event or change actually happening prior to the development of superintelligence, and this has some interesting implications.
For one, semiconductor, energy, and tech stocks are way up, and capital markets more broadly are roaring around AI, despite some geopolitical chaos. But the actual wider economic and societal impacts of AI so far seem surprisingly small, given how smart and easily accessible SoTA models and harnesses are.
If you showed a demo of Claude Code or Codex to someone in 2021 and mentioned that it was available to any business or individual for purchase at non-exorbitant rates[1], I think a lot of people would be surprised at how little impact / transformation there has been given (a) how smart the models are and (b) how readily accessible / deploy-able they are.
This seems like a win for Eliezer’s world model vs. Paul’s, and a reason for pessimism about some iterative-deployment takes and plans more broadly. It now seems more plausible that things will look and feel pretty normal (“nothing ever happens”) for the vast majority of people, until they very suddenly don’t.
For two, I think there’s also a related story about a narrower impact on research specifically that goes something like: “we’ll learn a bunch from iteration on earlier models, and also the world will look importantly different in ways that we’ll be able to leverage to do or automate the most critical safety work before crunch time.” Several people have written up plans and research agendas expanding on this basic plan with various details and variations; @ryan_greenblatt’s recent posts Plans A, B, C, and D for misalignment risk, Iterated Development and Study of Schemers (IDSS), and How do we (more) safely defer to AIs? are examples in this genre.
But regardless of specifics, it looks less plausible that these kinds of plans get deployed in a world that looks relevantly and substantially different than the one we’re in today, in part because “nothing ever happens”. Even if things on the policy front improve somewhat, e.g. global agreements sufficient to slow race dynamics such that frontier labs have enough time to find and filter down to actually-good control and alignment techniques and then universally adopt + integrate them (a big if), they’ll likely be integrating them in worlds that don’t look dramatically different from the world we’re in today.
And a marginally more interpretable / controllable / aligned (but non-superintelligent) model increasingly looks like it just doesn’t change the gameboard much before actual superintelligence is developed, for the same sorts of reasons that having a plausibly-good-enough baseline level of Claude Cowork functionality and availability doesn’t immediately result in automation of most white-collar work. Alignment researchers are quicker on the uptake in terms of adopting and adapting to new tools unlocked by non-superintelligent AI compared to the average white collar worker, but that’s counterbalanced by the fact that frontier models are also apparently worse at alignment research than generic white collar work, and there are other bottlenecks (e.g. inference compute) as well.
So the upshot: I think it’s worth planning more for the case where many things during takeoff look pretty similar to the way they do today, and being somewhat skeptical of any plan that assumes a phase-changing uplift step during a gradual takeoff from non-superintelligent AGI. This holds even if good things happen swiftly on the AI governance front.

[1] The $200/mo subscriptions are kinda limited, but you can just pay-per-token and it’s not crazy expensive for many use cases.
I think the heuristic “nothing ever happens” is better interpreted to mean “nothing ever happens relative to baseline trends” than “literally nothing ever happens”. The incrementalist worldview seems like a better fit for this heuristic than Eliezer’s, which after all ultimately predicts something very dramatic happening.
Hard disagree—extrapolating the baseline trends predicts an extremely dramatic singularity happening sometime in the next decade or so.

Yes. But the OP is about contrasting people like Paul with Eliezer. Paul (I think) does indeed predict a dramatic singularity, but also that the ramp up to said singularity will be smoother and more widely distributed across society than Eliezer predicts.
But the actual wider economic and societal impacts of AI so far seem surprisingly small, given how smart and easily accessible SoTA models and harnesses are.
Idk about “surprisingly small”, but the economic impacts aren’t that small! AI company revenue is ~0.4% of US GDP and it looks like it will grow to be a significantly larger fraction prior to very high capability levels.
This seems like a win for Eliezer’s world model vs. Paul’s, and a reason for pessimism about some iterative-deployment takes and plans more broadly.
Is it? I think Eliezer’s world model predicts significantly lower revenue and I’d guess Paul would have made reasonable guesses about revenue given metrics like time horizon and other capability measures? (And how long AIs at this capability level have been around.) I suspect Paul would have been a bit high, but not crazy high?
Revenue has been growing very fast from a low base! Like, it’s crazy that revenues have been growing 3x/year and soon may be growing ~10x/year if Anthropic starts driving the overall AI industry trend and their growth continues.

See also: Paul’s comment here
To clarify: I think specifically Paul seems to have expected >$10 trillion in revenue prior to AIs that can easily take over the world. This seems unlikely to me and I think Paul has updated against this. But, I do think we’re likely to get to >$1 trillion before this point and >$10 trillion seems plausible. So, we’re seemingly closer to Paul’s view than Eliezer’s view in log space on this particular question.
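For a rough sense of what those numbers imply, here’s a tiny back-of-the-envelope extrapolation. The ~0.4%-of-GDP and 3x/year figures come from the comments above; the ~$30T US GDP figure and the assumption that 3x/year growth simply continues are my own rough placeholders, so treat the outputs as order-of-magnitude only.

```python
# Back-of-the-envelope extrapolation of AI company revenue (illustrative only).
# Assumptions: ~$30T US GDP, AI revenue at ~0.4% of GDP today, and a sustained
# 3x/year growth rate (per the comments above). None of these are precise.
us_gdp = 30e12                 # rough US GDP in dollars
ai_revenue = 0.004 * us_gdp    # ~0.4% of GDP, i.e. ~$120B/year today
growth_per_year = 3            # "revenues have been growing 3x/year"

year = 0
while ai_revenue < 10e12:      # extrapolate until the >$10T threshold
    ai_revenue *= growth_per_year
    year += 1
    print(f"+{year} years: ~${ai_revenue / 1e12:.1f}T")
# Under these assumptions revenue passes $1T after ~2 years and $10T after ~5.
```

Naive exponential extrapolation like this obviously breaks down at some point; the point is just that, at the stated growth rate, the >$1 trillion threshold is only a couple of years of compounding away.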
Idk about “surprisingly small”, but the economic impacts aren’t that small! AI company revenue is ~0.4% of US GDP and it looks like it will grow to be a significantly larger fraction prior to very high capability levels.
0.4% of GDP is a lot in absolute terms, and yes it’s growing fast, but by “surprisingly small” I meant relative to already-existing capability levels (which are, conversely, surprisingly high).
I don’t know if anyone explicitly considered this at the time, but if you went back to 2021 and told everyone that the IMO gold challenge bet resolved in Eliezer’s favor, and further that the models used to win it were widely available and general (good for things other than competition math), I think everyone would be surprised that we don’t also have a lot more information about Paul’s 4-year → 1-year GWP doubling time prediction (see the growth-rate arithmetic sketched after this comment). Meaning: they’d expect it to be clear either way by this point whether we’re going to get a 4-year doubling before a 1-year doubling or not, or possibly that one of the doublings would have already happened, or almost happened, by now. I think you (and possibly Paul in 2021) are now saying that a fast takeoff (and possibly a 1-year doubling before a 4-year doubling) looks somewhat more likely going forward, based on current AI revenue growth trajectories:
I think this would be significant evidence that takeoff will be limited by sociological facts and engineering effort rather than a slow march of smooth ML scaling. Maybe I’d move from a 30% chance of hard takeoff to a 50% chance of hard takeoff.
But I think the fact that it’s still uncertain (at least, I’m uncertain about it) is itself surprising to most 2021 models.
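As a side note on what those doubling times mean in growth-rate terms (my own arithmetic, not from the thread): a doubling time of $T$ years corresponds to sustained growth of $2^{1/T} - 1$ per year, so

\[
g_{4\,\text{yr}} = 2^{1/4} - 1 \approx 19\%/\text{yr}, \qquad g_{1\,\text{yr}} = 2^{1} - 1 = 100\%/\text{yr},
\]

versus a historical GWP growth baseline of roughly 3%/yr, i.e. a doubling time of about 23 years.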