But the actual wider economic and societal impacts of AI so far seem surprisingly small, given how smart and easily accessible SoTA models and harnesses are.
Idk about “surprisingly small”, but the economic impacts aren’t that small! AI company revenue is ~0.4% of US GDP and it looks like it will grow to be a significantly larger factor prior to very high capability levels.
This seems like a win for Eliezer’s world model vs. Paul’s, and a reason for pessimism about some iterative-deployment takes and plans more broadly.
Is it? I think Eliezer’s world model predicts significantly lower revenue and I’d guess Paul would have made reasonable guesses about revenue given metrics like time horizon and other capability measures? (And how long AIs at this capability level have been around.) I suspect Paul would have been a bit high, but not crazy high?
Revenue has been growing very fast from a low base! Like, it’s crazy that revenues have been growing 3x/year and soon may be growing ~10x/year if Anthropic starts driving the overall AI industry trend and their growth continues.
See also: Paul’s comment here
To clarify: I think Paul specifically seems to have expected >$10 trillion in revenue prior to AIs that can easily take over the world. This seems unlikely to me and I think Paul has updated against this. But I do think we’re likely to get to >$1 trillion before this point, and >$10 trillion seems plausible. So, we’re seemingly closer to Paul’s view than Eliezer’s view in log space on this particular question.
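For concreteness, here is a rough back-of-envelope sketch of these figures (an illustration, not something either commenter computed): it assumes US GDP of roughly $29 trillion, takes the ~0.4% revenue share and the 3x/10x annual growth rates above at face value, and pegs an “Eliezer-ish” pre-takeover revenue expectation at an illustrative ~$10B purely to show what “closer in log space” means.

```python
# Back-of-envelope sketch of the revenue figures discussed above.
# Assumptions (not from the thread): US GDP ~ $29T; "Eliezer-ish" revenue
# expectation illustratively pegged at ~$10B before takeover-capable AI.
import math

us_gdp = 29e12                    # assumed US GDP, dollars/year
current_revenue = 0.004 * us_gdp  # ~0.4% of GDP, i.e. about $116B/year

# Years to reach $1T and $10T at the growth rates mentioned (3x and 10x per year).
for growth in (3, 10):
    for target in (1e12, 10e12):
        years = math.log(target / current_revenue) / math.log(growth)
        print(f"{growth}x/year: ~{years:.1f} years to ${target / 1e12:.0f}T")

# "Closer to Paul in log space": orders of magnitude from a hypothetical
# ~$1T outcome to Paul's ~$10T vs. the illustrative ~$10B.
outcome, paul, eliezer_ish = 1e12, 10e12, 10e9
print(abs(math.log10(outcome) - math.log10(paul)))         # 1.0
print(abs(math.log10(outcome) - math.log10(eliezer_ish)))  # 2.0
```

Under these assumptions, sustained 3x/year growth reaches ~$1T in about two years and ~$10T in about four, and a ~$1T outcome sits one order of magnitude from $10T but two from $10B.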
Idk about “surprisingly small”, but the economic impacts aren’t that small! AI company revenue is ~0.4% of US GDP and it looks like it will grow to be a significantly larger factor prior to very high capability levels.
0.4% of GDP is a lot in absolute terms, and yes it’s growing fast, but by “surprisingly small” I meant relative to already-existing capability levels (which are, conversely, surprisingly high).
I don’t know if anyone explicitly considered this at the time, but if you went back to 2021 and told everyone that the IMO gold challenge bet resolved in Eliezer’s favor, and further that the models used to win it were widely available and general (good for things other than competition math), I think everyone would be surprised that we don’t also have a lot more information about Paul’s 4-year → 1-year GWP doubling-time prediction. Meaning: they would have expected it to be clear either way by this point whether we’re going to get a 4-year doubling before a 1-year doubling or not, or that one of the doublings would have already happened, or almost happened, by now. I think you (and possibly Paul in 2021) are now saying that a fast takeoff (and possibly a 1-year doubling before a 4-year doubling) looks somewhat more likely going forward, based on current AI revenue growth trajectories:
I think this would be significant evidence that takeoff will be limited by sociological facts and engineering effort rather than a slow march of smooth ML scaling. Maybe I’d move from a 30% chance of hard takeoff to a 50% chance of hard takeoff.
But I think the fact that it’s still uncertain (at least, I’m uncertain about it) is itself surprising to most 2021 models.
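As a reference point for the doubling-time framing above (again a sketch, not part of the original bet): converting doubling times into annual growth rates shows how big the jump is from trend growth; the ~3%/year trend figure is an assumed round number.

```python
# What the "4 year -> 1 year GWP doubling" framing means as annual growth rates.
import math

for doubling_years in (4, 1):
    annual_growth = 2 ** (1 / doubling_years) - 1
    print(f"{doubling_years}-year doubling -> ~{annual_growth:.0%}/year growth")
    # 4-year doubling -> ~19%/year; 1-year doubling -> 100%/year

trend_growth = 0.03  # assumed recent-historical GWP growth rate (round number)
print(f"doubling time at ~3%/year trend: ~{math.log(2) / math.log(1 + trend_growth):.0f} years")
```

On these numbers, a 4-year doubling already implies roughly six times recent trend growth, and a 1-year doubling more than thirty times, which is part of why it’s notable that the data still hasn’t settled which branch we’re on.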