I was trying to figure out where this claim comes from: “the software intelligence explosion will probably (~60%) compress >3 years of AI progress into <1 year, but is somewhat unlikely (~20%) to compress >10 years into <1 year”. Curious if you think this is accurate.
First: “the software intelligence explosion will probably (~60%) compress >3 years of AI progress into <1 year”.
With constant r=1, we get 3 years of progress in 1 year iff the initial speed-up is >6. (Because if software has contributed half of recent AI progress, software alone needs to run 6 times faster than its usual pace to produce 3 years' worth of overall progress in 1 year.)
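Spelling the arithmetic out: one year at software speed-up s delivers s years of software-only progress, which counts for s/2 years of overall AI progress under the half-share assumption, so

$$\frac{s}{2} > 3 \iff s > 6.$$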
3 years of progress is 3 OOMs of progress, which is 10 doublings. This creates decent room for r to bite (since r changes the speed at every doubling). If r is notably greater than 1, there's a very high probability of clearing 3 years of progress in 1 year; if r is notably lower than 1, there's a very low probability. This parameter is somewhat more important than the initial speed-up, though a high or low initial speed-up can flip the result when r is around 1.
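Here's a minimal toy version of that dynamic (a sketch of the qualitative behaviour, not the report's actual model): assume the pace of progress is multiplied by 2^(r-1) at each doubling of software, so r=1 keeps the pace constant, r>1 accelerates it, and r<1 decelerates it.

```python
# Toy sketch, not the report's actual model: the pace of progress
# (in years of overall AI progress per calendar year) starts at
# initial_speedup / 2 (software is assumed to drive half of recent
# progress) and is multiplied by 2**(r - 1) at each doubling.
# 3 years of progress = 3 OOMs = 10 doublings, so one doubling
# corresponds to 0.3 years of overall progress.

def years_to_complete(doublings: int, initial_speedup: float, r: float) -> float:
    """Calendar years to complete `doublings` doublings of software."""
    years_per_doubling = 0.3
    pace = initial_speedup / 2.0     # years of progress per calendar year
    t = 0.0
    for _ in range(doublings):
        t += years_per_doubling / pace   # calendar time for this doubling
        pace *= 2 ** (r - 1)             # r > 1 accelerates, r < 1 decelerates
    return t

# Report medians: initial speed-up ~8, r ~1.2.
print(years_to_complete(10, initial_speedup=8, r=1.0))  # 0.75 -> clears 3y in <1y
print(years_to_complete(10, initial_speedup=8, r=1.2))  # ~0.43 -> faster still
print(years_to_complete(10, initial_speedup=4, r=0.8))  # ~3.0 -> misses
```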
The report estimates speed-up is slightly more likely than not to be above 6 (median is 8) and r is slightly more likely than not to be above 1 (median is 1.2), so overall this means 3 years of AI progress in 1 year is somewhat more likely than not.
What about upper bounds on progress? They are probably significantly further away than 3 years' worth of progress, so they don't matter much. (The model starts lowering r toward 0 immediately, but this happens slowly enough that it only cuts the probability of getting 3 years' worth of progress by 5-10%.)
Second: “the software intelligence explosion is somewhat unlikely (~20%) to compress >10 years into <1 year”.
From playing around with the model, I get:
If I remove the upper bound on software progress without modifying other parameters, there’s a ~60/40 chance of getting 10 years of progress (indeed, an arbitrary number of years of progress) in ≤1 year. Presumably this approximately corresponds to the probability that r is greater than 1.
If I set r=3.6 without modifying other parameters, there's a ~60/40 chance of getting 10 years of progress in ≤1 year (and also ≤4 months). Presumably this corresponds to the probability that the upper bound is >10 years.
If I set r to be constant (until the software upper limit is reached), we get 33%, which is approximately 0.6*0.6. (I.e., approximately equal to the probability that neither of the two factors above is a blocker; see the sketch below.)
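A rough Monte Carlo of that decomposition. The distributions here are hypothetical, tuned only so that P(r > 1) ≈ 0.6 and P(upper bound > 10 years) ≈ 0.6 to match the two ~60/40 figures above, and treated as independent; the report's actual parameter distributions are different.

```python
import math
import random

random.seed(0)

# Hypothetical lognormals, tuned so P(r > 1) ~ 0.6 and
# P(ceiling > 10 years) ~ 0.6; not the report's actual distributions.
def sample_r() -> float:
    return math.exp(random.gauss(math.log(1.2), 0.72))   # median 1.2

def sample_ceiling_years() -> float:
    return math.exp(random.gauss(math.log(13.0), 1.0))   # median 13

N = 100_000
hits = sum(sample_r() > 1 and sample_ceiling_years() > 10 for _ in range(N))
print(hits / N)  # ~0.36, in the same ballpark as the 33% constant-r run
```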
Initial speed matters somewhat less here, because 10 years of progress is enough that r matters much more. (But it can move the probability up or down by 5-10% relative to the default model settings.)
So a simplified argument for why >10 years in <1 year is unlikely could be roughly:
If there weren’t any upper bounds, we’d be a bit more likely than 50/50 to reach infinity in 1 year. But it’s only a bit more likely than 50/50 that 10 years of progress is even possible with arbitrarily powerful technology. Multiplying these together, we get a probability that’s somewhat greater than a quarter, let’s say a third.
But progress will probably slow down a lot before hitting the ultimate limits (rather than keep going at full speed and then crash into the ceiling). It’s very unclear how to model this, but let’s say you need a bit of extra margin for both the upper bound and for r — maybe you need the upper bound to allow for like 11 years of progress (because the last year might be slow) and for r to be greater than 1.5 (so the speed of progress has time to gather up a bit of momentum before it goes below 1). 11 years of progress is around 50/50 likely, and r greater than 1.5 is a bit below 50/50, so now we’re a bit below one quarter, say 20%.
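Putting the rough numbers from those two steps side by side (these are the guesses above, not outputs of the report's model):

$$P(r > 1) \times P(\text{limit} > 10\,\text{y}) \approx 0.6 \times 0.6 \approx \tfrac{1}{3}$$
$$P(r > 1.5) \times P(\text{limit} > 11\,\text{y}) \approx 0.45 \times 0.5 \approx 0.22 \approx 20\%$$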
Nice! Yep, this is a great analysis and checks out for me. I think it’s really valuable to back out qualitative stories to support the conclusions of these models. Thanks very much.
I think it’s possible to get more sceptical of >10 years in <1 year by saying:
Even if r >> 1 today, that’s not strong evidence that r will remain >1 until we’re very near effective limits. E.g. suppose r = 4 today, and the effective limits of the tech are 15 OOMs away. There’s a good chance r falls below 1 after a few OOMs of progress. But my model assumes that r would remain above 1 until we’re very close to limits. So my model is too aggressive.
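A toy illustration of the gap (the halving schedule is hypothetical, chosen only to make the point concrete): if r starts at 4 with effective limits 15 OOMs away, but halves every 3 OOMs of progress instead of holding steady until near the ceiling, it crosses 1 long before the limits bind.

```python
import math

# Hypothetical decay schedule r(x) = r0 * 2**(-x / halving_ooms),
# where x is OOMs of software progress so far; not the report's model.
r0, halving_ooms, ceiling_ooms = 4.0, 3.0, 15.0
ooms_until_r_below_1 = halving_ooms * math.log2(r0)
print(ooms_until_r_below_1)  # 6.0: progress starts decelerating with
                             # 9 of the 15 OOMs of headroom still unused
```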
This relates to you saying “It’s very unclear how to model this, but let’s say you need a bit of extra margin for both the upper bound and for r — maybe you need the upper bound to allow for like 11 years of progress (because the last year might be slow) and for r to be greater than 1.5 (so the speed of progress has time to gather up a bit of momentum before it goes below 1).” Plausibly you need to add more margin than you’ve done there for this to go through. (Though as you point out in another comment, it’s plausible my values for limits are too conservative.)