I’m spending about a quarter of my time thinking about how best to get data on this and predict whether we’re heading for a software intelligence explosion. For now, one thought is that the inference scaling curve is more likely to be a power law, because a power law is scale-free and consistent with a world where AIs are prone to getting stuck on harder tasks, but get stuck less and less as their capability increases.
My current guess is still something like the independent-steps model, which yields a power law.
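To make the scale-free point concrete, here’s a minimal numeric sketch; the functional forms and the `alpha`/`lam` constants are my own toy choices, not fitted values. Under a power law, each doubling of inference compute cuts the failure rate by the same factor no matter where you are on the curve, whereas under an exponential the effect of doubling depends on the absolute scale:

```python
import math

# Toy comparison of a power-law vs. an exponential inference scaling curve.
# alpha and lam are arbitrary illustrative constants, not fitted values.
alpha, lam = 0.5, 0.1

for c in [1.0, 10.0, 100.0]:  # inference compute, arbitrary units
    # Fraction of the failure rate that remains after doubling compute:
    power_ratio = (2 * c) ** -alpha / c ** -alpha        # constant: 2**-alpha
    exp_ratio = math.exp(-lam * 2 * c) / math.exp(-lam * c)  # shrinks with c
    print(f"c={c:6.1f}: doubling leaves x{power_ratio:.3f} of failures "
          f"(power law) vs x{exp_ratio:.3g} (exponential)")
```

The power-law column is the same at every scale (that’s the scale-free property), while the exponential column collapses as compute grows.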
The qualitative conclusions seem to be the same if you use a power law and most of the progress comes from a change in slope:
However, if half the progress comes from a change in intercept, we get this weird graph with a discontinuity at the end:
Not sure what’s going on there. Maybe the intercept has risen enough that there is no longer any crossover point, despite the slope of the AI line still being shallower than the slope of the human line?
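One way to check that guess: model both curves as straight lines in log-log space and track the crossover directly. This is a minimal sketch under my own assumptions; the slopes, intercepts, and the `x_max` cutoff are illustrative, not values from the model above:

```python
import numpy as np

# Straight lines in log-log space (all constants here are illustrative):
#   AI:    y = a + b*x   (shallower slope b)
#   Human: y = c + d*x
# With b != d the lines cross exactly once, at x* = (c - a) / (b - d).

def crossover(a, b, c, d):
    """x-coordinate where the AI line meets the human line."""
    return (c - a) / (b - d)

b, c, d = 0.5, 0.0, 1.0   # AI slope shallower than the human slope
x_max = 10.0              # right edge of the plotted range

# Sweep the AI intercept upward ("progress from change in intercept"):
for a in np.linspace(0.0, 6.0, 7):
    x_star = crossover(a, b, c, d)
    note = "" if x_star <= x_max else "  <- no crossover within the plotted range"
    print(f"intercept a={a:.1f}: crossover at x* = {x_star:4.1f}{note}")
```

If whatever the graph plots at the end is defined via this crossover, then the moment x* slides past the edge of the modelled range the plotted quantity has to jump, which would show up as exactly that kind of discontinuity.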