The problem with this model is that its predictions depend a lot on how you draw the boundary around “field”. Take Yudkowsky’s example of startups: how do we explain small startups succeeding where large companies failed?
I don’t quite see how this is a problem for the model. The narrower you draw the boundary, the more jumpy progress will be, right?
Successful startups are big relative to individuals, but not that big relative to the world as a whole. If we’re talking about a project / technology / company that can rival the rest of the world in its output, then the relevant scale is trillions of dollars (probably tens of trillions), not billions.
And while the most fantastically successful startups can become billion dollar companies within a few years, nobody has yet made it to a trillion in less than a decade.
EDIT: To clarify, I’m not trying to say that something couldn’t grow faster than any previous startup. There could certainly be a ‘kink’ in the rate of progress, like you describe. I just want to emphasize that:
- startups are not that jumpy, on the world scale
- the actual scale of the world matters
A simple model for the discontinuousness of a field might have two parameters — one for the intrinsic lumpiness of available discoveries, and one for total effort going into discovery. And,
- all else equal, more people means smoother progress — if we lived in a trillion person world, AI progress would be more continuous
- it’s an open empirical question whether the actual values of these parameters will result in a smooth or jumpy takeoff:
  - even if investment in AI is in the tens of trillions and a meaningful fraction of all world output, it could still be that the actual territory of available discoveries is so lumpy that progress is discontinuous
  - but remember that reality has a surprising amount of detail, which I think tends to push things in a smoother direction — it means there are more fiddly details to work through, even when you have a unique insight or technological advantage
  - or, in other words, even if one draw from the distribution turns out to be an outlier, actual progress in the real world will be the sum of many different draws, which tends to pull the total back toward the typical regime
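The two-parameter model above can be sketched as a toy simulation. (The function name and the choice of a Pareto distribution for discovery sizes are my own illustrative assumptions, not anything from the original comment.)

```python
import random

def largest_jump_share(alpha, n_draws, seed=0):
    """Simulate total progress as a sum of discovery sizes drawn from a
    Pareto(alpha) distribution; return the fraction of all progress
    contributed by the single largest discovery."""
    rng = random.Random(seed)
    draws = [rng.paretovariate(alpha) for _ in range(n_draws)]
    return max(draws) / sum(draws)

# Holding lumpiness fixed (tail index alpha = 1.5, finite mean), more
# total effort (more draws) makes the biggest single jump matter less.
for n in (10, 1000, 100000):
    print(n, round(largest_jump_share(alpha=1.5, n_draws=n), 3))
```

The first parameter (lumpiness) is the tail index; the second (effort) is the number of draws. With a finite-mean tail, the largest jump’s share shrinks as effort grows, matching the “more people means smoother progress” point, while a sufficiently heavy tail would keep progress jumpy even at enormous effort.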
I don’t quite see how this is a problem for the model. The narrower you draw the boundary, the more jumpy progress will be, right?
So, you’re saying: if we draw the boundary around a narrow field, we get jumpy/noisy progress. If we draw the boundary around a broad field, all the narrow subfields average out and the result is less noise. This makes a lot of sense, thank you!
The question is what metric we use to average over the subfields. For example, on some metrics the Manhattan Project might be a rather small jump in military-technology-averaged-over-subfields. But its particular subfield had a rather outsized impact! In general, I think that “impactfulness” has a heavy-tailed distribution, and therefore the “correct” averaging still leaves a fair amount of jumpiness.
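A minimal sketch of the heavy-tail point: if per-subfield impact follows a Pareto distribution with tail index below 1 (so the mean is infinite), the single most impactful subfield keeps dominating the average no matter how many subfields are folded in. The distribution choice is an assumption for illustration only.

```python
import random

def top_subfield_share(alpha, n_subfields, seed=0):
    """Draw per-subfield 'impact' from a Pareto(alpha) distribution and
    return the fraction of the cross-subfield total (and hence of the
    average) contributed by the single most impactful subfield."""
    rng = random.Random(seed)
    impacts = [rng.paretovariate(alpha) for _ in range(n_subfields)]
    return max(impacts) / sum(impacts)

# With alpha < 1 the mean is infinite, so averaging over more subfields
# does not wash out the top contributor the way a thin tail would.
for n in (100, 10000):
    print(n, round(top_subfield_share(alpha=0.8, n_subfields=n), 3))
```

Under this kind of tail, “correct” averaging over subfields still inherits the jumpiness of its single heaviest member, which is the Manhattan-project intuition in miniature.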
And while the most fantastically successful startups can become billion dollar companies within a few years, nobody has yet made it to a trillion in less than a decade.
Yeaaah, but like I said before, I am skeptical of giving so much weight to data from economics. Economics reflects a lot about people and about the world, but there are facts about physics/math it cannot possibly know about, so evidence from such facts cannot be meaningfully overturned with economic data.
Moreover, from certain angles singleton takeoff can look sort of like a “normal” economic story. In one case, a person has an idea, does a lot of work, gets investment, etc., and after a decade there’s a trillion dollars. In the other case, a person builds an AI, the AI has some ideas, [stuff happens], and after a decade nanobots kill everyone. As Daniel Kokotajlo argued, what actually matters is when the point-of-no-return (PONR) is. And the PONR might be substantially earlier than the analogue of a trillion dollars.
all else equal, more people means smoother progress — if we lived in a trillion person world, AI progress would be more continuous
Would it? It’s clear that progress in AI would be faster, but why more continuous?
I think that the causation actually goes in the opposite direction. If a field has a lot of small potential improvements with substantial economic value, then a lot of people will work in the field because (i) you don’t need extremely intelligent people to make progress and (ii) it pays off. If a field has a small number of large improvements, then only a small number of people are able to contribute to it. So, a lot of people working on AI is evidence about the kind of progress happening today, but not strong evidence about the absence of significant kinks in the future.