I don’t quite see how this is a problem for the model. The narrower you draw the boundary, the more jumpy progress will be, right?
So, you’re saying: if we draw the boundary around a narrow field, we get jumpy/noisy progress. If we draw the boundary around a broad field, all the narrow subfields average out and the result is less noise. This makes a lot of sense, thank you!
The question is, what metric do we use to average the subfields? For example, on some metrics the Manhattan Project might be a rather small jump in military-technology-averaged-over-subfields. But its particular subfield had a rather outsized impact! In general, I think that “impactfulness” has a heavy-tailed distribution, and therefore the “correct” averaging still leaves a fair amount of jumpiness.
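A minimal sketch of that last point (my own toy illustration, not anything from the thread): when subfield impacts are heavy-tailed, the single largest subfield can dominate the aggregate, so averaging over many subfields does not smooth things out the way it would for light-tailed impacts. Here I compare a uniform (light-tailed) impact distribution against a Pareto distribution with shape close to 1; the choice of 30 subfields and shape 1.1 is arbitrary.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def max_share(draws):
    """Fraction of the total contributed by the single largest draw."""
    return max(draws) / sum(draws)

N = 30  # number of subfields being averaged (arbitrary)

# Light-tailed subfield impacts: uniform on (0, 1)
light = [random.uniform(0, 1) for _ in range(N)]

# Heavy-tailed subfield impacts: Pareto with shape alpha = 1.1
# (alpha near 1 means the mean is dominated by rare huge draws)
heavy = [random.paretovariate(1.1) for _ in range(N)]

print(f"light-tailed: largest subfield = {max_share(light):.0%} of total")
print(f"heavy-tailed: largest subfield = {max_share(heavy):.0%} of total")
```

With light tails the biggest subfield is a small slice of the total, so the average is smooth; with heavy tails one subfield (a Manhattan-Project-like outlier) can be a large fraction of the whole, so the broad-boundary aggregate inherits the jumpiness.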
And while the most fantastically successful startups can become billion dollar companies within a few years, nobody has yet made it to a trillion in less than a decade.
Yeah, but as I said before, I am skeptical of giving so much weight to data from economics. Economics reflects a lot about people and about the world, but there are facts about physics/math it cannot possibly know about, so evidence from such facts cannot be meaningfully overturned by economic data.
Moreover, from certain angles singleton takeoff can look sort of like a “normal” economic story. In one case, a person has an idea, does a lot of work, gets investments, etc., and after a decade there’s a trillion dollars. In the other case, a person builds an AI, the AI has some ideas, [stuff happens], and after a decade nanobots kill everyone. As Daniel Kokotajlo argued, what actually matters is when the point-of-no-return (PONR) is. And the PONR might be substantially earlier than the analogue-of-trillion-dollars.
all else equal, more people means smoother progress — if we lived in a trillion person world, AI progress would be more continuous
Would it? It’s clear that progress in AI would be faster, but why more continuous?
I think that the causation actually goes in the opposite direction. If a field has a lot of small potential improvements with substantial economic value, then a lot of people will work in the field because (i) you don’t need extremely intelligent people to make progress and (ii) it pays off. If a field has a small number of large improvements, then only a small number of people are able to contribute to it. So, a lot of people working on AI is evidence about the kind of progress happening today, but not strong evidence about the absence of significant kinks in the future.