Thanks for the recommendation! I liked ryan’s sketches of what capabilities an Nx AI R&D labor AI might possess. Makes things a bit more concrete. (Though I definitely don’t like the name.) I’m not sure if we want to include this definition, as it is pretty niche. And I’m not convinced of its utility: when I tried drafting a paragraph describing it, I struggled to articulate why readers should care about it.
Here’s the draft paragraph. “Nx AI R&D labor AIs: The level of AI capabilities that is necessary for increasing the effective amount of labor working on AI research by a factor of N. This is not the same thing as the capabilities required to increase AI progress by a factor of N, as labor is just one input to AI progress. The virtues of this definition include: ease of operationalization, [...]”
I think the main value of that operationalization is that it enables more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].
which would lead to “losing the forest for the trees”, make the abstraction too effortful to use in practice, and/or risk rendering it irrelevant as soon as something changes in the world of AI
e.g. a higher-level abstraction like “AI that speeds up AI development by a factor of N” might at first glance seem more useful. But as you and ryan noted, speed of AI development depends on many factors, so that operationalization would mix together many distinct things, hiding relevant causal structure and making it difficult/confusing to think about AI development.
I think this approach to thinking about AI capabilities is quite pertinent. Could be worth including “Nx AI R&D labor AIs” in the list?