I think the main value of that operationalization is that it enables more concrete thinking/forecasting about how AI might progress. It models some of the relevant causal structure of reality at a reasonable level of abstraction: not too nitty-gritty[1], not too abstract[2].
[1] Which would lead to “losing the forest for the trees”, make the abstraction too effortful to use in practice, and/or risk making it irrelevant as soon as something changes in the world of AI.
[2] E.g. a higher-level abstraction like “AI that speeds up AI development by a factor of N” might at first glance seem more useful. But as you and ryan noted, speed-of-AI-development depends on many factors, so that operationalization would mix together many distinct things, hiding relevant causal structure of reality and making it difficult/confusing to think about AI development.