I just feel like the length and complexity of the thinking involved is fundamentally undermined by this uncertainty. The consequences are almost entirely parameter-determined (since, as you say, the core model is very simple). Something like how many OOMs of gains are possible before hitting limits, for example, is key: this is literally what makes the difference between a world with slightly better software engineering, one in which all software engineers and scientists are now unemployed because AIs completely wipe the floor with them, and one in which ASI iteratively self-improves its way to physical godhood and takes over the light-cone. And I feel like an estimate of that kind implies answers to so many open questions about the world, the nature of intelligence, and the nature of computation itself, that I'm not sure how any estimate could produce anything other than some kind of almost circular reasoning.