I agree with the general point, but don’t think it applies to this model.
I’m not calculating anything to a high degree of precision, inputs or outputs.
There aren’t complicated interaction effects with lots of noisy inputs such that the model might overfit to noise.
I could have dropped the code, but then I’d have a worse understanding of what my best-guess inputs imply about the output. The analysis would also be less transparent, and others couldn’t run it with their preferred inputs.
I just feel like the length and complexity of the thinking involved is fundamentally undermined by this uncertainty. The consequences are almost entirely parameter-determined (since, as you say, the core model is very simple). Something like how many OOMs of gains are possible before hitting limits, for example, is key—this is literally what makes the difference between a world with slightly better software engineering, one in which all software engineers and scientists are unemployed because AIs completely wipe the floor with them, and one in which ASI iteratively self-improves its way to physical godhood and takes over the light-cone. A parameter of that kind implies answers to so many very open questions about the world, the nature of intelligence, and of computation itself that I’m not sure how any estimate could produce anything other than some kind of almost circular reasoning.