One can model a local part of the future, not just the past.
Here’s a related example. If I know the spring constant of some material, I can model the force as linear in displacement. I know the force is not truly linear in displacement, but the linear model is still a useful simplification: it allows an analytical solution and works well in some reasonable local neighborhood. I might not know exactly where it breaks down, and I can still use the approximation anyway. In fact, I don’t actually know the math that describes precisely where it breaks down, so trying to add that to the model would probably just result in saying something wrong; it might even mess up my LOCAL predictions.
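To make that concrete, here is a minimal sketch in Python (every number is invented, and the cubic term is just a stand-in for whatever the real nonlinearity might be) of a linear spring model compared against a hypothetical "truer" nonlinear force:

```python
# A toy "spring" where the true force has a small cubic correction that the
# linear model ignores. All numbers are invented for illustration.
k = 50.0       # N/m, the spring constant measured near equilibrium
beta = 2000.0  # N/m^3, stand-in for the unknown real nonlinearity

def force_linear(x):
    """First-order (Hooke's law) model: force linear in displacement."""
    return k * x

def force_true(x):
    """A hypothetical 'true' force that the linear model approximates."""
    return k * x + beta * x**3

for x in (0.01, 0.05, 0.10, 0.30):  # displacements in meters
    rel_err = abs(force_linear(x) - force_true(x)) / force_true(x)
    print(f"x = {x:.2f} m  ->  relative error of linear model: {rel_err:.1%}")

# Near x = 0 the error is negligible; further out the model quietly fails,
# and nothing inside the linear model itself tells you where that boundary is.
```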
I don’t think you would burst into an undergraduate physics class and say, “Why are you using this first-order approximation?? In reality, stretching a spring far past its elastic limit will cause the spring ‘constant’ to decrease! It’s not really a constant! It is NEVER really a constant! This model is making you more wrong!”
Rather, the professor should simply say: “The linear model is an approximation that works for small displacements.”
When it comes to springs, someone (not me) does know the math that models, reasonably well, where the linear approximation breaks down for a given material. For scaling laws, we don’t even know that math.
Ironically, I have a $250 bet going with Kokotajlo that task lengths will scale (quite) sub-exponentially, because I believe the sigmoid levels off soon, for a long list of concrete reasons that apply to this specific case. But NOT because I’m actually fitting a sigmoid.
Yes, this all seems quite reasonable. I think it’s a failure if we don’t at least acknowledge that the model is going to break down at some point and offer some guesses about when, and that failure is what I see a lot when I read about exponential growth models: the modeler presents a curve, but no model or even a theory of how growth might end. To me that feels like only half a model, and it makes the model much less useful, because it has such limited predictive power and it isn’t even attempting to quantify its own limitations.
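To illustrate what I mean by only getting half a model, here is a toy sketch (all numbers invented, not a fit to any real benchmark data): an exponential and a sigmoid can be nearly indistinguishable over the window you have actually observed, while implying completely different futures.

```python
import math

# Toy illustration (all numbers made up): an exponential and a sigmoid that
# agree closely over the observed window but diverge wildly afterwards.

def exponential(t, rate=0.4):
    return math.exp(rate * t)

def sigmoid(t, cap=200.0, rate=0.4, midpoint=None):
    # Choose the midpoint so the sigmoid's early tail matches the exponential.
    if midpoint is None:
        midpoint = math.log(cap) / rate
    return cap / (1.0 + math.exp(-rate * (t - midpoint)))

print("observed window (the two models are nearly indistinguishable):")
for t in range(0, 6):
    print(f"  t={t:2d}: exp={exponential(t):7.2f}  sigmoid={sigmoid(t):7.2f}")

print("extrapolation (same two models, very different futures):")
for t in (10, 15, 20):
    print(f"  t={t:2d}: exp={exponential(t):7.1f}  sigmoid={sigmoid(t):7.1f}")

# The observed points alone can't tell you which curve (if either) to believe
# without some theory of whether and when the growth hits a ceiling.
```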
Like, I’m okay with saying we don’t know how to quantify something at all, but once we start quantifying, I expect to see the quantification carried through. Making a bet is a great way to quantify!