You say “if we are to accurately model the world”...
If I am modelling the path of a baseball, and I write “F = mg”, would you “correct” me that it’s actually inverse square, that the Earth’s gravitation cannot stay at this strength to arbitrary heights? If you did, I would remind you that we are talking about a baseball game, and not shooting it into orbit—or conclude that you had an agenda other than determining where the ball lands.
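To put a number on it (using a rough value for the Earth's radius): even for a very high fly ball, the inverse-square correction to F = mg is a few parts in a hundred thousand. A minimal check:

```python
# How much does inverse-square gravity actually differ from F = mg
# over the height of a fly ball? (Rough value for Earth's radius.)
R = 6.371e6   # Earth's radius, meters (approximate)
h = 50.0      # a generous height for a fly ball, meters

ratio = (R / (R + h)) ** 2                      # g(h) / g(surface) under inverse-square
print(f"g(h)/g(0)       = {ratio:.8f}")         # ~0.99998430
print(f"error of F = mg = {1 - ratio:.1e}")     # ~1.6e-05
```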
What if I’m sampling from a population, and you catch me multiplying probabilities together, as if my draws are independent, as if the population is infinite? Yes, there is an end to the population, but as long as it’s far away (my sample is a small fraction of the whole), the dependence induced by sampling without replacement is negligible.
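To put a number on “negligible”, here is a sketch comparing the exact without-replacement probability (hypergeometric) to the independence approximation (binomial), with made-up population figures:

```python
from math import comb

# Population of N items, K of them "successes"; draw n without replacement.
# Numbers are made up purely for illustration.
N, K, n, k = 100_000, 30_000, 10, 3

# Exact: hypergeometric, which accounts for sampling without replacement.
p_exact = comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Approximation: binomial, which pretends the draws are independent.
p = K / N
p_approx = comb(n, k) * p**k * (1 - p)**(n - k)

print(f"without replacement: {p_exact:.6f}")
print(f"independent approx.: {p_approx:.6f}")
```

The two agree to within about a part in a thousand here, and the gap shrinks further as the sample becomes a smaller fraction of the population.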
Well, that’s exactly the question: whether to include an effect in the model or treat it as negligible. Effects like finite population size, diminishing gravity, or the “crowding” that turns an exponential growth model logistic.
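For the growth case, a tiny sketch (with arbitrary parameters) of what the crowding term does: the two curves are indistinguishable early on, then part ways as the population approaches its ceiling.

```python
from math import exp

# Exponential vs logistic growth with the same rate r and starting value N0.
# K is the crowding ceiling; all values are arbitrary, for illustration only.
r, N0, K = 0.5, 1.0, 1000.0

for t in (0, 5, 10, 15, 20, 25):
    exponential = N0 * exp(r * t)
    logistic = K / (1 + (K / N0 - 1) * exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:12.1f}  logistic={logistic:7.1f}")
```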
And the question cannot be escaped just by noting the effect is important eventually.
For most growth curves, the effect is important on reasonable time scales. Most of the time we’re trying to model something where growth will run out in years or decades (though perhaps to be replaced by another S-curve due to innovation).
For example, it should have been obvious to everyone that LLM scaling was on an S-curve: that gains from scale alone would run out, and we’d have to start looking for gains from other sources. That many modelers before 2025 thought LLM gains could go on forever from scaling alone is a failure of their models. Had they even tried to model when scaling would run out, they could have better anticipated when future growth in capabilities would have to come from other sources. But most models I saw didn’t even attempt this.
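The exercise I mean doesn’t have to be elaborate. A toy version, with entirely made-up numbers: fit a logistic in log-compute to a benchmark score and read off roughly where the gains from scale alone flatten.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy version of the exercise: fit a logistic in log-compute to benchmark
# scores and read off where gains from scale alone flatten out.
# All numbers below are made up purely for illustration.
log_compute = np.array([21, 22, 23, 24, 25, 26])   # log10(training FLOPs)
score = np.array([18, 30, 47, 62, 72, 78])         # benchmark score, %

def logistic(x, ceiling, slope, midpoint):
    return ceiling / (1 + np.exp(-slope * (x - midpoint)))

(ceiling, slope, midpoint), _ = curve_fit(logistic, log_compute, score,
                                          p0=[90, 1, 24])
print(f"estimated ceiling from scale alone: ~{ceiling:.0f}% "
      f"(midpoint around 1e{midpoint:.1f} FLOPs)")
```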
I’m surprised at how hard it is for me to think of counterexamples.
I thought surely whale populations, given their slow generation times, but it looks like humpback whale populations have already recovered from whaling, and blue whales will get there before long.
Thinking again—in my baseball example, gravity is pulling the ball into the domain of applicability of the constant acceleration model.
Maybe what’s special about the exponential growth model is that it implies escape from its own domain of applicability, in a time that grows only slowly (logarithmically) with the threshold.
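Concretely: exponential growth $N_0 e^{rt}$ reaches a threshold $T$ at

$$t^{*} = \frac{1}{r}\ln\frac{T}{N_0},$$

so moving the threshold out a thousandfold buys only $\ln(1000)/r \approx 6.9/r$ more time, about ten additional doublings. The model forecasts the violation of its own assumptions, and on a short schedule.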