Yes, if you want to model growth, you probably want something like a sigmoid or Gompertz curve, whose value approaches some limit rather than rising infinitely forever.
But if you want to actually draw that curve, and use it to predict effects that you care about, you need to know what limit it approaches.
Whether that curve approaches 1 or 10^100 is going to matter a lot.
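(To make the stakes concrete, here’s a minimal Python sketch of a Gompertz curve; the parameter names and values are illustrative, not from any real dataset. The point is that the asymptote is an explicit parameter, so actually drawing the curve forces you to commit to a ceiling.)

```python
import numpy as np

def gompertz(t, ceiling, b, c):
    """Gompertz curve: approaches `ceiling` as t -> infinity.

    b sets the initial displacement, c the growth rate.
    """
    return ceiling * np.exp(-b * np.exp(-c * t))

t = np.linspace(0, 20, 100)
# Same family of curve, but the ceiling you pick changes the
# long-run prediction by 100 orders of magnitude.
modest = gompertz(t, ceiling=1.0, b=5.0, c=0.5)
enormous = gompertz(t, ceiling=1e100, b=5.0, c=0.5)
```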
Yes, this is true, and I think modelers are cowards for not making a guess.
(I’m only being a little hyperbolic here. Lots of people are deathly afraid of being wrong, so I suspect many modelers are drawn to exponentials: there’s less to model, and when the exponential runs out they can shrug it off, since they never made a specific prediction about when growth would end.)
It’s possible to fit data points to an S-curve without assuming in advance where it flattens out, though then the limit may not appear as an explicit part of your prediction (a sketch of that kind of fit follows below). We don’t want to assume the thing can grow forever; the market saturates somewhere, even if we don’t know where.
Like, if you’re forecasting traffic growth for a web service, you know that there’s a finite amount of human attention that can be spent on looking at web pages. It might be a lot higher than anyone expected, but it’s not infinite.
See for instance Wikipedia’s growth models (sadly not updated since 2015): they started out with an exponential model, and Gompertz ended up being much more accurate.
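As a sketch of what fitting an S-curve without a baked-in ceiling can look like (this assumes scipy; the logistic form, the synthetic data, and every parameter value here are illustrative, not anyone’s actual model):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic S-curve; K is the carrying capacity (the unknown ceiling)."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "observed" data (illustrative only): noisy samples from a
# logistic with true ceiling K = 100, observed through the inflection point.
rng = np.random.default_rng(0)
t_obs = np.linspace(0, 10, 40)
y_obs = logistic(t_obs, K=100.0, r=1.0, t0=8.0) + rng.normal(0.0, 1.0, t_obs.size)

# Fit with K as a free parameter: the ceiling comes out of the fit
# rather than going in as an assumption.
(K_hat, r_hat, t0_hat), _ = curve_fit(
    logistic, t_obs, y_obs, p0=[2 * y_obs.max(), 1.0, t_obs.mean()]
)
print(f"estimated ceiling: {K_hat:.1f}")
```

The catch is that when the data stops well before the inflection point, the fitted ceiling becomes extremely sensitive to noise, which is presumably part of why modelers hesitate to commit to one.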
This comment makes me think the conversation can’t usefully continue on the object level; your wider beliefs about rationality seem to diverge pretty far from mine.
Why does it seem good to you to provide models that we know are incomplete while failing to acknowledge that incompleteness? That’s how most exponential models of growth look to me.
This is a strawman. Essentially everyone acknowledges the incompleteness.
The AI 2027 guys expect it to go superexponential.
I expect it to go sigmoid.
We both say this loudly and frequently.
EDIT: actually you may have a point about “situational awareness”
Yes, some people are more careful than others.
I’m on Twitter all day. People love charts that go up and to the right. It feels like every week I see multiple tweets where people model some growth process, usually related to AI, with a naive exponential and no consideration of the fact that growth ends.
I saw enough of these that yesterday it finally clicked that I should write this post.