This comment makes me think the conversation can’t usefully continue on the object level—your wider beliefs about rationality seem to depart from mine pretty far.
Why does it seem good to you to provide models that we know are incomplete while failing to acknowledge that incompleteness? Because that’s how most exponential models of growth look to me.
This is a strawman. Essentially everyone acknowledges the incompleteness.
The AI 2027 guys expect it to go superexponential.
I expect it to go sigmoid.
We both say this loudly and frequently.
EDIT: Actually, you may have a point about “situational awareness.”
Yes, some people are more careful than others.
I’m on Twitter all day. People love charts that go up and to the right. It feels like every week I see multiple tweets where people model some growth process, usually related to AI, with a naive exponential and no consideration of the fact that growth ends.
I saw enough of these that yesterday it finally clicked that I should write this post.
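To make the exponential-versus-sigmoid contrast concrete, here is a minimal sketch (not from the thread; the growth rate r, carrying capacity K, and starting level x0 are arbitrary illustrative values) of how the two curves are nearly indistinguishable early on and only diverge once saturation kicks in:

```python
# Minimal sketch (illustrative values only): an exponential and a logistic
# (sigmoid) with the same growth rate and starting level look almost identical
# early on, then diverge sharply once the logistic nears its carrying capacity.
import numpy as np

K = 100.0   # assumed carrying capacity: the level where growth stops
r = 0.5     # assumed growth rate shared by both models
x0 = 1.0    # assumed starting level

t = np.arange(0, 21)                                  # time steps 0..20
exponential = x0 * np.exp(r * t)                      # naive exponential: never stops
logistic = K / (1 + (K / x0 - 1) * np.exp(-r * t))    # sigmoid: saturates at K

for step in (2, 5, 10, 20):
    print(f"t={step:2d}  exponential={exponential[step]:8.1f}  "
          f"logistic={logistic[step]:6.1f}")
# Early points (t=2, t=5) are close; by t=20 the exponential is over 22,000
# while the logistic has flattened out just under K=100.
```

The rough point is that a chart fit to the early points alone cannot tell these apart; the ceiling only shows up later, which is why extrapolating a naive exponential without asking where growth ends can be so misleading.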