I’ve never enjoyed, or agreed with, arguments of the form “X is inherently, intrinsically incapable of Y.” The presence of such statements suggests there is social tension around the possibility that X might in fact be capable of Y. And such statements may enjoy moderate social acceptance for no better reason than that they are trivially disprovable: if X ever does Y, the claim collapses. Disprovable statements might be overrated a lot, and if so, boy, would I hate that.
This seems kind of relevant to the main point of this post too:
GPTs are not Imitators, nor Simulators, but Predictors.
Question: Is GPT-5 an Imitator? A Simulator? A Predictor? What about GPT-6?
Does the message of this post become moot for larger, more powerful LLMs? Or does it predict that such models have already reached their limit?