How can we rule out the possibility that anthropomorphization is an artifact of language use? It’s possible that any sufficiently advanced user of a human language is forced, by the use of that language itself, to imply the presence of a humanoid “self” in order to communicate complex concepts effectively.
“AI cognition” and “AI use of language” are not quite the same thing. Have you been able to use non-linguistic AI as a reference point in sorting out what’s “because transformer model” from what’s “because language user”? Most of the consumer-facing image stuff is still heavily language-dependent, but perhaps there are frontier models in other fields trained on data other than language? I’m not sure what the state of the art is there, but it seems like a model trained on something like chemistry or weather forecasting could be worth considering in the question of what “pure AI”, outside the social baggage imposed by language use, might be like.