ChatGPT is settling the Chinese Room argument

Observing people’s experiences with ChatGPT leads me to conclude that, while to many it appears to pass the Turing test, it can be demonstrated that the model has no deeper understanding (i.e., it contains no useful representation of semantics) and no ability to generalize and transfer knowledge to adjacent domains.
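
One way to demonstrate this is to probe the model with a counterfactual variant of a well-known riddle, where pattern-matching against the training set yields the stock answer while even shallow semantic processing would not. Below is a minimal probe sketch, assuming the OpenAI Python client (v1+), an API key in the OPENAI_API_KEY environment variable, and an illustrative model name; the specific prompt is just one example of such a probe, not a definitive test.

```python
# Minimal probe sketch: a twisted version of a memorized riddle.
# A system that merely retrieves the familiar pattern tends to answer
# "they weigh the same", exposing retrieval rather than understanding.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

probe = "Which weighs more: a pound of feathers or two pounds of bricks?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": probe}],
)
print(response.choices[0].message.content)
```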

This is understandable given that GPT is an over-parametrized model which contains some encoding of its entire (very large) training set. It is a de facto implementation of a Chinese Room, fluent enough to parse and generate passably correct language. (The lack of generalization capability is also a known property of current ANNs.)
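
To make the over-parametrization point concrete, here is a toy sketch in plain NumPy (not GPT itself, only an analogy): a polynomial with as many coefficients as training points encodes the training set essentially exactly, yet fails as soon as it is queried slightly outside it. The function, sample sizes, and seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x_train = np.linspace(-1.0, 1.0, n)
y_train = np.sin(3 * x_train) + 0.05 * rng.standard_normal(n)

# Degree n-1 gives the model as many coefficients as data points,
# so it can interpolate (i.e. memorize) the training set.
coeffs = np.polyfit(x_train, y_train, deg=n - 1)

x_test = np.linspace(-1.2, 1.2, 200)  # slightly outside the training range
train_err = np.abs(np.polyval(coeffs, x_train) - y_train).max()
test_err = np.abs(np.polyval(coeffs, x_test) - np.sin(3 * x_test)).max()

print(f"max train error: {train_err:.1e}")  # near zero: training set encoded
print(f"max test error:  {test_err:.1e}")   # large: no transfer beyond it
```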

The main takeaway is that a Chinese Room which merely retrieves appropriate answers from an (encoded) set is not the same thing as semantic representation and processing, and can be distinguished from actual intelligence relatively easily, bolstering John Searle’s argument.