I somewhat agree with your description of how LLMs seem to think, but I don’t think it explains a general limitation of LLMs. Nor do the patterns you describe seem to me a good explanation of how humans think in general. Ever since The Cognitive Science of Rationality, it has been discussed here that humans usually do not integrate their understanding into a single, coherent map of the world. Instead, humans build and maintain many partial, overlapping, and sometimes contradictory maps that only appear unified. Isn’t that the whole point of Heuristics & Biases? I don’t doubt that the process you describe exists, or that it underlies the heights of human reasoning, but it doesn’t seem to be the basis of the main body of “reasoning” out there on the internet on which LLMs are trained. Maybe they just imitate that? Or at least they will have a lot of trouble imitating human thinking while still building a coherent picture underneath it.