Having read a few GPT-3-generated texts, its kind of pattern-matching babble really reminds me of what is described here as the apologist. Maybe the apologist part of the mind just does not do enough model-based thinking to catch mistakes that are obvious to an explicitly model-based way of thinking (the "revolutionary")?
It seems very plausible to me that the human mind contains both high-level model-based and model-free parts. This would also match the seemingly obvious mistakes in the apologist's reasoning, and it would explain why it is effectively impossible to get someone's apologist to realise its mistakes by talking to them. (I would assume that in healthy people, model-based thinking informs/overrides model-free thinking to a degree.)