Thoughts are events occurring within some mental model (a vague claim, but bear with it). Some of these mental models are strongly grounded in reality (e.g. the model we experience as the external world), so we can be fairly confident of their accuracy. But for something like introspection, we have no reliable ground-truth feedback telling us whether we got it right: it's just our mental model of our own mind, and there is no literal "mind's eye".
So our introspection is often wrong. For example, ask someone to visualize a lion from behind; they'll say they can, but ask for details, like "what do the tail hairs look like?", and they can't answer. A better example: ask someone to visualize a neural network; they will, but ask "how many neurons do you see?" and they won't know, and not for lack of counting. Or people will say they "think in words", or that their internal monologue is fundamental to their thinking, but that's clearly wrong: you have already decided how the rest of the sentence will go before you've thought the first word.
We can establish some basic facts about our thinking by reasoning from observation. For example, if you have an internal monologue (or force yourself to have one), you can confirm that you really do by speaking its words out loud and noticing that this takes very little cognitive effort (you didn't have to think them again). That shows an internal monologue, i.e. precisely simulating words in your head, is possible. The same check works for any action.
Likewise, you can confirm that you had a certain thought, or a thought about something, because you can express it out loud with less effort than you otherwise could. Even here there is room for the thought to have been imprecise; unless you verbalize or otherwise materialize a thought, you don't know whether it was really precise. So all of these checks are grounded in reality, and the corresponding models are therefore likely to be accurate (or can be trained to be, by consistently materializing them). By "materialize" I mean, for example, actually working out a math problem you believe you can solve in your head.