This also reminded me that I wanted to go through The Intentional Stance by Daniel Dennett and find the good bits. Also worth reading is the wiki page.
I think he would state that the model you describe comes from folk psychology.
A relevant passage:
“We have all learned to take a more skeptical attitude to the dictates of folk physics, including those robust deliverances that persist in the face of academic science. Even the “undeniable introspective fact” that you can feel “centrifugal force” cannot save it, except for the pragmatic purposes of rough-and-ready understanding it has always served. The delicate question of just how we ought to express our diminished allegiance to the categories of folk physics has been a central topic in philosophy since the seventeenth century, when Descartes, Boyle and others began to ponder the metaphysical status of color, felt warmth, and other “secondary qualities”. These discussions, while cautiously agnostic about folk physics, have traditionally assumed as unchallenged the bedrock of folk-psychological counterpart categories: conscious perceptions of color, sensations of warmth, or beliefs about the external “world”.”
On LessWrong, people do tend to discard the perception and sensation parts of folk psychology, but keep the belief and goal concepts.
You might have trouble convincing people here, mainly because people are interested in what an intelligence should do, rather than what humans currently do. It is a lot harder to find evidence for what ought to be done than for what is done.
Relevant and new-to-me, thanks.
I’d be interested to hear examples of things, related to this discussion, that people here would not be easily convinced of.
The problem I have found is determining what people accept as evidence about “intelligences”.
If everyone thought intelligence was always somewhat humanlike (i.e. that if we can’t localise beliefs in humans, we shouldn’t try to build AI with localised beliefs), then evidence about humans would constitute some evidence about AI. In that case, things like blindsight (mentioned in The Intentional Stance) would show that beliefs are not easily localised.
I think it is fairly uncontroversial on LessWrong that beliefs aren’t stored in one particular place in humans. However, because people are aware of the limitations of humans, they think they can design AI without those flaws, so they do not constrain their designs to be humanlike, and that allows them to slip localised/programmatic beliefs back in.
To convince them that localised beliefs were incorrect/unworkable for all intelligences would require a constructive theory of intelligence.
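To make the distinction concrete, here is a toy sketch of the two belief architectures being contrasted. This is entirely my own illustration, not anything from Dennett or the discussion above; the variable names and representations are invented for the example.

```python
# Toy contrast between "localised" and "distributed" belief storage.
# Purely illustrative; the representations here are invented assumptions.

# Localised/programmatic beliefs: each belief is an explicit, individually
# addressable entry. You can point at where a belief lives and edit it
# surgically, which is the picture a constructive AI design makes tempting.
localised_beliefs = {
    "snow_is_white": True,
    "grass_is_green": True,
}
localised_beliefs["snow_is_white"] = False  # revise exactly one belief

# Distributed beliefs: judgements are only implicit in a weight vector.
# Each judgement depends on many parameters, and each parameter contributes
# to many judgements, so there is no single place where a belief is stored —
# the kind of non-localisability that evidence like blindsight gestures at.
weights = [0.4, -0.2, 0.7]

def judge(features):
    # The system's "belief" is just the sign of a weighted sum; changing
    # any one weight shifts many judgements at once.
    return sum(w * f for w, f in zip(weights, features)) > 0

print(judge([1.0, 0.5, 0.2]))  # one judgement, diffusely encoded
print(judge([0.1, 1.0, 0.9]))  # another, sharing the same parameters
```

The point of the sketch is only that, in the second architecture, there is nothing you can edit to revise one belief without touching others.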
Does that help?