The problem I have found is determining what people accept as evidence about “intelligences”.
If everyone thought intelligence was always somewhat humanlike (i.e. that if we can't localise beliefs in humans, we shouldn't try to build AI with localised beliefs), then evidence about humans would, to some extent, constitute evidence about AI. In that case, phenomena like blindsight (mentioned in The Intentional Stance) would show that beliefs are not easily localised.
I think it's fairly uncontroversial on LessWrong that beliefs aren't stored in one particular place in humans. However, because people are aware of the limitations of humans, they think they can design AI without those flaws, so they don't constrain their designs to be humanlike, and that allows them to slip localised/programmatic beliefs back in.
To convince them that localised beliefs are incorrect/unworkable for all intelligences would require a constructive theory of intelligence.
Does that help?