If the people involved are good naturalists, they will agree that both the symbolic and the connectionist approaches are making claims about high-level descriptions that can apply to things made of atoms. Jerry Fodor, famously a proponent of the view that brains have a “language of thought,” would still say that the language of thought is a high-level description of collections of low-level things like atoms bumping into other atoms.
My point is that arguments about what high-level descriptions are useful are also arguments about what things “are.” When a way of thinking about the world is powerful enough, we call its building blocks real.
I would still distinguish, here, between describing human minds and trying to build artificial ones. You might have different opinions about how useful various ideas are for each of those tasks. Someone will at some point say “We didn’t build airplanes that flap their wings.” I think a lot of the “old guard” of AI researchers have picked sides in this battle over the years, and the heavy-symbolicist side is in disrepute, but a pretty wide spectrum of views, from “mostly symbolic reasoning with some learned components” to “all learned,” is represented.
I think there’s plenty of machine learning that doesn’t look like connectionism. SVMs were successful for a long time, and they’re not very neuromorphic. I would expect ML that extracts the maximum value from TPUs to be denser and less local than actual brains, and to violate the analogy to brains in other ways too.
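To make the SVM point concrete, here’s a minimal sketch (my own illustration, using scikit-learn, not anything from the discussion above): the fitted classifier is a weighted sum of kernel evaluations against a handful of stored support vectors, with nothing resembling layers of neuron-like units.

```python
# Minimal sketch: a kernel SVM's decision rule is
#   f(x) = sum_i alpha_i * K(sv_i, x) + b
# i.e. kernel evaluations against stored support vectors, not a layered network.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # a simple XOR-like nonlinear labeling

clf = SVC(kernel="rbf", gamma=1.0).fit(X, y)

# The fitted model is characterized by its support vectors and dual coefficients.
print(clf.support_vectors_.shape)    # (n_support, 2)
print(clf.dual_coef_.shape)          # (1, n_support)
print(clf.decision_function(X[:5]))  # signed scores used for classification
```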