Machine language is a known lower level; neurons aren’t; perhaps in the future there will be more microscopic building blocks examined; maybe there is no end to the division itself.
In a computer it would indeed make no sense for a programmer to examine anything below machine language, since that is the level you compile to or otherwise act upon. But no such isomorphism to the mind is known.
If you’d like a parallel to the above from the history of philosophy, compare dialectical reasoning with Aristotelian logic. It’s not by accident that Aristotle explicitly argued that any system meant to include the means of proof must rest on at least one axiom: that nothing can simultaneously possess and lack the same quality (in modern notation, the law of non-contradiction, ¬(A∧¬A)). Proof is absent from dialectics, at least past some level, precisely because no such lower level is built into the system. The dialecticians (Parmenides, Zeno, etc.) explicitly argued against this, the possibility of infinitely dividing matter being one of their premises.
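As an aside, the axiom Aristotle insisted on (non-contradiction) is distinct from the excluded middle, which it is sometimes conflated with. The difference can be sketched in Lean (an illustrative aside, not part of the original argument): non-contradiction is provable constructively, while excluded middle requires a classical axiom.

```lean
-- Non-contradiction: ¬(A ∧ ¬A), provable constructively
theorem non_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun ⟨ha, hna⟩ => hna ha

-- Excluded middle: A ∨ ¬A, needs the classical axiom Classical.em
theorem excluded_middle (A : Prop) : A ∨ ¬A :=
  Classical.em A
```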
> Machine language is a known lower level; neurons aren’t; perhaps in the future there will be more microscopic building blocks examined; maybe there is no end to the division itself.
That doesn’t change the fact that models at either level are of little use for most practical applications. If you are doing gene therapy aimed at changing cognition, it helps to understand what neurons do. If you care about how to memorize information, it’s irrelevant; you would rather focus on the empirics of what happens when humans memorize information.
> It’s not by accident that Aristotle explicitly argued that for any system to include the means to prove something
Aristotle knew little about how to do science and learn through empiricism, and today we have a much better idea of how to learn about the world than people had back then. Thinking in two-thousand-year-old terms while ignoring recent advances in how we gather knowledge is ineffective.