poke—by a weird coincidence I was just looking at the software from van Gelder’s company, Austhink (they make systems for argument mapping). I never read his dynamic cognition papers, but it seems to be rather similar to the critiques of GOFAI (good old-fashioned AI) that were made in the 90s by neural-net people and the situated action people. There is some validity to these critiques, a lot actually, but in a sense they are attacking a strawman. Nobody really believes the brain is a classic Turing machine; even if it is doing symbol processing, it is doing it in a massively parallel, associative style. But it is doing some sort of computation (a variety of sorts, actually), and nobody has come up with a better way of theorizing about what it is doing than computationalism.
Computers are specifically designed so that we don’t have to understand the hardware. That’s why I said it’s spurious to call anything but an artifact a computer.
Practically, programming computers usually requires understanding one or two levels below the level you would like to work at. If I’m coding something, I would like to think in terms of pure algorithms but end up having to think about clock speeds, memory locality, and (if you are Google) heating and electrical supply issues. Computers do a better job of separating out levels than biology does, because they are designed that way, but in both cases you have different levels of operation built out of underlying levels.
To return to the original issue: what is the ontological status of entities and processes that exist at higher levels of this stack? They are certainly made of physics, but are they physics? This is a hard question that refuses to go away, except by declaring it so, as some reductionists would like to do.