I think you really have to treat individual neurons as micro-agents and ask: what's in it for them?
You don’t need goal-directed behavior to explain this; we’re probably looking at a load-balancing process of some sort, but that doesn’t mean that it’s working in a way that’s well modeled by agents acting on desires. Analogously, the branches of a red-black tree appear from the outside to even themselves out if one gets much longer than its sibling, a process that looks pretty goal-oriented, but all the meat of the algorithm goes into reflexively satisfying local constraints that have nothing obviously to do with branch length.
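To make the analogy concrete, here's a rough sketch of insertion in a left-leaning red-black tree (Sedgewick's simplified variant, chosen only because it fits in a few dozen lines). Note that the fixup rules only ever look at the colors of a node and its immediate children; nothing in them measures or compares branch lengths, yet the global result is a tree whose longest path is at most about twice its shortest.

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.color = RED  # new nodes start out red

def is_red(node):
    return node is not None and node.color == RED

def rotate_left(h):
    # Purely structural: swing h's red right child up over h.
    x = h.right
    h.right = x.left
    x.left = h
    x.color = h.color
    h.color = RED
    return x

def rotate_right(h):
    # Mirror image: swing h's red left child up over h.
    x = h.left
    h.left = x.right
    x.right = h
    x.color = h.color
    h.color = RED
    return x

def flip_colors(h):
    # Push redness upward when both children are red.
    h.color = RED
    h.left.color = BLACK
    h.right.color = BLACK

def insert(h, key):
    if h is None:
        return Node(key)
    if key < h.key:
        h.left = insert(h.left, key)
    elif key > h.key:
        h.right = insert(h.right, key)
    # Local fixups: each rule checks only the colors of h's children
    # (and one grandchild). Branch length is never mentioned.
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)
    if is_red(h.left) and is_red(h.right):
        flip_colors(h)
    return h

def insert_root(root, key):
    root = insert(root, key)
    root.color = BLACK  # the root is always black
    return root

def height(h):
    return 0 if h is None else 1 + max(height(h.left), height(h.right))

# Feed it sorted keys -- the worst case for a naive BST, which would
# degenerate into a 1023-long chain. Here the height stays logarithmic
# (somewhere around 11-20) even though no rule ever measured it.
root = None
for k in range(1023):
    root = insert_root(root, k)
print(height(root))
```

The "goal" of balance is nowhere in the code; it emerges from reflexively restoring a couple of local color invariants after each insertion.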
Truthfully, this reads as a “when all you have is a hammer” sort of thought process to me.
It depends on how one uses the hammer. If you are a philosopher, there's nothing wrong with trying your hammer on every problem and seeing whether any useful insight comes out of it.