And how does that sort of anthropomorphization explain anything?
The answer to this is a major theme of Dennett’s book “Intuition Pumps...”, which isn’t to say that I fully understood his answer, but I did get the impression that a large number of dynamic systems, especially living or computationally driven systems, are more effectively predicted by theories that use what they “want” than by theories that do without such abstractions.
If neurons sometimes aim for individual advantage that doesn’t serve the brain/person, rather than cooperating reliably, it’s worth understanding.
And if that’s an important part of how brains operate, then you’d want to know about it if you’re simulating brains.
I don’t see anything in the quoted passage that suggests that individual neurons do something in their own interest to the detriment of the brain/person. But much more importantly, neurons don’t aim for anything. They’re not, you know, agents.
So this is why I’m objecting: the anthropomorphization is singularly unhelpful in understanding any of this, because whatever the mechanism behind what’s going on, goal-directed intentional behavior is very far from it.
But much more importantly, neurons don’t aim for anything.
You don’t know that.
We don’t know how neurons work. There are huge networks of transcription processes going on every time a neuron fires, and much of that activity is uncharted. We don’t know the minimum complexity required for goal-oriented behavior, and it could well be below the complexity of the processes going on in neurons.
Bacteria can distinguish between different nutrients available around them, and eat the more yummy ones first. Is that not goal-oriented behavior? Neurons are way more complicated than bacteria.
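As a toy illustration of how little machinery that kind of choice needs (the nutrients and the ranking below are made up, and real bacteria do it with biochemistry such as catabolite repression, not a lookup table):

```python
# Hypothetical preference ranking; higher means "yummier".
PREFERENCE = {"glucose": 3, "lactose": 2, "glycerol": 1}

def choose_nutrient(available):
    """Pick the most preferred nutrient present in the environment."""
    usable = [n for n in available if n in PREFERENCE]
    return max(usable, key=PREFERENCE.get) if usable else None

print(choose_nutrient({"lactose", "glucose"}))   # -> glucose
print(choose_nutrient({"glycerol", "lactose"}))  # -> lactose
```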
My personal speculation is that neurons have a very simple goal: they attempt to predict the input they’re going to receive next and correlate it with their own output. If they arrive at a model that, based on their own output, predicts their input fairly reliably, they have achieved a measure of control over their neighboring neurons. Of course this means their neighboring neurons also experience more predictability—many things that know each other become one thing that knows itself. So this group of neurons begins to act in a somewhat coordinated fashion, and it does the only thing neurons know how to do, i.e. attempt to predict its surroundings based on its own output. Escalate this up to the macroscopic level, with neuroanatomical complications along the way, and you have goal-orientation in humans, which is the thing Dennett is really trying to explain.
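Here’s a minimal sketch of what I mean (the “neighbor”, the gain, and the learning rule below are all my own inventions, not claims about real neurons): a single unit models its next input as a function of its own last output, and its prediction error shrinks exactly to the extent that its neighbor responds to it consistently.

```python
import random

random.seed(0)

NEIGHBOR_GAIN = 0.8    # hypothetical: how consistently the neighbor echoes the unit
NOISE = 0.05           # hypothetical: the unpredictable part of the input
LEARNING_RATE = 0.1

w = 0.0                # the unit's model: predicted_input = w * own_last_output
last_output = 1.0
errors = []

for step in range(2000):
    # Predict the next input from the unit's own last output.
    predicted_input = w * last_output

    # What actually arrives: the neighbor's response to that output, plus noise.
    actual_input = NEIGHBOR_GAIN * last_output + random.gauss(0, NOISE)

    # Nudge the model to reduce prediction error (simple delta rule).
    error = actual_input - predicted_input
    w += LEARNING_RATE * error * last_output
    errors.append(abs(error))

    # Emit a new output; here it's just a random probe of the neighborhood.
    last_output = random.choice([-1.0, 1.0])

print(f"learned gain w = {w:.2f} (true neighbor gain {NEIGHBOR_GAIN})")
print(f"mean |error| over first 100 steps: {sum(errors[:100]) / 100:.3f}")
print(f"mean |error| over last 100 steps:  {sum(errors[-100:]) / 100:.3f}")
```

The only point of the sketch is that “predict your input from your own output” is a cheap, concrete rule; whether anything like it runs inside real neurons is exactly the speculative part.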
I think choosing nutrients is plausible. I’m much more dubious about neurons trying to predict, but I might be underestimating the computational abilities of cells.
Okay. Now according to you, is choosing nutrients goal-oriented behavior?
But why would we even suspect that? We expect genes to encode traits that increase inclusive genetic fitness because there’s a known mechanism that eliminates genes that don’t. So we have a situation in which we know that something (the occurrence of a gene) has distant effects without necessarily knowing the intervening causal chain. This is analogous to knowing somebody’s goals without understanding their plan, so a careful application of anthropomorphism might help us understand.
I don’t know if there’s a mechanism that would make us expect neurons to behave in ways that give them lots of neuromodulators or whatever, even if it makes the entire system less effective.
Well, the following is highly speculative, but if many neurons die at an early age, and there is variation among individual neurons, then the neurons that survive to adulthood will be selected for actions that reinforce their own survival. That said, I’d be surprised if there’s enough variation in neurons for anything like this to happen.
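To make the shape of that idea concrete anyway, here’s a toy sketch (every number in it is invented): a single round of differential survival, with no reproduction at all, still shifts the surviving population toward whatever trait helped neurons get through developmental pruning.

```python
import random

random.seed(1)

# Each developing neuron gets a "self-preservation" trait in [0, 1], e.g. how
# aggressively it competes for trophic support. Purely hypothetical numbers.
neurons = [random.random() for _ in range(100_000)]

def survives_pruning(trait):
    # Hypothetical rule: survival chance rises with the trait, and roughly
    # half of all neurons die during development.
    return random.random() < 0.2 + 0.6 * trait

adults = [t for t in neurons if survives_pruning(t)]

print(f"mean trait before pruning: {sum(neurons) / len(neurons):.3f}")
print(f"mean trait after pruning:  {sum(adults) / len(adults):.3f}")
print(f"fraction surviving:        {len(adults) / len(neurons):.2f}")
```

It’s a one-off shift rather than cumulative selection, which is part of why I doubt it adds up to much.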
[...] and there is variation among individual neurons
Well known fact.
That said, I’d be surprised if there’s enough variation in neurons for anything like this to happen.
Neurons do vary considerably. Natural selection over an individual’s lifetime is one reason to expect neurons to act against the best interests of the organism. However, selfish neuron models are of limited use, because relatively little selection acts on neurons over an individual’s lifespan. Probably the main thing they explain is some types of brain cancer.
Selfish memes and selfish synapses seem like more important cases of agent-based modeling doing useful work in the brain. Selfish memes actually do explain why brains sometimes come to oppose the interests of the genes that constructed them. In both cases there’s a lot more selection going on than happens with neurons.