But why would we even suspect that? We expect that genes encode traits which increase inclusive genetic fitness because there’s a known mechanism that eliminates genes that don’t. So we have a situation in which we know that something (the occurrence of a gene) has distant effects, but we don’t necessarily know the intervening causal chain. This is analogous to knowing somebody’s goals without understanding their plan, so a careful application of anthropomorphism might help us understand.
I don’t know if there’s a mechanism that would make us expect neurons to behave in ways that give them lots of neuromodulators or whatever, even if it makes the entire system less effective.
Well, the following is highly speculative, but if many neurons die at an early age, and there is variation among individual neurons, then the neurons that survive to adulthood will be selected for actions that reinforce their own survival. That said, I’d be surprised if there’s enough variation in neurons for anything like this to happen.
[...] and there is variation among individual neurons
Well-known fact.
That said, I’d be surprised if there’s enough variation in neurons for anything like this to happen.
Neurons do vary considerably. Natural selection over an individual’s lifetime is one reason to expect neurons to act against the individual’s best interests. However, selfish-neuron models are of limited use, because relatively little selection acts on neurons over an individual’s lifespan. Probably the main thing they explain is some types of brain cancer.
Selfish memes and selfish synapses seem like more important cases of agent-based modeling doing useful work in the brain. Selfish memes actually do explain why brains sometimes come to oppose the interests of the genes that constructed them. In both cases there’s a lot more selection going on than there is with neurons.
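To put a rough number on how little selection one lifetime provides, here is a minimal toy simulation of the pruning idea in Python. The population size, survival fraction, and selection strength are made-up numbers chosen purely for illustration, not estimates from anywhere.

```python
import random

# Toy sketch of within-lifetime selection on neurons (all numbers are made up).
# Each neuron carries a randomly varying "self-promotion" trait, e.g. how
# aggressively it competes for trophic factors. During developmental pruning,
# neurons with a higher trait value are slightly more likely to survive.
# Since neurons (mostly) don't reproduce, this single subtractive round is all
# the selection a lifetime provides, so the trait distribution shifts only modestly.

random.seed(0)

N_NEURONS = 100_000        # initial population before pruning (assumed)
BASE_SURVIVAL = 0.5        # roughly half of neurons survive pruning (assumed)
SELECTION_STRENGTH = 0.2   # how strongly the trait biases survival (assumed)

traits = [random.gauss(0.0, 1.0) for _ in range(N_NEURONS)]

def survival_prob(t: float) -> float:
    """Higher trait -> slightly better odds of surviving pruning."""
    return min(1.0, max(0.0, BASE_SURVIVAL * (1.0 + SELECTION_STRENGTH * t)))

survivors = [t for t in traits if random.random() < survival_prob(t)]

before = sum(traits) / len(traits)
after = sum(survivors) / len(survivors)
print(f"survivors: {len(survivors)} of {N_NEURONS}")
print(f"mean trait before pruning: {before:+.3f}")
print(f"mean trait after pruning:  {after:+.3f}")
# One round of weak selection shifts the mean by only a fraction of a standard
# deviation, which is roughly the "relatively little selection" point above.
```

Even with a fairly generous survival bias in favour of “selfish” neurons, a single subtractive round moves the trait distribution by only a fraction of a standard deviation; compare that to genes or memes, which are selected over many generations of copying.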