A Thousand Narratives. Theory of Cognitive Morphogenesis
Part 4/20. Neural Darwinism
"if the problems are the same, it (evolution) often finds the same solution"
- Richard Dawkins, The Blind Watchmaker
Neural Darwinism, also known as the theory of neuronal group selection, proposes that the development and organisation of the brain parallel the process of biological evolution. According to this theory, the brain is composed of a large number of neural networks that compete with each other for resources and survival, much like organisms competing for resources in their environment.
The main similarity between Neural Darwinism and evolution is that they both involve a process of variation, selection, and adaptation. In biological evolution, organisms with advantageous traits are more likely to survive and reproduce, passing those traits on to their offspring. Similarly, in Neural Darwinism, neural networks that are better able to compete for resources and perform necessary functions are more likely to be preserved and strengthened, while weaker or less effective networks are pruned away.
The core claims of Neural Darwinism[1]:

- Neuronal groups, or populations of neurons that are functionally connected, compete with one another for resources and influence within the brain.
- Neuronal groups that are better adapted to a particular task or context are more likely to survive and thrive, while those that are less well-adapted are eliminated or suppressed.
- Selection and adaptation occur through a combination of genetic factors and experience-dependent modification of neural connections.
- The brain generates highly specific and adaptive responses to a wide range of stimuli through the dynamic interactions of neuronal populations.
- Spatiotemporal coordination of the neural activity underlying these selectional events is achieved mainly by a process of reentry: the synchronous entrainment of reciprocally connected neuronal groups within sensorimotor maps into ensembles of coherent global activity.
- The brain uses degenerate coding: multiple neural populations can respond to the same stimulus, providing redundancy and flexibility in neural processing.
- The initial population of groups, known as the primary repertoire, forms during prenatal development.
- The connections modified during development are between neuronal groups, rather than between specific cells.
- Selection acting on the primary repertoire creates a secondary repertoire, which underlies the subsequent behaviour of the organism.
Selection in Neural Darwinism operates through the selective stabilisation of neural connections relevant to the task at hand: connections that are irrelevant or redundant are eliminated through competitive interaction, while relevant connections are strengthened and stabilised.
External events relate to these operations of selection by supplying the stimuli and experiences that drive selective stabilisation. The brain constantly adapts and modifies its neural connections in response to what it encounters, so selection is ultimately driven by the environment the brain is exposed to.
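The variation-selection loop described above can be caricatured in a few lines of code. This is my toy illustration, not Edelman's formalism: the names (`relevant`, `survivors`), the strengthening/decay rates, and the pruning threshold are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# A population of candidate connections starts with random strengths
# (a stand-in for the "primary repertoire").
n_connections = 50
weights = rng.uniform(0.1, 1.0, n_connections)

# Assume (arbitrarily, for illustration) that only the first 10 connections
# are reliably driven by the external stimulus stream, i.e. are task-relevant.
relevant = np.zeros(n_connections, dtype=bool)
relevant[:10] = True

for step in range(100):
    activity = relevant * 1.0                     # relevant connections receive input
    weights += 0.05 * activity * weights          # use-dependent strengthening
    weights -= 0.05 * (~relevant) * weights       # disuse-driven decay
    weights = np.clip(weights, 0.0, 5.0)

# Connections that decayed below threshold are pruned; the survivors are
# the sketch's analogue of a selected "secondary repertoire".
survivors = weights > 0.05
print(survivors[:10].all(), int(survivors[10:].sum()))
```

After 100 steps the unused connections have decayed well below the pruning threshold, while every stimulated connection survives; the point is only that purely local use/disuse dynamics already implement a selection process.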
ND has little to say about how cognitive processes such as decision-making, problem-solving, and other executive functions actually occur, but it provides a plausible basis for future developments. It has been mostly accepted (except for the criticism that it lacks "units of evolution": replicators capable of hereditary variation[2]. I personally do not endorse this criticism and will address it in the Narrative Theory section) and has become part of a fruitful direction of research.
[1] G. M. Edelman. Neural Darwinism: The Theory of Neuronal Group Selection. https://psycnet.apa.org/record/1987-98537-000
[2] C. Fernando, R. Goldstein, E. Szathmáry. The Neuronal Replicator Hypothesis. https://direct.mit.edu/neco/article-abstract/22/11/2809/7586/The-Neuronal-Replicator-Hypothesis
A Thousand Narratives. Theory of Cognitive Morphogenesis
Part 6/20. Artificial Neural Networks
Artificial Neural Networks are the face of modern artificial intelligence, and its most successful branch too. But success unfortunately doesn't imply biological plausibility. Even though most ML algorithms were inspired by aspects of biological neural networks, the final models end up pretty far from the source material. This makes their usefulness for the quest of reverse-engineering the mind questionable: almost no insights can be brought directly back to neuroscience to help with the research. I'll explain why in a bit. (Note: this doesn't mean they cannot serve as inspiration. That is very much possible and, I'm sure, a good idea.)
There are three main show-stoppers:
(Reason #1) is the use of an implausible learning algorithm (read: backpropagation). There have been numerous attempts at finding a biological analogue of backpropagation, but as far as I know all of them fell short. The core objection to the biological plausibility of backpropagation is that weight updates in multi-layered networks require access to non-local information (i.e. error signals generated by units many layers downstream). In contrast, plasticity in biological synapses depends primarily on local information (i.e. pre- and post-synaptic neuronal activity)[1].
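The locality objection can be made concrete with a minimal sketch (sizes and variable names are illustrative): the backprop update for an early layer needs the output error and the downstream weight matrix, while a Hebbian update needs only the quantities a real synapse can measure.

```python
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=3)           # input activity
W1 = rng.normal(size=(4, 3))     # input -> hidden weights
W2 = rng.normal(size=(2, 4))     # hidden -> output weights

h = np.tanh(W1 @ x)              # hidden activity
y = W2 @ h                       # output
target = np.ones(2)
err = y - target                 # error, available only at the output layer

# Backprop update for W1: needs `err` AND the downstream weights W2 --
# information that is not locally available at an input->hidden synapse.
dW1_backprop = np.outer((W2.T @ err) * (1 - h**2), x)

# Hebbian update for W1: uses only pre-synaptic (x) and post-synaptic (h)
# activity -- exactly the local quantities biological plasticity depends on.
lr = 0.01
dW1_hebb = lr * np.outer(h, x)

print(dW1_backprop.shape, dW1_hebb.shape)
```

Both updates have the same shape, but only the second one is computable from information physically present at the synapse; that asymmetry is the whole objection.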
(Reason #2) is the fact that ANNs are used to solve "synthetic" problems. The vast majority of ANNs originate in industry and are designed to solve some practical real-world problem. For us, this means the training data used for these models has almost nothing in common with the human ontogenetic curriculum (or any part of it), and hence cannot be used for this kind of research.
(Reason #3) is the use of implausible building blocks and network morphology, resulting in implausible neural dynamics (e.g. point neurons instead of full-blown multi-compartment neurons, or a single simplified learning rule instead of STDP and the other modes of neural interaction). We still don't know how crucial those alternative modes are, but the consensus on this matter is "we need more than we use right now".
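For concreteness, here is the standard pair-based form of spike-timing-dependent plasticity (STDP), the textbook shape of the rule; the amplitudes and time constants below are illustrative, not fitted to data:

```python
import numpy as np

# Pair-based STDP: a synapse is potentiated when the pre-synaptic spike
# precedes the post-synaptic one (causal order) and depressed when the
# order is reversed, with exponentially decaying dependence on the
# spike-time difference.
A_PLUS, A_MINUS = 0.01, 0.012      # learning-rate amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants in ms (illustrative)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:    # post before pre -> depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(10.0, 15.0) > 0, stdp_dw(15.0, 10.0) < 0)
```

Note that this rule is again purely local in space (one synapse) but, unlike standard ANN updates, sensitive to millisecond-scale timing; that timing sensitivity is one of the "alternative modes" mainstream ANNs discard.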
However, there are three notable exceptions:
(The first exception) is convolutional neural networks and their successors. Their architecture was copied from the mammalian visual cortex, and they are considered sufficiently biologically plausible. The success of ConvNets is based on the utilisation of design principles specific to the visual cortex, namely shared weights and pooling[2]. The area of applicability of these principles is an open question.
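The two design principles can be shown directly. This is a bare-bones sketch, not a trained network: the 3x3 vertical-edge kernel is hand-made, weight sharing consists of sliding that single kernel over every image position, and max-pooling then discards exact positions (giving tolerance to small translations).

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide ONE kernel over all positions: this reuse is weight sharing."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2x2(fmap: np.ndarray) -> np.ndarray:
    """Keep the strongest response in each 2x2 block, dropping exact position."""
    H, W = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:H, :W]
    return f.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

image = np.zeros((8, 8))
image[:, 4] = 1.0                     # a vertical edge at column 4
kernel = np.array([[-1., 0., 1.],     # hand-made vertical-edge detector
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

fmap = conv2d_valid(image, kernel)    # feature map, shape (6, 6)
pooled = max_pool2x2(fmap)            # pooled map, shape (3, 3)
print(fmap.shape, pooled.shape)
```

One 9-weight kernel detects the edge everywhere in the image; a fully connected layer would need a separate detector per location, which is exactly the redundancy the visual cortex avoids.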
(The second) is highly biologically plausible networks like Izhikevich's, the Blue Brain Project, and others. Izhikevich's model is built from multi-compartment, high-fidelity neurons displaying all the alternative modes of neural/ganglia interaction[3]. Among the results, my personal favourite is: "Network exhibits sleeplike oscillations, gamma (40 Hz) rhythms, conversion of firing rates to spike timings, and other interesting regimes. Due to the interplay between the delays and STDP, the spiking neurons spontaneously self-organize into groups and generate patterns of stereotypical polychronous activity. To our surprise, the number of coexisting polychronous groups far exceeds the number of neurons in the network, resulting in an unprecedented memory capacity of the system."
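The large-scale simulations use far more detailed neurons, but the flavour of these dynamics can be sampled with Izhikevich's published simple two-variable neuron model (the 2003 "Simple Model of Spiking Neurons"). Below it is integrated with forward Euler; the regular-spiking parameters are from the paper, while the input current and step size are my choices for the sketch.

```python
# Izhikevich's simple neuron model:
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u)
#   if v >= 30 mV: v <- c, u <- u + d
# with regular-spiking cortical parameters a=0.02, b=0.2, c=-65, d=8.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0   # start at rest
I = 10.0                  # constant input current (my choice, above rheobase)
dt = 0.25                 # ms; small Euler step for numerical stability

spike_times = []
for step in range(int(1000 / dt)):           # simulate 1 second
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                            # spike: reset v and bump u
        spike_times.append(step * dt)
        v, u = c, u + d

print(len(spike_times) > 0)
```

With this constant input the quadratic nullclines have no fixed point, so the neuron fires tonically; changing (a, b, c, d) reproduces bursting, chattering, and the other firing regimes catalogued in the paper.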
(The third) is Hierarchical Temporal Memory (HTM) by Jeff Hawkins, a framework inspired by the principles of the neocortex. It claims that the role of the neocortex is to integrate upstream sensory data and then find patterns within the combined stream of neural activity; it views the neocortex as an auto-association machine (a view I at least partially endorse). HTM was developed almost two decades ago but, to the best of my knowledge, failed to earn much recognition. Still, it's the best model of this type, so it is worth considering.
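HTM itself is considerably more elaborate, but the auto-association idea can be illustrated with the classic Hopfield-style recipe (my stand-in for illustration, not HTM's actual algorithm): store a binary pattern in a symmetric weight matrix, then recover the full pattern from a corrupted cue by iterating to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 64
pattern = rng.choice([-1.0, 1.0], size=N)     # the stored memory
W = np.outer(pattern, pattern) / N            # Hebbian outer-product storage
np.fill_diagonal(W, 0.0)                      # no self-connections

cue = pattern.copy()
flip = rng.choice(N, size=10, replace=False)  # corrupt 10 of the 64 bits
cue[flip] *= -1

state = cue
for _ in range(5):                            # synchronous updates to a fixed point
    state = np.sign(W @ state)
    state[state == 0] = 1.0                   # break ties deterministically

print(np.array_equal(state, pattern))         # the full memory is recovered
```

Completing a whole stored pattern from a partial or noisy fragment is the essence of auto-association, and it is the operation the HTM view attributes to cortical circuits.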
[1] D. Hassabis et al. Neuroscience-Inspired Artificial Intelligence. https://www.sciencedirect.com/science/article/pii/S0896627317305093
[2] Y. LeCun, Y. Bengio et al. Gradient-based learning applied to document recognition. https://ieeexplore.ieee.org/abstract/document/726791
[3] E. Izhikevich. Polychronization: Computation with Spikes. https://direct.mit.edu/neco/article-abstract/18/2/245/7033/Polychronization-Computation-with-Spikes