Thanks for the feedback. To be clear, I also have trouble thinking of how one might implement certain key brain algorithms (e.g., hierarchical free-energy minimization) using spiking neurons. We might even see the first “neuromorphic AGIs” using analog chips that simulate neural networks with ReLU and sigmoid activation functions rather than spiking events. And these would probably not come until well after the first “software AGIs” have been built and trained. However, I still think it’s way too early to be ruling out neuromorphic hardware, spiking or not. Eventually energy efficiency will become a big enough deal that someone (maybe an AGI?) whose headspace is saturated with thinking about event-based neuromorphic algorithms will create something that outcompetes other forms of AGI. And all the work being done with neuromorphic hardware today will feed into the inspiration for that future design. /speculation
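To make the contrast concrete, here’s a minimal sketch (my own illustration, not anything from the post) of the difference between a rate-coded unit with a ReLU activation and an event-based leaky integrate-and-fire (LIF) neuron. All parameters are illustrative placeholders, not a claim about how any particular neuromorphic chip works.

```python
import numpy as np

def relu_unit(x, w, b):
    """Rate-coded unit: output is a single continuous activation level."""
    return max(0.0, float(np.dot(w, x) + b))

def lif_unit(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Event-based unit: output is a train of discrete spike times."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration of the input
        if v >= v_thresh:            # threshold crossing -> emit a spike
            spikes.append(t * dt)
            v = v_reset              # reset membrane potential
    return spikes

# The same steady drive yields one number vs. a sequence of spike events.
x = np.array([0.5, 0.5]); w = np.array([1.0, 1.0])
print(relu_unit(x, w, b=0.0))       # a single activation value, e.g. 1.0
print(lif_unit([0.1] * 100))        # a list of spike times
```

The point of the toy example is just that the second representation is sparse and event-driven, which is where the energy-efficiency argument for neuromorphic hardware comes from.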
As far as understanding worm vs. human brain key operating principles goes, it’s important to remember that the human brain is hundreds of millions of times larger and more complex than the worm’s whole nervous system. It’s easy to think about (and approach) human intelligence as a bunch of abstract data structures and algorithms, rather than as an astronomically complex causal web of biological implementation details, in part because we are humans. We spend our whole lives using our intelligence and, as social animals, inferring the internal mental processes of other humans. Treating low-level implementation details as the “key operating principles” of either the human brain or the worm brain is going to result in an investigation vastly more complex and hopeless than approaching either from a more abstract cognitive/behavioral level. And for each perspective separately, the human is vastly more complicated to figure out than the worm. Just to illustrate my point:
Sorry, I guess that was a bit unclear. I meant “key operating principles” as something like “a description that is sufficiently detailed to understand how the system meets a design spec”. Then the trick is that I was comparing two very different types of design specs. One side of the comparison was “worm intelligence”, which (in my mind) is one particular class of worm capabilities. So the “design spec” would be things like “it can learn to modify its rate of reversals and omega and delta turns in response to a conditioned stimulus and eat food and poop and evade predators etc. etc.” Can we give a sufficiently detailed description to understand how the worm brain does those things? Not yet, but I think eventually.
Then the other side of my comparison was “nervous system of the human”. The “design spec” there was (implicitly) “maximize inclusive genetic fitness”, i.e. it includes the entire set of evolutionarily-adaptive behaviors that the human does. And that’s really hard because we don’t even know what those behaviors are! There are astronomically many quirks of the human’s nervous system, and we have basically no way to figure out which of those quirks are related to evolutionarily-adaptive behaviors, because maybe it’s adaptive only in some exotic situation that comes up once every 12 generations, or it’s ever-so-slightly adaptive 50.1% of the time and ever-so-slightly maladaptive 49.9% of the time, etc.
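To give a rough sense of why that last case seems so hard to pin down, here’s a back-of-the-envelope sketch (my own, with illustrative numbers, not from the original comment) of how many independent observations it would take just to distinguish “adaptive 50.1% of the time” from a coin flip, using the standard sample-size formula for a proportion.

```python
from statistics import NormalDist

p0, p1 = 0.500, 0.501        # pure chance vs. the "ever-so-slightly adaptive" case
alpha, power = 0.05, 0.80    # conventional significance level and power

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
z_b = NormalDist().inv_cdf(power)           # ~0.84

n = (z_a + z_b) ** 2 * p0 * (1 - p0) / (p1 - p0) ** 2
print(f"~{n:,.0f} independent observations needed")   # on the order of 2 million
```

And that’s for one candidate quirk, measured cleanly in isolation; multiply by millions of quirks and confounded real-world conditions and the “hopeless” intuition follows.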
Y’know, some neuron sends out a molecule that incidentally makes some vesicle slightly bigger, which infinitesimally changes the human’s facial expression, which might infinitesimally change how noticeable the human’s cognitive/emotional state is to other humans in a particular social context. So maybe sending out that molecule is an adaptive behavior—a computational output of the nervous system, and we need to include it in our high-level algorithm description. …Or maybe not! That same molecule is also kinda a necessary waste product. So it’s also possibly just an “implementation detail”. And then there are millions more things just like that. How are you ever going to sort it out? It seems hopeless to me.
My point was simply to draw attention to the need to compare apples to apples. It’s more about deconfusing things for future readers of this post than for correcting your actual understanding of the situation.
I still think it’s way too early to be ruling out neuromorphic hardware, spiking or not.
Sure, I wouldn’t say “rule out”, it’s certainly a possibility, especially if we’re talking about the N’th generation of ASICs. I guess I’d assign <10% probability that the first-generation ASIC that can run a “human-level AGI algorithm” is based on spikes. (Well, depending on the exact definitions I guess.) But I wouldn’t feel comfortable saying <1%. Of course that probability is not really based on much, I’m just trying to communicate what I currently think.
draw attention to the need to compare apples to apples
In an apples-to-apples comparison, it’s super duper ridiculously blindingly obvious that a human nervous system is harder to understand than a worm nervous system. In fact I’m somewhat distressed that you thought I was disagreeing with that!!!
I added a paragraph to the article to try to make it more clear—if you found it confusing then it’s a safe bet that other people did too. Thanks!