No. …I’m just gonna start ranting, sorry for any mischaracterizations…
For one thing, I think the whole experimental concept is terrible. I think that a learning algorithm is a complex and exquisitely-designed machine. While the brain doesn’t do backprop, backprop is still a good example of how “updating a trained model to work better than before” takes a lot more than a big soup of neurons with Hebbian learning. Backprop requires systematically doing a lot of specific calculations and passing the results around in specific ways and so on.
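To spell out the contrast, here's a toy sketch (made-up numbers, nothing to do with real neurons or with the actual experiment): one backprop step has to compute an error signal at the output and route it backwards through the network in a specific way, whereas a Hebbian update only ever looks at the activity on the two sides of each synapse.

```python
# Toy contrast: one backprop step vs. one Hebbian step (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights = the "trained model"
W2 = rng.normal(size=(1, 4))
x = rng.normal(size=(3, 1))
y_target = np.array([[1.0]])
lr = 0.01

# --- Backprop: forward pass, then error routed backwards layer by layer ---
h = np.tanh(W1 @ x)                        # hidden activations
y = W2 @ h                                 # output
err_out = y - y_target                     # error at the output
err_hid = (W2.T @ err_out) * (1 - h**2)    # error passed *backwards* through W2
W2 -= lr * err_out @ h.T                   # each update needs that non-local error signal
W1 -= lr * err_hid @ x.T

# --- Hebbian: each synapse sees only its own pre/post activity, no error signal ---
h = np.tanh(W1 @ x)
W1 += lr * h @ x.T                         # "fire together, wire together"
```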
So you look into the cortex, and you can see the layers and the minicolumns and the cortex-thalamus-cortex connections and so on. It seems really obvious to me that there’s a complex genetically-designed machine here. My claim is that it’s a machine that implements a learning algorithm and queries the trained model. So obviously (from my perspective), there are going to be lots of synapses that are written into the genetic blueprint of this learning-and-querying machine, and lots of other synapses that are part of the trained model that this machine is editing and querying.
In ML, it’s really obvious which bits of information and computation are part of the human-created learning algorithm and which bits are part of the trained model, because we wrote the algorithm ourselves. But in the brain, everything is just neurons and synapses, and it’s not obvious what’s what.
Anyway, treating neurons-in-a-dish as evidence for how the brain works at an algorithmic level is like taking a car, totally disassembling it, and putting all the bolts and wheels and sheet-metal etc. into a big dumpster, and shaking it around, and seeing if it can drive. Hey, one of the wheels is rolling around a bit, let’s publish. :-P
(If you’re putting neurons in a dish in order to study some low-level biochemical thing like synaptic vesicles, fine. Likewise, you can legitimately learn about the shape and strength of nuts and bolts by studying a totally-disassembled-car-in-a-dumpster. But you won’t get to see anything like a working car engine!)
The brain has thousands of neuron types. Perhaps it’s true that if you put one type of neuron in a dish then it does (mediocre) reinforcement learning, where a uniform-random 150mV 5Hz stimulation is treated by the neurons as negative reward, and where a nonrandom 75mV 100Hz stimulation is treated by the neurons as positive reward. I don’t think it’s true, but suppose that were the case. Then my take would be: “OK cool, whatever.” If that were true, I would strongly guess that the reason that the two stimulation types had different effects was their different waveforms, which somehow interact with neuron electrophysiology, as opposed to the fact that one is more “predictable” than the other. And if it turned out that I’m wrong about that (i.e., if many experiments showed that “unpredictability” is really load-bearing), then I would guess that it’s some incidental result that doesn’t generalize to every other neuron type in the brain. And if that guess turned out wrong too, I still wouldn’t care, for the same reason that car engine behavior is quite different when it’s properly assembled versus when all its parts are disconnected in a big pile.
Even putting all that aside, the idea that the brain takes actions to minimize prediction error is transparently false. Just think about everyday life: Sometimes it’s unpleasant to feel confused. But other times it’s delightful to feel confused!—we feel mesmerized and delighted by toys that behave in unintuitive ways, or by stage magic. We seek those things out. Not to mention the Dark Room Problem. …And then the FEP people start talking about how the Dark Room Problem is not actually a problem because “surprise” actually means something different and more complicated than “failing to predict what’s about to happen”, blah blah blah. But as soon as you start adding those elaborations, suddenly the Pong experiment is not supporting the theory anymore! Like, the Pong experiment is supposed to prove that neurons reconfigure to avoid impossible-to-predict stimuli, as a very low-level mechanism. Well, if that’s true, then you can’t turn around and redefine “surprise” to include homeostatic errors.
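Here’s the Dark Room Problem as a toy calculation (my numbers, obviously not a model of any organism): an agent that literally minimizes prediction error will always pick the room where nothing ever happens.

```python
# Toy sketch: a naive prediction-error minimizer prefers the "dark room".
import numpy as np

rng = np.random.default_rng(0)

def expected_prediction_error(observations, prediction):
    # mean squared error of a constant prediction against the observations
    return float(np.mean((observations - prediction) ** 2))

dark_room = np.zeros(1000)                                # always observe 0
fun_room = rng.integers(0, 2, size=1000).astype(float)    # unpredictable coin flips

# best constant prediction in each room is just the mean observation
err_dark = expected_prediction_error(dark_room, dark_room.mean())  # = 0.0
err_fun = expected_prediction_error(fun_room, fun_room.mean())     # ~ 0.25

chosen = "dark room" if err_dark < err_fun else "fun room"
print(chosen)   # -> "dark room", every time
```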
EDIT: the paper in your last link seems to be a purely semantic criticism of the original paper’s usage of words like “sentience” and “intelligence”. The authors do not provide any analysis at all of the actual experiment performed.
So I think all of this sounds mostly reasonable (and probably based on a bunch of implicit world-model about the brain that I don’t have); the longest paragraph especially makes me update.
I think whether I agree with this view depends heavily on quantitatively how well these brain-in-a-dish systems perform, which I don’t know, so I’ll look into it more first.
Ah, true, thanks.