I think I have identified our core disagreement: you believe a neuron or a small group of neurons is fundamentally computationally simple, and I don’t. I guess technically I’m agnostic about it, but my intuition is that a real neuron cannot be abstracted to a LIF neuron the way a transistor can be abstracted to a cartoon switch (not that you were suggesting LIF is sufficient, just an example). One of the big questions I have is how error scales from single neurons to overall activity. If a neuron model is 90% accurate wrt electrophysiology and the synapses connecting it are 90% accurate to real synapses, does that recover 90% of brain function? Is the last 10% something that is computationally irrelevant and can just be abstracted away, giving you effectively 100% functionality? Or is 90% accuracy for single neurons magnified until the real accuracy is something like 0.9^(80 billion)? I think it is unlikely to be that bad, but I really don’t know, because of the abject failure to upload anything, as you point out. I am bracing myself for a world where we need a lot of data.
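As a toy way of seeing how far apart those two extremes are (the numbers are purely hypothetical, since “90% accurate” is not a real, well-defined metric here):

```python
import math

# Toy illustration of the two error-scaling extremes described above.
# The numbers are purely hypothetical; "90% accurate" is not a real metric.
per_neuron_accuracy = 0.9
n_neurons = 80_000_000_000

# Pessimistic extreme: per-neuron errors compound multiplicatively.
# 0.9 ** n_neurons underflows to 0.0 in floating point, so look at the log instead.
log10_worst_case = n_neurons * math.log10(per_neuron_accuracy)
print(f"log10 of worst-case system accuracy: {log10_worst_case:.3g}")  # about -3.7e9, i.e. hopeless

# Optimistic extreme: the missing 10% is computationally irrelevant, so
# system-level fidelity stays effectively 100% no matter how many neurons there are.
```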
Let’s assume for the moment, though, that the HH model with suitable electrical and chemical synapses would be sufficient to capture WBE. What I still really want to see is a paper saying “we look at x, y, z properties of neurons that can be measured post mortem and predict a, b, c properties of those neurons by tuning capacitance, conductance, and resting potential in the HH model. Our model is P% accurate when looking at patch clamp experiments.” In parallel with that there should be a project trying to characterize how error tolerant real neurons and neural networks can be so we can find the lower bound of P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network) but I was so disillusioned with the learning rules that I am not confident in my results. I’m not sure if anyone has the ability to answer these kinds of questions because we are still just so bad at emulating anything.
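For concreteness, the kind of model I have in mind is something like the minimal single-compartment Hodgkin-Huxley simulation below (standard textbook squid-axon parameters; an actual pipeline would instead fit the capacitance, conductances, and reversal potentials per neuron from post-mortem measurables and then score the result against patch clamp traces):

```python
import numpy as np

# Minimal single-compartment Hodgkin-Huxley model (textbook squid-axon parameters).
# A real fitting pipeline would tune C_m, the conductances, and the reversal
# potentials per neuron and compare the simulated trace against patch-clamp data.
C_m = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387   # reversal potentials, mV

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext, dt=0.01, t_max=50.0):
    """Forward-Euler integration of V(t) under a constant injected current (uA/cm^2)."""
    steps = int(t_max / dt)
    V, m, h, n = -65.0, 0.05, 0.6, 0.32   # approximate resting-state values
    trace = np.empty(steps)
    for i in range(steps):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        trace[i] = V
    return trace

trace = simulate(I_ext=10.0)   # 10 uA/cm^2 is enough to elicit repetitive spiking
print(f"peak membrane voltage: {trace.max():.1f} mV")
```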
Edit:
Also, I am not sure if you’re proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.
you believe a neuron or a small group of neurons is fundamentally computationally simple, and I don’t
I guess I would phrase it as “there’s a useful thing that neurons are doing to contribute to the brain algorithm, and that thing constitutes a tiny fraction of the full complexity of a real-world neuron”.
(I would say the same thing about MOSFETs. Again, here’s how to model a MOSFET; it’s a horrific mess. Is a MOSFET “fundamentally computationally simple”? Maybe?—I’m not sure exactly what that means. I’d say it does a useful thing in the context of an integrated circuit, and that useful thing is pretty simple.)
The trick is, “the useful thing that a neuron is doing to contribute to the brain algorithm” is not something you can figure out by studying the neuron, just as “the useful thing that a MOSFET is doing to contribute to IC function” is not something you can figure out by studying the MOSFET. There’s no such thing as “Our model is P% accurate” if you don’t know what phenomenon you’re trying to capture. If you model the MOSFET as a cartoon switch, that model will be extremely inaccurate along all kinds of axes—for example, its thermal coefficients will be wrong by 100%. But that doesn’t matter because the cartoon switch model is accurate along the one axis that matters for IC functioning.
The brain is generally pretty noise-tolerant. Indeed, if one of your neurons dies altogether, “you are still you” in the ways that matter. But a dead neuron is a 0% accurate model of a live neuron. ¯\_(ツ)_/¯
In parallel with that there should be a project trying to characterize how error tolerant real neurons and neural networks can be so we can find the lower bound of P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network) but I was so disillusioned with the learning rules that I am not confident in my results.
Just because every part of the brain has neurons and synapses doesn’t mean every part of the brain is a “spiking neural network” with the connotation that that term has in ML, i.e. a learning algorithm. The brain also needs (what I call) “business logic”—just as every ML GitHub repository has tons of code that is not the learning algorithm itself. I think that the low-thousands of different neuron types are playing quite different roles in quite different parts of the brain algorithm, and that studying “spiking neural networks” is the wrong starting point.
I apologize for my sloppy language, “computationally simple” was not well defined. You are quite right when you say there is no P% accuracy. I think my offhand remark about spiking neural networks was not helpful to this discussion.
In a practical sense, here is what I mean. Imagine someone makes a brain organoid with ~10 cells. They can directly measure membrane voltage and any other relevant variable they want, because this is hypothetical. Then they try to emulate whatever algorithm this organoid has going on: its direct input-to-output mapping and whatever learning-rule changes it might have. But to test this they have crappy point neuron models implementing LIF, the synapses are just a constant conductance or something, plus rules on top of that which can adjust parameters (membrane capacitance, resting potential, synaptic conductance, etc.), and it fails to replicate observables. Obviously this is an extreme example, but I just want better neuron models so nothing like this ever has the chance to happen.
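To make “crappy point neuron model” concrete, this is roughly the level of model I am worried about being the default: a leaky integrate-and-fire point neuron driven through a fixed-conductance synapse (every number below is an arbitrary placeholder, not a fit to any real cell):

```python
import numpy as np

# A deliberately minimal leaky integrate-and-fire (LIF) point neuron with a
# constant-conductance synapse -- the kind of "crappy" model described above.
# All parameters are arbitrary placeholders, not fits to any real cell.
tau_m = 20.0      # membrane time constant, ms
V_rest = -65.0    # resting potential, mV
V_thresh = -50.0  # spike threshold, mV
V_reset = -65.0   # reset potential after a spike, mV
g_syn = 0.4       # fixed synaptic conductance (dimensionless scaling here)
E_syn = 0.0       # excitatory reversal potential, mV

def lif_step(V, pre_spiked, dt=0.1):
    """One forward-Euler step; returns (new_V, spiked)."""
    I_syn = g_syn * (E_syn - V) if pre_spiked else 0.0
    V = V + dt * ((V_rest - V) / tau_m + I_syn)
    if V >= V_thresh:
        return V_reset, True
    return V, False

# Drive the cell with random presynaptic spikes and count how often it fires.
rng = np.random.default_rng(0)
V, spikes = V_rest, 0
for _ in range(10_000):                      # 1 second at dt = 0.1 ms
    V, spiked = lif_step(V, pre_spiked=rng.random() < 0.05)
    spikes += spiked
print(f"output spikes in 1 s: {spikes}")
```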
Basically, if we can’t model an organoid, we could:
1. Fix the electrophysiology, which either makes it work or proves something else is the problem
2. Develop theory via reverse engineering to the point where we understand what is wrong and can home in on it
3. Fix other things and hope it isn’t the electrophysiology
Three is obviously a bad plan. Two is really, really hard. One should be relatively easy, provided we have a reasonable threshold of what we consider to be accurate electrophysiology. We could have good biophysical models that recreate it, or we could have recurrent neural nets modeling the input current → membrane voltage relation of each neuron. It just seems like an easy way to cross off a potential cause of failure (famous last words, I’m sure).
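The recurrent-net option could look something like the sketch below (PyTorch, with the “recordings” replaced by random placeholder tensors just to show the shape of the pipeline; in reality the training data would be measured current/voltage traces from the neuron being modeled):

```python
import torch
import torch.nn as nn

# Sketch of the "recurrent net as phenomenological neuron model" idea:
# learn the mapping from injected-current trace -> membrane-voltage trace.
# The tensors below are random placeholders standing in for real recordings.
class CurrentToVoltage(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, current):                 # current: (batch, time, 1)
        states, _ = self.rnn(current)
        return self.readout(states)             # predicted voltage: (batch, time, 1)

# Placeholder "recordings": 32 trials, 1000 time steps each.
current_traces = torch.randn(32, 1000, 1)
voltage_traces = torch.randn(32, 1000, 1)       # would be real measured voltages

model = CurrentToVoltage()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(current_traces), voltage_traces)
    loss.backward()
    optimizer.step()
```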
As for your business logic point, it is valid, but I am worried that black-boxing too much of it would lead to collateral damage. I am not sure if that’s what you meant when you said spiking neural networks are the wrong starting point. In any case, I would like higher-order thinking to stay as a function of spiking neurons even if things like reflexes and basal behavior can be replaced without loss.
Yeah I think “brain organoids” are a bit like throwing 1000 transistors and batteries and capacitors into a bowl, and shaking the bowl around, and then soldering every point where two leads are touching each other, and then doing electrical characterization on the resulting monstrosity. :)
Would you learn anything whatsoever from this activity? Umm, maybe? Or maybe not. Regardless, even if it’s not completely useless, it’s definitely not a central part of understanding or emulating integrated circuits.
(There was a famous paper where it’s claimed that brain organoids can learn to play Pong, but I think it’s p-hacked / cherry-picked.)
There’s just so much structure in which neurons are connected to which in the brain—e.g. the cortex has 6 layers, with specific cell types connected to each other in specific ways, and then there’s cortex-thalamus-cortex connections and on and on. A big ball of randomly-connected neurons is just a totally different thing.
Also, I am not sure if you’re proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.
Yes and no. My WBE proposal would be to understand the brain algorithm in general, notice that the algorithm has various adjustable parameters (both because of inter-individual variation and within-lifetime learning of memories, desires, etc.), do a brain-scan that records those parameters for a certain individual, and now you can run that algorithm, and it’s a WBE of that individual.
When you run the algorithm, there is no particular reason to expect that the data structures you want to use for that will superficially resemble neurons, like with a 1-to-1 correspondence. Yes you want to run the same algorithm, producing the same output (within tolerance, such that “it’s the same person”), but presumably you’ll be changing the low-level implementation to mesh better with the affordances of the GPU instruction set rather than the affordances of biological neurons.
The “philosophical reasons” are presumably that you think it might not be conscious? If so, I disagree, for reasons briefly summarized in §1.6 here.
“Less likely to capture everything we care about especially for individual people” would be a claim that we didn’t measure the right things or are misunderstanding the algorithm, which is possible, but unrelated to the low-level implementation of the algorithm on our chips.
I definitely am NOT an advocate for things like training a foundation model to match fMRI data and calling it a mediocre WBE. (There do exist people who like that idea, just I’m not one of them.) Whatever the actual information storage is, as used by the brain, e.g. synapses, that’s what we want to be measuring individually and including in the WBE. :)
First of all, I hate analogies in general, but that’s just a pet peeve; they are useful. So going with your shaken-up circuit as an analogy for brain organoids, and assuming it holds, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually you would still be able to predict something like voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you’ll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. If you never explicitly write down Ohm’s law but you empirically measure current at a whole bunch of different voltages (analogous to patch clamps, but far, far from a perfect analogy), you can probably get the right answer. So yes, an organoid would not be perfect, but I would be surprised if being able to fully emulate one were useless. Personally I think it would be quite useful, but I am actively tempering my expectations.
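The “measure instead of assuming Ohm’s law” version of that is just fitting the component’s I-V characterization data and reusing the fit, something like this (the measurements are made up, with a little noise around a 100-ohm ohmic response):

```python
import numpy as np

# The "empirical characterization" version of the resistor argument:
# never assume Ohm's law, just fit the measured I-V relation and reuse the fit
# to predict what the component does in a new circuit. Data below are made up.
voltages = np.linspace(0.0, 5.0, 20)                          # volts
currents = voltages / 100.0 + np.random.normal(0, 1e-4, 20)   # amps, ~100-ohm device

# Fit a first-order polynomial I = a*V + b; for an ohmic component, a ~ 1/R and b ~ 0.
a, b = np.polyfit(voltages, currents, deg=1)
print(f"fitted conductance: {a:.4f} S  (R ~ {1/a:.1f} ohm), offset: {b:.2e} A")

# The fitted relation can then predict the current through this component at
# any voltage it will see inside the "shaken-up" circuit.
print(f"predicted current at 2.7 V: {a * 2.7 + b:.4f} A")
```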
But my meta point of
1. look at a small system
2. try to emulate it
3. cross off obvious things (electrophysiology should be simple for only a few neurons) that could cause it not to work
4. repeat, and use the data to develop an overall theory
stands even if organoids in particular are useless. The theory developed with this kind of research loop might be useless for your very abstract representation of the brain’s algorithm, but I think it would be just fine, in principle, for the traditional, bottom-up approach.
As for the philosophical objections, it is more that whatever wakes up won’t be me if we do it your way. It might act like me and know everything I know but it seems like I would be dead and something else would exist. Gallons of ink have been spilled over this, so suffice it to say, I think the only thing with any hope of preserving my consciousness (or at least a conscious mind that still holds the belief that it was at one point the person writing this) is gradual replacement of my neurons while my current neurons are still firing. I know that is far and away the least likely path to WBE, because it requires solving everything else plus nanotechnology, but hey, I dream big.
To be clear, I think your proposed WBE plan has a lot of merit, but it would still result in me experiencing death and then nothing else so I’m not especially interested. Yes, that probably makes me quite selfish.
As for the philosophical objections, it is more that whatever wakes up won’t be me if we do it your way. It might act like me and know everything I know but it seems like I would be dead and something else would exist.
Ah, but how do you know that the person that went to bed last night wasn’t a different person, who died, and you are the “something else” that woke up with all of that person’s memories? And then you’ll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know but “you would be dead and something else would exist”?
…It’s fine if you don’t want to keep talking about this. I just couldn’t resist. :-P
If you have a good theory of what all those components are individually you would still be able to predict something like voltage between two arbitrary points.
I agree that, if you have a full SPICE transistor model, you’ll be able to model any arbitrary crazy configuration of transistors. If you treat a transistor as a cartoon switch, you’ll be able to model integrated circuits perfectly, but not to model transistors in very different weird contexts.
By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.
Yes, I am familiar with the sleep = death argument. I really don’t have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don’t believe in any of these, but I don’t have any real arguments against them, and I don’t think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload, I fear Star Trek-style teleportation, but I don’t fear gradual replacement, nor do I fear falling asleep.
As for wrapping up our more scientific disagreement, I don’t have much to say other than that it was very thought-provoking, and I’m still going to try what I said in my post. Even if it doesn’t come to complete fruition, I hope it will be relevant experience for when I apply to grad school.