Thank you for that article; I don’t know how it didn’t come up when I was researching this. Others finding papers I should have been able to find on my own is a continuous frustration of mine.
I would love to live in a world where we have a few thousand template neurons and can just put them together based on a few easily identifiable factors (~3-10 genes, morphology, brain region), but until I find a paper that convincingly recreates the electrophysiology based on those things, I have to entertain the idea that somewhere between 10 and 10^5 genes are relevant. I would be truly shocked if we need 10^5, but I wouldn’t be surprised if we need to measure more expression levels than we can comfortably infer based on some staining method. Having just read your post on pessimism, I am confused as to why you think low thousands of separate neuron models would be sufficient. I agree that characterizing billions of neurons is a very tall order (although I really won’t care how long it takes if I’m dead anyway). But when you say ‘“...information storage in the nucleus doesn’t happen at all, or has such a small effect that we can ignore it and still get the same high-level behavior” (which I don’t believe).’ it sounds to me like an argument in favor of looking at the transcriptome of each cell.
Just to be abundantly clear, my main argument in the post is not “Single cell transcriptomics leading to perfect electrophysiology is essential for whole brain emulation and anything less than that is doomed to fail.” It is closer to “I have not seen a well developed theory that can predict even a single cell’s electrophysiology given things we can measure post mortem, so we should really research that if we care about whole brain emulation. If it already exists, please tell me about it.”
I think you make good points when you point out the failures of C. elegans uploading and other computational neuroscience failures. To me, it makes a lot of sense to copy single cells as closely as possible and then start modeling learning rules and synaptic conductance and whatnot. If we find out later that a certain feature of a neuron model can be abstracted away, that’s great. But a lot of what I see right now is people rushing to study learning rules using homogeneous leaky integrate-and-fire neurons. In my mind they are doing machine learning on spiking neural networks, not computational neuroscience. I don’t know how relevant that particular critique is, but it has been a frustration of mine for a while.
I am still very new to this whole field, I hope that cleared things up. If it did not, I apologize.
Having just read your post on pessimism, I am confused as to why you think low thousands of separate neuron models would be sufficient. I agree that characterizing billions of neurons is a very tall order (although I really won’t care how long it takes if I’m dead anyway). But when you say ‘“...information storage in the nucleus doesn’t happen at all, or has such a small effect that we can ignore it and still get the same high-level behavior” (which I don’t believe).’ it sounds to me like an argument in favor of looking at the transcriptome of each cell.
I think the genome builds a brain algorithm, and the brain algorithm (like practically every algorithm in your CS textbook) includes a number of persistent variables that are occasionally updated in such-and-such way under such-and-such circumstance. Those variables correspond to what the neuro people call plasticity—synaptic plasticity, gene expression plasticity, whatever. Some such occasionally-updated variables are parameters in within-lifetime learning algorithms that are part of the brain algorithm (akin to ML weights). Other such variables are not; instead, they’re essentially just counter variables or whatever (see §2.3.3 here). The “understanding the brain algorithm” research program would be figuring out what the brain algorithm is, how and why it works, and thus (as a special case) what the exact set of “persistent variables that are occasionally updated” is, and how they are stored in the brain. If you complete this research program, you get brain-like AGI, but you can’t upload any particular adult human. Then a different research program is: take an adult human brain, go in with your microtome etc., and actually measure all those “persistent variables that are occasionally updated”, which comprise a person’s unique memories, beliefs, desires, etc.
I think the first research program (understanding the brain algorithm) doesn’t require a thorough understanding of neuron electrophysiology. For example (copying from §3.1 here), suppose that I want to model a transistor (specifically, a MOSFET). And suppose that my model only needs to be sufficient to emulate the calculations done by a CMOS integrated circuit. Then my model can be extremely simple—it can just treat the transistor as a cartoon switch. Next, again suppose that I want to model a transistor. But this time, I want my model to accurately capture all measurable details of the transistor. Then my model needs to be mind-bogglingly complex, involving many dozens of obscure SPICE modeling parameters. The point is: I’m suggesting an analogy between this transistor and a neuron with synapses, dendritic spikes, etc. The latter system is mind-bogglingly complex when you study it in detail—no doubt about it! But that doesn’t mean that the neuron’s essential algorithmic role is equally complicated. The latter might just amount to a little cartoon diagram with some ANDs and ORs and IF-THENs or whatever. Or maybe not, but we should at least keep that possibility in mind.
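To make the cartoon-switch point concrete, here is a toy sketch of my own (not from any circuit reference): modeling each MOSFET as an ideal switch, with zero device physics, is already enough to reproduce the logic of a static CMOS NAND gate.

```python
# Toy illustration: a MOSFET modeled as a cartoon switch.
# No SPICE parameters at all, yet sufficient to emulate CMOS logic.

def nmos_conducts(gate: int) -> bool:
    """NMOS as a cartoon switch: conducts when the gate is high."""
    return gate == 1

def pmos_conducts(gate: int) -> bool:
    """PMOS as a cartoon switch: conducts when the gate is low."""
    return gate == 0

def cmos_nand(a: int, b: int) -> int:
    """Static CMOS NAND: two PMOS in parallel pull the output up,
    two NMOS in series pull it down."""
    pull_up = pmos_conducts(a) or pmos_conducts(b)
    pull_down = nmos_conducts(a) and nmos_conducts(b)
    assert pull_up != pull_down  # in static CMOS exactly one network conducts
    return 1 if pull_up else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))
```

The cartoon model is wildly wrong about thermal behavior, switching speed, leakage, and so on, but it is exactly right along the one axis that matters for digital logic.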
In the “understanding the brain algorithm” research program, you’re triangulating between knowledge of algorithms in general, knowledge of what actual brains actually do (including lesion studies, stimulation studies, etc.), knowledge of evolution and ecology, and measurements of neurons. The first three can add so much information that it seems possible to pin down the fourth without all that many measurements, or even with no measurements at all beyond the connectome. Probably gene expression stuff will be involved in the implementation in certain cases, but we don’t really care, and don’t necessarily need to be measuring that. At least, that’s my guess.
In the “take the adult brain and measure all the ‘persistent variables that are occasionally updated’” research program, yes, it’s possible that some of those persistent variables are stored in gene expression, but my guess is very few, and if we know where they are and how they work then we can just measure the exact relevant RNA in the exact relevant cells.
…To be clear, I think working on the “understanding the brain algorithm” research program is very bad and dangerous when it focuses on the cortex and thalamus and basal ganglia, but good when it focuses on the hypothalamus and brainstem, and it’s sad that people in neuroscience, especially AI-adjacent people with a knack for algorithms, are overwhelmingly working on the exact worst possible thing :( But I think doing it in the right order (cortex last, long after deeply understanding everything about the hypothalamus & brainstem) is probably good, and I think that there’s realistically no way to get WBE without completing the “understanding the brain algorithm” research program somewhere along the way.
I think I have identified our core disagreement: you believe a neuron or a small group of neurons are fundamentally computationally simple and I don’t. I guess technically I’m agnostic about it, but my intuition is that a real neuron cannot be abstracted to a LIF neuron the way a transistor can be abstracted to a cartoon switch (not that you were suggesting LIF is sufficient, just an example). One of the big questions I have is how error scales from single neurons to overall activity. If a neuron model is 90% accurate with respect to electrophysiology and the synapses connecting it are 90% accurate to real synapses, does that recover 90% of brain function? Is the last 10% something that is computationally irrelevant and can just be abstracted away, giving you effectively 100% functionality? Or is 90% accuracy for single neurons magnified until the real accuracy is like 0.9^(80 billion)? I think it is unlikely that it is that bad, but I really don’t know, because of the abject failure to upload anything, as you point out. I am bracing myself for a world where we need a lot of data.
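As a back-of-the-envelope illustration of why the compounding question matters (toy numbers of my own, nothing measured):

```python
# If per-neuron errors compounded multiplicatively, whole-brain fidelity
# would collapse long before brain scale; if an error-tolerant
# abstraction layer absorbs them, fidelity stays flat.
per_neuron_accuracy = 0.9  # hypothetical

for n in (10, 100, 1000):
    worst_case = per_neuron_accuracy ** n  # naive multiplicative compounding
    print(n, worst_case)

# Already at n = 1000 the naive product is ~1.7e-46, so 0.9^(80 billion)
# is indistinguishable from zero. Working brains full of noisy neurons
# are therefore (weak) evidence that errors do NOT compound this way.
```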
Let’s assume for the moment though that the HH model with suitable electrical and chemical synapses would be sufficient to capture WBE. What I still really want to see is a paper saying “we look at x,y,z properties of neurons that can be measured post mortem and predict a,b,c properties of those neurons by tuning capacitance and conductance and resting potential in the HH model. Our model is P% accurate when looking at patch clamp experiments.” In parallel with that there should be a project trying to characterize how error tolerant real neurons and neural networks can be so we can find the lower bound of P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network) but I was so disillusioned with the learning rules that I am not confident in my results. I’m not sure if anyone has the ability to answer these kinds of questions because we are still just so bad at emulating anything.
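For concreteness, the single-neuron model in question is just the Hodgkin-Huxley equations; a minimal forward-Euler sketch with the classic squid-axon parameters (the standard textbook values, nothing fitted to any post-mortem measurement) looks like:

```python
import math

# Minimal single-compartment Hodgkin-Huxley model. The "tune capacitance /
# conductance / resting potential" step would mean fitting C_m, g_Na, g_K,
# g_L, and the reversal potentials per neuron; here they are the textbook
# squid-axon values.
C_m = 1.0                              # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Membrane-voltage trace (mV) under a constant injected current."""
    V = -65.0
    m, h, n = 0.05, 0.6, 0.32  # approximate resting-state gating values
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return trace

trace = simulate()
print("peak voltage (mV):", max(trace))  # suprathreshold drive -> spikes overshoot 0 mV
```

The hypothetical paper would then report how well post-mortem-measurable quantities predict the right parameter values for each cell, validated against patch-clamp traces.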
Edit:
Also, I am not sure if you’re proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.
you believe a neuron or a small group of neurons are fundamentally computationally simple and I don’t
I guess I would phrase it as “there’s a useful thing that neurons are doing to contribute to the brain algorithm, and that thing constitutes a tiny fraction of the full complexity of a real-world neuron”.
(I would say the same thing about MOSFETs. Again, here’s how to model a MOSFET; it’s a horrific mess. Is a MOSFET “fundamentally computationally simple”? Maybe?—I’m not sure exactly what that means. I’d say it does a useful thing in the context of an integrated circuit, and that useful thing is pretty simple.)
The trick is, “the useful thing that a neuron is doing to contribute to the brain algorithm” is not something you can figure out by studying the neuron, just as “the useful thing that a MOSFET is doing to contribute to IC function” is not something you can figure out by studying the MOSFET. There’s no such thing as “Our model is P% accurate” if you don’t know what phenomenon you’re trying to capture. If you model the MOSFET as a cartoon switch, that model will be extremely inaccurate along all kinds of axes—for example, its thermal coefficients will be wrong by 100%. But that doesn’t matter because the cartoon switch model is accurate along the one axis that matters for IC functioning.
The brain is generally pretty noise-tolerant. Indeed, if one of your neurons dies altogether, “you are still you” in the ways that matter. But a dead neuron is a 0% accurate model of a live neuron. ¯\_(ツ)_/¯
In parallel with that there should be a project trying to characterize how error tolerant real neurons and neural networks can be so we can find the lower bound of P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network) but I was so disillusioned with the learning rules that I am not confident in my results.
Just because every part of the brain has neurons and synapses doesn’t mean every part of the brain is a “spiking neural network” with the connotation that that term has in ML, i.e. a learning algorithm. The brain also needs (what I call) “business logic”—just as every ML GitHub repository has tons of code that is not the learning algorithm itself. I think that the low thousands of different neuron types are playing quite different roles in quite different parts of the brain algorithm, and that studying “spiking neural networks” is the wrong starting point.
I apologize for my sloppy language, “computationally simple” was not well defined. You are quite right when you say there is no P% accuracy. I think my offhand remark about spiking neural networks was not helpful to this discussion.
In a practical sense, here is what I mean. Imagine someone makes a brain organoid with ~10 cells. They can directly measure membrane voltage and any other relevant variable they want, because this is hypothetical. Then they try to emulate whatever algorithm this organoid has going on: its direct input-to-output behavior and whatever learning-rule changes it might have. But to test this they have crappy point neuron models implementing LIF, the synapses are just a constant conductance or something, plus rules on top of that that can adjust parameters (membrane capacitance, resting potential, synaptic conductance, etc.), and it fails to replicate the observables. Obviously this is an extreme example, but I just want better neuron models so nothing like this ever has the chance to happen.
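For concreteness, the “crappy point neuron” in this hypothetical is roughly the following: a leaky integrate-and-fire unit (toy parameters of my own choosing, purely illustrative):

```python
# A deliberately minimal LIF point neuron -- the kind of model that might
# fail to replicate a real organoid's observables. All parameters are
# illustrative, not fitted to anything.

def lif_spike_times(I_ext, dt=0.1, t_max=200.0,
                    tau_m=10.0, V_rest=-65.0, V_thresh=-50.0, V_reset=-65.0):
    """Leaky integrate-and-fire: dV/dt = (V_rest - V + I_ext) / tau_m,
    with a hard reset at threshold. Drive I_ext is in effective mV.
    Returns spike times in ms."""
    V = V_rest
    spikes = []
    for step in range(int(t_max / dt)):
        V += dt * (V_rest - V + I_ext) / tau_m
        if V >= V_thresh:
            spikes.append(step * dt)
            V = V_reset
    return spikes

# Under steady suprathreshold drive this model fires with metronomic
# regularity -- real neurons generally do not, which is one of the many
# observables such a model cannot replicate.
print(len(lif_spike_times(I_ext=20.0)), "spikes in 200 ms")
```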
Basically, if we can’t model an organoid we could:

1. Fix the electrophysiology, which either makes it work or proves something else is the problem
2. Develop theory via reverse engineering to the point that we just understand what is wrong and home in on it
3. Fix other things and hope it isn’t electrophysiology

Three is obviously a bad plan. Two is really, really hard. One should be relatively easy, provided we have a reasonable threshold for what we consider accurate electrophysiology. We could have good biophysical models that recreate it, or we could have recurrent neural nets modeling the input current → membrane voltage relation of each neuron. It just seems like an easy way to cross off a potential cause of failure (famous last words, I’m sure).
As for your business logic point, it is valid, but I am worried that black-boxing it too much would lead to collateral damage. I am not sure if that’s what you meant when you said spiking neural networks are the wrong starting point. In any case, I would like higher-order thinking to stay as a function of spiking neurons even if things like reflexes and basal behavior can be replaced without loss.
Yeah I think “brain organoids” are a bit like throwing 1000 transistors and batteries and capacitors into a bowl, and shaking the bowl around, and then soldering every point where two leads are touching each other, and then doing electrical characterization on the resulting monstrosity. :)
Would you learn anything whatsoever from this activity? Umm, maybe? Or maybe not. Regardless, even if it’s not completely useless, it’s definitely not a central part of understanding or emulating integrated circuits.
(There was a famous paper where it’s claimed that brain organoids can learn to play Pong, but I think it’s p-hacked / cherry-picked.)
There’s just so much structure in which neurons are connected to which in the brain—e.g. the cortex has 6 layers, with specific cell types connected to each other in specific ways, and then there’s cortex-thalamus-cortex connections and on and on. A big ball of randomly-connected neurons is just a totally different thing.
Also, I am not sure if you’re proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.
Yes and no. My WBE proposal would be to understand the brain algorithm in general, notice that the algorithm has various adjustable parameters (both because of inter-individual variation and within-lifetime learning of memories, desires, etc.), do a brain-scan that records those parameters for a certain individual, and now you can run that algorithm, and it’s a WBE of that individual.
When you run the algorithm, there is no particular reason to expect that the data structures you want to use for that will superficially resemble neurons, like with a 1-to-1 correspondence. Yes you want to run the same algorithm, producing the same output (within tolerance, such that “it’s the same person”), but presumably you’ll be changing the low-level implementation to mesh better with the affordances of the GPU instruction set rather than the affordances of biological neurons.
The “philosophical reasons” are presumably that you think it might not be conscious? If so, I disagree, for reasons briefly summarized in §1.6 here.
“Less likely to capture everything we care about especially for individual people” would be a claim that we didn’t measure the right things or are misunderstanding the algorithm, which is possible, but unrelated to the low-level implementation of the algorithm on our chips.
I definitely am NOT an advocate for things like training a foundation model to match fMRI data and calling it a mediocre WBE. (There do exist people who like that idea, just I’m not one of them.) Whatever the actual information storage is, as used by the brain, e.g. synapses, that’s what we want to be measuring individually and including in the WBE. :)
First of all, I hate analogies in general, but that’s just a pet peeve; they are useful. Going with your shaken-up circuit as an analogy for brain organoids, and assuming it holds, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually you would still be able to predict something like voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you’ll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. If you never explicitly write down Ohm’s law but you empirically measure current at a whole bunch of different voltages (analogous to patch clamps, but far, far from a perfect analogy), you can probably get the right answer. So yeah, an organoid would not be perfect, but I would be surprised if being able to fully emulate one were useless. Personally I think it would be quite useful, but I am actively tempering my expectations.
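The “never write down Ohm’s law, just measure current at many voltages” idea can be sketched as a toy fit (synthetic data and a made-up component value, purely illustrative):

```python
import random

# Toy version of the point: characterize an unknown two-terminal component
# purely empirically. Sample current at many voltages (a noisy "patch
# clamp" analog) and fit I = a*V + b by least squares, without assuming
# Ohm's law up front.
random.seed(0)
true_R = 220.0  # ohms; hidden ground truth for the synthetic component

volts = [v * 0.5 for v in range(-10, 11)]
amps = [v / true_R + random.gauss(0.0, 1e-4) for v in volts]  # noisy measurements

n = len(volts)
mean_v = sum(volts) / n
mean_i = sum(amps) / n
slope = sum((v - mean_v) * (i - mean_i) for v, i in zip(volts, amps)) \
        / sum((v - mean_v) ** 2 for v in volts)
fitted_R = 1.0 / slope
print("fitted resistance (ohms):", fitted_R)  # recovers roughly 220 ohms
```

The fit recovers the component’s behavior without the experimenter ever naming the underlying law, which is the sense in which dense empirical measurement can substitute for theory.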
But my meta point of:

1. look at a small system
2. try to emulate it
3. cross off obvious things (electrophysiology should be simple for only a few neurons) that could cause it to not be working
4. repeat and use the data to develop an overall theory

stands even if organoids in particular are useless. The theory developed with this kind of research loop might be useless for your very abstract representation of the brain’s algorithm, but I think it would be just fine, in principle, for the traditional, bottom-up approach.
As for the philosophical objections, it is more that whatever wakes up won’t be me if we do it your way. It might act like me and know everything I know but it seems like I would be dead and something else would exist. Gallons of ink have been spilled over this so suffice it to say, I think the only thing with any hope of preserving my consciousness (or at least a conscious mind that still holds the belief that it was at one point the person writing this) is gradual replacement of my neurons while my current neurons are still firing. I know that is far and away the least likely path of WBE because it requires solving everything else + nanotechnology but hey I dream big.
To be clear, I think your proposed WBE plan has a lot of merit, but it would still result in me experiencing death and then nothing else so I’m not especially interested. Yes, that probably makes me quite selfish.
As for the philosophical objections, it is more that whatever wakes up won’t be me if we do it your way. It might act like me and know everything I know but it seems like I would be dead and something else would exist.
Ah, but how do you know that the person that went to bed last night wasn’t a different person, who died, and you are the “something else” that woke up with all of that person’s memories? And then you’ll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know but “you would be dead and something else would exist”?
…It’s fine if you don’t want to keep talking about this. I just couldn’t resist. :-P
If you have a good theory of what all those components are individually you would still be able to predict something like voltage between two arbitrary points.
I agree that, if you have a full SPICE transistor model, you’ll be able to model any arbitrary crazy configuration of transistors. If you treat a transistor as a cartoon switch, you’ll be able to model integrated circuits perfectly, but not to model transistors in very different weird contexts.
By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.
Yes, I am familiar with the sleep = death argument. I really don’t have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don’t believe in any of these, but I don’t have any real arguments against them, and I don’t think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload, I fear Star Trek-style teleportation, but I don’t fear gradual replacement, nor do I fear falling asleep.
As for wrapping up our more scientific disagreement, I don’t have much to say other than it was very thought provoking and I’m still going to try what I said in my post. Even if it doesn’t come to complete fruition I hope it will be relevant experience for when I apply to grad school.
Thank you for that article, I don’t know how it didn’t come up when I was researching this. Others finding papers I should have been able to find alone is a continuous frustrations of mine.
I would love to live in a world where we have a few thousand template neurons and can just put them together based on a few easily identifiable factors (~3-10 genes, morphology, brain region) but until I find a paper that convincingly recreates the electrophysiology based on those things I have to entertain the idea that somewhere between 10 and 10^5 are relevant. I would be truly shocked if we need 10^5 but I wouldn’t be surprised if we need to measure more expression levels than we can comfortable infer based on some staining method. Having just read your post on pessimism, I am confused as to why you think low thousands of separate neuron models would be sufficient. I agree that characterizing billions of neurons is a very tall order (although I really won’t care how long it takes if I’m dead anyway). But when you say ‘“...information storage in the nucleus doesn’t happen at all, or has such a small effect that we can ignore it and still get the same high-level behavior” (which I don’t believe).’ it sounds to me like an argument in favor of looking at the transcriptome of each cell.
Just to be abundantly clear, my main argument in the post is not “Single cell transcriptomics leading to perfect electrophysiology is essential for whole brain emulation and anything less than that is doomed to fail.” It is closer to “I have not seen a well developed theory that can predict even a single cell’s electrophysiology given things we can measure post mortem, so we should really research that if we care about whole brain emulation. If it already exists, please tell me about it.”
I think you make good points when you point out failures of c. elegans uploading and other computational neuroscience failures. To me, it makes a lot of sense to copy single cells as close as possible and then start modeling learning rules and synaptic conductance and what not. If we find out later a certain feature of a neuron model can be abstracted away, that’s great. But a lot of what I see right now is people running to study learning rules and they use homogenous leaky integrate and fire neurons. In my mind they are doing machine learning on spiking neural networks, not computational neuroscience. I don’t know how relevant that particular critique is but it has been a frustration of mine for a while.
I am still very new to this whole field, I hope that cleared things up. If it did not, I apologize.
I think the genome builds a brain algorithm, and the brain algorithm (like practically every algorithm in your CS textbook) includes a number of persistent variables that are occasionally updated in such-and-such way under such-and-such circumstance. Those variables correspond to what the neuro people call plasticity—synaptic plasticity, gene expression plasticity, whatever. Some such occasionally-updated variables are parameters in within-lifetime learning algorithms that are part of the brain algorithm (akin to ML weights). Other such variables are not, instead they’re just essentially counter variables or whatever (see §2.3.3 here). The “understanding the brain algorithm” research program would be figuring out what the brain algorithm is, how and why it works, and thus (as a special case) what are the exact set of “persistent variables that are occasionally updated”, and how are they stored in the brain. If you complete this research program, you get brain-like AGI, but you can’t upload any particular adult human. Then a different research program is: take an adult human brain, and go in with your microtome etc. and actually measure all those “persistent variables that are occasionally updated”, which comprise a person’s unique memories, beliefs, desires, etc.
I think the first research program (understanding the brain algorithm) doesn’t require a thorough understanding of neuron electrophysiology. For example (copying from §3.1 here), suppose that I want to model a translator (specifically, a MOSFET). And suppose that my model only needs to be sufficient to emulate the calculations done by a CMOS integrated circuit. Then my model can be extremely simple—it can just treat the transistor as a cartoon switch. Next, again suppose that I want to model a transistor. But this time, I want my model to accurately capture all measurable details of the transistor. Then my model needs to be mind-bogglingly complex, involving many dozens of obscure SPICE modeling parameters. The point is: I’m suggesting an analogy between this transistor and a neuron with synapses, dendritic spikes, etc. The latter system is mind-bogglingly complex when you study it in detail—no doubt about it! But that doesn’t mean that the neuron’s essential algorithmic role is equally complicated. The latter might just amount to a little cartoon diagram with some ANDs and ORs and IF-THENs or whatever. Or maybe not, but we should at least keep that possibility in mind.
In the “understanding the brain algorithm” research program, you’re triangulating between knowledge of algorithms in general, knowledge of what actual brains actually do (including lesion studies, stimulation studies, etc.), knowledge of evolution and ecology, and measurements of neurons. The first three can add so much information that it seems possible to pin down the fourth without all that much measurements, or even with no measurements at all beyond the connectome. Probably gene expression stuff will be involved in the implementations in certain cases, but we don’t really care, and don’t necessarily need to be measuring that. At least, that’s my guess.
In the “take the adult brain and measure all the ‘persistent variables that are occasionally updated’ research program, yes it’s possible that some of those persistent variables are stored in gene expressions, but my guess is very few, and if we know where they are and how they work then we can just measure the exact relevant RNA in the exact relevant cells.
…To be clear, I think working on the “understanding the brain algorithm” research program is very bad and dangerous when it focuses on the cortex and thalamus and basal ganglia, but good when it focuses on the hypothalamus and brainstem, and it’s sad that people in neuroscience, especially AI-adjacent people with a knack for algorithms, are overwhelmingly are working on the exact worst possible thing :( But I think doing it in the right order (cortex last, long after deeply understanding everything about the hypothalamus & brainstem) is probably good, and I think that there’s realistically no way to get WBE without completing the “understanding the brain algorithm” research program somewhere along the way.
I think I have identified our core disagreement, you believe a neuron or a small group of neurons are fundamentally computationally simple and I don’t. I guess technically I’m agnostic about it but my intuition is that a real neuron cannot be abstracted to a LIF neuron the way a transistor can be abstracted to a cartoon switch (not that you were suggesting LIF is sufficient, just an example). One of the big questions I have is how error scales from neuron to overall activity. If a neuron model is 90% accurate wrt electrophysiology and the synapses connecting it are 90% accurate to real synapses, does that recover 90% of brain function? Is the last 10% something that is computationally irrelevant and can just be abstracted away, giving you effectively 100% functionality? Is 90% accuracy for single neurons magnified until the real accuracy is like 0.9^(80 billion)? I think it is unlikely that it is that bad, but I really don’t know because of the abject failure to upload anything as you point out. I am bracing myself for a world where we need a lot of data.
Let’s assume for the moment though that HH model with suitable electrical and chemical synapses would be sufficient to capture WBE. What I still really want to see is a paper saying “we look at x,y,z properties of neurons that can be measured post mortem and predict a,b,c properties of those neurons by tuning capacitance and conductance and resting potential in the HH model. Our model is P% accurate when looking at patch clamp experiments.” In parallel with that there should be a project trying to characterize how error tolerant real neurons and neural networks can be so we can find the lower bound of P. I actually tried something like that for synaptic weight (how does performance degrade when adding noise to the weights of a spiking neural network) but I was so disillusioned with the learning rules that I am not confident in my results. I’m not sure if anyone has the ability to answer these kinds of questions because we are still just so bad at emulating anything.
Edit:
Also, I am not sure if you’re proposing we compress multiple neurons down into a simpler computational block, the way a real arrangement of transistors can be abstracted into logic gates or adders or whatever. I am not a fan of that for WBE for philosophical reasons and because I think it is less likely to capture everything we care about especially for individual people.
I guess I would phrase it as “there’s a useful thing that neurons are doing to contribute to the brain algorithm, and that thing constitutes a tiny fraction of the full complexity of a real-world neuron”.
(I would say the same thing about MOSFETs. Again, here’s how to model a MOSFET, it’s a horrific mess. Is a MOSFET “fundamentally computationally simple”? Maybe?—I’m not sure exactly what that means. I’d say it does a useful thing in the context of an integrated circuit, and that useful thing is pretty simple.
The trick is, “the useful thing that a neuron is doing to contribute to the brain algorithm” is not something you can figure out by studying the neuron, just as “the useful thing that a MOSFET is doing to contribute to IC function” is not something you can figure out by studying the MOSFET. There’s no such thing as “Our model is P% accurate” if you don’t know what phenomenon you’re trying to capture. If you model the MOSFET as a cartoon switch, that model will be extremely inaccurate along all kinds of axes—for example, its thermal coefficients will be wrong by 100%. But that doesn’t matter because the cartoon switch model is accurate along the one axis that matters for IC functioning.
The brain is generally pretty noise-tolerant. Indeed, if one of your neurons dies altogether, “you are still you” in the ways that matter. But a dead neuron is a 0% accurate model of a live neuron. ¯\_(ツ)_/¯
Just because every part of the brain has neurons and synapses doesn’t mean every part of the brain is a “spiking neural network” with the connotation that that term has in ML, i.e. a learning algorithm. The brain also needs (what I call) “business logic”—just as every ML github repository has tons of code that is not the learning algorithm itself. I think that the low-thousands of different neuron types are playing quite different roles in quite different parts of the brain algorithm, and that studying “spiking neural networks” is the wrong starting point.
I apologize for my sloppy language, “computationally simple” was not well defined. You are quite right when you say there is no P% accuracy. I think my offhand remark about spiking neural networks was not helpful to this discussion.
In a practical sense, here is what I mean. Imagine someone makes a brain organoid with ~10 cells. They can directly measure membrane voltage and any other relevant variable they want, because this is hypothetical. Then they try to emulate whatever algorithm this organoid has going on: its direct input-to-output mapping and whatever learning rules might change it. But to test this, they have crappy point neuron models implementing LIF, synapses that are just a constant conductance or something, and rules on top of that that can adjust parameters (membrane capacitance, resting potential, synaptic conductance, etc.), and it fails to replicate the observables. Obviously this is an extreme example, but I just want better neuron models so nothing like this ever has the chance to happen.
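For concreteness, the “crappy point neuron” in this hypothetical is something like the following (all parameter values illustrative, not fit to any real cell):

```python
# Minimal leaky integrate-and-fire point neuron: a leaky voltage that
# drifts toward rest, jumps with input current, and resets on spiking.
# This is the whole model -- everything else about a real neuron's
# electrophysiology is thrown away.
def lif(I, dt=0.1, tau=10.0, R=1.0,
        v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """Return spike times (ms) given a list of input currents, one per
    time step of dt ms. All parameters are illustrative placeholders."""
    v, spikes = v_rest, []
    for step, i in enumerate(I):
        v += dt * (-(v - v_rest) + R * i) / tau  # leaky integration
        if v >= v_th:                            # threshold crossing
            spikes.append(step * dt)
            v = v_reset                          # hard reset
    return spikes

spikes = lif([20.0] * 1000)  # 100 ms of constant drive
print(len(spikes))
```

With constant suprathreshold drive this fires at a perfectly regular interval, which is precisely the kind of uniformity that makes it a poor stand-in for ten heterogeneous organoid cells.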
Basically, if we can’t model an organoid, we could:
1. Fix the electrophysiology, which either makes it work or proves something else is the problem
2. Develop theory via reverse engineering to such a point that we just understand what is wrong and home in on it
3. Fix other things and hope it isn’t electrophysiology
Three is obviously a bad plan. Two is really, really hard. One should be relatively easy, provided we have a reasonable threshold for what we consider accurate electrophysiology. We could have good biophysical models that recreate it, or we could have recurrent neural nets modeling the input current → membrane voltage relation of each neuron. It just seems like an easy way to cross off a potential cause of failure (famous last words, I’m sure).
As for your business logic point, it is valid, but I am worried that black-boxing it too much would lead to collateral damage. I am not sure if that’s what you meant when you said spiking neural networks are the wrong starting point. In any case, I would like higher-order thinking to stay a function of spiking neurons, even if things like reflexes and basal behavior can be replaced without loss.
Yeah I think “brain organoids” are a bit like throwing 1000 transistors and batteries and capacitors into a bowl, and shaking the bowl around, and then soldering every point where two leads are touching each other, and then doing electrical characterization on the resulting monstrosity. :)
Would you learn anything whatsoever from this activity? Umm, maybe? Or maybe not. Regardless, even if it’s not completely useless, it’s definitely not a central part of understanding or emulating integrated circuits.
(There was a famous paper where it’s claimed that brain organoids can learn to play Pong, but I think it’s p-hacked / cherry-picked.)
There’s just so much structure in which neurons are connected to which in the brain—e.g. the cortex has 6 layers, with specific cell types connected to each other in specific ways, and then there’s cortex-thalamus-cortex connections and on and on. A big ball of randomly-connected neurons is just a totally different thing.
Yes and no. My WBE proposal would be to understand the brain algorithm in general, notice that the algorithm has various adjustable parameters (both because of inter-individual variation and within-lifetime learning of memories, desires, etc.), do a brain-scan that records those parameters for a certain individual, and now you can run that algorithm, and it’s a WBE of that individual.
When you run the algorithm, there is no particular reason to expect that the data structures you want to use for that will superficially resemble neurons, like with a 1-to-1 correspondence. Yes you want to run the same algorithm, producing the same output (within tolerance, such that “it’s the same person”), but presumably you’ll be changing the low-level implementation to mesh better with the affordances of the GPU instruction set rather than the affordances of biological neurons.
The “philosophical reasons” are presumably that you think it might not be conscious? If so, I disagree, for reasons briefly summarized in §1.6 here.
“Less likely to capture everything we care about especially for individual people” would be a claim that we didn’t measure the right things or are misunderstanding the algorithm, which is possible, but unrelated to the low-level implementation of the algorithm on our chips.
I definitely am NOT an advocate for things like training a foundation model to match fMRI data and calling it a mediocre WBE. (There do exist people who like that idea, just I’m not one of them.) Whatever the actual information storage is, as used by the brain, e.g. synapses, that’s what we want to be measuring individually and including in the WBE. :)
First of all, I hate analogies in general, but that’s a pet peeve; they are useful. Going with your shaken-up circuit as an analogy to brain organoids, and assuming it holds, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually, you would still be able to predict something like the voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you’ll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. If you never explicitly write down Ohm’s law but you empirically measure current at a whole bunch of different voltages (analogous to patch clamps, but far, far from a perfect analogy), you can probably get the right answer. So yeah, an organoid would not be perfect, but I would be surprised if being able to fully emulate one were useless. Personally I think it would be quite useful, but I am actively tempering my expectations.
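The “measure empirically instead of writing down Ohm’s law” move can be sketched directly: sweep voltage, record current, fit a line through the data, and the conductance falls out without assuming the model in advance. (Synthetic data here for a hypothetical 2 Ω resistor with measurement noise.)

```python
import random

# Synthetic I-V sweep of a hypothetical 2-ohm resistor with small
# Gaussian measurement noise on each current reading.
random.seed(0)
R_true = 2.0
volts = [0.5 * k for k in range(1, 21)]  # 0.5 V .. 10 V sweep
amps = [v / R_true + random.gauss(0, 0.01) for v in volts]

# Least-squares slope through the origin recovers the conductance G
# purely from data: G = sum(V*I) / sum(V^2). No Ohm's law assumed,
# yet the linear relationship emerges from the measurements.
G = sum(v * i for v, i in zip(volts, amps)) / sum(v * v for v in volts)
print(1.0 / G)  # estimated resistance, close to 2.0 ohms
```

The patch-clamp analogue would be far messier (the neuron’s I-V relation is nonlinear and history-dependent), but the principle is the same: enough empirical characterization can substitute for an explicit law.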
But my meta point of:
1. Look at a small system
2. Try to emulate it
3. Cross off obvious things (electrophysiology should be simple for only a few neurons) that could cause it not to work
4. Repeat, and use the data to develop an overall theory

stands even if organoids in particular are useless. The theory developed with this kind of research loop might be useless for your very abstract representation of the brain’s algorithm, but I think it would be just fine, in principle, for the traditional, bottom-up approach.
As for the philosophical objections, it is more that whatever wakes up won’t be me if we do it your way. It might act like me and know everything I know, but it seems like I would be dead and something else would exist. Gallons of ink have been spilled over this, so suffice it to say: I think the only thing with any hope of preserving my consciousness (or at least a conscious mind that still holds the belief that it was at one point the person writing this) is gradual replacement of my neurons while my current neurons are still firing. I know that is far and away the least likely path to WBE, because it requires solving everything else plus nanotechnology, but hey, I dream big.
To be clear, I think your proposed WBE plan has a lot of merit, but it would still result in me experiencing death and then nothing else so I’m not especially interested. Yes, that probably makes me quite selfish.
Ah, but how do you know that the person that went to bed last night wasn’t a different person, who died, and you are the “something else” that woke up with all of that person’s memories? And then you’ll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know but “you would be dead and something else would exist”?
…It’s fine if you don’t want to keep talking about this. I just couldn’t resist. :-P
I agree that, if you have a full SPICE transistor model, you’ll be able to model any arbitrary crazy configuration of transistors. If you treat a transistor as a cartoon switch, you’ll be able to model integrated circuits perfectly, but not to model transistors in very different weird contexts.
By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.
Yes, I am familiar with the sleep = death argument. I really don’t have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don’t believe in any of these, but I don’t have any real arguments against them, and I don’t think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload, I fear Star Trek-style teleportation, but I don’t fear gradual replacement, nor do I fear falling asleep.
As for wrapping up our more scientific disagreement, I don’t have much to say other than it was very thought provoking and I’m still going to try what I said in my post. Even if it doesn’t come to complete fruition I hope it will be relevant experience for when I apply to grad school.