First of all, I hate analogies in general, but that's a pet peeve; they are useful. Going with your shaken-up circuit as an analogy to brain organoids, and assuming it holds, I think it is more useful than you give it credit for. If you have a good theory of what all those components are individually, you would still be able to predict something like the voltage between two arbitrary points. If you model resistors as some weird non-ohmic entity, you'll probably get the wrong answer, because you missed the fact that they behave ohmically in many situations. But if you never explicitly write down Ohm's law and instead empirically measure current at a whole bunch of different voltages (analogous to patch clamps, though far from a perfect analogy), you can probably get the right answer. So yeah, an organoid would not be perfect, but I would be surprised if being able to fully emulate one were useless. Personally I think it would be quite useful, but I am actively tempering my expectations.
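To make that "measure instead of theorize" point concrete, here's a toy sketch (my own illustration; the component, noise level, and choice of fit are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "ground truth": a 1 kOhm resistor the experimenter never sees directly.
true_resistance = 1_000.0  # ohms

# Sweep voltage and measure current with instrument noise (loosely analogous
# to a clamp protocol, and just as loosely as the analogy above).
voltages = np.linspace(-1.0, 1.0, 50)                    # volts
currents = voltages / true_resistance
currents += rng.normal(0.0, 1e-6, size=currents.shape)   # measurement noise

# Fit a generic cubic -- no ohmic assumption baked in anywhere.
coeffs = np.polyfit(voltages, currents, 3)

# Predict the current at a voltage we never measured.
v_test = 0.37
print(f"predicted {np.polyval(coeffs, v_test):.3e} A, "
      f"true {v_test / true_resistance:.3e} A")

# The nonlinear coefficients come out tiny: the data "rediscovers" ohmic
# behaviour in this regime without Ohm's law ever being written down.
print("fitted coefficients (cubic -> constant):", coeffs)
```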
But my meta point of:

1. look at a small system
2. try to emulate it
3. cross off the obvious things that could be causing it not to work (electrophysiology should be simple for only a few neurons)
4. repeat, and use the data to develop an overall theory

stands even if organoids in particular are useless (a toy version of this loop is sketched below). The theory developed with this kind of research loop might be useless for your very abstract representation of the brain's algorithm, but I think it would be just fine, in principle, for the traditional bottom-up approach.
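Here is that loop in runnable toy form, where the "system" is just a leaky integrator with a hidden leak rate standing in for a small neural preparation, and the refinement step is deliberately crude; every name and number is a hypothetical stand-in, not a real protocol:

```python
import numpy as np

def simulate(leak, stimulus, dt=1e-3):
    """Leaky integrator: dv/dt = -leak * v + stimulus."""
    v = np.zeros(len(stimulus))
    for t in range(1, len(stimulus)):
        v[t] = v[t - 1] + dt * (-leak * v[t - 1] + stimulus[t - 1])
    return v

stimulus = np.ones(1000)            # constant input drive
recorded = simulate(5.0, stimulus)  # the "experiment"; true leak = 5.0 is hidden

# The loop: emulate, compare to the recording, refine, repeat.
leak_guess = 1.0
for _ in range(50):
    simulated = simulate(leak_guess, stimulus)
    error = np.mean(simulated - recorded)   # signed mismatch with the data
    if abs(error) < 1e-5:
        break                               # emulation is "good enough"
    leak_guess += 20.0 * error              # crude refinement of the theory

print(f"recovered leak ~= {leak_guess:.3f} (true value 5.0)")
```

In real life the "refine" step is the whole ballgame, of course; the point is only the shape of the loop.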
As for the philosophical objections, it is more that whatever wakes up won't be me if we do it your way. It might act like me and know everything I know, but it seems like I would be dead and something else would exist. Gallons of ink have been spilled over this, so suffice it to say: I think the only thing with any hope of preserving my consciousness (or at least a conscious mind that still holds the belief that it was at one point the person writing this) is gradual replacement of my neurons while my current neurons are still firing. I know that is far and away the least likely path to WBE, because it requires solving everything else plus nanotechnology, but hey, I dream big.
To be clear, I think your proposed WBE plan has a lot of merit, but it would still result in me experiencing death and then nothing else, so I'm not especially interested. Yes, that probably makes me quite selfish.
> As for the philosophical objections, it is more that whatever wakes up won't be me if we do it your way. It might act like me and know everything I know, but it seems like I would be dead and something else would exist.
Ah, but how do you know that the person who went to bed last night wasn't a different person, who died, and you are the "something else" that woke up with all of that person's memories? And then you'll die tonight, and tomorrow morning there will be a new person who acts like you and knows everything you know, but "you would be dead and something else would exist"?
…It’s fine if you don’t want to keep talking about this. I just couldn’t resist. :-P
> If you have a good theory of what all those components are individually, you would still be able to predict something like the voltage between two arbitrary points.
I agree that, if you have a full SPICE transistor model, you'll be able to model any arbitrarily crazy configuration of transistors. If you treat a transistor as a cartoon switch, you'll be able to model integrated circuits perfectly well, but not transistors in very different, weird contexts.
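To illustrate with toy numbers (not a real device model; a real SPICE model has dozens of parameters): a cartoon-switch model and the textbook square-law MOSFET model agree qualitatively at digital operating points, but the cartoon answer is off by orders of magnitude near threshold.

```python
V_TH = 0.7   # threshold voltage (V), made up
K = 1e-3     # transconductance parameter (A/V^2), made up

def i_switch(v_gs, v_ds, r_on=100.0):
    """Cartoon model: fully off below threshold, a plain resistor above it."""
    return v_ds / r_on if v_gs > V_TH else 0.0

def i_square_law(v_gs, v_ds):
    """Textbook long-channel MOSFET model (cutoff / triode / saturation)."""
    v_ov = v_gs - V_TH
    if v_ov <= 0:
        return 0.0                              # cutoff
    if v_ds < v_ov:
        return K * (v_ov - v_ds / 2) * v_ds     # triode
    return 0.5 * K * v_ov ** 2                  # saturation

# Digital-style operating points: both models agree qualitatively
# (the device either conducts or it doesn't).
for v_gs in (0.0, 3.3):
    print(f"Vgs={v_gs} V: switch={i_switch(v_gs, 0.1):.2e} A, "
          f"square-law={i_square_law(v_gs, 0.1):.2e} A")

# Analog operating point just above threshold: the cartoon model is
# off by orders of magnitude.
print(f"Vgs=0.8 V: switch={i_switch(0.8, 1.0):.2e} A, "
      f"square-law={i_square_law(0.8, 1.0):.2e} A")
```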
By the same token, if you have a perfect model of every aspect of a neuron, then you’ll be able to model it in any possible context, including the unholy mess that constitutes an organoid. I just think that getting a perfect model of every aspect of a neuron is unnecessary, and unrealistic. And in that framework, successfully simulating an organoid is neither necessary nor sufficient to know that your neuron model is OK.
Yes, I am familiar with the sleep = death argument. I really don't have any counter; at some point, though, I think we all just kind of arbitrarily draw a line. I could be a solipsist, I could believe in Last Thursdayism, I could believe some people are p-zombies, I could believe in the multiverse. I don't believe in any of these, but I don't have any real arguments against them, and I don't think anyone has any knockdown arguments one way or the other. All I know is that I fear SOMA-style brain upload and Star Trek-style teleportation, but I don't fear gradual replacement, nor do I fear falling asleep.
As for wrapping up our more scientific disagreement, I don't have much to say other than that it was very thought-provoking, and I'm still going to try what I said in my post. Even if it doesn't come to complete fruition, I hope it will be relevant experience for when I apply to grad school.