This is kinda helpful, but I also think people in your (1) group would agree with all three of: (A) the sequence of thoughts that you think directly corresponds to something about the evolving state of activity in your brain, (B) random noise has nonzero influence on the evolving state of activity in your brain, (C) random noise cannot be faithfully reproduced in a practical simulation.
And I think that they would not see anything self-contradictory about believing all of those things. (And I also don’t see anything self-contradictory about that, even granting your (1).)
Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
So in that regard: my mental image of computational functionalists in your group (1) would also say things like (D) “If I start 5 executions of my brain algorithm, on 5 different computers, each with a different RNG seed, then they are all conscious (they are all exuding consciousness-stuff, or whatever), and they all have equal claim to being ‘me’, and of course they will all eventually start having different trains of thought. Over the months and years they might gradually diverge in beliefs, memories, goals, etc. Oh well, personal identity is a fuzzy thing anyway. Didn’t you read Parfit?”
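The divergence in (D) is easy to illustrate with a toy model. The sketch below is not anyone's actual brain algorithm, just a hypothetical stand-in: five copies of one chaotic update rule, identical in every respect except the RNG seed, which end up in different states.

```python
import random

def toy_brain(seed, steps=50):
    """Toy stand-in for a 'brain algorithm': a chaotic update plus
    a tiny amount of seeded noise. Purely illustrative, not a claim
    about how real brains work."""
    rng = random.Random(seed)
    x = 0.5  # every copy starts from the identical initial state
    for _ in range(steps):
        x = 3.9 * x * (1 - x)          # deterministic chaotic update
        x += rng.uniform(-1e-9, 1e-9)  # seed-dependent noise, tiny
        x = min(max(x, 0.0), 1.0)      # keep the state in [0, 1]
    return x

# Five copies of the same algorithm, differing only in RNG seed:
finals = [toy_brain(seed) for seed in range(5)]
print(finals)
```

Because the update is chaotic, the nanoscale seed-dependent noise gets amplified until the five "trains of thought" are thoroughly decorrelated, even though every copy ran the exact same code from the exact same starting state.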
But I haven’t read as much of the literature as you, so maybe I’m putting words in people’s mouths.
Hmm. I think that none of this refutes the point I was making, which is that practical CF as defined by OP is a position that many people actually hold,[1] hence OP’s argument isn’t just a strawman/missing the point. (Whether or not the argument succeeds is a different question.)
> Well, I guess this discussion should really be focused more on personal identity than consciousness (OP wrote: “Whether or not a simulation can have consciousness at all is a broader discussion I’m saving for later in the sequence, and is relevant to a weaker version of CF.”).
I don’t think you have to bring identity into this. (And if you don’t have to, I’d strongly advise leaving it out, because identity is another huge rabbit hole.) There are three claims of strictly increasing strength here: (C1) digital simulations can be conscious, (C2) a digital simulation of a brain exhibits similar consciousness to that brain, and (C3) if a simulation of my brain is created, then that simulation is me. I think only C3 is about identity, and OP’s post is arguing against C2. (All three claims are talking about realist consciousness.)
This is also why I don’t think noise matters. Granting all of (A)-(D) doesn’t really affect C2; a practical simulation could work with similar noise and be pseudo-nondeterministic in the same way that the brain is. I think it’s pretty coherent to just ask how similar the consciousness is, under a realist framework (i.e., asking C2), without stepping into the identity hornet’s nest.
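To make the "similar noise" point concrete, here is a toy sketch (my own illustration, not OP's model): two runs of the same noisy process with different noise draws. The step-by-step trajectories differ, so the simulation never reproduces the original's exact noise, but the noise-driven statistics are closely matched, which is all C2 plausibly needs.

```python
import random
import statistics

def noisy_process(rng, steps=10000):
    """One run of a noisy dynamical process: a leaky integrator
    driven by Gaussian noise with fixed statistics (mean 0, sd 0.1).
    A toy illustration, not a model of any real neural system."""
    x, xs = 0.0, []
    for _ in range(steps):
        x = 0.9 * x + rng.gauss(0.0, 0.1)  # same dynamics every run
        xs.append(x)
    return xs

original = noisy_process(random.Random(1))  # stands in for the brain
simulated = noisy_process(random.Random(2))  # different noise draws

# The exact trajectories differ from the first step...
print(original[:3], simulated[:3])
# ...but the noise statistics of the two runs are nearly identical:
print(statistics.stdev(original), statistics.stdev(simulated))
```

The point of the sketch: reproducing the noise *distribution* is easy; reproducing the exact noise *sequence* is what (C) says is impractical, and on this view only the former is relevant to C2.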
[1] A caveat here: it’s actually quite hard to write down any philosophical position (except illusionism) such that a lot of people give blanket endorsements, again because everyone has slightly different ideas of what different terms mean. But I think OP has done a pretty good job, definitely better than most, in formulating an opinion that at least a good number of people would probably endorse.