Computationalism is a bad theory of identity, in the sense of “why am I a unique individual?”, because computations are so easy to clone: computational states are more cloneable than physical states.
First, computationalism doesn’t automatically imply that without other assumptions, and indeed there are situations where you can’t clone data perfectly, such as conventional quantum computers (the no-cloning theorem breaks down if we allow closed timelike curves à la Deutschian CTCs, but we won’t focus on that), so this is more or less a non-issue. More on this below:
https://en.wikipedia.org/wiki/No-cloning_theorem
https://en.wikipedia.org/wiki/No-broadcasting_theorem
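For reference, here is the standard linearity/unitarity argument behind the no-cloning theorem linked above; this is textbook quantum information, included only as a convenience, not something from the original comment:

```latex
% Suppose a single unitary U could clone every unknown state:
%   U(|\psi\rangle \otimes |0\rangle) = |\psi\rangle \otimes |\psi\rangle .
% Unitarity preserves inner products, so for any two states |\psi\rangle, |\phi\rangle:
\langle\psi|\phi\rangle
  = \bigl(\langle\psi| \otimes \langle 0|\bigr)\, U^{\dagger} U \,\bigl(|\phi\rangle \otimes |0\rangle\bigr)
  = \bigl(\langle\psi| \otimes \langle\psi|\bigr)\bigl(|\phi\rangle \otimes |\phi\rangle\bigr)
  = \langle\psi|\phi\rangle^{2}
% which forces \langle\psi|\phi\rangle \in \{0,1\}: only mutually orthogonal,
% i.e. effectively classical, states can be cloned by one device. That is
% exactly why classical data copies freely while general quantum states do not.
```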
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies, so computationalism doesn’t matter that much in the general case, and thus you need to focus on more specific classes of computations.
Second, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far future it will be much easier to clone physical states to a fidelity far closer to the copyability of computer programs.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented; I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
Other abstractions are available, though. It’s still a bad theory of consciousness-qua-awareness, because, as an abstraction, it has fewer resources than physicalism to explain phenomenal experience. It’s still an OK explanation of consciousness-qua-function, but not obviously the best. And it’s still the case that if you answer one of these four questions, you don’t get answers to the other three automatically.
My main non-trivial claim here is that the sense of phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice versa, so you need a self-model; that self-model becomes a big part of why we say we have consciousness, because it is what we are referring to when we do.
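As a purely illustrative toy (every name and number below is mine, not drawn from the comment or from neuroscience), here is a minimal sketch of the structural point: a controller that has to regulate a body ends up maintaining a model that includes its own actions, and its “introspective reports” read off that self-model rather than the body itself:

```python
# Toy sketch of the control-loop claim above. A "body" drifts away from a
# set point; the controller keeps a self-model (its own last action plus its
# estimate of the body) and uses it to act. Reports come from the model.

class Body:
    def __init__(self):
        self.temperature = 39.0  # actual physical state

    def step(self, cooling_effort):
        # The body warms by 0.5 per tick unless actively cooled.
        self.temperature += 0.5 - cooling_effort


class Controller:
    TARGET = 37.0

    def __init__(self):
        # The controller's model of itself and of the body it regulates.
        self.self_model = {"last_effort": 0.0, "estimated_temp": None}

    def act(self, sensed_temp):
        # Update the self-model from sensing, then act on the model.
        self.self_model["estimated_temp"] = sensed_temp
        effort = max(0.0, 0.8 * (sensed_temp - self.TARGET))
        self.self_model["last_effort"] = effort
        return effort

    def report(self):
        # "Introspection": the report is about the self-model, not the body.
        return f"I believe my temperature is {self.self_model['estimated_temp']:.2f}"


body, ctrl = Body(), Controller()
for _ in range(20):
    body.step(ctrl.act(body.temperature))

print(ctrl.report())     # the self-model's answer...
print(body.temperature)  # ...which need not match the actual state exactly
```

The only point of the sketch is that the self-model is a distinct object from the body it tracks, which is what makes claims like “we refer to our self-models when we say we are conscious” coherent.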
First, computationalism doesn’t automatically imply that without other assumptions, and indeed there are situations where you can’t clone data perfectly,
That’s a rather small nit. The vast majority of computationalists are talking about classical computation.
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,
That’s not much of a boast: pure logic can’t solve metaphysical problems about consciousness, time, space, identity, and so on. That’s why they are still problems. There’s a simple logical theory of identity, but it doesn’t answer the metaphysical problems, what I have called the synchronic and diachronic problems.
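For concreteness, the simple logical theory being gestured at is presumably something like Leibniz’s law (this gloss is mine, not necessarily the commenter’s):

```latex
% Leibniz's law: identicals are indiscernible and indiscernibles are identical.
x = y \;\longleftrightarrow\; \forall P \,\bigl(P(x) \leftrightarrow P(y)\bigr)
% This fixes the logic of "=", but it says nothing about the synchronic problem
% (which of two qualitatively identical copies is me right now) or the
% diachronic problem (what makes a later person the same individual as an
% earlier one).
```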
Second, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far future it will be much easier to clone physical states to a fidelity far closer to the copyability of computer programs.
Physicalism doesn’t answer the problems. You need some extra information about how similar or different physical things are in order to answer questions about whether they are the same or different individuals. At least, if you want to avoid the implications of raw physicalism—along the lines of “if one atom changes, you’re a different person”. An abstraction would be useful—but it needs to be the right one.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented; I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
You seem to be saying that consciousness is nothing but having a self-model, and that whatever the self believes about itself is the last word... that there are no inconvenient objective facts that could trump a self-assessment (“No, you’re not the original Duncan Idaho, you’re ghola number 476. You think you’re the one and only Duncan because your brain state is a clone of the original Duncan’s”). That makes things rather easy. But the rationalist approach to the problem of identity generally relies on biting the bullet on whatever solution is appealing: if computationalism is correct, you can be cloned, and then you really are in two places at once.
My main non-trivial claim here is that the sense of phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice versa, so you need a self-model; that self-model becomes a big part of why we say we have consciousness, because it is what we are referring to when we do.
Well, how? If you could predict qualia from self control, you’d have a solution—not a dissolution—to the HP.
Another reason why the hard problem seems hard is that way too many philosophers are disinclined to gather any data on the phenomenon of interest at all, because they don’t have backgrounds in neuroscience, and instead want to purely define consciousness without reference to any empirical reality.
Granting that “empirical” means “outer empirical”… not including introspection.
I don’t think there is much evidence for the “purely”. Chalmers doesn’t disbelieve in the easy-problem aspects of consciousness.