I’ve answered a question on this discussion, and my short answer is that I basically agree with the post, mostly because I think computationalism is the closest to an accurate model of identity in the general case, with physicalism being a special case of it (with caveats). That said, I think you were pretty epistemically terrible during your interactions; I don’t blame @andeslodes and @sunwillrise for disagreeing with the post, and the way you handled disagreements here does not make me confident that LW thought leaders will reliably go in truth-tracking directions.
Answer is below:
https://www.lesswrong.com/posts/yoAhc7ZhQZfGqrzif/what-are-the-actual-arguments-in-favor-of-computationalism#KTWgPbomupmwE2TFb
General comments on consciousness:
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#FaMEMcpa6mXTybarG
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#WEmbycP2ppDjuHAH2
Computationalism is a bad theory of synchronic non-identity (in the sense of “why am I a unique individual, even though I have an identical twin”), because computations are so easy to clone—computational states are more cloneable than physical states.
Computationalism might be a better theory of diachronic identity (in the sense of “why am I still the same person, even though I have physically changed”), since it’s abstract, and so avoids the “one atom has changed” problem of naive physicalism. Other abstractions are available, though: “having the same memories” is a traditional alternative to unadulterated computation.
It’s still a bad theory of consciousness-qua-awareness (phenomenal consciousness, qualia, hard-problem stuff) because, being an abstraction, it has fewer resources than physicalism to explain phenomenal experience. There is no computational theory of qualia whatsoever, no algorithm for seeRed().
It’s still an ok explanation of consciousness-qua-function (easy problem stuff), but not obviously the best.
Most importantly: it’s still the case that, if you answer one of these four questions, you don’t get answers to the other three automatically.
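To make the cloneability contrast above concrete, here is a minimal sketch (my own illustration, not from the thread): duplicating a computational state is a one-liner, with nothing analogous available for an arbitrary physical state.

```python
# Editorial toy illustration, not from the thread: a computational state is
# trivially cloneable; deepcopy yields a second, bit-identical state.
import copy

computational_state = {"memories": ["red", "identical twin"], "counter": 42}
clone = copy.deepcopy(computational_state)

assert clone == computational_state      # a perfect duplicate...
assert clone is not computational_state  # ...that is nonetheless a distinct object
# No analogous operation exists for an arbitrary physical (e.g. quantum)
# state; see the no-cloning theorem discussed later in the thread.
```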
I believe computationalism is a very general way to look at effectively everything,
Computation is an abstraction, and it’s not guaranteed to be the best one.
This also answers andeslodes’s point around physicalism, as the physicalist ontology is recoverable as a special case of the computationalist ontology
A perfect map has the same structure as the territory, but still is not the territory. The on-the-metalness is lacking. Flight simulators don’t fly. You can’t grow potatoes in a map, not even a 1:1 one.
...also hears that the largest map considered really useful would be six inches to the mile; although his country had learnt map-making from his host Nation, it had carried it much further, having gone through maps that are six feet to the mile, then six yards to the mile, next a hundred yards to the mile—finally, a mile to the mile (the farmers said that if such a map were to be spread out, it would block out the sun and crops would fail, so the project was abandoned).
https://en.m.wikipedia.org/wiki/Sylvie_and_Bruno
my biggest view on what consciousness actually is, in that it’s essentially a special case of modeling the world, where in order to keep your own body alive, you need to have a model of the body and brain, and that’s what consciousness basically is: a model of ourselves
So... it’s nothing to do with qualia/phenomenality/HP stuff? Can’t self-modelling and phenomenality be separate questions?
Computationalism is a bad theory of identity, in the sense of “why am I a unique individual”, because computations are so easy to clone—computational states are more cloneable than physical states.
First, computationalism doesn’t automatically imply that without other assumptions, and indeed there are situations where you can’t clone data perfectly, like conventional quantum computers (the no-cloning theorem breaks down if we allow closed timelike curves à la Deutschian CTCs, but we won’t focus on that), so this is more or less a non-issue.
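For reference, the no-cloning theorem mentioned here can be stated in a few lines; this is the standard textbook argument, not anything specific to this thread:

```latex
% Standard no-cloning argument. Suppose one unitary U cloned every state:
\[
U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr)
   = \lvert\psi\rangle \otimes \lvert\psi\rangle ,
\qquad
U\bigl(\lvert\phi\rangle \otimes \lvert 0\rangle\bigr)
   = \lvert\phi\rangle \otimes \lvert\phi\rangle .
\]
% Taking the inner product of the two equations and using unitarity gives
\[
\langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2}
\;\Longrightarrow\;
\langle\psi\vert\phi\rangle \in \{0,\,1\},
\]
% so only mutually orthogonal (or identical) states can be cloned:
% no single device copies arbitrary unknown quantum states.
```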
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies, so computationalism doesn’t matter that much in the general case, and thus you need to focus on more specific classes of computations.
More below:
https://en.wikipedia.org/wiki/No-cloning_theorem
https://en.wikipedia.org/wiki/No-broadcasting_theorem
Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far future it will become far easier to clone physical states to a fidelity much closer to the copyability of computer programs.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented; I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
Other abstractions are available, though. It’s still a bad theory of consciousness-qua-awareness, because, as an abstraction, it has fewer resources than physicalism to explain phenomenal experience. It’s still an ok explanation of consciousness-qua-function, but not obviously the best. It’s still the case that if you answer one of these four questions, you don’t get answers to the other three automatically.
My main non-trivial claim here is that the sense of phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice versa, so you need a model of yourself, and that self-model becomes a big part of why we say we have consciousness, because we are referring to our self-models when we do so.
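A toy sketch of that claim (my own construction, purely illustrative, with every name in it hypothetical): a controller that can only regulate its “body” by maintaining and updating an internal model of that body’s state.

```python
# Editorial sketch: control via a self-model. The controller never sees the
# body's true state; it acts on its own internal estimate of that state.
import random

class Body:
    """The controlled system: its true position drifts with noise."""
    def __init__(self):
        self.position = 0.0
    def step(self, push):
        self.position += push + random.gauss(0, 0.05)
    def noisy_sensor(self):
        return self.position + random.gauss(0, 0.1)

class SelfModelController:
    """Keeps an estimate of the body's state (the 'self-model')
    and computes actions from the estimate, not the true state."""
    def __init__(self):
        self.estimated_position = 0.0  # the self-model
    def update_model(self, sensed):
        # exponential smoothing of noisy sensor readings
        self.estimated_position += 0.5 * (sensed - self.estimated_position)
    def act(self, target):
        # proportional control computed from the self-model
        return 0.3 * (target - self.estimated_position)

body, ctrl = Body(), SelfModelController()
for _ in range(100):
    ctrl.update_model(body.noisy_sensor())
    body.step(ctrl.act(target=1.0))
print(round(body.position, 2))  # typically settles near 1.0
```

Nothing in this sketch touches qualia, of course; it only illustrates the control-requires-a-self-model part of the claim.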
First, computationalism doesn’t automatically imply that without other assumptions, and indeed there are situations where you can’t clone data perfectly,
That’s a rather small nit. The vast majority of computationalists are talking about classical computation.
Indeed, I was basically trying to say that computationalism is so general that it cannot predict any result that doesn’t follow from pure logic/tautologies,
That’s not much of a boast: pure logic can’t solve metaphysical problems about consciousness, time, space, identity, and so on. That’s why they are still problems. There’s a simple logical theory of identity, but it doesn’t answer the metaphysical problems, what I have called the synchronic and diachronic problems.
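For concreteness, the “simple logical theory of identity” here is presumably something like Leibniz’s law (my reading; the thread doesn’t name it):

```latex
% Leibniz's law / identity of indiscernibles, second-order formulation:
\[
x = y \;\leftrightarrow\; \forall F\,\bigl(F(x) \leftrightarrow F(y)\bigr)
\]
% This fixes when two terms denote one object, but says nothing about
% persistence: it cannot tell you whether a person at t1 and a person at
% t2 are "the same individual" (the diachronic problem), nor why two
% qualitatively similar individuals are distinct (the synchronic problem).
```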
Secondly, one could semi-reasonably argue that the inability to clone physical states is an artifact of our technological immaturity, and that in the far future it will become far easier to clone physical states to a fidelity much closer to the copyability of computer programs.
Physicalism doesn’t answer the problems. You need some extra information about how similar or different physical things are in order to answer questions about whether they are the same or different individuals. At least, if you want to avoid the implications of raw physicalism—along the lines of “if one atom changes, you’re a different person”. An abstraction would be useful—but it needs to be the right one.
Third, I gave a somewhat more specific theory of identity in my linked answer, and it’s compatible with both computationalism and physicalism as presented; I just prefer the computationalist account for the general case and the physicalist answer for specialized questions.
You seem to be saying that consciousness is nothing but having a self-model, and whatever the self believes about itself is the last word... that there are no inconvenient objective facts that could trump a self-assessment (“No, you’re not the original Duncan Idaho, you’re ghola number 476. You think you’re the one and only Duncan because your brain state is a clone of the original Duncan’s”). That makes things rather easy. But the rationalist approach to the problem of identity generally relies on biting the bullet on whatever solution is appealing—if computationalism is correct, you can be cloned, and then you really are in two places at once.
My main non-trivial claim here is that the sense of phenomenal experience/awareness fundamentally comes down to the fact that the brain needs to control the body, and vice versa, so you need a model of yourself, and that self-model becomes a big part of why we say we have consciousness, because we are referring to our self-models when we do so.
Well, how? If you could predict qualia from self control, you’d have a solution—not a dissolution—to the HP.
Another reason why the hard problem seems hard is that way too many philosophers are disinclined to gather any data on the phenomenon of interest at all, because they don’t have backgrounds in neuroscience, and instead want to purely define consciousness without reference to any empirical reality.
Granting that “empirical” means “outer empirical”... not including introspection.
I don’t think there is much evidence for the “purely”. Chalmers doesn’t disbelieve in the easy-problem aspects of consciousness.