I’m going to give an argument in favor of computationalism as a theory of identity, but first I’m going to set up some of the background I take for granted but others may not, which will hopefully illuminate related issues.
I believe computationalism is a very general way to look at effectively everything, because models of computation can be extremely expressive. A big reason philosophical debates go nowhere is that the participants don’t realize that a sufficiently expressive model of computation can essentially trivialize any problem they set up, solely through the expressiveness of the model, so it’s very important to realize that philosophical problems end up trivial if we allow ourselves to be unconstrained in our choice of model.
More here:
http://www.amirrorclear.net/academic/ideas/simulation/index.html
http://www.amirrorclear.net/academic/research-topics/other-topics/hypercomputation.html
https://arxiv.org/abs/1806.08747
This also answers andeslodes’s point about physicalism: the physicalist ontology is recoverable as a special case of the computationalist ontology, namely the case where we care about each individual instantiation and about the low-level details, which sufficiently powerful models of computation can always simulate.
However, I do think there’s a genuinely non-trivial question here: why can we be confident that the brain is copyable by a realistic classical computer with reasonable fidelity, when in principle it could be using quantum mechanics for a non-trivial portion of its operations, which would complicate mind uploading massively?
My general answer is that quantum computation in warm, wet environments is very hard to error-correct at scale: the biologically plausible mechanisms fundamentally cannot scale up to a quantum computer in our brains, and anything that fragile in the presence of noise/error is basically unevolvable by default, because there is no incremental path to a quantum computer inside a human brain.
That’s why we can reasonably use classical models of the brain without leaving much out.
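To put rough numbers on the fragility point: below is a back-of-the-envelope comparison in Python. The decoherence figure is Tegmark’s (2000) order-of-magnitude estimate, which I’m importing as an assumption rather than taking from anything above; the point is just how many orders of magnitude separate estimated coherence times from the timescales on which neurons actually compute.

```python
# Back-of-the-envelope timescale comparison. The exponents below are assumed,
# following Tegmark's (2000) order-of-magnitude estimates for neural degrees
# of freedom; they are illustrative, not measurements.
decoherence_s = 1e-13  # most generous estimate for neuron-scale superpositions
spike_s = 1e-3         # fastest timescale of neural computation (a spike)

gap = spike_s / decoherence_s
print(f"Coherence would need to persist ~{gap:.0e} times longer than estimated")
print("before a superposition could influence even a single spike, and that is")
print("before asking whether biology could error-correct at all.")
```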
As far as my own theory of identity/consciousness goes, I think the gooder regulator theorem gives an answer to why we would want a self-model at all, and I’d argue this is the seed from which consciousness/identity grows, so computations meeting the requirements of the gooder regulator theorem can certainly have identities. This means I do think Rob Bensinger is more or less right about this topic, modulo caveats. (A toy illustration of the regulator point follows the links below.)
I’m inspired by @Ape in the coat here on these points:
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#zADHLLzykE5hqdgnY
https://www.lesswrong.com/posts/JYsSbtGd2MoGbHdat/book-review-being-you-by-anil-seth#sSjKjjdADWT7feSTc
And my own comments on the topic:
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#FaMEMcpa6mXTybarG
https://www.lesswrong.com/posts/TkahaFu3kb6NhZRue/quick-general-thoughts-on-suffering-and-consciousness#WEmbycP2ppDjuHAH2
The caveats are given below:
https://www.lesswrong.com/posts/Dx9LoqsEh3gHNJMDk/fixing-the-good-regulator-theorem#Takeaway
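As the toy illustration promised above (my own invented sketch, not the theorem itself, and all specifics in it are made up for illustration): a regulator that only observes the system intermittently regulates far better when it carries a predictive internal model, a minimal stand-in for a self-model, between observations.

```python
import random

DRIFT = 0.5  # assumed deterministic drift in the system's hidden state

def run(regulator, steps=1000, seed=0):
    """Drive a drifting hidden state toward 0 with the regulator's actions.

    The state is only observed ~30% of the time (None otherwise), so good
    regulation requires carrying an internal model between steps. Returns
    the mean squared deviation from the target.
    """
    rng = random.Random(seed)
    state, total, memory = 0.0, 0.0, None
    for _ in range(steps):
        obs = state if rng.random() < 0.3 else None
        action, memory = regulator(obs, memory)
        state += DRIFT + action + rng.gauss(0, 0.05)
        total += state ** 2
    return total / steps

def memoryless(obs, _memory):
    # Reacts only to what it currently sees; does nothing while blind.
    return (-obs if obs is not None else 0.0), None

def model_based(obs, memory):
    # Keeps an internal estimate of the hidden state and knows the dynamics
    # (DRIFT), so it can keep acting sensibly even while blind.
    est = obs if obs is not None else (memory if memory is not None else 0.0)
    action = -est - DRIFT                 # cancel estimated state plus drift
    return action, est + DRIFT + action   # predicted next state -> new memory

print("memoryless  MSE:", run(memoryless))   # large: drifts between sightings
print("model-based MSE:", run(model_based))  # small: model fills in the gaps
```

The gap between the two scores is the cash value of the theorem’s conclusion in this toy: whatever regulates well ends up encoding a model of the thing it regulates.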
The main function of consciousness/identity is to give a consistent abstraction across time so that cooperation and deal-making become possible, at least for causal decision theorists, who can’t get there without machinery like provability logic/program equilibrium, which isn’t really evolvable. (For other theories like logical decision theory, cooperation in the one-shot Prisoner’s Dilemma is very possible, but that comes about because such agents view their identity as closer to an isomorphism class of a program, and treat all functionally equivalent instances combined as one program; see the sketch below.) This is combined with the necessity of modeling yourself in order to get a good outcome, though for my purposes the self-model is the more important part for consciousness, and the abstraction to a consistent identity comes later.
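Here is a minimal sketch of that isomorphism-class idea in Python. It is not logical decision theory itself: true functional equivalence is undecidable in general (Rice’s theorem), so this toy bot, the classic ‘CliqueBot’ construction (the names here are mine, for illustration), approximates ‘same program’ with exact source equality, which is already enough for copies to cooperate in a one-shot Prisoner’s Dilemma.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is recognizably the same program.

    A crude stand-in for 'identity as an isomorphism class of programs':
    deciding real functional equivalence is undecidable, so this toy bot
    settles for exact source-code equality.
    """
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

# Run as a script: two instances of the same program recognize each other
# and cooperate in a one-shot PD with no causal channel between them.
me = inspect.getsource(clique_bot)
print(clique_bot(me))                   # -> C (cooperates with its own class)
print(clique_bot("def defect(): ..."))  # -> D (defects against anyone else)
```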
So for my purposes, I think Rob Bensinger is mostly right about the philosophical underpinnings of identity/consciousness. But I don’t blame TAG, @andeslodes, and @sunwillrise for questioning the consensus, since the consensus was frankly poorly argued, and @Rob Bensinger was acting far too much like a soldier rather than a scout in the comments section.