Wouldn’t physicalist theories gain complexity because they have to explain phenomenology differently for every new substrate? Suppose that in a post-utopian nanotech world, I am constantly changing substrates (while the abstract Turing machine implementing me doesn’t change). Then a physicalist theory would grow in complexity very fast, having to connect complex conscious phenomena to different physics each time, while a computationalist theory would only have to connect each new implementation to the same machine. Though I might be wrong about the complexity: Solomonoff induction (SI) could just exploit the loophole of building its physicalist phenomenological bridges via the same Turing machine with different implementations, saving on complexity, but wouldn’t that count as computationalist at that point?
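To make the complexity argument above concrete, here is a toy description-length model. All the bit counts are made-up illustrative constants (not derived from any real theory); the only point is the asymptotic shape: a physicalist theory pays for a full phenomenal bridging law per substrate, while a computationalist theory pays for one bridge plus a cheap implementation map per substrate.

```python
# Toy description-length model of the substrate-hopping argument.
# BRIDGE_BITS and IMPL_MAP_BITS are illustrative assumptions, not measurements.

BRIDGE_BITS = 10_000   # assumed cost of one physics-to-qualia bridging law
IMPL_MAP_BITS = 500    # assumed cost of mapping one substrate onto the Turing machine

def physicalist_bits(num_substrates: int) -> int:
    # One full bridging law per distinct physical substrate.
    return num_substrates * BRIDGE_BITS

def computationalist_bits(num_substrates: int) -> int:
    # One bridge from the abstract Turing machine to phenomenology,
    # plus an implementation map per substrate.
    return BRIDGE_BITS + num_substrates * IMPL_MAP_BITS

for n in (1, 10, 100):
    print(n, physicalist_bits(n), computationalist_bits(n))
```

On this toy model the physicalist theory grows linearly in full bridging laws while the computationalist one grows only in implementation maps, so the gap widens with each substrate hop. The "loophole" in the text corresponds to the physicalist program internally factoring out the shared Turing machine, which is exactly what makes it look computationalist.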
This is my inclination too, but a physicalist either predicts that the phenomenology would in fact change, or else asserts that you’re deluded about your phenomenal experience when you think it stays the same despite substrate shifts. My understanding of cube_flipper’s position is that they anticipate that changes in the substrate will change the qualia.
From a physicalist’s perspective, you’re essentially making predictions based on your theory of phenomenal consciousness, and then arguing that we should update on those predictions ahead of time, since they’re so firm. I’m personally sympathetic to this line of argument, but it obviously depends on some assumptions which need to be articulated, and which the physicalist would probably not be happy to grant.
I guess I am a bit confused about the process of encoding phenomenological data into bits. If a physicalist is doing it, they might include (from a computationalist perspective) unnecessary detail about the movement of subatomic particles. If a computationalist is doing it, they might (from a physicalist perspective) exclude important detail about EM fields that affect qualia. Or is there a common ground on which both perspectives can agree?
Trying to answer my own question: the obvious way is to have everything encoded, down to every quantum fluctuation. In that case, the computationalist hypothesis has to explain all of the thermal noise in addition to consciousness, which seems unfair to me, since it is a theory of consciousness, not of physics.