For the sake of this comment, I'll take physicalist theories to mean theories on which conscious states are identical to some physical process (including the specific substance or fields).
Premise 3: Computationalist phenomenal bridges are complex relative to physicalist phenomenal bridges.
They're not—mapping computations (or aspects of computations, like functional states) to qualia isn't more complex than mapping a microstate (or a pure state) to qualia. (Computations (or functional states) contain less information: they're implicitly contained in the complete physical description, but the complete physical description isn't contained in them.)
Even if that were the case, it wouldn't matter, because a phenomenal bridge is in the map, not in the territory. (Semantics can be arbitrarily complex, as long as it is justified… which, in the case of physicalist theories of consciousness, it isn't.)
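To make the information-containment point above concrete, here is a minimal sketch (the voltage threshold and the microstates are invented for illustration): coarse-graining physics into a computational state is many-to-one, so the computational state can be read off the physical description, but not vice versa.

```python
# Illustrative sketch: the 0.5 V threshold and the microstates are invented.

def logical_state(voltages):
    """Coarse-grain a tuple of wire voltages into a tuple of bits."""
    return tuple(1 if v > 0.5 else 0 for v in voltages)

# Three distinct physical microstates...
microstates = [(0.71, 0.02), (0.93, 0.11), (0.68, 0.49)]

# ...all coarse-grain to the same computational state (1, 0):
assert {logical_state(m) for m in microstates} == {(1, 0)}

# The map is many-to-one: the bits (1, 0) don't determine the voltages,
# so the computational description contains strictly less information
# than the complete physical description.
```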
Premise 4: Physical phenomenal bridges are at least as compatible with the data of experience as computationalist phenomenal bridges.
That is literally true, but connotatively false. Physicalist phenomenal bridges are heavily disadvantaged by Occam's razor (even if specific physics were needed, we would have no way of knowing it, since our thoughts are fully determined by the computational state), and the specific physics implementing the computational state is causally inert relative to our mental processes (which are fully determined by the computational implementation).
We should judge theories of consciousness in the same way that we judge theories of physics, i.e., by balancing predictive accuracy with simplicity of the theory, as stipulated by SI.
We shouldn't—all viable theories of the ontology of consciousness, when applied, will have perfect predictive accuracy, because those physical/computational/functional differences are amenable to the right kind of empirical research. But even with unlimited empirical research, we still wouldn't know whether it is the perfectly predictive physicalist, functional, or computational elements of the system that are identical to conscious states. That is why the question is hard—it's not enough to perfectly explain the agent's utterances; we also have to settle the question of the correct ontology, and predictive accuracy can't help us there (at least not with respect to these three theories).
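One way to put that last point (the symbols \(T_{\mathrm{phys}}\), \(T_{\mathrm{comp}}\), and the data \(D\) are mine, not the parent post's): if two theories assign the same likelihood to every possible observation, Bayes' theorem says the data never move their posterior ratio, so only the prior (e.g., a simplicity prior, as in SI) can separate them.

```latex
\frac{P(T_{\mathrm{phys}} \mid D)}{P(T_{\mathrm{comp}} \mid D)}
  = \frac{P(D \mid T_{\mathrm{phys}})\,P(T_{\mathrm{phys}})}
         {P(D \mid T_{\mathrm{comp}})\,P(T_{\mathrm{comp}})}
  = \frac{P(T_{\mathrm{phys}})}{P(T_{\mathrm{comp}})}
\qquad \text{whenever } P(D \mid T_{\mathrm{phys}}) = P(D \mid T_{\mathrm{comp}}).
```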
A physicalist bridge needs to be able to pick out some physical phenomenon, such as patterns in the EM field. A computational bridge needs to do that as well, to parse the physical model.
I see the argument—to map a computational state to qualia, we need to first map the physical state to a computational state, and then the computational state to qualia, and that’s (arguably) more complex than mapping the physical state to qualia directly. While correct, this isn’t relevant, because the bridge is in the map, and also because that would actually be an argument for functionalism. In the process of mapping a physical state to a conscious state, we need to compute its functional state (otherwise we don’t know what causal role that state plays in a conscious being with respect to its qualia in the first place).
We could get around this by precomputing the mapping between physical states/physical processes and conscious states/processes (so that we don’t have to compute the functional states in the processes)… but at that point, we might as well precompute the computational states as well, which, again, would render the argument moot.
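A toy sketch of the precomputation point (every mapping and name here is hypothetical, not a real theory): building the direct physical-to-qualia table already forces you to evaluate each microstate's functional state along the way, so precomputing the computational/functional states as well costs nothing extra.

```python
# All mappings below are hypothetical placeholders.

physical_to_functional = {
    "microstate_a": "f1",
    "microstate_b": "f1",  # different physics, same functional state
    "microstate_c": "f2",
}
functional_to_qualia = {"f1": "red experience", "f2": "pain"}

# The "direct" physicalist-style table is just the composition of the two
# maps, so computing it evaluates every functional state anyway:
physical_to_qualia = {
    p: functional_to_qualia[f] for p, f in physical_to_functional.items()
}

print(physical_to_qualia["microstate_b"])  # -> red experience
```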
The computationalist theory of phenomenal consciousness doesn’t care about how many implementation layers are stacked on top of each other.
This is a feature, not a bug—every physical system implements an astronomically high number of computations. The conscious computation is the one we're currently talking to, which is just one (ignoring details on the lower levels of abstraction that aren't individuative of that computation).
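A minimal sketch of the multiple-implementation point (the trajectory and both labelings are invented): one and the same physical state sequence implements different computations under different interpretive mappings, and with more physical states the number of such mappings explodes combinatorially.

```python
# An invented "physical" trajectory: the system cycles through states 0..3.
trajectory = [0, 1, 2, 3, 0, 1, 2, 3]

# Two different interpretive mappings onto computational states:
as_counter = {0: "c0", 1: "c1", 2: "c2", 3: "c3"}        # a mod-4 counter
as_parity  = {0: "even", 1: "odd", 2: "even", 3: "odd"}  # a parity bit

print([as_counter[s] for s in trajectory])  # the same physics, read as a counter
print([as_parity[s] for s in trajectory])   # the same physics, read as parity

# Which computation "the" system runs is fixed by the mapping in use; the
# conscious computation is picked out as the one we're interacting with.
```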
If I understand correctly, cube_flipper welcomes such an experiment (save for the fact that it seems far beyond our current technology), and anticipates having a different experience due to the modified physical field.
That's not possible, due to Chalmers's dancing qualia argument.
I agree that it’s unlikely we could ever be 100% confident in some proposition here. But, with sufficiently high fidelity neurostimulation protocols (likely invasive ones) coupled with direct reports, I remain optimistic we could develop something like asymptotic confidence in a given theory of consciousness.
I doubt we could ever keep the armchair philosophers happy. But for the consciousness engineers (i.e. the people who actually want to do something with a theory of consciousness), I think this should suffice.
I’m not saying it’s not possible for us to be confident in some specific proposition.
I am saying that having different qualia while preserving the same functional state is impossible in principle, and that, due to Chalmers's dancing qualia argument, it is impossible for the human brain in this specific case.
I doubt we could ever keep the armchair philosophers happy. But for the consciousness engineers
Without philosophers, or without someone who isn’t a philosopher but does correct philosophy, you can’t arrive at the correct ontology of consciousness.
Keep in mind that a theory of consciousness can't make any falsifiable empirical predictions that would distinguish it from its rivals—the biological theory of consciousness, computationalism, and other kinds of functionalism all make identical empirical predictions.
If you want to distinguish the physicalist theory of consciousness from some other theory, you can't do it by making empirical predictions and comparing them to empirical results.
You can do it by non-empirical reasoning, but all such attempts fail for the reasons I explained in my comment (they are actually arguments against the physicalist theories of consciousness).