I’ve been rolling the general argument about physical descriptions and qualia around for a while. As far as I can figure, the argument is something like:
Nothing in the current physical descriptions we understand seems to have anything resembling an explanation of qualia like “the feeling of seeing red”.
Future physical descriptions must be essentially “grammatical elaborations” of the current ones. Any written book is stuck in the modality of the alphabet, no matter how long or innovative it is.
Since the elementary “alphabet” of physics doesn’t describe qualia, no theory of physics can either.
And my problem here is that this is an awfully confident argument that you get by doing absolutely none of the work you assert would be useless because of the argument. We haven’t done the physical modeling of the process where qualia should be involved, and I think there’s a coherent description of the experiment, even though we’re very far from being able to do it in practice. We have little idea what qualia themselves are, so they’re hard to approach directly, but we have lots of data on what humans are, and we can do heterophenomenology. So, vast ethical and practical objections aside, the experiment would be simulating the physics of a live adult human, from molecular biology up, in a lit room with a red object, asking them to describe what colors they see, and expecting them to respond “I see red”. And then trawling through the full simulation log of just what goes on in the simulated brain.
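To make the shape of that experiment concrete: everything below is purely illustrative, since no such simulator exists or is close to existing. `PhysicsSim` is a hypothetical stand-in whose trace is hard-coded; the point is only the structure, a bottom-up simulation that yields two outputs, a behavioral transcript and a complete mechanism-level log you can trawl afterwards:

```python
# Hypothetical sketch of the experiment's structure. A real version
# would derive behavior from molecular-level dynamics; this mock just
# records a plausible trace so the two outputs are explicit.
from dataclasses import dataclass, field


@dataclass
class PhysicsSim:
    """Toy stand-in for a bottom-up simulation of a human in a room."""
    log: list = field(default_factory=list)        # mechanism-level events
    transcript: list = field(default_factory=list)  # observable behavior

    def run(self, stimulus: str, prompt: str) -> None:
        # Stand-in events; a real log would be vastly larger and
        # derived, not scripted.
        self.log.append(("photons", stimulus))
        self.log.append(("retina", "cone activation"))
        self.log.append(("cortex", "color processing"))
        self.transcript.append("I see red")


sim = PhysicsSim()
sim.run(stimulus="700 nm light", prompt="What colors do you see?")

# Heterophenomenology step: take the verbal report at face value as data...
assert "I see red" in sim.transcript
# ...then trawl the full mechanism-level log for what produced it.
causal_chain = sim.log
```

The design point is the separation: the transcript is what a behaviorist could see, while the log is the complete physics-level record the “trawling” step would actually be done on.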
I see two ways this can go. If the experiment actually succeeds, we should have some very interesting data. Since the modeling proceeds from cells up instead of behavior down, it’s very unlikely we’ve merely built a chatbot that mimics surface behavior. Either the simulation is actually mirroring how humans in the physical world perceive color, or it won’t do anything, because the copy of the human neuroarchitecture won’t cohere without whatever the missing secret sauce is. So you might go full mysterian and claim there must be a secret sauce and the model won’t work, but now you’re committed to a falsifiable prediction: the experiment won’t succeed. And the original argument that adding more physical description can’t help is a non sequitur in this case.
Or the simulation does work. And people do the further work of deciphering all the simulated neural processes. And then we have a readable physics-level explanation of everything that goes on from the 700 nm wavelength light to “I see red”. We don’t know what’s going to be in there. We haven’t done the work, we don’t know what the details will look like spelled out in physics, but the “physics won’t explain the important part” argument concedes this much should be doable. And I’m really curious about being able to see this picture. Like, we’re making our judgments now based on the “how things work” schemas we currently have. It looks like that description would need some structural “how things work” schema that’s unfamiliar to us, so shouldn’t we try to figure it out first instead of going “eh, it’ll just be physics physics physics, who cares”? What if, after doing the work, people instead go “Oh, that’s how it works! We had no idea,” and we currently do indeed have no idea?
I’m thinking this might be something like computers (or fractals, or the Game of Life). There’s nothing novel about computers in terms of fundamental ontology; they’re just patterns made of simple physics. Yet there’s a whole discipline studying what they can do, and a huge package of brand-new schemas and intuitions about “things doable with computers” that were completely unknown to top physicists and philosophers for hundreds of years, intuitions people acquired by taking a “just some more physics” description and thinking about it for many years.
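The Game of Life makes the point compactly: the whole rule set fits in a few lines, yet the patterns it generates (gliders, guns, even universal computers) spawned a field of study that no amount of staring at the rules alone produced. A minimal sketch, tracking live cells as a set of coordinates:

```python
# Conway's Game of Life: trivially simple rules, famously rich behavior.
from itertools import product


def step(live: set) -> set:
    """One generation: a live cell survives with 2 or 3 live neighbors;
    a dead cell is born with exactly 3."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}


# A glider: after 4 generations the same shape reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nothing in the rules mentions “gliders” or “motion”; those are structural regularities you only find by running the “just physics” description and studying what it does, which is the analogy to trawling the simulation log.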
If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech. I expect that a sufficiently advanced neuroscience would eventually reveal the details. I find it more constructive to try to figure out what those details might be, than to ponder a hypothetical completed neuroscience that vindicates illusionism.
If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech.
The model would quite likely amount to a successful brain emulation, which, when run, would have conscious experiences like a biological human does. Though you get into some conceptual hairiness over whether the model itself includes the experienced qualia, or whether the execution of the model does. Which would be pretty interesting if it turned out to be something that could run on a classical computer.
I find it more constructive to try to figure out what those details might be, than to ponder a hypothetical completed neuroscience that vindicates illusionism.
That was the whole idea of my comment. I feel like the “no matter how much physical detail you add, it can’t add up to explaining consciousness” style of argument is exactly pondering a hypothetical completed neuroscience, without doing the work. I don’t know what the completed neuroscience would vindicate, because it hasn’t been done and understood yet.