If you had a correct causal model of someone having a red experience and saying so, your model would include an actual red experience, and some reflective awareness of it, along with whatever other entities and causal relations are involved in producing the final act of speech.
The model would quite likely amount to a successful brain emulation, which, when run, would have a conscious experience like a biological human does. Though you run into some conceptual hairiness over whether it's the model itself that includes the experienced qualia, or the execution of the model that does. Which would be pretty interesting if it were something that could run on a classical computer.
I find it more constructive to try to figure out what those details might be, than to ponder a hypothetical completed neuroscience that vindicates illusionism.
That was the whole idea in my comment. I feel like the “no matter how much physical detail you add, it can’t add up to explaining consciousness” style of argument is exactly pondering hypothetical completed neuroscience, without doing the work. I don’t know what the completed neuroscience would vindicate because it hasn’t been done and understood yet.