The present state of evidence strongly suggests (p > 0.99)* that consciousness, like combustion, is a high-level phenomenon which, in theory, can be completely described by an explanation not in terms of consciousness.
You can get away with modeling consciousness as a high-level phenomenon only if you disregard subjective experience as unimportant. If you assign even a small probability to the contrary, a “high-level” theory will blow your complexity budget.
The overwhelming majority of physicalists active on LessWrong deny that “emergence” is a sufficient explanation of consciousness.
They can deny this to their heart’s content, but the mind treats words as nodes in a Bayesian causal graph. Using words such as “emergent” is enough to shift the frame of the debate from “let’s explain consciousness!” to “let’s explain emergence! er, um… never mind that”. This seems extremely pernicious.
I am at a point where I can see little useful to say. First: I disagree with every sentence in your comment that is not (a) “See also my reply to thomblake above” or (b) a direct quote from me. Second, it appears to me that there is a large inferential distance between us—large enough that I would expect an entire sequence would be required to bridge it. (I would have expected the MAtMQ sequence to do so, but there is evidently something else not addressed there.)
Do you want to continue the discussion, knowing that the only models we can expect to improve are our models of each other?