Thanks for your response! It’s my first time posting on LessWrong so I’m glad at least one person read and engaged with the argument :)
Regarding the mathematical argument you’ve put forward, I think there are a few considerations:
1. The same argument could be run for human consciousness. Given a fixed brain state and inputs, the laws of physics would produce identical behavioural outputs regardless of whether consciousness exists. Yet, we generally accept behavioural evidence (including sophisticated reasoning about consciousness) as evidence of consciousness in humans.
2. Under functionalism, there’s no formal difference between “implementing consciousness-like functions” and “being conscious.” If consciousness emerges from certain patterns of information processing, then a system implementing those patterns is conscious by definition.
3. The mathematical argument seems (at least to me) to implicitly assume that consciousness is an additional property beyond the computational/functional architecture, which is precisely what functionalism rejects. On functionalism, consciousness is not an “additional ingredient” that could be present or absent with everything else held equal.
4. I think your response hints at something like the “Audience Objection” from Udell & Schwitzgebel, which critiques Schneider’s argument:
“The tests thus have an audience problem: If a theorist is sufficiently skeptical about outward appearances of seeming AI consciousness to want to employ one of these tests, that theorist should also be worried that a system might pass the test without being conscious. Generally speaking, liberals about attributing AI consciousness will reasonably regard such stringent tests as unnecessary, while skeptics about AI consciousness will doubt that the tests are sufficiently stringent to demonstrate what they claim.”
5. I haven’t thought about this very carefully, but I’d challenge the Illusionist to respond to claims of machine consciousness on the ACT in the same way a Functionalist would. If consciousness is “just” the story that a complex system tells itself, then LLMs that pass the ACT would seem to be conscious in precisely the way Illusionism suggests. The Illusionist wouldn’t be able to coherently maintain that systems telling sophisticated stories about their own consciousness are not actually conscious.