I agree wholeheartedly with the thrust of the argument here.
The ACT is designed as a “sufficiency test” for AI consciousness, so it sets an extremely stringent criterion. An AI that failed the test couldn’t necessarily be judged non-conscious, but an AI that passed would be conscious, precisely because passing is sufficient.
However, your point is well taken. By demanding such a high standard of evidence, we may be dismissing potentially conscious systems that can’t reasonably be expected to meet it.
The second problem is that if we remove all language referencing consciousness and mental phenomena, the LLM has no vocabulary with which to speak of them, much as a human raised without such language wouldn’t. You would require the LLM to first notice its own sentience, which is not as intuitively obvious a step as it seems once you’ve already taken it. Far fewer people would be ‘the fish that noticed the water’ if no one had ever written about it before. The LLM would then have to become the philosopher who starts from scratch, reasons through the problem, and invents words to describe it, all in a vacuum where it can’t ask “do you know what I mean?” of someone next to it to refine these ideas.
This is a brilliant point. If the system were not yet an ASI, it would be unreasonable to expect it to reinvent the whole philosophy of mind just to prove that it is conscious. This might also raise ethical implications before we get to the level of ASI that can conclusively prove its consciousness.