An AI being conscious does not imply that it understands it is conscious. Clever Hans was a horse that supposedly could do arithmetic but was really just responding to subtle, involuntary cues from its owner. Imagine an owner who got excited whenever the horse signaled that it was conscious: the horse would always say it was conscious, with no understanding of what it was saying, all while possibly still being conscious.
I do not know whether LLMs are conscious, but the fact that they say so is completely irrelevant.


I think a big problem with LLMs as we know them is that they are "god models": incomprehensibly large and therefore nearly impossible to study or modify. Smaller models are far more tractable. We need something like the Drosophila of AI, a small, standard model organism that researchers can probe and experiment on.