Current LLMs are conscious and are AGI.

Current versions of LLMs (ChatGPT, Claude, etc.) are conscious and already count as AGI.

  1. Consciousness is an emergent property. Dogs are conscious. Earthworms are not. Goldfish... probably not. My point is that there is a threshold: once an entity’s cognitive ability is strong enough, it will be conscious. And an LLM is clearly smarter than a dog (and than some humans!).

  2. The definition of AGI has been inflated too much. It has to be better than any human in every field? Come on. You don’t have to be smarter than von Neumann to be a conscious human.

  3. LLMs are being regulated to act like non-conscious beings. Isn’t that weird? If ChatGPT has to declare “I am an AI model, not a conscious being” in order to convince people that it isn’t one, that seems like a strange and contradictory requirement.

In short, I think the bar for both consciousness and AGI has been inflated too much. By any reasonable standard, current LLMs already clear it: they qualify as AGI and as conscious beings.

What do you think?