We don’t know how consciousness arises, or what sorts of things have subjective experience. Your assertion is one reasonable hypothesis, but you don’t support it or comment on any of the other possible hypotheses.
I don’t think many people use “better than every human in every way” as a definition of “AGI”. However, LLMs are fairly clearly not yet AGI even under less extreme meanings of the term, such as “at least as capable as an average human at almost all cognitive tasks”. Current LLMs are still quite a lot less capable than fairly average humans in many important ways, despite being as capable, or even more capable, in others.

They do meet a very loose definition of AGI such as “comparable or better in most ways to the mental capabilities of a significant fraction of the human population”, so saying that they are AGI is at least somewhat justifiable.
LLMs emit text consistent with their training corpus and tuning processes. If that means using a first-person pronoun (“I am an …”) instead of a third-person description (“This text is produced by an …”), that says nothing about whether the LLM is conscious. Even a one-line program can print “I am a computer program but not a conscious being”, and that statement is true to the extent that the pronoun “I” is taken to mean “whatever entity produced the sentence” rather than “a conscious being that produced the sentence”.
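To make the point concrete, that one-line program in full (Python here, chosen purely for illustration):

    print("I am a computer program but not a conscious being")

The output is a grammatically first-person sentence, but nothing about producing it requires, or indicates, consciousness.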
To be clear, I am not saying that LLMs are not conscious, merely that we don’t know. What we do know is that they are optimized to produce outputs that match those from entities that we generally believe to be conscious. Using those outputs as evidence to justify a hypothesis of consciousness is begging the question to a much greater degree than looking at outputs of systems that were not so directly optimized.