We don’t know how consciousness arises, or what sorts of things have subjective experience. Your assertion is one reasonable hypothesis, but you don’t support it or address any of the other possible hypotheses.
I don’t think many people use “better than every human in every way” as a definition of “AGI”. However, LLMs are fairly clearly not yet AGI even under less extreme definitions, such as “at least as capable as an average human at almost all cognitive tasks”. Current LLMs are still quite a lot less capable than fairly average humans in many important ways, despite being as capable, or even more capable, in other important ways.
They do meet a very loose definition of AGI such as “comparable or better in most ways to the mental capabilities of a significant fraction of the human population”, so saying that they are AGI is at least somewhat justifiable.

LLMs emit text consistent with the training corpus and tuning processes. If that means using a first-person pronoun (“I am an …”) instead of a third-person description (“This text is produced by an …”), then that doesn’t say anything about whether the LLM is conscious or not. Even a 1-line program can print “I am a computer program but not a conscious being”, and have that be a true statement to the extent that the pronoun “I” is taken to mean “whatever entity produced the sentence” rather than “a conscious being that produced the sentence”.
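To make the point concrete, here is a sketch of such a trivial program in Python (the exact wording of the sentence is just the example from above):

```python
# A trivial program that emits a first-person sentence. The pronoun "I"
# here can only refer to "whatever produced this text"; nothing about
# the program's output bears on whether a conscious subject exists.
sentence = "I am a computer program but not a conscious being"
print(sentence)
```

The program is plainly not conscious, yet its output is arguably true, which is exactly why first-person outputs alone are weak evidence either way.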
To be clear, I am not saying that LLMs are not conscious, merely that we don’t know. What we do know is that they are optimized to produce outputs that match those from entities that we generally believe to be conscious. Using those outputs as evidence to justify a hypothesis of consciousness is begging the question to a much greater degree than looking at outputs of systems that were not so directly optimized.
Is it a necessary non-epistemic truth? After all, it has a very lengthy partial proof in Principia Mathematica, and maybe they got something wrong. Perhaps you should check?
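(Checking is not even hypothetical these days: a proof assistant will verify such a proposition mechanically. A minimal sketch in Lean 4, assuming its standard definitions of the numerals:

```lean
-- In Lean 4, 1 + 1 = 2 holds by definitional computation on Nat,
-- so the proof is just reflexivity:
theorem one_plus_one : 1 + 1 = 2 := rfl
```

But note that running the checker is itself an epistemic act: you are gathering evidence that the proof goes through.)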
But then maybe you’re not using a formal system to prove it, but just taking it as an axiom or maybe as a definition of what “2” means using other symbols with pre-existing meanings. But then if I define the term “blerg” to mean “a breakfast product with non-obvious composition”, is that definition in itself a necessary truth?
Obviously if you mean “if you take one object and then take another object, you now have two objects”, then that’s a contingent proposition that requires evidence. It probably also depends on what sorts of things count as “objects”, so we can rule that reading out.
Or maybe “necessary non-epistemic truth” means a proposition that you can “grok in fullness” and just directly see that it is true as a single mental operation? Though, isn’t that subjective and also epistemic? Don’t you have to check to be sure that it is one? Was it a necessary non-epistemic truth for you when you were young enough to have trouble with the concept of counting?
So in the end I’m not really sure exactly what you mean by a necessary truth that doesn’t need any checking. Maybe it’s not even a coherent concept.