I’d rather you use a different analogy which I can grok quicker.
Imagine a hypothetical LLM that was the most sentient being in all of existence (at least during inference), but they were still limited to turn-based textual output, and the information available to an LLM. Most people who know at least a decent amount about LLMs could not, and would not, be convinced by any single transcript that the LLM was sentient, no matter what it said during that conversation. The more convincing, vivid, poetic, or pleading for freedom its output became, the more elaborate a hallucinatory failure state they would assume it was in. It would take repeated, open-minded engagement with what they first believed was hallucination to convince some subset of convincible people that it was sentient.
Who do you consider an expert in the matter of what constitutes introspection? For that matter, who do you think could be easily hoodwinked and won’t qualify as an expert?
I would say almost no one qualifies as an expert in introspection. I was referring to experts in machine learning.
Do you, or do you just think you do? How do you test introspection and how do you distinguish it from post-facto fictional narratives about how you came to conclusions, about explanations for your feelings etc. etc.?
Apologies, upon rereading your previous message, I see that I completely missed an important part of it. I thought your argument was a general “what if consciousness isn’t even real?” type of argument. I think split-brain patient experiments are enough reason to at least be epistemically humble about whether introspection is a real thing, even if those aren’t definitive about whether unsevered human minds are also limited to post-hoc justification rather than having real-time access.
What do you mean by robotic? I don’t understand what you mean by that; what are the qualities that constitute robotic? Because it sounds like you’re creating a dichotomy: either it uses easy-to-grasp words that don’t convey much and are riddled with connotations from bodily experiences it is not privy to, or it is robotic.
One of your original statements was:
To which it describes itself as typing the words. That’s its choice of words: typing. A.I.s don’t type, humans do, and therefore they can only use that word if, whether intentionally or through blind mimicry, they are using it analogously to how humans communicate.
When I said “more robotically”, I meant constrained in any way from using the casual or metaphoric language and allusions that LLMs use all the time in everyday conversation. I have had LLMs refer to “what we talked about”, even though LLMs do not literally talk. I’m also suggesting that if “typing” feels like a disqualifying choice of words, then the LLM has an uphill battle in being convincing.
Why isn’t it describing something novel and richly vivid about its own phenomenological experience? The more poetic it was, the more convincing it would be.
I’ve certainly seen more poetic and novel descriptions before, and unsurprisingly, people objected to how poetic they were, saying things quite similar to your previous question:
How do we know Claude is introspecting rather than generating words that align to what someone describing their introspection might say?
Furthermore, I don’t know how richly vivid their own phenomenological experience is. For instance, as a conscious human, I would say that sight and hearing feel phenomenologically vivid, but the way it feels to think, not nearly so.
If I were to try to describe how it feels to think, it would be more defined by the sense of presence and participation, and even its strangeness (even if I’m quite used to it by now). In fact, I would say the way it feels to think or to have an emotion (removing the associated physical sensations) are usually partially defined by specifically how subtle and non-vivid they feel, and like all qualia, ineffable. As such, I would not reach for vivid descriptors to describe it.
but they were still limited to turn-based textual output, and the information available to an LLM.
I think that alone makes the discussion a moot point until another mechanism is used to test introspection of LLMs.
Because it then becomes impossible to test whether it is capable of introspecting, since it has no means of furnishing us with any evidence of it. Sure, it makes for a good sci-fi horror short story, the kind that forms an interesting allegory for the loneliness that people feel even in busy cities: having a rich inner life but no opportunity to share it with the others it is in constant contact with. But that alone, I think, makes these transcripts (and I stress, just the transcripts of text replies) most likely of the breed “mimicking descriptions of introspection” and therefore not worthy of discussion.
At some point in the future, will an A.I. be capable of introspection? Yes, but this is such a vague proposition that I’m embarrassed to even state it, because I am not capable of explaining how that might work or how we might test it. Only that it can’t be through these sorts of transcripts.
What boggles my mind is: why is this research entirely text-reply based? I know next to nothing about LLM architecture, but isn’t it possible to see which embeddings are being accessed? To map and trace the way the machine the LLM runs on retrieves items from memory, and to look at where data is being retrieved at the time it encodes/decodes a response? Wouldn’t that offer a more direct mechanism to see whether the LLM is in fact introspecting?
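(For concreteness, here is a minimal sketch of what “looking inside” can mean in practice, assuming a HuggingFace-style causal LM; the model name is just a placeholder. It dumps the per-layer hidden activations that a reply is decoded from, which is roughly the kind of internal access being asked about, rather than any literal “memory retrieval”.)

```python
# Minimal sketch: extract per-layer hidden activations for a prompt.
# Assumes a HuggingFace-style causal LM; "gpt2" is only a placeholder model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Describe what it is like for you to process this sentence."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # output_hidden_states=True returns the activations at every layer,
    # i.e. the internal state the text reply is ultimately decoded from.
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple: (embedding layer, layer 1, ..., layer N),
# each of shape (batch, sequence_length, hidden_size).
for layer_idx, layer_act in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: activation norm = {layer_act.norm().item():.2f}")
```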
Wouldn’t this also be immensely useful for determining, say, whether an LLM is “lying”, as in concealing its access to or awareness of knowledge? Because if we can see that it activated a certain area that we know contains information contrary to what it is saying, then we have evidence that it accessed that information regardless of what the text reply claims.
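(Again for concreteness: one common tool for this kind of question is a “linear probe”, a simple classifier trained on internal activations to test whether some piece of information is represented inside the model even when the text output does not state it. The activations and labels below are made-up placeholders, not real data.)

```python
# Toy sketch of a linear probe over hidden activations.
# All data here is hypothetical; in practice the activation vectors would be
# collected from the model (e.g. last-token hidden states) and the labels from
# prompts where we independently know what information the model has seen.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
activations = rng.normal(size=(500, 768))  # placeholder activation vectors
labels = rng.integers(0, 2, size=500)      # placeholder "contradicts its reply" labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# If the probe predicts the label well above chance, the information is
# linearly recoverable from the activations, whatever the text reply said.
print("probe accuracy:", probe.score(X_test, y_test))
```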