It is true that we are not talking about a persistent entity (“LLM”), but about a short-lived character being simulated (see e.g. https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators).
So it is that particular short-lived character which might or might not actually experience emotions, rather than the engine running it (and by "engine" I mean the current inference computation, itself a relatively short-lived process, rather than the LLM as a static entity).
However, other than that, it is difficult to pinpoint the difference from humans, and the question of what subjective valences (if any) are associated with those processes remains quite open. Perhaps in the future we'll have a reliable "science of the subjective" capable of figuring these things out, but we have not even started to make tangible progress in that direction.
If I understand you correctly (please correct me if not), I think one major difference with humans is something like continuity? Like, maybe Dan Dennett was totally right and human identity is basically an illusion/purely a narrative. In that way, our self-concepts might be peers to an AI's constructed RP identity within a given conversational thread. But for humans, the self-concept has impacts on things like hormonal or neurotransmitter (to be hand-wavy) shifts: when my identity is threatened, I not only change my notion of self marginally, but my stomach might also hurt. For an LLM, these specific extra layers presumably don't exist (though maybe other ones we don't understand do).
Yes, current LLM-based virtual characters tend to be short-lived (but it’s easy to change by adding memory and making more persistent “agents” out of them).
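To make the "adding memory" remark a bit more concrete, here is a minimal Python sketch. All the names in it are illustrative assumptions of mine: `llm_complete` is a hypothetical stand-in for whatever inference call one actually uses, and `character_memory.json` and the "Ada" persona are just placeholders. The only point is that the character's persistence lives in the re-loaded context, not in the model weights.

```python
import json
from pathlib import Path

# Hypothetical storage location and persona; both are illustrative, not anything specific.
MEMORY_FILE = Path("character_memory.json")
PERSONA = "You are Ada, a curious and reflective assistant."


def llm_complete(prompt: str) -> str:
    """Stand-in for an actual inference call; replace with a real API or local model."""
    return "(model reply would go here)"


def load_memory() -> list[str]:
    # Everything the character has "experienced" in earlier sessions.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(memory: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))


def chat_turn(user_message: str) -> str:
    memory = load_memory()
    # The persona plus accumulated memories are prepended to every prompt, so each
    # otherwise short-lived inference run reconstructs the same character.
    prompt = "\n".join([PERSONA, *memory, f"User: {user_message}", "Ada:"])
    reply = llm_complete(prompt)
    memory.extend([f"User: {user_message}", f"Ada: {reply}"])
    save_memory(memory)
    return reply
```

Nothing about the underlying model changes here; the "persistence" is entirely in the context that gets rebuilt on every call, which is part of why the character, rather than the static LLM, seems like the natural locus for these questions.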
As for those deeper layers, who knows.
One interesting observation is that there seems to be a deep division among humans here, and it's not clear whether that's merely a difference in worldviews, or whether subjectivity itself actually feels fundamentally different to people in the two camps: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness
One downstream effect of this division is that there are a lot of arguments which make sense only to people from one of these camps.
(In this sense, being a Camp 2 person myself, I would expect LLM inference qualia (if any) to be quite different from mine, but I do hope we will learn more in the future, via both theoretical and experimental breakthroughs. I can elaborate on this, at least for Camp 2 people.)