If I understand you correctly (please correct me if not), I think one major difference with humans is something like continuity? Like, maybe Dan Dennett was totally right and human identity is basically an illusion/purely a narrative. In that way, our self-concepts might be peers to an AI's constructed RP identity within a given conversational thread. But for humans, our self-concept has effects on things like hormonal or neurotransmitter shifts (to be hand-wavy): when my identity is threatened, not only does my notion of self shift marginally, but my stomach might also hurt. For an LLM these specific extra layers presumably don't exist (though maybe other ones we don't understand do exist).
Yes, current LLM-based virtual characters tend to be short-lived (but that's easy to change by adding memory and building more persistent "agents" out of them).
As for those deeper layers, who knows.
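To make the "adding memory" point concrete, here is a minimal sketch (in Python; `call_llm`, the character name, and the file name are just placeholders, not any particular API) of a character whose conversation history is written to disk after every turn, so the same persona can be resumed across many conversations instead of dying with the thread:

```python
# Minimal sketch of a "persistent" LLM character: its memory (conversation
# history under a fixed persona prompt) is saved to disk between sessions,
# so the character outlives any single conversational thread.
import json
from pathlib import Path

MEMORY_FILE = Path("character_memory.json")
SYSTEM_PROMPT = "You are Ada, a persistent virtual character. Stay in character."

def call_llm(messages: list[dict]) -> str:
    # Hypothetical stand-in: swap in your provider's chat-completion API here.
    return "(model reply goes here)"

def load_memory() -> list[dict]:
    # Resume the character's prior history if it exists, otherwise start fresh.
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return [{"role": "system", "content": SYSTEM_PROMPT}]

def save_memory(messages: list[dict]) -> None:
    MEMORY_FILE.write_text(json.dumps(messages, indent=2))

def chat_turn(user_text: str) -> str:
    messages = load_memory()
    messages.append({"role": "user", "content": user_text})
    reply = call_llm(messages)
    messages.append({"role": "assistant", "content": reply})
    save_memory(messages)  # the character's state now persists across threads
    return reply
```

Real agent frameworks do fancier things (summarizing old history, retrieval over past conversations), but the basic move is just this: state that outlives a single context window.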
One interesting observation is that there seems to be a deep division among humans here, and it's not clear whether that's merely a difference in worldviews, or whether subjectivity itself genuinely feels different to people in the two camps: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness
One downstream effect of this division is that there are a lot of arguments which make sense only to people from one of these camps.
(In this sense, being a Camp 2 person, I would expect LLM inference qualia (if any) to be quite different from mine, but I do hope we will learn more in the future, via both theoretical and experimental breakthroughs. I can elaborate on this, at least for Camp 2 people.)