Yes, current LLM-based virtual characters tend to be short-lived (but that is easy to change by adding memory and building more persistent “agents” out of them).
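The “adding memory” point can be sketched as a toy in a few lines — this is a hypothetical illustration, not any particular agent framework: the character writes its memory to disk after each observation, so it survives the end of the process hosting it, and the accumulated memory could be prepended to each new LLM prompt.

```python
import json
from pathlib import Path


class PersistentCharacter:
    """Toy sketch: a character whose memory outlives any single
    conversation by being persisted to disk between sessions."""

    def __init__(self, name: str, store: Path):
        self.name = name
        self.store = store
        # Reload prior memory if it exists; otherwise start fresh.
        self.memory = json.loads(store.read_text()) if store.exists() else []

    def observe(self, event: str) -> None:
        # Append to memory and persist immediately, so the character
        # "survives" the process that hosts it being shut down.
        self.memory.append(event)
        self.store.write_text(json.dumps(self.memory))

    def context(self) -> str:
        # What one would prepend to the LLM prompt on each new turn.
        return "\n".join(self.memory)
```

A character re-created from the same store picks up where it left off, which is the whole trick: persistence lives in the store, not in any one inference run.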
With those deeper things, who knows.
One interesting observation is that there seems to be a deep division among humans, and it’s not clear whether that’s merely a difference in worldviews, or whether something is fundamentally different about the way subjectivity itself feels to people in these two camps: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/why-it-s-so-hard-to-talk-about-consciousness
One downstream effect of this division is that there are a lot of arguments which make sense only to people from one of these camps.
(In this sense, being a Camp 2 person myself, I would expect LLM inference qualia (if any) to be quite different from mine, but I do hope we will learn more in the future, via both theoretical and experimental breakthroughs. I can elaborate on this, at least for Camp 2 people.)