I am a Camp 2 “qualia realist” (so I don’t think it’s “non-physical”; I think this is “undiscovered physics”, although it is possible that we need to add new primitives to our overall “picture of the world”, just as electric charge and mass are primitives; I don’t think we can be sure we have already discovered all the primitives; it might or might not be the case).
But when Camp 2 people talk about whether AIs are conscious or not, they mean the question of whether they are “sentient”, i.e. whether there is presence of “qualia”, of “subjective reality”, without implying a particular nature of that reality. (Conditional on saying “yes”, one would also like to figure out “what it is like to be a computational process running an LLM inference”, another typical Camp 2 question.)
Now, there is also a functional question (is their cognition similar to human cognition?), which is more or less Camp 1/Camp 2 neutral. In this sense one could make further improvements to the model architecture, but the models are already pretty similar to people in many respects, so it’s not surprising that they behave similarly. That’s not a “hard problem”: they do behave more or less as if they are conscious, because their architecture is already pretty similar to ours (a hierarchy of attention processes and all that). But that’s orthogonal to our Camp 2 concerns.
If our experience of qualia reflects some poorly understood phenomenon in physics, it could be part of a cluster of related phenomena, not all of which manifest in human cognition. We don’t have as precise an understanding of qualia as we do of electrons; we just try to gesture at it, and we mostly figure out what each other is talking about. If some related phenomenon manifests in computers when they run large language models, one which has some things in common with what we know as qualia but also some stark differences from any such phenomenon manifesting in human brains, the things we have said about what we mean when we say “qualia” might not be sufficient to determine whether said phenomenon counts as qualia or not.
> If our experience of qualia reflects some poorly understood phenomenon in physics, it could be part of a cluster of related phenomena, not all of which manifest in human cognition.
Right.
> We don’t have as precise an understanding of qualia as we do of electrons
It’s a big understatement; we are still at a “pre-Galilean stage” in that “field of science”. I do hope this will change sooner rather than later, but the current state of our understanding of qualia is dismal.
> the things we have said about what we mean when we say “qualia” might not be sufficient to determine whether said phenomenon counts as qualia or not.
Oh, yes, we are absolutely not ready to tackle this. This does not mean that the question is unimportant, but it does mean that to the extent the question is important, we are in a really bad situation.
My hope is that the need to figure out “AI subjectivity” will push us to move faster on understanding the nature of qualia, the space of possible qualia, and all the other related questions.