I think the kind of sensible goalpost-moving you are describing should be understood as run-of-the-mill conceptual fragmentation, which is ubiquitous in science. As scientific communities learn more about the structure of complex domains (often in parallel across disciplinary boundaries), numerous distinct (but related) concepts become associated with particular conceptual labels (this is just a special case of how polysemy works generally). This has already happened with scientific concepts like gene, species, memory, health, attention and many more.
In this case, it is clear to me that there are important senses of the term “general” whose criteria modern AI satisfies. You made that point persuasively in this post. However, it is also clear that there are important senses of the term “general” whose criteria modern AI does not satisfy. Steven Byrnes made that point persuasively in his response. So far as I can tell, you agree with this.
If we all agree on the above, the most important thing is to disambiguate the sense of the term being invoked when applying it in reasoning about AI. Then we can figure out whether the source of our disagreements is semantic (which label we prefer for a shared concept) or substantive (which concept is actually appropriate for supporting the inferences we are making).
What are good discourse norms for disambiguation? An intuitively appealing option is to coin new terms for variants of umbrella concepts. This may work in academic settings, but familiar terms will always exert a kind of magnetic pull in informal discourse. As such, I think communities like this one should instead strive to define terms wherever possible and approach discussions with a pluralistic stance.
Really interesting stuff, thanks for sharing it!
I’m afraid I’m sceptical that your methodology licenses the conclusions you draw. You state that you pushed people away from “using common near-synonyms like awareness or experience” and “asked them to instead describe the structure of the consciousness process, in terms of moving parts and/or subprocesses”. You end up concluding, on the basis of people’s radically divergent responses when so prompted, that they are referring to different things with the term ‘consciousness’.
The problem I see is that the near-synonyms you ruled out are the most succinct and theoretically neutral ways of pointing at what consciousness is. We mostly lack other ways of gesturing towards what is shared by most (not all) people’s conception of consciousness. That we are aware. That we experience things. That there is something it is like to be us. These are the minimal notions of consciousness for which there may be a non-conflationary alliance. When you push people away from using those notions, they are left grasping at poorly evidenced claims about moving parts and subprocesses. That there is no convergence here does not surprise me in the slightest. Of course people differ with respect to intuitions about the structure of consciousness. But the structure is not the typical referent of the word ‘consciousness’; the first-person, phenomenal character of experience itself is.